Sample records for algorithmic refinement amar

  1. GRID: a high-resolution protein structure refinement algorithm.

    PubMed

    Chitsaz, Mohsen; Mayo, Stephen L

    2013-03-05

    The energy-based refinement of protein structures generated by fold prediction algorithms to atomic-level accuracy remains a major challenge in structural biology. Energy-based refinement is mainly dependent on two components: (1) sufficiently accurate force fields, and (2) efficient conformational space search algorithms. Focusing on the latter, we developed a high-resolution refinement algorithm called GRID. It takes a three-dimensional protein structure as input and, using an all-atom force field, attempts to improve the energy of the structure by systematically perturbing backbone dihedrals and side-chain rotamer conformations. We compare GRID to Backrub, a stochastic algorithm that has been shown to predict a significant fraction of the conformational changes that occur with point mutations. We applied GRID and Backrub to 10 high-resolution (≤ 2.8 Å) crystal structures from the Protein Data Bank and measured the energy improvements obtained and the computation times required to achieve them. GRID resulted in energy improvements that were significantly better than those attained by Backrub while expending about the same amount of computational resources. GRID resulted in relaxed structures that had slightly higher backbone RMSDs compared to Backrub relative to the starting crystal structures. The average RMSD was 0.25 ± 0.02 Å for GRID versus 0.14 ± 0.04 Å for Backrub. These relatively minor deviations indicate that both algorithms generate structures that retain their original topologies, as expected given the nature of the algorithms. Copyright © 2012 Wiley Periodicals, Inc.
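
    For readers who want the flavor of such a search, the following is a minimal Python sketch of a greedy systematic sweep over backbone dihedrals and side-chain rotamers. It is not GRID itself: the energy function and the structure-editing helpers are hypothetical stand-ins for an all-atom force field and molecular-mechanics routines.

      # Sketch of a GRID-style systematic refinement sweep. `energy`,
      # `perturb_dihedral` and `set_rotamer` are hypothetical callables
      # supplied by the caller, standing in for a real force field.
      import itertools

      def refine(structure, energy, perturb_dihedral, set_rotamer,
                 residues, steps=(-5.0, 5.0), max_sweeps=10):
          best_e = energy(structure)
          for _ in range(max_sweeps):
              improved = False
              for res in residues:
                  # Systematic backbone perturbations: try each dihedral step.
                  for name, step in itertools.product(("phi", "psi"), steps):
                      trial = perturb_dihedral(structure, res, name, step)
                      if (e := energy(trial)) < best_e:
                          structure, best_e, improved = trial, e, True
                  # Discrete side-chain search over the rotamer library.
                  for rot in res.rotamers:
                      trial = set_rotamer(structure, res, rot)
                      if (e := energy(trial)) < best_e:
                          structure, best_e, improved = trial, e, True
              if not improved:      # no single move lowers the energy: stop
                  break
          return structure, best_e

    Unlike Backrub's stochastic moves, every candidate perturbation is enumerated here, which matches the systematic character the abstract describes.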

  2. In Memoriam: Amar J.S. Klar, Ph.D. | Center for Cancer Research

    Cancer.gov

    The Center for Cancer Research mourns the recent death of colleague and friend Amar J.S. Klar, Ph.D. Dr. Klar was a much-liked and respected member of the NCI community as part of the Gene Regulation and Chromosome Biology Laboratory since 1988.

  3. Health Information in Amharic (Amarɨñña / አማርኛ)

    MedlinePlus

    Snippet of bilingual patient resources: Bed Bug Control in Residences (English PDF; Amarɨñña / አማርኛ) and Vacuuming to Capture Bed Bugs (English PDF; Amarɨñña / አማርኛ), from the Department of Agriculture.

  4. A Novel AMARS Technique for Baseline Wander Removal Applied to Photoplethysmogram.

    PubMed

    Timimi, Ammar A K; Ali, M A Mohd; Chellappan, K

    2017-06-01

    A new digital filter, AMARS (aligning minima of alternating random signal), has been derived using trigonometry to regulate signal pulsations inline. The pulses are randomly presented in continuous signals comprising a frequency band lower than the signal's mean rate. Frequency-selective filters are conventionally employed to reject frequencies undesired by specific applications. However, these conventional filters only reduce the effects of the rejected range, producing a signal superimposed with some baseline wander (BW). In this work, filters of different ranges and techniques were independently configured to preprocess a photoplethysmogram, an optical biosignal of blood volume dynamics, producing wave shapes with several BWs. The AMARS application effectively removed the encountered BWs to assemble similarly aligned trends. The removal implementation was found to be repeatable in both ear and finger photoplethysmograms, emphasizing the importance of BW removal in biosignal processing for retaining a signal's structural, functional and physiological properties. We also believe that AMARS may be relevant to other biological and continuous signals modulated by similar types of baseline volatility.

  5. A parallel adaptive mesh refinement algorithm

    NASA Technical Reports Server (NTRS)

    Quirk, James J.; Hanebutte, Ulf R.

    1993-01-01

    Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.
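
    As an illustration of the "few simple message-passing routines" idea, here is a hedged mpi4py sketch in which grid blocks are assigned round-robin to ranks and halo values are exchanged with a single sendrecv primitive. The block layout and data are invented for the example; the paper's actual scheme is considerably richer.

      # Toy block-structured halo exchange (run under mpirun with mpi4py).
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      NBLOCKS = 8                                     # global 1-D chain of blocks
      owned = [b for b in range(NBLOCKS) if b % size == rank]   # round-robin
      blocks = {b: np.full(16, float(b)) for b in owned}        # toy block data

      for b in owned:
          for nbr, edge in ((b - 1, blocks[b][0]), (b + 1, blocks[b][-1])):
              if 0 <= nbr < NBLOCKS and nbr % size != rank:
                  # One generic routine does all the communication work.
                  ghost = comm.sendrecv(edge, dest=nbr % size, sendtag=b,
                                        source=nbr % size, recvtag=nbr)
                  # ...`ghost` would fill this block's halo cell here...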

  6. Algorithm refinement for stochastic partial differential equations: II. Correlated systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander, Francis J.; Garcia, Alejandro L.; Tartakovsky, Daniel M.

    2005-08-10

    We analyze a hybrid particle/continuum algorithm for a hydrodynamic system with long ranged correlations. Specifically, we consider the so-called train model for viscous transport in gases, which is based on a generalization of the random walk process for the diffusion of momentum. This discrete model is coupled with its continuous counterpart, given by a pair of stochastic partial differential equations. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass and momentum conservation. This methodology is an extension of our stochastic Algorithm Refinement (AR) hybrid for simple diffusion [F. Alexander, A. Garcia, D. Tartakovsky, Algorithm refinement for stochastic partial differential equations: I. Linear diffusion, J. Comput. Phys. 182 (2002) 47-66]. Results from a variety of numerical experiments are presented for steady-state scenarios. In all cases the mean and variance of density and velocity are captured correctly by the stochastic hybrid algorithm. For a non-stochastic version (i.e., using only deterministic continuum fluxes) the long-range correlations of velocity fluctuations are qualitatively preserved but at reduced magnitude.
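
    To make the flux-matching idea concrete, here is a deliberately simplified 1-D Python analogue: random-walk particles on one half of the domain, an explicit finite-difference diffusion solver on the other, with mass exchanged across the interface in both directions. It sketches the coupling pattern only, not the paper's train model; all numbers are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      nx, D, dt = 100, 1.0, 1e-5
      dx, mid = 1.0 / nx, 0.5                 # cell size, interface position
      mpart = 1e-3                            # mass carried by one particle
      hop = np.sqrt(2 * D * dt)               # random-walk step length

      particles = rng.uniform(0.0, mid, 2000)        # particle half: x < mid
      u = np.zeros(nx // 2)                          # continuum half: x >= mid

      for _ in range(1000):
          # Particle half: move walkers, reflecting at the left wall.
          particles = np.abs(particles + rng.choice([-hop, hop], particles.size))
          # Particles crossing the interface become continuum mass (flux in).
          crossed = particles >= mid
          u[0] += crossed.sum() * mpart / dx
          particles = particles[~crossed]
          # Continuum half: explicit diffusion, zero-flux outer boundary.
          lap = np.empty_like(u)
          lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
          lap[0], lap[-1] = u[1] - u[0], u[-2] - u[-1]
          u += D * dt / dx**2 * lap
          # Flux matching: continuum mass flowing back across the interface
          # is re-emitted as whole particles, conserving mass exactly.
          n_new = int(D * dt * u[0] / dx / mpart)
          if n_new:
              u[0] -= n_new * mpart / dx
              particles = np.append(particles, np.full(n_new, mid - 0.5 * dx))

      print(particles.size * mpart + u.sum() * dx)   # total mass is conserved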

  7. Fully implicit adaptive mesh refinement MHD algorithm

    NASA Astrophysics Data System (ADS)

    Philip, Bobby

    2005-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178(1), 15-36 (2002)] to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite (FAC) algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations will be presented on a variety of problems.

  8. Using Small-Step Refinement for Algorithm Verification in Computer Science Education

    ERIC Educational Resources Information Center

    Simic, Danijela

    2015-01-01

    Stepwise program refinement techniques can be used to simplify program verification. Programs are better understood since their main properties are clearly stated, and verification of rather complex algorithms is reduced to proving simple statements connecting successive program specifications. Additionally, it is easy to analyse similar…

  9. Omnivorous Representation Might Lead to Indigestion: Commentary on Amaral and Roeper

    ERIC Educational Resources Information Center

    Slabakova, Roumyana

    2014-01-01

    This article offers commentary that the Multiple Grammar (MG) language acquisition theory proposed by Luiz Amaral and Tom Roeper (A&R) in the present issue lacks elaboration of the psychological mechanisms at work in second language acquisition. Topics discussed include optionality in a speaker's grammar and the rules of verb position in…

  10. The slip-and-slide algorithm: a refinement protocol for detector geometry

    PubMed Central

    Ginn, Helen Mary; Stuart, David Ian

    2017-01-01

    Geometry correction is traditionally plagued by mis-fitting of correlated parameters, leading to local minima which prevent further improvements. Segmented detectors pose an enhanced risk of mis-fitting: even a minor confusion of detector distance and panel separation can prevent improvement in data quality. The slip-and-slide algorithm breaks down effects of the correlated parameters and their associated target functions in a fundamental shift in the approach to the problem. Parameters are never refined against the components of the data to which they are insensitive, providing a dramatic boost in the exploitation of information from a very small number of diffraction patterns. This algorithm can be applied to exploit the adherence of the spot-finding results prior to indexing to a given lattice using unit-cell dimensions as a restraint. Alternatively, it can be applied to the predicted spot locations and the observed reflection positions after indexing from a smaller number of images. Thus, the indexing rate can be boosted by 5.8% using geometry refinement from only 125 indexed patterns or 500 unindexed patterns. In one example of cypovirus type 17 polyhedrin diffraction at the Linac Coherent Light Source, this geometry refinement reveals a detector tilt of 0.3° (resulting in a maximal Z-axis error of ∼0.5 mm from an average detector distance of ∼90 mm) whilst treating all panels independently. Re-indexing and integrating with updated detector geometry reduces systematic errors providing a boost in anomalous signal of sulfur atoms by 20%. Due to the refinement of decoupled parameters, this geometry method also reaches convergence. PMID:29091058

  11. Array-based, parallel hierarchical mesh refinement algorithms for unstructured meshes

    DOE PAGES

    Ray, Navamita; Grindeanu, Iulian; Zhao, Xinglin; ...

    2016-08-18

    In this paper, we describe an array-based hierarchical mesh refinement capability through uniform refinement of unstructured meshes for efficient solution of PDEs using finite element methods and multigrid solvers. A multi-degree, multi-dimensional and multi-level framework is designed to generate the nested hierarchies from an initial coarse mesh that can be used for a variety of purposes, such as in multigrid solvers/preconditioners, to perform solution convergence and verification studies, and to improve overall parallel efficiency by decreasing I/O bandwidth requirements (by loading smaller meshes and refining in memory). We also describe a high-order boundary reconstruction capability that can be used to project the new points after refinement using high-order approximations instead of linear projection, in order to minimize and provide more control over the geometrical errors introduced by curved boundaries. The capability is developed under the parallel unstructured mesh framework "Mesh Oriented dAtaBase" (MOAB; Tautges et al. (2004)). We describe the underlying data structures and algorithms to generate such hierarchies in parallel and present numerical results for computational efficiency and effect on mesh quality. Furthermore, we also present results to demonstrate the applicability of the developed capability to study convergence properties of different point projection schemes for various mesh hierarchies and to a multigrid finite-element solver for elliptic problems.
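
    The core generation step is easy to picture in miniature. Below is a hedged Python sketch of one uniform refinement pass for quadrilateral cells, with shared midpoints deduplicated so the levels nest; the array-based layout is invented for the example and is far simpler than MOAB's.

      import numpy as np

      def refine_quads(verts, quads):
          """One uniform refinement level: each quad becomes four children.
          verts: (n, 2) float array; quads: (m, 4) int array of CCW corners."""
          verts = [tuple(v) for v in np.asarray(verts, float)]
          index = {v: i for i, v in enumerate(verts)}

          def midpoint(a, b):
              m = ((verts[a][0] + verts[b][0]) / 2, (verts[a][1] + verts[b][1]) / 2)
              if m not in index:              # deduplicate shared edge points
                  index[m] = len(verts)
                  verts.append(m)
              return index[m]

          children = []
          for a, b, c, d in quads:
              ab, bc, cd, da = (midpoint(a, b), midpoint(b, c),
                                midpoint(c, d), midpoint(d, a))
              ctr = midpoint(ab, cd)          # cell centre
              children += [(a, ab, ctr, da), (ab, b, bc, ctr),
                           (ctr, bc, c, cd), (da, ctr, cd, d)]
          return np.array(verts), np.array(children)

      # A two-level nested hierarchy grown from a single unit square:
      hierarchy = [(np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]]),
                    np.array([[0, 1, 2, 3]]))]
      for _ in range(2):
          hierarchy.append(refine_quads(*hierarchy[-1]))

    High-order boundary projection would then move the new midpoints onto the curved geometry instead of leaving them on the straight parent edges.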

  12. Commentary to "Multiple Grammars and Second Language Representation," by Luiz Amaral and Tom Roeper

    ERIC Educational Resources Information Center

    Pérez-Leroux, Ana T.

    2014-01-01

    In this commentary, the author defends the Multiple Grammars (MG) theory proposed by Luiz Amaral and Tom Roeper (A&R) in the present issue. Topics discussed include second language acquisition, the concept of developmental optionality, and the idea that structural decisions involve the lexical dimension. The author states that A&R's…

  13. Discrete size optimization of steel trusses using a refined big bang-big crunch algorithm

    NASA Astrophysics Data System (ADS)

    Hasançebi, O.; Kazemzadeh Azad, S.

    2014-01-01

    This article presents a methodology for design optimization of steel truss structures based on a refined big bang-big crunch (BB-BC) algorithm. It is shown that a standard formulation of the BB-BC algorithm occasionally falls short of producing acceptable solutions to problems from discrete size optimum design of steel trusses. A reformulation of the algorithm is proposed and implemented for design optimization of various discrete truss structures according to American Institute of Steel Construction Allowable Stress Design (AISC-ASD) specifications. Furthermore, the performance of the proposed BB-BC algorithm is compared to its standard version as well as other well-known metaheuristic techniques. The numerical results confirm the efficiency of the proposed algorithm in practical design optimization of truss structures.
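
    For context, the standard BB-BC loop that the paper refines is only a few lines: candidate designs collapse to a fitness-weighted "centre of mass" (big crunch) and are then re-scattered around it with a shrinking radius (big bang). The sketch below applies it to a toy discrete sizing problem; the section list, weight function, and parameters are illustrative, not the paper's refined formulation (a real sizing run would also add stress-constraint penalties to the objective).

      import numpy as np

      rng = np.random.default_rng(1)
      SECTIONS = np.linspace(5.0, 50.0, 10)        # hypothetical section areas
      n_pop, n_var = 30, 6                         # designs, members per design

      def weight(design):                          # surrogate objective: total area
          return SECTIONS[design].sum()

      pop = rng.integers(0, SECTIONS.size, (n_pop, n_var))
      for k in range(1, 101):
          fit = np.array([weight(p) for p in pop])
          w = 1.0 / fit                            # lighter designs weigh more
          centre = (w[:, None] * pop).sum(axis=0) / w.sum()   # big crunch
          spread = SECTIONS.size * rng.standard_normal((n_pop, n_var)) / k
          pop = np.clip(np.rint(centre + spread),  # big bang, shrinking with k
                        0, SECTIONS.size - 1).astype(int)

      best = pop[np.argmin([weight(p) for p in pop])]
      print(SECTIONS[best])                        # selected member sections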

  14. The Effect of Shadow Area on SGM Algorithm and Disparity Map Refinement from High Resolution Satellite Stereo Images

    NASA Astrophysics Data System (ADS)

    Tatar, N.; Saadatseresht, M.; Arefi, H.

    2017-09-01

    The Semi-Global Matching (SGM) algorithm is known as a high-performance, reliable stereo matching algorithm in the photogrammetry community. However, there are some challenges in using this algorithm, especially for high-resolution satellite stereo images over urban areas and images with shadow areas. Unfortunately, the SGM algorithm computes highly noisy disparity values for shadow areas around tall neighboring buildings due to mismatching in these lower-entropy areas. In this paper, a new method is developed to refine the disparity map in shadow areas. The method is based on integrating panchromatic and multispectral image data to detect shadow areas at the object level. In addition, RANSAC plane fitting and morphological filtering are employed to refine the disparity map. The results on a GeoEye-1 stereo pair captured over Qom city in Iran show a significant increase in the rate of matched pixels compared to the standard SGM algorithm.

  15. Fully implicit adaptive mesh refinement algorithm for reduced MHD

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Pernice, Michael; Chacon, Luis

    2006-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. [L. Chacón et al., J. Comput. Phys. 178(1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006)]

  16. Automatic mesh refinement and parallel load balancing for Fokker-Planck-DSMC algorithm

    NASA Astrophysics Data System (ADS)

    Küchlin, Stephan; Jenny, Patrick

    2018-06-01

    Recently, a parallel Fokker-Planck-DSMC algorithm for rarefied gas flow simulation in complex domains at all Knudsen numbers was developed by the authors. Fokker-Planck-DSMC (FP-DSMC) is an augmentation of the classical DSMC algorithm that mitigates the near-continuum computational-cost deficiencies of pure DSMC. At each time step, based on a local Knudsen number criterion, the discrete DSMC collision operator is dynamically switched to the Fokker-Planck operator, which is based on the integration of continuous stochastic processes in time and has a fixed computational cost per particle, rather than per collision. In this contribution, we present an extension of the previous implementation with automatic local mesh refinement and parallel load balancing. In particular, we show how the properties of discrete approximations to space-filling curves enable an efficient implementation. Exemplary numerical studies highlight the capabilities of the new code.
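
    The load-balancing ingredient mentioned at the end is easy to demonstrate: cells ordered along a Morton (Z-order) space-filling curve stay spatially clustered, so cutting the curve into equally weighted chunks yields compact per-rank partitions. A small sketch (the weights are invented particle counts):

      import numpy as np

      def morton2d(ix, iy, bits=16):
          """Interleave the bits of integer cell coordinates (ix, iy)."""
          code = 0
          for b in range(bits):
              code |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
          return code

      rng = np.random.default_rng(2)
      cells = [(ix, iy) for ix in range(32) for iy in range(32)]
      load = rng.integers(1, 100, len(cells))      # e.g. particles per cell

      order = np.argsort([morton2d(ix, iy) for ix, iy in cells])
      targets = np.linspace(0, load.sum(), 5)[1:-1]          # 4 ranks
      cuts = np.searchsorted(np.cumsum(load[order]), targets)
      parts = np.split(order, cuts)    # contiguous curve chunks of ~equal load
      print([load[p].sum() for p in parts])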

  17. Implementation of a three-qubit refined Deutsch Jozsa algorithm using SFG quantum logic gates

    NASA Astrophysics Data System (ADS)

    DelDuce, A.; Savory, S.; Bayvel, P.

    2006-05-01

    In this paper we present a quantum logic circuit which can be used for the experimental demonstration of a three-qubit solid state quantum computer based on a recent proposal of optically driven quantum logic gates. In these gates, the entanglement of randomly placed electron spin qubits is manipulated by optical excitation of control electrons. The circuit we describe solves the Deutsch problem with an improved algorithm called the refined Deutsch-Jozsa algorithm. We show that it is possible to select optical pulses that solve the Deutsch problem correctly, and do so without losing quantum information to the control electrons, even though the gate parameters vary substantially from one gate to another.
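
    The refined Deutsch-Jozsa circuit is small enough to simulate directly. In the refined form the oracle acts as a phase oracle on the three input qubits and no ancilla is needed: after Hadamard-oracle-Hadamard, the all-zeros outcome has probability one for a constant function and zero for a balanced one. A NumPy statevector sketch, independent of the SFG gate hardware:

      import numpy as np
      from functools import reduce

      H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
      H3 = reduce(np.kron, [H1, H1, H1])           # Hadamard on three qubits

      def refined_dj(f):
          """f maps 3-bit integers 0..7 to {0,1}, promised constant or balanced."""
          oracle = np.diag([(-1.0) ** f(x) for x in range(8)])   # phase oracle
          state = np.zeros(8)
          state[0] = 1.0                           # start in |000>
          state = H3 @ (oracle @ (H3 @ state))
          return "constant" if abs(state[0]) ** 2 > 0.5 else "balanced"

      print(refined_dj(lambda x: 0))       # -> constant
      print(refined_dj(lambda x: x & 1))   # -> balanced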

  18. Application of kinematic vorticity techniques for mylonitized rocks in Al Amar suture, eastern Arabian Shield, Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Hamimi, Z.; Kassem, O. M. K.; El-Sabrouty, M. N.

    2015-09-01

    The rotation of rigid objects within a flowing viscous medium is a function of several factors including the degree of non-coaxiality. The relationship between the orientation of such objects and their aspect ratio can be used in vorticity analyses in a variety of geological settings. A method for quantitative estimation of the kinematic vorticity number (Wm) has been applied using rotated rigid objects, such as quartz and feldspar grains. The kinematic vorticity number determined for the high-temperature mylonitic Abt schist in the Al Amar area, extreme eastern Arabian Shield, ranges from ~0.8 to 0.9. Results from the vorticity and strain analyses indicate that deformation in the area deviated from simple shear. It is concluded that nappe stacking occurred during an earlier thrusting event, probably by brittle imbrication. Ductile strain was superimposed on the nappe structure at high pressure, as revealed by a penetrative subhorizontal foliation developed subparallel to the tectonic contacts with the underlying and overlying nappes. Accumulation of ductile strain during underplating was not by simple shear but involved a component of vertical shortening, which caused the subhorizontal foliation in the Al Amar area. In most cases, this foliation formed concurrently with thrust-sheet imbrication, indicating that nappe stacking was associated with vertical shortening.

  19. Operational algorithm development and refinement approaches

    NASA Astrophysics Data System (ADS)

    Ardanuy, Philip E.

    2003-11-01

    Next-generation polar and geostationary systems, such as the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Geostationary Operational Environmental Satellite (GOES)-R, will deploy new generations of electro-optical reflective and emissive capabilities. These will include low-radiometric-noise, improved spatial resolution multi-spectral and hyperspectral imagers and sounders. To achieve specified performances (e.g., measurement accuracy, precision, uncertainty, and stability), and to best utilize the advanced space-borne sensing capabilities, a new generation of retrieval algorithms will be implemented. In most cases, these advanced algorithms benefit from ongoing testing and validation using heritage research mission algorithms and data [e.g., the Earth Observing System (EOS) Moderate-resolution Imaging Spectroradiometer (MODIS) and the Shuttle Ozone Limb Scattering Experiment (SOLSE)/Limb Ozone Retrieval Experiment (LORE)]. In these instances, an algorithm's theoretical basis is not static, but rather improves with time. Once frozen, an operational algorithm can "lose ground" relative to research analogs. Cost/benefit analyses provide a basis for change management. The challenge is in reconciling and balancing the stability, and "comfort," that today's generation of operational platforms provide (well-characterized, known sensors and algorithms) with the greatly improved quality, opportunities, and risks that the next generation of operational sensors and algorithms offer. By using the best practices and lessons learned from heritage/groundbreaking activities, it is possible to implement an agile process that enables change, while managing change. This approach combines a "known-risk" frozen baseline with preset completion schedules with insertion opportunities for algorithm advances as ongoing validation activities identify and repair areas of weak performance. This paper describes an objective, adaptive implementation roadmap that

  20. Refinements to HIRS CO2 Slicing Algorithm with Results Compared to CALIOP and MODIS

    NASA Astrophysics Data System (ADS)

    Frey, R.; Menzel, P.

    2012-12-01

    This poster reports on the refinement of a cloud top property algorithm using High-resolution Infrared Radiation Sounder (HIRS) measurements. The HIRS sensor has been flown on fifteen satellites from TIROS-N through NOAA-19 and MetOp-A forming a continuous 30 year cloud data record. Cloud Top Pressure and effective emissivity (cloud fraction multiplied by cloud emissivity) are derived using the 15 μm spectral bands in the CO2 absorption band, implementing the CO2 slicing technique which is strong for high semi-transparent clouds but weak for low clouds with little thermal contrast from clear skies. We report on algorithm adjustments suggested from MODIS cloud record validations and the inclusion of collocated AVHRR cloud fraction data from the PATMOS-x algorithm. Reprocessing results for 2008 are shown using NOAA-18 HIRS and collocated CALIOP data for validation, as well as comparisons to MODIS monthly mean values. Adjustments to the cloud algorithm include (a) using CO2 slicing for all ice and mixed phase clouds and infrared window determinations for all water clouds, (b) determining the cloud top pressure from the most opaque CO2 spectral band pair seeing the cloud, (c) reducing the cloud detection threshold for the CO2 slicing algorithm to include conditions of smaller radiance differences that are often due to thin ice clouds, and (d) identifying stratospheric clouds when an opaque band is warmer than a less opaque band.
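
    The core of CO2 slicing is a ratio test: the measured clear-minus-cloudy radiance ratio for a pair of CO2 bands is compared against the same ratio computed from a forward model for trial cloud-top pressures, and the best match is kept. A schematic sketch with an invented toy forward model (a real retrieval uses radiative transfer through an atmospheric profile):

      import numpy as np

      p_levels = np.linspace(100.0, 1000.0, 91)   # trial pressures, hPa (10 hPa step)

      def cloud_signal(band_weight, p_cloud):
          """Toy clear-minus-overcast radiance for a band; larger band_weight
          mimics a channel whose weighting function peaks lower down."""
          ps = p_levels[p_levels >= p_cloud]
          return np.sum(np.exp(-band_weight * ps / 1000.0)) * 10.0

      def co2_slice(measured_ratio, w1=3.0, w2=1.5):
          model = np.array([cloud_signal(w1, p) / cloud_signal(w2, p)
                            for p in p_levels[:-1]])
          return p_levels[np.argmin(np.abs(model - measured_ratio))]

      print(co2_slice(0.35))   # -> roughly mid-tropospheric cloud-top pressure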

  21. Taxonomic revision of Drymoluber Amaral, 1930 (Serpentes: Colubridae).

    PubMed

    Costa, Henrique Caldeira; Moura, Mário Ribeiro; Feio, Renato Neves

    2013-01-01

    The present study is a taxonomic revision of the genus Drymoluber Amaral, 1930, using meristic and morphometric characters, aspects of external hemipenial morphology and body coloration. Sexual dimorphism occurs in D. dichrous and D. brazili but was not detected in D. apurimacensis. Morphological variation within D. dichrous is related to geographic distance between populations. Furthermore, variation in the number of ventrals and subcaudals in D. dichrous and D. brazili follows latitudinal and longitudinal clinal patterns. Drymoluber dichrous is diagnosed by the presence of 15-15-15 smooth dorsal scale rows with two apical pits, and 157-180 ventrals and 86-110 subcaudals; it occurs along the eastern versant of the Andes, in the Amazon forest, on the Guiana Shield, in the Atlantic forest, and its transitional areas with the Caatinga and Cerrado. Drymoluber brazili has 17-17-15 smooth dorsal scale rows with two apical pits, 182-202 ventrals and 109-127 subcaudals, and ranges throughout the Caatinga, Cerrado, Atlantic forest and transitional areas between these last two domains. Drymoluber apurimacensis has 13-13-13 smooth dorsal scale rows without apical pits, 158-182 ventrals and 84-93 subcaudals, and occurs in the Apurimac Valley, south of the Apurimac and Pampas rivers in Peru.

  22. Refined genetic algorithm -- Economic dispatch example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheble, G.B.; Brittig, K.

    1995-02-01

    A genetic-based algorithm is used to solve an economic dispatch (ED) problem. The algorithm utilizes payoff information of prospective solutions to evaluate optimality. Thus, the constraints of classical Lagrangian techniques on unit curves are eliminated. Using an economic dispatch problem as a basis for comparison, several different techniques which enhance program efficiency and accuracy, such as mutation prediction, elitism, interval approximation and penalty factors, are explored. Two unique genetic algorithms are also compared. The results are verified for a sample problem using a classical technique.
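
    To illustrate the setup, here is a compact genetic algorithm for a three-unit toy economic dispatch, with the power-balance constraint folded into the fitness via a penalty factor and elitism retained across generations. The cost coefficients, demand, and GA settings are invented; the paper's two algorithms differ in their specific operators.

      import numpy as np

      rng = np.random.default_rng(3)
      A = np.array([0.008, 0.009, 0.007])   # quadratic fuel-cost coefficients
      B = np.array([7.0, 6.3, 6.8])         # linear fuel-cost coefficients
      PMIN, PMAX, DEMAND, PENALTY = 10.0, 125.0, 250.0, 50.0

      def cost(p):                          # fuel cost plus balance penalty
          return np.sum(A * p**2 + B * p) + PENALTY * abs(p.sum() - DEMAND)

      pop = rng.uniform(PMIN, PMAX, (40, 3))
      for _ in range(300):
          fit = np.array([cost(p) for p in pop])
          elite = pop[np.argsort(fit)[:20]]             # elitism: keep best half
          pairs = elite[rng.integers(0, 20, (20, 2))]   # random parent pairs
          alpha = rng.uniform(size=(20, 1))
          children = alpha * pairs[:, 0] + (1 - alpha) * pairs[:, 1]  # crossover
          children += rng.normal(0.0, 1.0, children.shape)            # mutation
          pop = np.clip(np.vstack([elite, children]), PMIN, PMAX)

      best = pop[np.argmin([cost(p) for p in pop])]
      print(best, best.sum())               # unit outputs; total near DEMAND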

  23. Adaptive mesh refinement for characteristic grids

    NASA Astrophysics Data System (ADS)

    Thornburg, Jonathan

    2011-05-01

    I consider techniques for Berger-Oliger adaptive mesh refinement (AMR) when numerically solving partial differential equations with wave-like solutions, using characteristic (double-null) grids. Such AMR algorithms are naturally recursive, and the best-known past Berger-Oliger characteristic AMR algorithm, that of Pretorius and Lehner (J Comp Phys 198:10, 2004), recurses on individual "diamond" characteristic grid cells. This leads to the use of fine-grained memory management, with individual grid cells kept in two-dimensional linked lists at each refinement level. This complicates the implementation and adds overhead in both space and time. Here I describe a Berger-Oliger characteristic AMR algorithm which instead recurses on null slices. This algorithm is very similar to the usual Cauchy Berger-Oliger algorithm, and uses relatively coarse-grained memory management, allowing entire null slices to be stored in contiguous arrays in memory. The algorithm is very efficient in both space and time. I describe discretizations yielding both second and fourth order global accuracy. My code implementing the algorithm described here is included in the electronic supplementary materials accompanying this paper, and is freely available to other researchers under the terms of the GNU general public license.

  24. Mesh quality control for multiply-refined tetrahedral grids

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Strawn, Roger

    1994-01-01

    A new algorithm for controlling the quality of multiply-refined tetrahedral meshes is presented in this paper. The basic dynamic mesh adaption procedure allows localized grid refinement and coarsening to efficiently capture aerodynamic flow features in computational fluid dynamics problems; however, repeated application of the procedure may significantly deteriorate the quality of the mesh. Results presented show the effectiveness of this mesh quality algorithm and its potential in the area of helicopter aerodynamics and acoustics.

  25. Refined Genetic Algorithms for Polypeptide Structure Prediction.

    DTIC Science & Technology

    1996-12-01

    No abstract is available for this record; the index text is a fragment of the report's table of contents: III. Algorithm Analysis, Design, and Implementation; 3.1 Analysis; 3.2 Algorithm Design and Implementation; IV. Experiment Design.

  26. Deformable complex network for refining low-resolution X-ray structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chong; Wang, Qinghua; Ma, Jianpeng, E-mail: jpma@bcm.edu

    2015-10-27

    A new refinement algorithm called the deformable complex network that combines a novel angular network-based restraint with a deformable elastic network model in the target function has been developed to aid in structural refinement in macromolecular X-ray crystallography. In macromolecular X-ray crystallography, building more accurate atomic models based on lower resolution experimental diffraction data remains a great challenge. Previous studies have used a deformable elastic network (DEN) model to aid in low-resolution structural refinement. In this study, the development of a new refinement algorithm called the deformable complex network (DCN) is reported that combines a novel angular network-based restraint with the DEN model in the target function. Testing of DCN on a wide range of low-resolution structures demonstrated that it consistently leads to significantly improved structural models as judged by multiple refinement criteria, thus representing a new effective refinement tool for low-resolution structural determination.

  27. A template-based approach for parallel hexahedral two-refinement

    DOE PAGES

    Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.

    2016-10-17

    Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where minimum mesh quality is greater than scaled Jacobian 0.3 prior to smoothing.

  28. A new parallelization scheme for adaptive mesh refinement

    DOE PAGES

    Loffler, Frank; Cao, Zhoujian; Brandt, Steven R.; ...

    2016-05-06

    Here, we present a new method for parallelization of adaptive mesh refinement called Concurrent Structured Adaptive Mesh Refinement (CSAMR). This new method offers the lower computational cost (i.e. wall time x processor count) of subcycling in time, but with the runtime performance (i.e. smaller wall time) of evolving all levels at once using the time step of the finest level (which does more work than subcycling but has less parallelism). We demonstrate our algorithm's effectiveness using an adaptive mesh refinement code, AMSS-NCKU, and show performance on Blue Waters and other high performance clusters. For the class of problem considered in this paper, our algorithm achieves a speedup of 1.7-1.9 when the processor count for a given AMR run is doubled, consistent with our theoretical predictions.

  29. Unstructured Euler flow solutions using hexahedral cell refinement

    NASA Technical Reports Server (NTRS)

    Melton, John E.; Cappuccio, Gelsomina; Thomas, Scott D.

    1991-01-01

    An attempt is made to extend grid refinement into three dimensions by using unstructured hexahedral grids. The flow solver is developed using TIGER (Topologically Independent Grid, Euler Refinement) as the starting point. The program uses an unstructured hexahedral mesh and a modified version of the Jameson four-stage, finite-volume Runge-Kutta algorithm for integration of the Euler equations. The unstructured mesh allows for local refinement appropriate for each freestream condition, thereby concentrating mesh cells in the regions of greatest interest. This increases the computational efficiency because the refinement is not required to extend throughout the entire flow field.

  30. Clinical update on optimal prandial insulin dosing using a refined run-to-run control algorithm.

    PubMed

    Zisser, Howard; Palerm, Cesar C; Bevier, Wendy C; Doyle, Francis J; Jovanovic, Lois

    2009-05-01

    This article provides a clinical update using a novel run-to-run algorithm to optimize prandial insulin dosing based on sparse glucose measurements from the previous day's meals. The objective was to use a refined run-to-run algorithm to calculate prandial insulin-to-carbohydrate ratios (I:CHO) for meals of variable carbohydrate content in subjects with type 1 diabetes (T1DM). The open-labeled, nonrandomized study took place over a 6-week period in a nonprofit research center. Nine subjects with T1DM using continuous subcutaneous insulin infusion participated. Basal insulin rates were optimized using continuous glucose monitoring, with a target fasting blood glucose of 90 mg/dl. Subjects monitored blood glucose concentration at the beginning of the meal and at 60 and 120 minutes after the start of the meal. They were instructed to start meals with blood glucose levels between 70 and 130 mg/dl. Subjects were contacted daily to collect data for the previous 24-hour period and to give them the physician-approved, algorithm-derived I:CHO ratios for the next 24 hours. Subjects calculated the amount of the insulin bolus for each meal based on the corresponding I:CHO and their estimate of the meal's carbohydrate content. One- and 2-hour postprandial glucose concentrations served as the main outcome measures. The mean 1-hour postprandial blood glucose level was 104 +/- 19 mg/dl. The 2-hour postprandial levels (96.5 +/- 18 mg/dl) approached the preprandial levels (90.1 +/- 13 mg/dl). Run-to-run algorithms are able to improve postprandial blood glucose levels in subjects with T1DM. 2009 Diabetes Technology Society.
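
    The run-to-run idea can be reduced to a one-line update: each day the insulin-to-carbohydrate ratio is nudged in proportion to how far the postprandial glucose landed from target. The gain, target, and safety clamp below are invented for illustration and are not the study's protocol:

      def update_icho(icho, post_glucose, target=100.0, gain=0.02,
                      lo=5.0, hi=30.0):
          """icho: grams of carbohydrate covered by one unit of insulin.
          Glucose above target means the bolus was too small, so the ratio
          must decrease (more insulin per gram of carbohydrate)."""
          error = post_glucose - target                 # mg/dl
          new = icho * (1.0 - gain * error / 100.0)
          return min(hi, max(lo, new))                  # clamp to a safe range

      icho = 12.0
      for bg in [165.0, 140.0, 118.0, 104.0]:           # successive days
          icho = update_icho(icho, bg)
          print(round(icho, 2))                         # ratio drifts toward target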

  31. Spherical Harmonic Decomposition of Gravitational Waves Across Mesh Refinement Boundaries

    NASA Technical Reports Server (NTRS)

    Fiske, David R.; Baker, John; vanMeter, James R.; Centrella, Joan M.

    2005-01-01

    We evolve a linearized (Teukolsky) solution of the Einstein equations with a non-linear Einstein solver. Using this testbed, we are able to show that such gravitational waves, defined by the Weyl scalars in the Newman-Penrose formalism, propagate faithfully across mesh refinement boundaries, and use, for the first time to our knowledge, a novel algorithm due to Misner to compute spherical harmonic components of our waveforms. We show that the algorithm performs extremely well, even when the extraction sphere intersects refinement boundaries.
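
    Spherical-harmonic extraction itself is conceptually simple: each component is the surface integral of the waveform against the conjugate harmonic. Below is a direct-quadrature sketch on a latitude-longitude grid; Misner's algorithm is a more sophisticated fit across refinement levels, and note that scipy's sph_harm takes arguments in the order (m, l, azimuth, polar).

      import numpy as np
      from scipy.special import sph_harm

      nth, nph = 64, 128
      theta = (np.arange(nth) + 0.5) * np.pi / nth       # polar angle
      phi = (np.arange(nph) + 0.5) * 2.0 * np.pi / nph   # azimuth
      TH, PH = np.meshgrid(theta, phi, indexing="ij")
      dOmega = (np.pi / nth) * (2.0 * np.pi / nph) * np.sin(TH)

      def component(l, m, field):
          """C_lm = integral of conj(Y_lm) * field over the sphere."""
          return np.sum(np.conj(sph_harm(m, l, PH, TH)) * field * dOmega)

      # Synthetic "waveform" on the extraction sphere with known content:
      psi = sph_harm(2, 2, PH, TH) + 0.5 * sph_harm(-1, 3, PH, TH)
      print(abs(component(2, 2, psi)))    # ~1.0
      print(abs(component(3, -1, psi)))   # ~0.5
      print(abs(component(2, 1, psi)))    # ~0.0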

  32. A Fast Superpixel Segmentation Algorithm for PolSAR Images Based on Edge Refinement and Revised Wishart Distance

    PubMed Central

    Zhang, Yue; Zou, Huanxin; Luo, Tiancheng; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng

    2016-01-01

    The superpixel segmentation algorithm, as a preprocessing technique, should show good performance in fast segmentation speed, accurate boundary adherence and homogeneous regularity. A fast superpixel segmentation algorithm by iterative edge refinement (IER) works well on optical images. However, it may generate poor superpixels for Polarimetric synthetic aperture radar (PolSAR) images due to the influence of strong speckle noise and many small-sized or slim regions. To solve these problems, we utilized a fast revised Wishart distance instead of Euclidean distance in the local relabeling of unstable pixels, and initialized unstable pixels as all the pixels substituted for the initial grid edge pixels in the initialization step. Then, postprocessing with the dissimilarity measure is employed to remove the generated small isolated regions as well as to preserve strong point targets. Finally, the superiority of the proposed algorithm is validated with extensive experiments on four simulated and two real-world PolSAR images from Experimental Synthetic Aperture Radar (ESAR) and Airborne Synthetic Aperture Radar (AirSAR) data sets, which demonstrate that the proposed method shows better performance with respect to several commonly used evaluation measures, even with about nine times higher computational efficiency, as well as fine boundary adherence and strong point targets preservation, compared with three state-of-the-art methods. PMID:27754385
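
    For orientation, the classical complex Wishart dissimilarity that this family of distances builds on is d(C, Sigma) = ln|Sigma| + Tr(Sigma^{-1} C), comparing a pixel covariance C against a cluster or superpixel average Sigma; the paper's revised distance is a computationally cheaper variant. The matrices below are random stand-ins for PolSAR covariances.

      import numpy as np

      def wishart_distance(C, Sigma):
          """Classical Wishart dissimilarity between Hermitian PD matrices."""
          _, logdet = np.linalg.slogdet(Sigma)
          return logdet.real + np.trace(np.linalg.solve(Sigma, C)).real

      rng = np.random.default_rng(4)
      A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
      C = A @ A.conj().T / 3.0          # synthetic positive-definite covariance
      print(wishart_distance(C, C))     # self-distance: ln|C| + 3
      print(wishart_distance(C, np.eye(3)))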

  33. Refinement of the CALIOP cloud mask algorithm

    NASA Astrophysics Data System (ADS)

    Katagiri, Shuichiro; Sato, Kaori; Ohta, Kohei; Okamoto, Hajime

    2018-04-01

    A modified cloud mask algorithm was applied to the CALIOP data to improve detection of clouds in the lower atmosphere. The algorithm also adopts full-attenuation discrimination and estimates the residual noise using data obtained at an altitude of 40 km, to avoid contamination by stratospheric aerosols. The new cloud mask shows an increase in the detected lower-cloud fraction. The results were also compared with ground-based PML observations.

  34. Using Induction to Refine Information Retrieval Strategies

    NASA Technical Reports Server (NTRS)

    Baudin, Catherine; Pell, Barney; Kedar, Smadar

    1994-01-01

    Conceptual information retrieval systems use structured document indices, domain knowledge and a set of heuristic retrieval strategies to match user queries with a set of indices describing the document's content. Such retrieval strategies increase the set of relevant documents retrieved (increase recall), but at the expense of returning additional irrelevant documents (decrease precision). Usually in conceptual information retrieval systems this tradeoff is managed by hand and with difficulty. This paper discusses ways of managing this tradeoff by the application of standard induction algorithms to refine the retrieval strategies in an engineering design domain. We gathered examples of query/retrieval pairs during the system's operation using feedback from a user on the retrieved information. We then fed these examples to the induction algorithm and generated decision trees that refine the existing set of retrieval strategies. We found that (1) induction improved the precision on a set of queries generated by another user, without a significant loss in recall, and (2) in an interactive mode, the decision trees pointed out flaws in the retrieval and indexing knowledge and suggested ways to refine the retrieval strategies.
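
    A toy version of the learning step, with synthetic stand-ins for the query/retrieval features and the user's relevance feedback, using a scikit-learn decision tree as the induced refinement of a retrieval strategy:

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import precision_score, recall_score

      rng = np.random.default_rng(5)
      # Per (query, document) features: [index-match score, strategy id,
      # number of query-expansion steps]; label: user judged it relevant.
      X = np.column_stack([rng.uniform(0, 1, 400),
                           rng.integers(0, 4, 400),
                           rng.integers(0, 3, 400)])
      y = ((X[:, 0] > 0.5) & (X[:, 2] < 2)).astype(int)   # hidden "true" rule

      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
      tree = DecisionTreeClassifier(max_depth=3).fit(Xtr, ytr)
      pred = tree.predict(Xte)
      print(precision_score(yte, pred), recall_score(yte, pred))

    The learned tree plays the role of the refined strategy: retrievals it rejects are the irrelevant documents that the unrefined heuristics would have returned.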

  35. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    NASA Astrophysics Data System (ADS)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for the calculation of the global transformation between two images. However, its hardware implementation is challenging because of the large number of coefficients with different required precisions for fixed-point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and for refining false matches using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This decomposition considerably reduces the required number of bits for fixed-point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented in the Verilog hardware description language, and the functionality of the design was validated through several experiments. The proposed architecture was synthesized using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology, as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with a software implementation.
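
    As a software reference point for the hardware design, here is a plain NumPy sketch of the underlying computation: a projective model (homography) estimated by the direct linear transform inside a RANSAC loop that rejects false matches. This is floating point, unlike the paper's fixed-point pipeline, and the tolerance and iteration count are illustrative. Inputs src and dst are (n, 2) arrays of matched points.

      import numpy as np

      def fit_homography(src, dst):
          """Direct linear transform from four or more correspondences."""
          rows = []
          for (x, y), (u, v) in zip(src, dst):
              rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
              rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
          _, _, Vt = np.linalg.svd(np.asarray(rows, float))
          return Vt[-1].reshape(3, 3)       # null vector = model coefficients

      def ransac_homography(src, dst, iters=500, tol=2.0, seed=6):
          rng = np.random.default_rng(seed)
          best = np.zeros(len(src), bool)
          for _ in range(iters):
              idx = rng.choice(len(src), 4, replace=False)   # minimal sample
              H = fit_homography(src[idx], dst[idx])
              proj = np.c_[src, np.ones(len(src))] @ H.T
              with np.errstate(divide="ignore", invalid="ignore"):
                  proj = proj[:, :2] / proj[:, 2:3]
              inliers = np.linalg.norm(proj - dst, axis=1) < tol
              if inliers.sum() > best.sum():
                  best = inliers
          # Final refit on all inliers: the "false match refining" step.
          return fit_homography(src[best], dst[best]), best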

  36. Orthogonal polynomials for refinable linear functionals

    NASA Astrophysics Data System (ADS)

    Laurie, Dirk; de Villiers, Johan

    2006-12-01

    A refinable linear functional is one that can be expressed as a convex combination, defined by a finite number of mask coefficients, of certain stretched and shifted replicas of itself. The notion generalizes an integral weighted by a refinable function. The key to calculating a Gaussian quadrature formula for such a functional is to find the three-term recursion coefficients for the polynomials orthogonal with respect to that functional. We show how to obtain the recursion coefficients by using only the mask coefficients, and without the aid of modified moments. Our result implies the existence of the corresponding refinable functional whenever the mask coefficients are nonnegative, even when the same mask does not define a refinable function. The algorithm requires O(n^2) rational operations and, thus, can in principle deliver exact results. Numerical evidence suggests that it is also effective in floating-point arithmetic.
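
    Once the three-term recursion coefficients are known, the quadrature rule itself follows from the standard Golub-Welsch construction: the nodes are eigenvalues of the symmetric Jacobi matrix and the weights come from the first components of its eigenvectors. The sketch below runs that final step with the known Legendre coefficients rather than coefficients derived from mask data as in the paper:

      import numpy as np

      def golub_welsch(alpha, beta, mu0):
          """Monic recursion p_{k+1}(x) = (x - alpha_k) p_k(x) - beta_k p_{k-1}(x);
          beta[0] is unused for the matrix; mu0 is the functional applied to 1."""
          J = (np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1)
               + np.diag(np.sqrt(beta[1:]), -1))
          nodes, vecs = np.linalg.eigh(J)
          return nodes, mu0 * vecs[0] ** 2

      n = 5                                  # 5-point Gauss-Legendre on [-1, 1]
      k = np.arange(1, n)
      alpha = np.zeros(n)
      beta = np.concatenate([[2.0], k**2 / (4.0 * k**2 - 1.0)])
      x, w = golub_welsch(alpha, beta, mu0=2.0)
      print(x)                               # symmetric nodes in (-1, 1)
      print(w, w.sum())                      # positive weights summing to 2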

  37. Adaptive mesh refinement and front-tracking for shear bands in an antiplane shear model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garaizar, F.X.; Trangenstein, J.

    1998-09-01

    In this paper the authors describe a numerical algorithm for the study of shear-band formation and growth in a two-dimensional antiplane shear of granular materials. The algorithm combines front-tracking techniques and adaptive mesh refinement. Tracking provides a more careful evolution of the band when coupled with special techniques to advance the ends of the shear band in the presence of a loss of hyperbolicity. The adaptive mesh refinement allows the computational effort to be concentrated in important areas of the deformation, such as the shear band and the elastic relief wave. The main challenges are the problems related to shear bands that extend across several grid patches and the effects that a nonhyperbolic growth rate of the shear bands has on the refinement process. They give examples of the success of the algorithm for various levels of refinement.

  38. Adaptive Grid Refinement for Atmospheric Boundary Layer Simulations

    NASA Astrophysics Data System (ADS)

    van Hooft, Antoon; van Heerwaarden, Chiel; Popinet, Stephane; van der linden, Steven; de Roode, Stephan; van de Wiel, Bas

    2017-04-01

    We validate and benchmark an adaptive mesh refinement (AMR) algorithm for numerical simulations of the atmospheric boundary layer (ABL). The AMR technique aims to distribute the computational resources efficiently over a domain by refining and coarsening the numerical grid locally and in time. This can be beneficial for studying cases in which length scales vary significantly in time and space. We present the results for a case describing the growth and decay of a convective boundary layer. The AMR results are benchmarked against two runs using a fixed, fine-meshed grid: first with the same numerical formulation as the AMR code, and second with a code dedicated to ABL studies. Compared to the fixed and isotropic grid runs, the AMR algorithm can coarsen and refine the grid such that accurate results are obtained whilst using only a fraction of the grid cells. Performance-wise, the AMR run was cheaper than the fixed and isotropic grid run with similar numerical formulations. However, for this specific case, the dedicated code outperformed both aforementioned runs.

  39. Constrained-transport Magnetohydrodynamics with Adaptive Mesh Refinement in CHARM

    NASA Astrophysics Data System (ADS)

    Miniati, Francesco; Martin, Daniel F.

    2011-07-01

    We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.

  40. Iterative refinement of structure-based sequence alignments by Seed Extension

    PubMed Central

    Kim, Changhoon; Tai, Chin-Hsien; Lee, Byungkook

    2009-01-01

    Background Accurate sequence alignment is required in many bioinformatics applications but, when sequence similarity is low, it is difficult to obtain accurate alignments based on sequence similarity alone. The accuracy improves when the structures are available, but current structure-based sequence alignment procedures still mis-align substantial numbers of residues. In order to correct such errors, we previously explored the possibility of replacing the residue-based dynamic programming algorithm in structure alignment procedures with the Seed Extension algorithm, which does not use a gap penalty. Here, we describe a new procedure called RSE (Refinement with Seed Extension) that iteratively refines a structure-based sequence alignment. Results RSE uses SE (Seed Extension) in its core, which is an algorithm that we reported recently for obtaining a sequence alignment from two superimposed structures. The RSE procedure was evaluated by comparing the correctly aligned fractions of residues before and after the refinement of the structure-based sequence alignments produced by popular programs. CE, DaliLite, FAST, LOCK2, MATRAS, MATT, TM-align, SHEBA and VAST were included in this analysis and the NCBI's CDD root node set was used as the reference alignments. RSE improved the average accuracy of sequence alignments for all programs tested when no shift error was allowed. The amount of improvement varied depending on the program. The average improvements were small for DaliLite and MATRAS but about 5% for CE and VAST. More substantial improvements have been seen in many individual cases. The additional computation times required for the refinements were negligible compared to the times taken by the structure alignment programs. Conclusion RSE is a computationally inexpensive way of improving the accuracy of a structure-based sequence alignment. It can be used as a standalone procedure following a regular structure-based sequence alignment or to replace the traditional

  41. Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Modiano, David; Colella, Phillip

    1995-01-01

    To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for a transonic flow over a NACA0012 airfoil (AGARD-03 test case) and a reflection of a shock over a double wedge.

  42. Output-only modal dynamic identification of frames by a refined FDD algorithm at seismic input and high damping

    NASA Astrophysics Data System (ADS)

    Pioldi, Fabio; Ferrari, Rosalba; Rizzi, Egidio

    2016-02-01

    The present paper deals with the seismic modal dynamic identification of frame structures by a refined Frequency Domain Decomposition (rFDD) algorithm, autonomously formulated and implemented within MATLAB. First, the output-only identification technique is outlined analytically and then employed to characterize all modal properties. Synthetic response signals generated prior to the dynamic identification are adopted as input channels, in view of assessing a necessary condition for the procedure's efficiency. Initially, the algorithm is verified on canonical input from random excitation. Then, modal identification is attempted successfully at a given seismic input, taken as base excitation, including both strong motion data and single and multiple input ground motions. Rather than investigating the role of seismic response signals in the time domain, this paper considers the identification analysis in the frequency domain. The results turn out to be very consistent with the target values, with quite limited errors in the modal estimates, including for the damping ratios, which range from about 1% to 10%. Both seismic excitation and high damping values, which prove critical even for well-spaced modes, violate traditional FDD assumptions; this demonstrates the robustness of the developed algorithm. Through original strategies and arrangements, the paper shows that a comprehensive rFDD modal dynamic identification of frames at seismic input is feasible, even with concomitantly high damping.
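
    For readers unfamiliar with the baseline method, a bare-bones FDD is short: form the cross-spectral density matrix of the output channels, take its SVD at every frequency line, and pick modal frequencies from the peaks of the first singular value (the corresponding singular vectors approximate the mode shapes). The rFDD of the paper adds refinements beyond this sketch, whose test signal and parameters are invented:

      import numpy as np
      from scipy.signal import csd

      def fdd(Y, fs, nperseg=1024):
          """Y: (n_channels, n_samples) array of measured responses."""
          n = Y.shape[0]
          f, _ = csd(Y[0], Y[0], fs=fs, nperseg=nperseg)
          G = np.zeros((len(f), n, n), complex)   # cross-spectral density matrix
          for i in range(n):
              for j in range(n):
                  _, G[:, i, j] = csd(Y[i], Y[j], fs=fs, nperseg=nperseg)
          s1 = np.empty(len(f))
          for k in range(len(f)):
              s1[k] = np.linalg.svd(G[k], compute_uv=False)[0]
          return f, s1

      # Two noisy channels dominated by a lightly damped 5 Hz mode:
      fs = 200.0
      t = np.arange(0, 60.0, 1.0 / fs)
      rng = np.random.default_rng(7)
      mode = np.exp(-0.5 * t) * np.sin(2 * np.pi * 5.0 * t)
      Y = np.vstack([mode + 0.1 * rng.standard_normal(t.size),
                     0.6 * mode + 0.1 * rng.standard_normal(t.size)])
      f, s1 = fdd(Y, fs)
      print(f[np.argmax(s1)])                     # ~5 Hz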

  43. A Novel Admixture-Based Pharmacogenetic Approach to Refine Warfarin Dosing in Caribbean Hispanics.

    PubMed

    Duconge, Jorge; Ramos, Alga S; Claudio-Campos, Karla; Rivera-Miranda, Giselle; Bermúdez-Bosch, Luis; Renta, Jessicca Y; Cadilla, Carmen L; Cruz, Iadelisse; Feliu, Juan F; Vergara, Cunegundo; Ruaño, Gualberto

    2016-01-01

    This study is aimed at developing a novel admixture-adjusted pharmacogenomic approach to individually refine warfarin dosing in Caribbean Hispanic patients. A multiple linear regression analysis of effective warfarin doses versus relevant genotypes, admixture, clinical and demographic factors was performed in 255 patients and further validated externally in another cohort of 55 individuals. The admixture-adjusted, genotype-guided warfarin dosing refinement algorithm developed in Caribbean Hispanics showed better predictability (R2 = 0.70, MAE = 0.72mg/day) than a clinical algorithm that excluded genotypes and admixture (R2 = 0.60, MAE = 0.99mg/day), and outperformed two prior pharmacogenetic algorithms in predicting effective dose in this population. For patients at the highest risk of adverse events, 45.5% of the dose predictions using the developed pharmacogenetic model resulted in ideal dose as compared with only 29% when using the clinical non-genetic algorithm (p<0.001). The admixture-driven pharmacogenetic algorithm predicted 58% of warfarin dose variance when externally validated in 55 individuals from an independent validation cohort (MAE = 0.89 mg/day, 24% mean bias). Results supported our rationale to incorporate individual's genotypes and unique admixture metrics into pharmacogenetic refinement models in order to increase predictability when expanding them to admixed populations like Caribbean Hispanics. ClinicalTrials.gov NCT01318057.
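
    The shape of the analysis, if not the published coefficients, fits in a few lines: fit a clinical linear model and a genotype-plus-admixture linear model to dose data, then compare R2 and MAE. All data below are synthetic stand-ins generated for the sketch:

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import r2_score, mean_absolute_error

      rng = np.random.default_rng(8)
      n = 255
      age = rng.uniform(30, 85, n)
      weight = rng.uniform(50, 110, n)
      vkorc1 = rng.integers(0, 3, n)        # synthetic risk-allele counts
      cyp2c9 = rng.integers(0, 3, n)
      admix = rng.uniform(0, 1, n)          # e.g. one ancestry proportion
      dose = (6.0 - 0.03 * age + 0.02 * weight - 1.1 * vkorc1
              - 0.8 * cyp2c9 + 1.0 * admix + rng.normal(0, 0.7, n))

      models = {"clinical": np.column_stack([age, weight]),
                "pharmacogenetic": np.column_stack([age, weight, vkorc1,
                                                    cyp2c9, admix])}
      for name, X in models.items():
          pred = LinearRegression().fit(X, dose).predict(X)
          print(name, round(r2_score(dose, pred), 2),
                round(mean_absolute_error(dose, pred), 2))

    As in the study, the genotype-and-admixture model explains more of the dose variance and has the smaller mean absolute error.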

  44. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.

    PubMed

    Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.

  7. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    PubMed Central

    Xian, Xuefeng; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. Significant work exists on developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement of a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and perform inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts for crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods at a reasonable crowdsourcing cost. PMID:28588611

  8. A refined methodology for modeling volume quantification performance in CT

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Wilson, Joshua; Samei, Ehsan

    2014-03-01

    The utility of a CT lung nodule volume quantification technique depends on the precision of the quantification. To enable the evaluation of quantification precision, we previously developed a mathematical model that related precision to image resolution and noise properties in uniform backgrounds in terms of an estimability index (e'). The e' was shown to predict empirical precision across 54 imaging and reconstruction protocols, but with different correlation qualities for FBP and iterative reconstruction (IR) due to the non-linearity of IR impacted by anatomical structure. To better account for the non-linearity of IR, this study aimed to refine the noise characterization of the model in the presence of textured backgrounds. Repeated scans of an anthropomorphic lung phantom were acquired. Subtracted images were used to measure the image quantum noise, which was then used to adjust the noise component of the e' calculation measured from a uniform region. In addition to the model refinement, the validation of the model was further extended to 2 nodule sizes (5 and 10 mm) and 2 segmentation algorithms. Results showed that the magnitude of IR's quantum noise was significantly higher in structured backgrounds than in uniform backgrounds (ASiR, 30-50%; MBIR, 100-200%). With the refined model, the correlation between e' values and empirical precision no longer depended on the reconstruction algorithm. In conclusion, the model with refined noise characterization reflected the non-linearity of iterative reconstruction in structured backgrounds, and further showed successful prediction of quantification precision across a variety of nodule sizes, dose levels, slice thicknesses, reconstruction algorithms, and segmentation software.

  9. A Novel Admixture-Based Pharmacogenetic Approach to Refine Warfarin Dosing in Caribbean Hispanics

    PubMed Central

    Claudio-Campos, Karla; Rivera-Miranda, Giselle; Bermúdez-Bosch, Luis; Renta, Jessicca Y.; Cadilla, Carmen L.; Cruz, Iadelisse; Feliu, Juan F.; Vergara, Cunegundo; Ruaño, Gualberto

    2016-01-01

    Aim This study is aimed at developing a novel admixture-adjusted pharmacogenomic approach to individually refine warfarin dosing in Caribbean Hispanic patients. Patients & Methods A multiple linear regression analysis of effective warfarin doses versus relevant genotypes, admixture, clinical and demographic factors was performed in 255 patients and further validated externally in another cohort of 55 individuals. Results The admixture-adjusted, genotype-guided warfarin dosing refinement algorithm developed in Caribbean Hispanics showed better predictability (R² = 0.70, MAE = 0.72 mg/day) than a clinical algorithm that excluded genotypes and admixture (R² = 0.60, MAE = 0.99 mg/day), and outperformed two prior pharmacogenetic algorithms in predicting effective dose in this population. For patients at the highest risk of adverse events, 45.5% of the dose predictions using the developed pharmacogenetic model resulted in the ideal dose, as compared with only 29% when using the clinical non-genetic algorithm (p < 0.001). The admixture-driven pharmacogenetic algorithm predicted 58% of warfarin dose variance when externally validated in 55 individuals from an independent validation cohort (MAE = 0.89 mg/day, 24% mean bias). Conclusions Results supported our rationale for incorporating individuals' genotypes and unique admixture metrics into pharmacogenetic refinement models in order to increase predictability when extending them to admixed populations such as Caribbean Hispanics. Trial Registration ClinicalTrials.gov NCT01318057 PMID:26745506

  10. Iterative model building, structure refinement and density modification with the PHENIX AutoBuild wizard.

    PubMed

    Terwilliger, Thomas C; Grosse-Kunstleve, Ralf W; Afonine, Pavel V; Moriarty, Nigel W; Zwart, Peter H; Hung, Li Wei; Read, Randy J; Adams, Paul D

    2008-01-01

    The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution.

  11. Iterative model building, structure refinement and density modification with the PHENIX AutoBuild wizard

    PubMed Central

    Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf W.; Afonine, Pavel V.; Moriarty, Nigel W.; Zwart, Peter H.; Hung, Li-Wei; Read, Randy J.; Adams, Paul D.

    2008-01-01

    The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution. PMID:18094468

  12. Passive microwave algorithm development and evaluation

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.

    1995-01-01

    The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.

  13. Three-dimensional unstructured grid refinement and optimization using edge-swapping

    NASA Technical Reports Server (NTRS)

    Gandhi, Amar; Barth, Timothy

    1993-01-01

    This paper presents a three-dimensional (3-D) edge-swapping method based on local transformations. This method extends Lawson's edge-swapping algorithm into 3-D. The 3-D edge-swapping algorithm is employed for the purpose of refining and optimizing unstructured meshes according to arbitrary mesh-quality measures. Several criteria including Delaunay triangulations are examined. Extensions from two to three dimensions of several known properties of Delaunay triangulations are also discussed.
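
    For reference, the 2-D Delaunay criterion underlying Lawson's algorithm reduces to a sign test on a 3x3 determinant; the paper's contribution is the 3-D generalization, which this minimal sketch does not attempt to reproduce.

      import numpy as np

      def in_circumcircle(a, b, c, d):
          """True if d lies inside the circumcircle of counter-clockwise
          triangle (a, b, c); points are length-2 coordinate arrays."""
          rows = [np.asarray(p) - np.asarray(d) for p in (a, b, c)]
          m = np.array([[r[0], r[1], r[0]**2 + r[1]**2] for r in rows])
          return np.linalg.det(m) > 0.0

      def should_swap(a, b, c, d):
          # Edge (a, b), shared by triangles (a, b, c) and (b, a, d), is
          # swapped to (c, d) when the empty-circumcircle test fails.
          return in_circumcircle(a, b, c, d)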

  14. Adaptively-refined overlapping grids for the numerical solution of systems of hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.

    1995-01-01

    Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods have been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of boundary geometry is important. The complex geometry is represented by using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.

  15. Using Adaptive Mesh Refinement to Simulate Storm Surge

    NASA Astrophysics Data System (ADS)

    Mandli, K. T.; Dawson, C.

    2012-12-01

    Coastal hazards related to strong storms such as hurricanes and typhoons are among the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the subsequent rise in sea level during these storms. Unfortunately, these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in the design of infrastructure or forecasting with ensembles of probable storms. One solution to the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectation of the dynamics of the flow as well as particular regions of interest such as harbors. Simulations of many different applications have only been made possible by using AMR-type algorithms, which have allowed otherwise impractical simulations to be performed for much less computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.

  16. Dynamic grid refinement for partial differential equations on parallel computers

    NASA Technical Reports Server (NTRS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.

  17. The Refinement-Tree Partition for Parallel Solution of Partial Differential Equations.

    PubMed

    Mitchell, William F

    1998-01-01

    Dynamic load balancing is considered in the context of adaptive multilevel methods for partial differential equations on distributed memory multiprocessors. An approach that periodically repartitions the grid is taken. The important properties of a partitioning algorithm are presented and discussed in this context. A partitioning algorithm based on the refinement tree of the adaptive grid is presented and analyzed in terms of these properties. Theoretical and numerical results are given.

  18. Bayesian ensemble refinement by replica simulations and reweighting.

    PubMed

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-28

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
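
    A minimal sketch of the underlying trade-off, assuming discrete configurations with precomputed observables: choose weights that balance the fit to ensemble-averaged data against the relative entropy to uniform reference weights, with a confidence parameter theta. This is an EROS-style illustration, not the authors' replica code.

      import numpy as np
      from scipy.optimize import minimize

      def refine_weights(A, y, sigma, theta=1.0):
          """A: (n_conf, n_obs) observables per configuration; y, sigma: data."""
          n = A.shape[0]

          def objective(v):
              w = np.exp(v - v.max())
              w /= w.sum()                                # softmax -> weights
              chi2 = np.sum(((w @ A - y) / sigma) ** 2)   # fit to experiment
              s_rel = np.sum(w * np.log(n * w + 1e-300))  # KL to uniform
              return 0.5 * chi2 + theta * s_rel

          res = minimize(objective, np.zeros(n), method="L-BFGS-B")
          w = np.exp(res.x - res.x.max())
          return w / w.sum()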

  19. Bayesian ensemble refinement by replica simulations and reweighting

    NASA Astrophysics Data System (ADS)

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-01

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.

  20. The Refinement-Tree Partition for Parallel Solution of Partial Differential Equations

    PubMed Central

    Mitchell, William F.

    1998-01-01

    Dynamic load balancing is considered in the context of adaptive multilevel methods for partial differential equations on distributed memory multiprocessors. An approach that periodically repartitions the grid is taken. The important properties of a partitioning algorithm are presented and discussed in this context. A partitioning algorithm based on the refinement tree of the adaptive grid is presented and analyzed in terms of these properties. Theoretical and numerical results are given. PMID:28009355

  1. Dynamic particle refinement in SPH: application to free surface flow and non-cohesive soil simulations

    NASA Astrophysics Data System (ADS)

    Reyes López, Yaidel; Roose, Dirk; Recarey Morfa, Carlos

    2013-05-01

    In this paper, we present a dynamic refinement algorithm for the smoothed particle hydrodynamics (SPH) method. An SPH particle is refined by replacing it with smaller daughter particles, whose positions are calculated using a square pattern centered at the position of the refined particle. We determine both the optimal separation and the smoothing distance of the new particles such that the error produced by the refinement in the gradient of the kernel is small and possible numerical instabilities are reduced. We implemented the dynamic refinement procedure in two different models: one for free surface flows, and one for post-failure flow of non-cohesive soil. The results obtained for the test problems indicate that the dynamic refinement procedure provides a good trade-off between the accuracy and the cost of the simulations.
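
    The splitting step itself is simple to state; the sketch below replaces one particle by four daughters on a square pattern while conserving mass. The separation factor `eps` and smoothing-length ratio `alpha` are ad hoc placeholders for the optimal values derived in the paper.

      import numpy as np

      def refine_particle(pos, mass, h, eps=0.4, alpha=0.6):
          """pos: (2,) position; returns daughter positions, masses, h's."""
          offsets = eps * h * np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
          daughter_pos = pos + offsets          # square centred on the parent
          daughter_mass = np.full(4, mass / 4)  # total mass is conserved
          daughter_h = np.full(4, alpha * h)    # reduced smoothing distance
          return daughter_pos, daughter_mass, daughter_h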

  2. New algorithms for field-theoretic block copolymer simulations: Progress on using adaptive-mesh refinement and sparse matrix solvers in SCFT calculations

    NASA Astrophysics Data System (ADS)

    Sides, Scott; Jamroz, Ben; Crockett, Robert; Pletzer, Alexander

    2012-02-01

    Self-consistent field theory (SCFT) for dense polymer melts has been highly successful in describing complex morphologies in block copolymers. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. The modified diffusion equations that arise as a consequence of the coarse-graining procedure in the SCF theory can be efficiently solved with a pseudo-spectral (PS) method that uses fast Fourier transforms on uniform Cartesian grids. However, PS methods can be difficult to apply in many block copolymer SCFT simulations (e.g., confinement, interface adsorption) in which small spatial regions might require finer resolution than most of the simulation grid. Progress on using new solver algorithms to address these problems will be presented. The Tech-X Chompst project aims at marrying the best of adaptive mesh refinement with linear matrix solver algorithms. The Tech-X code PolySwift++ is an SCFT simulation platform that leverages ongoing development in coupling Chombo, a package for solving PDEs via block-structured AMR calculations and embedded boundaries, with PETSc, a toolkit that includes a large assortment of sparse linear solvers.
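
    The uniform-grid pseudo-spectral update that the adaptive solvers aim to generalize is compact enough to show directly. The sketch below advances the SCFT modified diffusion equation dq/ds = lap(q) - w*q by one contour step on a periodic 1-D grid using operator splitting; it is a textbook-style illustration, not the PolySwift++ implementation.

      import numpy as np

      def ps_step(q, w, ds, L):
          """One split-step update of dq/ds = lap(q) - w*q on a periodic grid.
          q, w: length-n arrays on a box of size L; ds: contour step."""
          n = q.size
          k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
          q = np.exp(-0.5 * ds * w) * q                        # half potential
          q = np.fft.ifft(np.exp(-ds * k**2) * np.fft.fft(q))  # full diffusion
          q = np.exp(-0.5 * ds * w) * q                        # half potential
          return q.real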

  3. PyCPR - a python-based implementation of the Conjugate Peak Refinement (CPR) algorithm for finding transition state structures.

    PubMed

    Gisdon, Florian J; Culka, Martin; Ullmann, G Matthias

    2016-10-01

    Conjugate peak refinement (CPR) is a powerful and robust method for finding transition states on a molecular potential energy surface. Nevertheless, to the best of our knowledge, the method has so far been implemented only in CHARMM. In this paper, we present PyCPR, a new Python-based implementation of the CPR algorithm within the pDynamo framework. We provide a detailed description of the theory underlying our implementation and discuss the different parts of the implementation. The method is applied to two different problems. First, we illustrate the method by analyzing the gauche to anti-periplanar transition of butane using a semiempirical QM method. Second, we reanalyze the mechanism of a glycyl-radical enzyme, namely 4-hydroxyphenylacetate decarboxylase (HPD), using QM/MM calculations. Finally, we suggest a strategy for using our implementation of the CPR algorithm. The integration of PyCPR into the pDynamo framework allows the combination of CPR with the large variety of methods implemented in pDynamo. PyCPR can be used in combination with quantum mechanical and molecular mechanical methods (and hybrid methods) implemented directly in pDynamo, but also in combination with external programs such as ORCA, using pDynamo as an interface. PyCPR is distributed as free, open source software and can be downloaded from http://www.bisb.uni-bayreuth.de/index.php?page=downloads . Graphical Abstract: PyCPR is a search tool for finding saddle points on the potential energy landscape of a molecular system.

  4. Some observations on mesh refinement schemes applied to shock wave phenomena

    NASA Technical Reports Server (NTRS)

    Quirk, James J.

    1995-01-01

    This workshop's double-wedge test problem is taken from one of a sequence of experiments which were performed in order to classify the various canonical interactions between a planar shock wave and a double wedge. Therefore, to build up a reasonably broad picture of the performance of our mesh refinement algorithm, we have simulated three of these experiments and not just the workshop case. Here, using the results from these simulations together with their experimental counterparts, we make some general observations concerning the development of mesh refinement schemes for shock wave phenomena.

  5. Vanishing Point Extraction and Refinement for Robust Camera Calibration

    PubMed Central

    Tsai, Fuan

    2017-01-01

    This paper describes a flexible camera calibration method using refined vanishing points without prior information. Vanishing points are estimated from human-made features like parallel lines and repeated patterns. With the vanishing points extracted from the three mutually orthogonal directions, the interior and exterior orientation parameters can be further calculated using collinearity condition equations. A vanishing point refinement process is proposed to reduce the uncertainty caused by vanishing point localization errors. The fine-tuning algorithm is based on the divergence of grouped feature points projected onto the reference plane, minimizing the standard deviation of each of the grouped collinear points with an O(1) computational complexity. This paper also presents an automated vanishing point estimation approach based on the cascade Hough transform. The experiment results indicate that the vanishing point refinement process can significantly improve camera calibration parameters and the root mean square error (RMSE) of the constructed 3D model can be reduced by about 30%. PMID:29280966

  6. Tactical Synthesis Of Efficient Global Search Algorithms

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2009-01-01

    Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus; that is, they suggest possible ways to attack the problem.

  7. Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lomov, I; Pember, R; Greenough, J

    2005-10-18

    We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses the hierarchical, structured grid approach first developed by Berger and Oliger (1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors during the separate advances. The methodology presented here is based on a single grid algorithm developed for multimaterial gas dynamics by Colella et al. (1993), refined by Greenough et al. (1995), and extended to the solution of solid mechanics problems with significant strength by Lomov and Rubin (2003). The single grid algorithm uses a second-order Godunov scheme with an approximate single fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified because highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion is solely concerned with hydrodynamics modeling. Results from a number of simulations for flows with and without strength are presented.

  8. Interactive visual exploration and refinement of cluster assignments.

    PubMed

    Kern, Michael; Lex, Alexander; Gehlenborg, Nils; Johnson, Chris R

    2017-09-12

    With ever-increasing amounts of data produced in biology research, scientists are in need of efficient data analysis methods. Cluster analysis, combined with visualization of the results, is one such method that can be used to make sense of large data volumes. At the same time, cluster analysis is known to be imperfect and depends on the choice of algorithms, parameters, and distance measures. Most clustering algorithms don't properly account for ambiguity in the source data, as records are often assigned to discrete clusters, even if an assignment is unclear. While there are metrics and visualization techniques that allow analysts to compare clusterings or to judge cluster quality, there is no comprehensive method that allows analysts to evaluate, compare, and refine cluster assignments based on the source data, derived scores, and contextual data. In this paper, we introduce a method that explicitly visualizes the quality of cluster assignments, allows comparisons of clustering results and enables analysts to manually curate and refine cluster assignments. Our methods are applicable to matrix data clustered with partitional, hierarchical, and fuzzy clustering algorithms. Furthermore, we enable analysts to explore clustering results in context of other data, for example, to observe whether a clustering of genomic data results in a meaningful differentiation in phenotypes. Our methods are integrated into Caleydo StratomeX, a popular, web-based, disease subtype analysis tool. We show in a usage scenario that our approach can reveal ambiguities in cluster assignments and produce improved clusterings that better differentiate genotypes and phenotypes.

  9. Mesh Generation via Local Bisection Refinement of Triangulated Grids

    DTIC Science & Technology

    2015-06-01

    DSTO-TR-3095. This report provides a comprehensive implementation of an unstructured mesh generation method based on local bisection refinement of triangulated grids. The behaviour of the refinement routines is critically linked to Maubach's method and the data structures N and T. The top-level mesh refinement algorithm is also presented.

  10. Efficient Grammar Induction Algorithm with Parse Forests from Real Corpora

    NASA Astrophysics Data System (ADS)

    Kurihara, Kenichi; Kameya, Yoshitaka; Sato, Taisuke

    The task of inducing grammar structures has received a great deal of attention. Researchers have studied grammar induction for different reasons: to use it as the first stage in building large treebanks, or to construct better language models. However, grammar induction has inherent computational complexity. To overcome it, some grammar induction algorithms add new production rules incrementally, refining the grammar while keeping their computational complexity low. In this paper, we propose a new efficient grammar induction algorithm. Although our algorithm is similar to algorithms that learn a grammar incrementally, it uses the graphical EM algorithm instead of the Inside-Outside algorithm. We report results of learning experiments in terms of learning speed. The results show that our algorithm learns a grammar in constant time regardless of the size of the grammar. Since our algorithm decreases syntactic ambiguities in each step, it reduces the time required for learning. This constant-time learning considerably affects learning time for larger grammars. We also report results of an evaluation of criteria for choosing nonterminals. Our algorithm refines a grammar based on a nonterminal in each step; since there can be several criteria for deciding which nonterminal is best, we evaluate them by learning experiments.

  11. Experimental determination of spin-dependent electron density by joint refinement of X-ray and polarized neutron diffraction data.

    PubMed

    Deutsch, Maxime; Claiser, Nicolas; Pillet, Sébastien; Chumakov, Yurii; Becker, Pierre; Gillet, Jean Michel; Gillon, Béatrice; Lecomte, Claude; Souhassou, Mohamed

    2012-11-01

    New crystallographic tools were developed to access a more precise description of the spin-dependent electron density of magnetic crystals. The method combines experimental information coming from high-resolution X-ray diffraction (XRD) and polarized neutron diffraction (PND) in a unified model. A new algorithm that allows for a simultaneous refinement of the charge- and spin-density parameters against XRD and PND data is described. The resulting software MOLLYNX is based on the well known Hansen-Coppens multipolar model, and makes it possible to differentiate the electron spins. This algorithm is validated and demonstrated with a molecular crystal formed by a bimetallic chain, MnCu(pba)(H(2)O)(3)·2H(2)O, for which XRD and PND data are available. The joint refinement provides a more detailed description of the spin density than the refinement from PND data alone.

  12. Adaptive mesh refinement and load balancing based on multi-level block-structured Cartesian mesh

    NASA Astrophysics Data System (ADS)

    Misaka, Takashi; Sasaki, Daisuke; Obayashi, Shigeru

    2017-11-01

    We developed a framework for a distributed-memory parallel computer that enables dynamic data management for adaptive mesh refinement and load balancing. We employed the simple data structure of the building cube method (BCM), in which a computational domain is divided into multi-level cubic domains and each cube has the same number of grid points inside, realising a multi-level block-structured Cartesian mesh. Solution-adaptive mesh refinement, which works efficiently with the help of the dynamic load balancing, was implemented by dividing cubes based on mesh refinement criteria. The framework was investigated with the Laplace equation in terms of adaptive mesh refinement, load balancing and parallel efficiency. It was then applied to the incompressible Navier-Stokes equations to simulate a turbulent flow around a sphere. We considered wall-adaptive cube refinement, where the non-dimensional wall distance y+ near the sphere is used as a criterion for mesh refinement. The result showed that the load imbalance due to y+ adaptive mesh refinement was corrected by the present approach. To utilise the BCM framework more effectively, we also tested cube-wise algorithm switching, where explicit and implicit time integration schemes are switched depending on the local Courant-Friedrichs-Lewy (CFL) condition in each cube.

  13. Object-based change detection method using refined Markov random field

    NASA Astrophysics Data System (ADS)

    Peng, Daifeng; Zhang, Yongjun

    2017-01-01

    In order to fully consider the local spatial constraints between neighboring objects in object-based change detection (OBCD), an OBCD approach is presented that introduces a refined Markov random field (MRF). First, two periods of images are stacked and segmented to produce image objects. Second, object spectral and textural histogram features are extracted, and the G-statistic is implemented to measure the distance among different histogram distributions. Meanwhile, object heterogeneity is calculated by combining spectral and textural histogram distances using adaptive weights. Third, an expectation-maximization algorithm is applied to determine the change category of each object, and the initial change map is then generated. Finally, a refined change map is produced by employing the proposed refined object-based MRF method. Three experiments were conducted and compared with some state-of-the-art unsupervised OBCD methods to evaluate the effectiveness of the proposed method. Experimental results demonstrate that the proposed method obtains the highest accuracy among the methods used in this paper, which confirms its validity and effectiveness in OBCD.

  14. Tsunami modelling with adaptively refined finite volume methods

    USGS Publications Warehouse

    LeVeque, R.J.; George, D.L.; Berger, M.J.

    2011-01-01

    Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.

  15. Evolutionary Optimization of a Geometrically Refined Truss

    NASA Technical Reports Server (NTRS)

    Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

    Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: genetic algorithms and differential evolution to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.

  16. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE PAGES

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  17. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gunney, Brian T.N.; Anderson, Robert W.

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  18. Structure Refinement of Protein Low Resolution Models Using the GNEIMO Constrained Dynamics Method

    PubMed Central

    Park, In-Hee; Gangupomu, Vamshi; Wagner, Jeffrey; Jain, Abhinandan; Vaidehi, Nagarajan

    2012-01-01

    The challenge in protein structure prediction using homology modeling is the lack of reliable methods to refine the low-resolution homology models. Unconstrained all-atom molecular dynamics (MD) does not serve well for structure refinement due to its limited conformational search. We have developed and tested a constrained MD method, based on the Generalized Newton-Euler Inverse Mass Operator (GNEIMO) algorithm, for protein structure refinement. In this method, the high-frequency degrees of freedom are replaced with hard holonomic constraints and a protein is modeled as a collection of rigid-body clusters connected by flexible torsional hinges. This allows larger integration time steps and enhances the conformational search space. In this work, we have demonstrated the use of a constraint-free GNEIMO method for protein structure refinement that starts from low-resolution decoy sets derived from homology methods. For the eight proteins tested, with three decoys each, we observed an improvement of ~2 Å in the RMSD to the known experimental structures of these proteins. The GNEIMO method also showed enrichment in the population density of native-like conformations. In addition, we demonstrated structural refinement using a "Freeze and Thaw" clustering scheme with the GNEIMO framework as a viable tool for enhancing localized conformational search. We have derived a robust protocol based on the GNEIMO replica exchange method for protein structure refinement that can be readily extended to other proteins and is possibly applicable to high-throughput protein structure refinement. PMID:22260550

  19. Agent based simulations in disease modeling Comment on "Towards a unified approach in the modeling of fibrosis: A review with research perspectives" by Martine Ben Amar and Carlo Bianca

    NASA Astrophysics Data System (ADS)

    Pappalardo, Francesco; Pennisi, Marzio

    2016-07-01

    Fibrosis represents a process where an excessive tissue formation in an organ follows the failure of a physiological reparative or reactive process. Mathematical and computational techniques may be used to improve the understanding of the mechanisms that lead to the disease and to test potential new treatments that may directly or indirectly have positive effects against fibrosis [1]. In this scenario, Ben Amar and Bianca [2] give us a broad picture of the existing mathematical and computational tools that have been used to model fibrotic processes at the molecular, cellular, and tissue levels. Among such techniques, agent based models (ABM) can give a valuable contribution in the understanding and better management of fibrotic diseases.

  20. Refinement Types ML

    DTIC Science & Technology

    1994-03-16

    Excerpts from the report's contents and body: Section 2.10 covers decidability; Chapter 3, "Declaring Refinements of Recursive Data Types", notes that when polymorphic constructors are introduced in Chapter 5, tuples become a polymorphic data type very similar to other polymorphic data types, and its introduction builds on the refinement type inference defined in the previous chapter.

  1. A Domain-Decomposed Multilevel Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2000-01-01

    Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrates that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
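
    One common realization of a space-filling-curve decomposition is Morton (Z-order) indexing: interleave the bits of each cell's integer coordinates, sort along the curve, and cut it into contiguous chunks. The sketch below illustrates that generic idea under the assumption of one work unit per cell; it is not the paper's partitioner.

      import numpy as np

      def morton_key(ix, iy, iz, bits=10):
          """Interleave coordinate bits into a Z-order (Morton) key."""
          key = 0
          for b in range(bits):
              key |= ((ix >> b) & 1) << (3 * b)
              key |= ((iy >> b) & 1) << (3 * b + 1)
              key |= ((iz >> b) & 1) << (3 * b + 2)
          return key

      def partition(cells, n_procs):
          """cells: (n, 3) integer cell coordinates -> per-processor indices."""
          order = np.argsort([morton_key(*c) for c in cells])
          # Contiguous curve segments keep spatially nearby cells together.
          return np.array_split(order, n_procs)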

  2. Variational Iterative Refinement Source Term Estimation Algorithm Assessment for Rural and Urban Environments

    NASA Astrophysics Data System (ADS)

    Delle Monache, L.; Rodriguez, L. M.; Meech, S.; Hahn, D.; Betancourt, T.; Steinhoff, D.

    2016-12-01

    It is necessary to accurately estimate the initial source characteristics in the event of an accidental or intentional release of a Chemical, Biological, Radiological, or Nuclear (CBRN) agent into the atmosphere. Accurate estimation of the source characteristics is important because these characteristics are often unknown, and Atmospheric Transport and Dispersion (AT&D) models rely heavily on the estimates to create hazard assessments. To correctly assess the source characteristics in an operational environment where time is critical, the National Center for Atmospheric Research (NCAR) has developed a Source Term Estimation (STE) method known as the Variational Iterative Refinement STE Algorithm (VIRSA). VIRSA consists of a combination of modeling systems. These systems include an AT&D model, its corresponding STE model, a Hybrid Lagrangian-Eulerian Plume Model (H-LEPM), and its mathematical adjoint model. In an operational scenario where we have information regarding the infrastructure of a city, the AT&D model used is the Urban Dispersion Model (UDM), and when using this model in VIRSA we refer to the system as uVIRSA. In all other scenarios, where the city infrastructure information is not readily available, the AT&D model used is the Second-order Closure Integrated PUFF model (SCIPUFF) and the system is referred to as sVIRSA. VIRSA was originally developed using SCIPUFF 2.4 for the Defense Threat Reduction Agency and integrated into the Hazard Prediction and Assessment Capability and the Joint Program for Information Systems Joint Effects Model. The results discussed here are the verification and validation of the upgraded system with SCIPUFF 3.0 and the newly implemented UDM capability. To verify uVIRSA and sVIRSA, synthetic concentration observation scenarios were created in urban and rural environments, and the results of this verification are shown. Finally, we validate the STE performance of uVIRSA using scenarios from the Joint Urban 2003 (JU03) field experiment.

  3. The ranking algorithm of the Coach browser for the UMLS metathesaurus.

    PubMed Central

    Harbourt, A. M.; Syed, E. J.; Hole, W. T.; Kingsland, L. C.

    1993-01-01

    This paper presents the novel ranking algorithm of the Coach Metathesaurus browser which is a major module of the Coach expert search refinement program. An example shows how the ranking algorithm can assist in creating a list of candidate terms useful in augmenting a suboptimal Grateful Med search of MEDLINE. PMID:8130570

  4. 40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Provisions § 80.1340 How does a refiner obtain approval as a small refiner? (a) Applications for small refiner status must be submitted to EPA by December 31, 2007. (b) For U.S. Postal delivery, applications... small refiner status application must contain the following information for the company seeking small...

  5. An efficient Adaptive Mesh Refinement (AMR) algorithm for the Discontinuous Galerkin method: Applications for the computation of compressible two-phase flows

    NASA Astrophysics Data System (ADS)

    Papoutsakis, Andreas; Sazhin, Sergei S.; Begg, Steven; Danaila, Ionut; Luddens, Francky

    2018-06-01

    We present an Adaptive Mesh Refinement (AMR) method suitable for hybrid unstructured meshes that allows for local refinement and de-refinement of the computational grid during the evolution of the flow. The adaptive implementation of the Discontinuous Galerkin (DG) method introduced in this work (ForestDG) is based on a topological representation of the computational mesh by a hierarchical structure consisting of oct-, quad- and binary trees. Adaptive mesh refinement (h-refinement) enables us to increase the spatial resolution of the computational mesh in the vicinity of points of interest such as interfaces, geometrical features, or flow discontinuities. The local increase in the expansion order (p-refinement) in areas of high strain rate or vorticity magnitude results in an increase of the order of accuracy in the region of shear layers and vortices. A graph of unitarian trees, representing hexahedral, prismatic and tetrahedral elements, is used for the representation of the initial domain. The ancestral elements of the mesh can be split into self-similar elements, allowing each tree to grow branches to an arbitrary level of refinement. The connectivity of the elements, their genealogy and their partitioning are described by linked lists of pointers. An explicit calculation of these relations, presented in this paper, facilitates on-the-fly splitting, merging and repartitioning of the computational mesh by rearranging the links of each node of the tree with minimal computational overhead. The modal basis used in the DG implementation facilitates the mapping of the fluxes across non-conformal faces. The AMR methodology is presented and assessed using a series of inviscid and viscous test cases. Also, the AMR methodology is used for the modelling of the interaction between droplets and the carrier phase in a two-phase flow. This approach is applied to the analysis of a spray injected into a chamber of quiescent air, using the Eulerian

  6. Fast-kick-off monotonically convergent algorithm for searching optimal control fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Sheng-Lun; Ho, Tak-San; Rabitz, Herschel

    2011-09-15

    This Rapid Communication presents a fast-kick-off search algorithm for quickly finding optimal control fields in state-to-state transition probability control problems, especially those with poorly chosen initial control fields. The algorithm is based on a recently formulated monotonically convergent scheme [T.-S. Ho and H. Rabitz, Phys. Rev. E 82, 026703 (2010)]. Specifically, the local temporal refinement of the control field at each iteration is weighted by a fractional inverse power of the instantaneous overlap of the backward-propagating wave function, associated with the target state and the control field from the previous iteration, and the forward-propagating wave function, associated with the initial state and the concurrently refining control field. Extensive numerical simulations for controls of vibrational transitions and ultrafast electron tunneling show that the new algorithm not only greatly improves the search efficiency but also is able to attain good monotonic convergence quality when further frequency constraints are required. The algorithm is particularly effective when the corresponding control dynamics involves a large number of energy levels or ultrashort control pulses.

  7. Computations of Unsteady Viscous Compressible Flows Using Adaptive Mesh Refinement in Curvilinear Body-fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Modiano, David; Colella, Phillip

    1994-01-01

    A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.

  8. A Message Passing Approach to Side Chain Positioning with Applications in Protein Docking Refinement *

    PubMed Central

    Moghadasi, Mohammad; Kozakov, Dima; Mamonov, Artem B.; Vakili, Pirooz; Vajda, Sandor; Paschalidis, Ioannis Ch.

    2013-01-01

    We introduce a message-passing algorithm to solve the Side Chain Positioning (SCP) problem. SCP is a crucial component of protein docking refinement, which is a key step of an important class of problems in computational structural biology called protein docking. We model SCP as a combinatorial optimization problem and formulate it as a Maximum Weighted Independent Set (MWIS) problem. We then employ a modified and convergent belief-propagation algorithm to solve a relaxation of MWIS and develop randomized estimation heuristics that use the relaxed solution to obtain an effective MWIS feasible solution. Using a benchmark set of protein complexes we demonstrate that our approach leads to more accurate docking predictions compared to a baseline algorithm that does not solve the SCP. PMID:23515575
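
    To make the MWIS formulation concrete, here is a simple greedy baseline of the kind such message-passing methods are compared against: repeatedly take the available node with the best weight-to-degree ratio and exclude its neighbours. It is shown only as a reference point, not the authors' belief-propagation algorithm.

      def greedy_mwis(weights, adjacency):
          """weights: dict node -> weight; adjacency: dict node -> set of
          conflicting nodes. Returns an independent set of nodes."""
          chosen, excluded = set(), set()
          ranked = sorted(weights,
                          key=lambda v: weights[v] / (len(adjacency[v]) + 1),
                          reverse=True)
          for node in ranked:           # heavy, weakly connected nodes first
              if node not in excluded:
                  chosen.add(node)
                  excluded |= adjacency[node]
          return chosen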

  9. Computer simulation of refining process of a high consistency disc refiner based on CFD

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Yang, Jianwei; Wang, Jiahui

    2017-08-01

    In order to reduce refining energy consumption, the ANSYS CFX was used to simulate the refining process of a high consistency disc refiner. In the first it was assumed to be uniform Newton fluid of turbulent state in disc refiner with the k-ɛ flow model; then meshed grids and set the boundary conditions in 3-D model of the disc refiner; and then was simulated and analyzed; finally, the viscosity of the pulp were measured. The results show that the CFD method can be used to analyze the pressure and torque on the disc plate, so as to calculate the refining power, and streamlines and velocity vectors can also be observed. CFD simulation can optimize parameters of the bar and groove, which is of great significance to reduce the experimental cost and cycle.

  10. Structure and atomic correlations in molecular systems probed by XAS reverse Monte Carlo refinement

    NASA Astrophysics Data System (ADS)

    Di Cicco, Andrea; Iesari, Fabio; Trapananti, Angela; D'Angelo, Paola; Filipponi, Adriano

    2018-03-01

    The Reverse Monte Carlo (RMC) algorithm for structure refinement has been applied to x-ray absorption spectroscopy (XAS) multiple-edge data sets for six gas phase molecular systems (SnI2, CdI2, BBr3, GaI3, GeBr4, GeI4). Sets of thousands of molecular replicas were involved in the refinement process, driven by the XAS data and constrained by available electron diffraction results. The equilibrated configurations were analysed to determine the average three-dimensional structure and obtain reliable bond and bond-angle distributions. Detectable deviations from Gaussian models were found in some cases. This work shows that an RMC refinement of XAS data is able to provide geometrical models for molecular structures compatible with present experimental evidence. The validation of this approach on simple molecular systems is particularly important in view of its possible simple extension to more complex and extended systems, including metal-organic complexes, biomolecules, and nanocrystalline systems.
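
    The RMC engine behind such refinements is a Metropolis-style accept/reject loop over random atomic displacements, driven by the misfit to the data rather than by an energy. A bare-bones sketch follows, with the XAS signal comparison abstracted into a user-supplied `chi2` callback (an assumption for illustration).

      import numpy as np

      def rmc_refine(coords, chi2, n_steps=10000, step=0.05, temp=1.0, rng=None):
          """coords: (n_atoms, 3) array; chi2: callable giving data misfit."""
          rng = rng or np.random.default_rng()
          cost = chi2(coords)
          for _ in range(n_steps):
              i = rng.integers(len(coords))
              trial = coords.copy()
              trial[i] += rng.normal(scale=step, size=3)  # displace one atom
              new_cost = chi2(trial)
              # Accept improvements always, worse moves with Boltzmann-like odds.
              if new_cost < cost or rng.random() < np.exp((cost - new_cost) / temp):
                  coords, cost = trial, new_cost
          return coords, cost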

  11. Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
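
    A minimal sketch of the recursive Cartesian subdivision described above, here as a 2D quadtree refined toward a circular body; the refinement criterion and maximum level are illustrative choices.

        class Cell:
            """A Cartesian cell in a quadtree; children are created on refinement."""
            def __init__(self, x0, y0, size, level=0):
                self.x0, self.y0, self.size, self.level = x0, y0, size, level
                self.children = []

            def refine(self, needs_refinement, max_level=6):
                """Recursively split the cell while the criterion flags it."""
                if self.level < max_level and needs_refinement(self):
                    h = self.size / 2.0
                    self.children = [Cell(self.x0 + i * h, self.y0 + j * h, h, self.level + 1)
                                     for i in (0, 1) for j in (0, 1)]
                    for child in self.children:
                        child.refine(needs_refinement, max_level)

            def leaves(self):
                if not self.children:
                    yield self
                else:
                    for c in self.children:
                        yield from c.leaves()

        # refine toward a circular 'body' of radius 0.3 centred in the unit square
        near_body = lambda c: abs(((c.x0 + c.size/2 - 0.5)**2 +
                                   (c.y0 + c.size/2 - 0.5)**2)**0.5 - 0.3) < c.size
        root = Cell(0.0, 0.0, 1.0)
        root.refine(near_body)
        print(sum(1 for _ in root.leaves()), "leaf cells")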

  12. A density based algorithm to detect cavities and holes from planar points

    NASA Astrophysics Data System (ADS)

    Zhu, Jie; Sun, Yizhong; Pang, Yueyong

    2017-12-01

    Delaunay-based shape reconstruction algorithms are widely used in approximating the shape from planar points. However, these algorithms cannot ensure the optimality of varied reconstructed cavity boundaries and hole boundaries. This inadequate reconstruction can be primarily attributed to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary based on an iterative removal of Delaunay triangles. Our algorithm is divided into two steps, namely rough and refined shape reconstruction. The rough shape reconstruction performed by the algorithm is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction mainly aims to detect holes and pure cavities. A cavity or hole is conceptualized as a low-density region surrounded by a high-density region. With this structure, cavities and holes are characterized by a mathematical formulation, termed the compactness of a point, formed from the length variation of the edges incident to the point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a sharp gradient change in the compactness over the point set. An experimental comparison with other shape reconstruction approaches shows that the proposed algorithm is able to accurately yield the boundaries of cavities and holes for varying point-set densities and distributions.
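
    The sketch below computes a rough proxy for this per-point compactness from the variation of Delaunay edge lengths incident to each point; the paper's exact formulation is assumed, and the coefficient of variation used here is an illustrative choice.

        import numpy as np
        from scipy.spatial import Delaunay

        def point_compactness(points):
            """Variation of Delaunay edge lengths incident to each point: a rough
            proxy for the paper's compactness measure (exact formula assumed)."""
            tri = Delaunay(points)
            edges = set()
            for s in tri.simplices:
                for a, b in ((0, 1), (1, 2), (0, 2)):
                    edges.add((min(s[a], s[b]), max(s[a], s[b])))
            incident = {i: [] for i in range(len(points))}
            for i, j in edges:
                d = np.linalg.norm(points[i] - points[j])
                incident[i].append(d)
                incident[j].append(d)
            return np.array([np.std(incident[i]) / np.mean(incident[i])
                             for i in range(len(points))])

        pts = np.random.default_rng(1).random((300, 2))
        pts = pts[np.linalg.norm(pts - 0.5, axis=1) > 0.2]   # punch a circular hole
        c = point_compactness(pts)
        print("high-variation (likely boundary) points:", int(np.sum(c > c.mean() + c.std())))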

  13. Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process

    NASA Astrophysics Data System (ADS)

    Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh

    2018-06-01

    Layered manufacturing machines use the stereolithography (STL) file format to build parts. When a curved surface is converted from a computer-aided design (CAD) file to STL, the result suffers geometric distortion and chordal error. Parts manufactured from such a file might not satisfy geometric dimensioning and tolerancing requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The subdivision algorithms considered in this work are the modified butterfly subdivision technique, Loop subdivision and general triangular midpoint subdivision. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is most suitable for the geometry under consideration. Only the wheel cap part is then manufactured on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.
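
    The simplest of the three schemes, 1-to-4 midpoint subdivision, is sketched below; unlike Loop or modified butterfly subdivision, it does not move existing surface points but only splits each triangle at its edge midpoints.

        import numpy as np

        def midpoint_subdivide(vertices, faces):
            """One pass of 1-to-4 midpoint subdivision on a triangle mesh."""
            vertices = [np.asarray(v, float) for v in vertices]
            midpoint_cache, new_faces = {}, []

            def midpoint(i, j):
                key = (min(i, j), max(i, j))
                if key not in midpoint_cache:
                    vertices.append((vertices[i] + vertices[j]) / 2.0)
                    midpoint_cache[key] = len(vertices) - 1
                return midpoint_cache[key]

            for a, b, c in faces:
                ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
                new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
            return vertices, new_faces

        v = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
        f = [(0, 1, 2)]
        v2, f2 = midpoint_subdivide(v, f)
        print(len(v2), "vertices,", len(f2), "faces")   # 6 vertices, 4 faces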

  14. Road extraction from aerial images using a region competition algorithm.

    PubMed

    Amo, Miriam; Martínez, Fernando; Torre, Margarita

    2006-05-01

    In this paper, we present a user-guided method based on the region competition algorithm to extract roads, and we also provide some clues concerning the placement of the points required by the algorithm. The initial points are analyzed in order to find out whether it is necessary to add more initial points, a process based on image information. Not only is the algorithm able to obtain the road centerline, but it also recovers the road sides. An initial simple model is deformed by using region-growing techniques to obtain a rough road approximation. This model is then refined by region competition. The result of this approach is that it delivers the simplest output vector information, fully recovering the road details as they appear on the image, without performing any kind of symbolization. In essence, we refine a general road model by using a reliable method to detect transitions between regions. This method is proposed in order to obtain information for feeding a large-scale Geographic Information System.

  15. i3Drefine software for protein 3D structure refinement and its assessment in CASP10.

    PubMed

    Bhattacharya, Debswapna; Cheng, Jianlin

    2013-01-01

    Protein structure refinement refers to the process of improving the quality of protein structures generated during structure modeling to bring them closer to their native states. Structure refinement has been drawing increasing attention in the community-wide Critical Assessment of techniques for Protein Structure prediction (CASP) experiments since its addition in the 8th CASP experiment. During the 9th and the recently concluded 10th CASP experiments, a consistent growth in the number of refinement targets and participating groups has been witnessed. Yet protein structure refinement remains a largely unsolved problem, with the majority of participating groups in the CASP refinement category failing to consistently improve the quality of the structures issued for refinement. To address this need, we developed a completely automated and computationally efficient protein 3D structure refinement method, i3Drefine, based on an iterative and highly convergent energy minimization algorithm with a powerful all-atom composite physics- and knowledge-based force field and a hydrogen bonding (HB) network optimization technique. In the recent community-wide blind experiment, CASP10, i3Drefine (as 'MULTICOM-CONSTRUCT') was ranked as the best method in the server section as per the official assessment of the CASP10 experiment. Here we provide the community with free access to the i3Drefine software, systematically analyse the performance of i3Drefine in strict blind mode on the refinement targets issued in the CASP10 refinement category, and compare it with other state-of-the-art refinement methods participating in CASP10. Our analysis demonstrates that i3Drefine was the only fully automated server participating in CASP10 exhibiting consistent improvement over the initial structures in both global and local structural quality metrics. An executable version of i3Drefine is freely available at http://protein.rnet.missouri.edu/i3drefine/.

  16. Whole grains, refined grains and fortified refined grains: What's the difference?

    PubMed

    Slavin, J L

    2000-09-01

    Dietary guidance universally supports the importance of grains in the diet. The United States Department of Agriculture pyramid suggests that Americans consume from six to 11 servings of grains per day, with three of these servings being whole grain products. Whole grain contains the bran, germ and endosperm, while refined grain includes only endosperm. Both refined and whole grains can be fortified with nutrients to improve the nutrient profile of the product. Most grains consumed in developed countries are subjected to some type of processing to optimize flavor and provide shelf-stable products. Grains provide important sources of dietary fibre, plant protein, phytochemicals and needed vitamins and minerals. Additionally, in the United States grains have been chosen as the best vehicle to fortify our diets with vitamins and minerals that are typically in short supply. These nutrients include iron, thiamin, niacin, riboflavin and, more recently, folic acid and calcium. Grains contain antioxidants, including vitamins, trace minerals and non-nutrients such as phenolic acids, lignans and phytic acid, which are thought to protect against cardiovascular disease and cancer. Additionally, grains are our most dependable source of phytoestrogens, plant compounds known to protect against cancers such as breast and prostate. Grains are rich sources of oligosaccharides and resistant starch, carbohydrates that function like dietary fibre and enhance the intestinal environment and help improve immune function. Epidemiological studies find that whole grains are more protective than refined grains in the prevention of chronic disease, although instruments to define intake of refined, whole and fortified grains are limited. Nutritional guidance should support whole grain products over refined, with fortification of nutrients improving the nutrient profile of both refined and whole grain products.

  17. Block structured adaptive mesh and time refinement for hybrid, hyperbolic + N-body systems

    NASA Astrophysics Data System (ADS)

    Miniati, Francesco; Colella, Phillip

    2007-11-01

    We present a new numerical algorithm for the solution of coupled collisional and collisionless systems, based on the block structured adaptive mesh and time refinement strategy (AMR). We describe the issues associated with the discretization of the system equations and the synchronization of the numerical solution on the hierarchy of grid levels. We implement a code based on a higher order, conservative and directionally unsplit Godunov method for hydrodynamics; a symmetric, time-centered modified symplectic scheme for the collisionless component; and a multilevel, multigrid relaxation algorithm for the elliptic equation coupling the two components. Numerical results that illustrate the accuracy of the code and the relative merit of the various implemented schemes are also presented.
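
    For orientation, a generic time-centred symplectic step of the kick-drift-kick (leapfrog) family is sketched below; the paper's scheme is a modified symplectic integrator coupled to the AMR hierarchy, which this single-level toy does not capture.

        import numpy as np

        def kick_drift_kick(pos, vel, accel, dt):
            """One time-centred symplectic (leapfrog) step: half-kick, drift, half-kick."""
            vel = vel + 0.5 * dt * accel(pos)
            pos = pos + dt * vel
            vel = vel + 0.5 * dt * accel(pos)
            return pos, vel

        # toy check: a circular orbit around a unit point mass at the origin
        accel = lambda x: -x / np.linalg.norm(x) ** 3
        x, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
        for _ in range(1000):
            x, v = kick_drift_kick(x, v, accel, 0.01)
        print(np.linalg.norm(x))   # stays near 1.0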

  18. A User's Guide to AMR1D: An Instructional Adaptive Mesh Refinement Code for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    deFainchtein, Rosalinda

    1996-01-01

    This report documents the code AMR1D, which is currently posted on the World Wide Web (http://sdcd.gsfc.nasa.gov/ESS/exchange/contrib/de-fainchtein/adaptive_mesh_refinement.html). AMR1D is a one-dimensional finite element fluid-dynamics solver, capable of adaptive mesh refinement (AMR). It was written as an instructional tool for AMR on unstructured mesh codes. It is meant to illustrate the minimum requirements for AMR in more than one dimension. For that purpose, it uses the same type of data structure that would be necessary in a two-dimensional AMR code (loosely following the algorithm described by Lohner).

  19. Comparison of mathematical models of fibrosis. Comment on "Towards a unified approach in the modeling of fibrosis: A review with research perspectives" by M. Ben Amar and C. Bianca

    NASA Astrophysics Data System (ADS)

    Kachapova, Farida

    2016-07-01

    Mathematical and computational models in biology and medicine help to improve diagnostics and medical treatments. Modeling of pathological fibrosis is reviewed by M. Ben Amar and C. Bianca in [4]. Pathological fibrosis is the process in which excessive fibrous tissue is deposited in an organ or tissue during wound healing and can obliterate its normal function. In [4] the phenomena of fibrosis are briefly explained, including the causes, mechanism and management; research models of pathological fibrosis are described, compared and critically analyzed. Different models are suitable at different levels: molecular, cellular and tissue. The main goal of mathematical modeling of fibrosis is to predict the long-term behavior of the system depending on bifurcation parameters; there are two main trends: inhibition of fibrosis due to an active immune system, and swelling of fibrosis because of a weak immune system.

  1. GalaxyRefineComplex: Refinement of protein-protein complex model structures driven by interface repacking.

    PubMed

    Heo, Lim; Lee, Hasup; Seok, Chaok

    2016-08-18

    Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex.

  2. Low-thrust orbit transfer optimization with refined Q-law and multi-objective genetic algorithm

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Petropoulos, Anastassios E.; von Allmen, Paul

    2005-01-01

    An optimization method for low-thrust orbit transfers around a central body is developed using the Q-law and a multi-objective genetic algorithm. In the hybrid method, the Q-law generates candidate orbit transfers, and the multi-objective genetic algorithm optimizes the Q-law control parameters in order to simultaneously minimize both the consumed propellant mass and the flight time of the orbit transfer. This paper addresses the problem of finding optimal orbit transfers for low-thrust spacecraft.
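
    The core of the multi-objective selection is non-dominance between the two objectives. The sketch below filters a hypothetical set of (propellant mass, flight time) candidates down to its Pareto front, the trade-off set the genetic algorithm evolves toward; the numbers are invented for illustration.

        def pareto_front(candidates):
            """Keep the non-dominated (propellant mass, flight time) pairs;
            the multi-objective GA evolves toward this trade-off front."""
            return sorted(c for c in candidates
                          if not any(o != c and o[0] <= c[0] and o[1] <= c[1]
                                     for o in candidates))

        # hypothetical candidate transfers: (propellant mass [kg], flight time [days])
        transfers = [(410.0, 92.0), (395.0, 130.0), (430.0, 88.0),
                     (405.0, 95.0), (395.0, 140.0)]
        print(pareto_front(transfers))   # dominated transfers are discarded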

  3. Solution of free-boundary problems using finite-element/Newton methods and locally refined grids - Application to analysis of solidification microstructure

    NASA Technical Reports Server (NTRS)

    Tsiveriotis, K.; Brown, R. A.

    1993-01-01

    A new method is presented for the solution of free-boundary problems using Lagrangian finite element approximations defined on locally refined grids. The formulation allows for direct transition from coarse to fine grids without introducing non-conforming basis functions. The calculation of elemental stiffness matrices and residual vectors is unaffected by changes in the refinement level, which are accounted for in the loading of elemental data into the global stiffness matrix and residual vector. This technique for local mesh refinement is combined with recently developed mapping methods and Newton's method to form an efficient algorithm for the solution of free-boundary problems, as demonstrated here by sample calculations of cellular interfacial microstructure during directional solidification of a binary alloy.

  4. An efficient algorithm for building locally refined hp - adaptive H-PCFE: Application to uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2017-12-01

    Hybrid polynomial correlated function expansion (H-PCFE) is a novel metamodel formulated by coupling polynomial correlated function expansion (PCFE) and Kriging. Unlike commonly available metamodels, H-PCFE performs a bi-level approximation and hence yields more accurate results. However, to date it has only been applicable to medium-scale problems. To address this gap, this paper presents an improved H-PCFE, referred to as locally refined hp-adaptive H-PCFE. The proposed framework computes the optimal polynomial order and the important component functions of PCFE, which is an integral part of H-PCFE, by using global variance-based sensitivity analysis. The optimal number of training points is selected by using distribution-adaptive sequential experimental design. Additionally, the formulated model is locally refined by utilizing the prediction error, which is inherently available in H-PCFE. The applicability of the proposed approach is illustrated with two academic and two industrial problems. To demonstrate its superior performance, the results obtained have been compared with those obtained using hp-adaptive PCFE. It is observed that the proposed approach yields highly accurate results. Furthermore, compared to hp-adaptive PCFE, significantly fewer actual function evaluations are required to obtain results of similar accuracy.

  5. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test-data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. The hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test-rig data with the ability to augment or modify the data stream (e.g., to inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test-data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  6. Two Improved Algorithms for Envelope and Wavefront Reduction

    NASA Technical Reports Server (NTRS)

    Kumfert, Gary; Pothen, Alex

    1997-01-01

    Two algorithms for reordering sparse, symmetric matrices or undirected graphs to reduce envelope and wavefront are considered. The first is a combinatorial algorithm introduced by Sloan and further developed by Duff, Reid, and Scott; we describe enhancements to the Sloan algorithm that improve its quality and reduce its run time. Our test problems fall into two classes with differing asymptotic behavior of their envelope parameters as a function of the weights in the Sloan algorithm. We describe an efficient O(n log n + m) time implementation of the Sloan algorithm, where n is the number of rows (vertices) and m is the number of nonzeros (edges). On a collection of test problems, the improved Sloan algorithm required, on average, only twice the time required by the simpler Reverse Cuthill-McKee algorithm while improving the mean square wavefront by a factor of three. The second algorithm is a hybrid that combines a spectral algorithm for envelope and wavefront reduction with a refinement step that uses a modified Sloan algorithm. The hybrid algorithm reduces the envelope size and mean square wavefront obtained from the Sloan algorithm at the cost of greater running times. We illustrate how these reductions translate into tangible benefits for frontal Cholesky factorization and incomplete factorization preconditioning.
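
    For orientation, the sketch below applies SciPy's Reverse Cuthill-McKee ordering, the simpler baseline in the comparison above rather than the enhanced Sloan algorithm, to a small symmetric pattern and reports the envelope before and after; the matrix is invented for illustration.

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.csgraph import reverse_cuthill_mckee

        def envelope_size(a):
            """Sum over rows of (row index - first nonzero column)."""
            d = a.toarray()
            return sum(i - np.flatnonzero(d[i])[0] for i in range(d.shape[0]))

        # a small symmetric pattern with scattered off-diagonal entries
        rows = [0, 1, 2, 3, 4, 0, 4, 1, 3]
        cols = [0, 1, 2, 3, 4, 4, 0, 3, 1]
        a = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(5, 5))

        perm = reverse_cuthill_mckee(a, symmetric_mode=True)
        b = a[perm][:, perm]
        print(envelope_size(a), "->", envelope_size(b))   # envelope shrinks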

  7. Lightning Jump Algorithm and Relation to Thunderstorm Cell Tracking, GLM Proxy and other Meteorological Measurements

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Carey, Larry; Cecil, Dan; Bateman, Monte; Stano, Geoffrey; Goodman, Steve

    2012-01-01

    The objective of this project is to refine, adapt and demonstrate the Lightning Jump Algorithm (LJA) for transition to GOES-R GLM (Geostationary Lightning Mapper) readiness and to establish a path to operations. Ongoing work includes reducing risk in the GLM lightning proxy, cell tracking, LJA automation, and data fusion (e.g., radar + lightning).

  8. Possible quantum algorithm for the Lipshitz-Sarkar-Steenrod square for Khovanov homology

    NASA Astrophysics Data System (ADS)

    Ospina, Juan

    2013-05-01

    Recently the celebrated Khovanov homology was introduced as a target for topological quantum computation, given that Khovanov homology provides a generalization of the Jones polynomial, so it is natural to consider a generalization of the Aharonov-Jones-Landau algorithm. Lipshitz and Sarkar then introduced a space-level refinement of Khovanov homology, called the Khovanov homotopy type. This refinement induces a Steenrod square operation Sq2 on Khovanov homology, which they describe explicitly, and some computations of Sq2 have been presented. In particular, examples of links with identical integral Khovanov homology but with distinct Khovanov homotopy types were shown. In the present work we introduce possible quantum algorithms for the Lipshitz-Sarkar-Steenrod square on Khovanov homology and their possible simulations using computer algebra.

  9. US refining margin trend: austerity continues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Should crude oil prices hold near current levels in 1988, US refining margins might improve little, if at all. If crude oil prices rise, margins could blush pink or worse. If they drop, US refiners would still probably not see much margin improvement. In fact, if crude prices fall, they could set off another free fall in products markets and threaten refiner survival. Volatility in refined products markets and low product demand growth are the underlying reasons for caution or pessimism as the new year approaches. Recent directional patterns in refining margins are scrutinized in this issue. This issue also contains the following: (1) the ED refining netback data for the US Gulf and West Coasts, Rotterdam, and Singapore for late November, 1987; and (2) the ED fuel price/tax series for countries of the Eastern Hemisphere, November, 1987 edition. 4 figures, 6 tables.

  10. A time for multi-scale modeling of anti-fibrotic therapies. Comment on "Towards a unified approach in the modeling of fibrosis: A review with research perspectives" by Martine Ben Amar and Carlo Bianca

    NASA Astrophysics Data System (ADS)

    Wu, Min

    2016-07-01

    The development of anti-fibrotic therapies has become increasingly urgent in a variety of diseases, such as pulmonary, renal and liver fibrosis [1,2], as well as malignant tumor growth [3]. As reviewed by Ben Amar and Bianca [4], various theoretical, experimental and in-silico models have been developed to understand the fibrosis process, and their implications for therapeutic strategies have been frequently demonstrated (e.g., [5-7]). In [4], these models are analyzed and sorted according to their approaches, and at the end of [4] a unified multi-scale approach is proposed to understand fibrosis. Since one of the major purposes of extensive modeling of fibrosis is to shed light on therapeutic strategies, theoretical, experimental and in-silico studies of anti-fibrosis therapies should be conducted more intensively.

  11. Navigation Algorithms for the SeaWiFS Mission

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Patt, Frederick S.; McClain, Charles R. (Technical Monitor)

    2002-01-01

    The navigation algorithms for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) were designed to meet the requirement of 1-pixel accuracy: a standard deviation (sigma) of 2. The objective has been to extract the best possible accuracy from the spacecraft telemetry and avoid the need for costly manual renavigation or geometric rectification. The requirement is addressed by postprocessing of both the Global Positioning System (GPS) receiver and Attitude Control System (ACS) data in the spacecraft telemetry stream. The navigation algorithms described are separated into four areas: orbit processing, attitude sensor processing, attitude determination, and final navigation processing. There has been substantial modification during the mission of the attitude determination and attitude sensor processing algorithms. For the former, the basic approach was completely changed during the first year of the mission, from a single-frame deterministic method to a Kalman smoother. This was done for several reasons: a) to improve the overall accuracy of the attitude determination, particularly near the sub-solar point; b) to reduce discontinuities; c) to support the single-ACS-string spacecraft operation that was started after the first mission year, which causes gaps in attitude sensor coverage; and d) to handle data quality problems (which became evident after launch) in the direct-broadcast data. The changes to the attitude sensor processing algorithms primarily involved the development of a model for the Earth horizon height, also needed for single-string operation; the incorporation of improved sensor calibration data; and improved data quality checking and smoothing to handle the data quality issues. The attitude sensor alignments have also been revised multiple times, generally in conjunction with the other changes. The orbit and final navigation processing algorithms have remained largely unchanged during the mission, aside from refinements to data quality checking.

  12. 40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... application for small refiner status. EPA may accept such alternate data at its discretion. (4) For motor... a small refiner under this subpart? 80.551 Section 80.551 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship...

  13. 40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... application for small refiner status. EPA may accept such alternate data at its discretion. (4) For motor... a small refiner under this subpart? 80.551 Section 80.551 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship...

  14. 40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... application for small refiner status. EPA may accept such alternate data at its discretion. (4) For motor... a small refiner under this subpart? 80.551 Section 80.551 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship...

  15. 40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... application for small refiner status. EPA may accept such alternate data at its discretion. (4) For motor... a small refiner under this subpart? 80.551 Section 80.551 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship...

  16. Ice surface temperature retrieval from AVHRR, ATSR, and passive microwave satellite data: Algorithm development and application

    NASA Technical Reports Server (NTRS)

    Key, Jeff; Maslanik, James; Steffen, Konrad

    1995-01-01

    During the second phase project year we have made progress in the development and refinement of surface temperature retrieval algorithms and in product generation. More specifically, we have accomplished the following: (1) acquired a new advanced very high resolution radiometer (AVHRR) data set for the Beaufort Sea area spanning an entire year; (2) acquired additional along-track scanning radiometer(ATSR) data for the Arctic and Antarctic now totalling over eight months; (3) refined our AVHRR Arctic and Antarctic ice surface temperature (IST) retrieval algorithm, including work specific to Greenland; (4) developed ATSR retrieval algorithms for the Arctic and Antarctic, including work specific to Greenland; (5) developed cloud masking procedures for both AVHRR and ATSR; (6) generated a two-week bi-polar global area coverage (GAC) set of composite images from which IST is being estimated; (7) investigated the effects of clouds and the atmosphere on passive microwave 'surface' temperature retrieval algorithms; and (8) generated surface temperatures for the Beaufort Sea data set, both from AVHRR and special sensor microwave imager (SSM/I).

  17. Firing of pulverized solvent refined coal

    DOEpatents

    Derbidge, T. Craig; Mulholland, James A.; Foster, Edward P.

    1986-01-01

    An air-purged burner for the firing of pulverized solvent refined coal is constructed and operated such that the solvent refined coal can be fired without the coking thereof on the burner components. The air-purged burner is designed for the firing of pulverized solvent refined coal in a tangentially fired boiler.

  18. An efficient algorithm for global periodic orbits generation near irregular-shaped asteroids

    NASA Astrophysics Data System (ADS)

    Shang, Haibin; Wu, Xiaoyu; Ren, Yuan; Shan, Jinjun

    2017-07-01

    Periodic orbits (POs) play an important role in understanding dynamical behaviors around natural celestial bodies. In this study, an efficient algorithm was presented to generate the global POs around irregular-shaped uniformly rotating asteroids. The algorithm was performed in three steps, namely global search, local refinement, and model continuation. First, a mascon model with a low number of particles and optimized mass distribution was constructed to remodel the exterior gravitational potential of the asteroid. Using this model, a multi-start differential evolution enhanced with a deflection strategy with strong global exploration and bypassing abilities was adopted. This algorithm can be regarded as a search engine to find multiple globally optimal regions in which potential POs were located. This was followed by applying a differential correction to locally refine global search solutions and generate the accurate POs in the mascon model in which an analytical Jacobian matrix was derived to improve convergence. Finally, the concept of numerical model continuation was introduced and used to convert the POs from the mascon model into a high-fidelity polyhedron model by sequentially correcting the initial states. The efficiency of the proposed algorithm was substantiated by computing the global POs around an elongated shoe-shaped asteroid 433 Eros. Various global POs with different topological structures in the configuration space were successfully located. Specifically, the proposed algorithm was generic and could be conveniently extended to explore periodic motions in other gravitational systems.
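
    The global-search step can be sketched with plain multi-start differential evolution on a stand-in multimodal objective, as below. The paper's engine adds a deflection strategy and a mascon gravity model, neither reproduced here, so restarts may rediscover the same basin and duplicates are simply filtered out.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Stand-in multimodal objective: a 'periodicity defect' whose minima play
        # the role of candidate periodic-orbit regions (illustrative only).
        def defect(p):
            return np.sin(3 * p[0]) * np.cos(2 * p[1]) + 1.0 + 0.1 * (p[0]**2 + p[1]**2)

        bounds = [(-3.0, 3.0), (-3.0, 3.0)]
        solutions = []
        for seed in range(5):   # multi-start; the paper's deflection strategy is omitted
            res = differential_evolution(defect, bounds, seed=seed, tol=1e-8)
            if not any(np.allclose(res.x, s, atol=1e-2) for s in solutions):
                solutions.append(res.x)
        print(len(solutions), "distinct candidate regions found")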

  19. 40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... EPA with appropriate data to correct the record when the company submits its application for small... a small refiner? 80.1340 Section 80.1340 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner...

  20. Exploiting distant homologues for phasing through the generation of compact fragments, local fold refinement and partial solution combination.

    PubMed

    Millán, Claudia; Sammito, Massimo Domenico; McCoy, Airlie J; Nascimento, Andrey F Ziem; Petrillo, Giovanna; Oeffner, Robert D; Domínguez-Gil, Teresa; Hermoso, Juan A; Read, Randy J; Usón, Isabel

    2018-04-01

    Macromolecular structures can be solved by molecular replacement provided that suitable search models are available. Models from distant homologues may deviate too much from the target structure to succeed, notwithstanding an overall similar fold or even their featuring areas of very close geometry. Successful methods to make the most of such templates usually rely on the degree of conservation to select and improve search models. ARCIMBOLDO_SHREDDER uses fragments derived from distant homologues in a brute-force approach driven by the experimental data, instead of by sequence similarity. The new algorithms implemented in ARCIMBOLDO_SHREDDER are described in detail, illustrating its characteristic aspects in the solution of new and test structures. In an advance from the previously published algorithm, which was based on omitting or extracting contiguous polypeptide spans, model generation now uses three-dimensional volumes respecting structural units. The optimal fragment size is estimated from the expected log-likelihood gain (LLG) values computed assuming that a substructure can be found with a level of accuracy near that required for successful extension of the structure, typically below 0.6 Å root-mean-square deviation (r.m.s.d.) from the target. Better sampling is attempted through model trimming or decomposition into rigid groups and optimization through Phaser's gyre refinement. Also, after model translation, packing filtering and refinement, models are either disassembled into predetermined rigid groups and refined (gimble refinement) or Phaser's LLG-guided pruning is used to trim the model of residues that are not contributing signal to the LLG at the target r.m.s.d. value. Phase combination among consistent partial solutions is performed in reciprocal space with ALIXE. Finally, density modification and main-chain autotracing in SHELXE serve to expand to the full structure and identify successful solutions. The performance on test data and the solution

  1. A Refined Cauchy-Schwarz Inequality

    ERIC Educational Resources Information Center

    Mercer, Peter R.

    2007-01-01

    The author presents a refinement of the Cauchy-Schwarz inequality. He shows his computations in which refinements of the triangle inequality and its reverse inequality are obtained for nonzero x and y in a normed linear space.

  2. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    PubMed

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-04-22

    The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
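
    The conventional baseline against which the two fast algorithms are benchmarked, upsampled cross-correlation, is available in scikit-image; the sketch below recovers a known synthetic subpixel shift (the frame size and shift are illustrative).

        import numpy as np
        from scipy.ndimage import shift as subpixel_shift
        from skimage.registration import phase_cross_correlation

        # Synthetic frames: the second is the first displaced by a known subpixel amount.
        rng = np.random.default_rng(0)
        frame0 = rng.random((64, 64))
        frame1 = subpixel_shift(frame0, (0.30, -0.45))

        # Upsampled cross-correlation: the conventional baseline approach.
        measured, _, _ = phase_cross_correlation(frame0, frame1, upsample_factor=100)
        print(measured)   # approximately [-0.30, 0.45]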

  3. Mapped Landmark Algorithm for Precision Landing

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew; Ansar, Adnan; Matthies, Larry

    2007-01-01

    A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel position estimates. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching (see figure), which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
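
    The spatial-domain half of the matching can be sketched with OpenCV's normalized correlation, as below; the synthetic map and patch are illustrative, and the real system's FFT-domain matching, homography transform and subpixel fitting are omitted.

        import cv2
        import numpy as np

        # A synthetic grayscale 'map' and a landmark patch cut from it.
        rng = np.random.default_rng(2)
        surface_map = rng.random((512, 512)).astype(np.float32)
        landmark = surface_map[200:232, 300:332].copy()

        # Normalized correlation over the map; the peak locates the landmark.
        scores = cv2.matchTemplate(surface_map, landmark, cv2.TM_CCOEFF_NORMED)
        _, best_score, _, best_xy = cv2.minMaxLoc(scores)
        print(best_xy, best_score)   # (300, 200) with a score near 1.0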

  4. Refined geometric transition and qq-characters

    NASA Astrophysics Data System (ADS)

    Kimura, Taro; Mori, Hironori; Sugimoto, Yuji

    2018-01-01

    We show the refinement of the prescription for the geometric transition in the refined topological string theory and, as its application, discuss a possibility to describe qq-characters from the string theory point of view. Though the suggested way to operate the refined geometric transition has passed through several checks, it is additionally found in this paper that the presence of the preferred direction brings a nontrivial effect. We provide the modified formula involving this point. We then apply our prescription of the refined geometric transition to proposing the stringy description of doubly quantized Seiberg-Witten curves called qq-characters in certain cases.

  5. 40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Applications for motor vehicle diesel fuel small refiner status must be submitted to EPA by December 31, 2001. (ii) Applications for NRLM diesel fuel small refiner status must be submitted to EPA by December 31, 2004. (2)(i) In the case of a refiner who acquires or reactivates a refinery that was shutdown or non...

  6. Refining of metallurgical-grade silicon

    NASA Technical Reports Server (NTRS)

    Dietl, J.

    1986-01-01

    A basic requirement of large-scale solar cell fabrication is to provide low-cost base material. Unconventional refining of metallurgical-grade silicon represents one of the most promising ways of processing silicon meltstock. The refining concept is based on an optimized combination of metallurgical treatments. Commercially available crude silicon, in this sequence, requires a first pyrometallurgical step of slagging or, alternatively, solvent extraction by aluminum. After grinding and leaching, high-purity quality is gained as an advanced stage of refinement. To reach solar-grade quality, a final pyrometallurgical step is needed: liquid-gas extraction.

  7. Light-extraction enhancement for light-emitting diodes: a firefly-inspired structure refined by the genetic algorithm

    NASA Astrophysics Data System (ADS)

    Bay, Annick; Mayer, Alexandre

    2014-09-01

    The efficiency of light-emitting diodes (LED) has increased significantly over the past few years, but the overall efficiency is still limited by total internal reflection due to the high dielectric-constant contrast between the incident and emergent media. The bioluminescent organ of fireflies gave incentive for light-extraction enhancement studies. A specific factory-roof-shaped structure was shown, by means of light-propagation simulations and measurements, to enhance light extraction significantly. In order to achieve a similar effect for light-emitting diodes, the structure needs to be adapted to the specific set-up of LEDs. In this context, simulations were carried out to determine the best geometrical parameters. In the present work, the search for a geometry that maximizes the extraction of light has been conducted by using a genetic algorithm. The idealized structure considered previously was generalized to a broader variety of shapes. The genetic algorithm makes it possible to search simultaneously over a wider range of parameters, and it is also significantly less time-consuming than the previous approach, which was based on a systematic scan over parameters. The results show that (1) the calculations can be performed in a smaller amount of time and (2) light extraction can be enhanced even more significantly by using optimal parameters determined by the genetic algorithm for the generalized structure. The combination of the genetic algorithm with the Rigorous Coupled Wave Analysis method constitutes a strong simulation tool, which provides us with adapted designs for enhancing light extraction from light-emitting diodes.
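
    A minimal genetic algorithm loop of the kind described is sketched below; the Gaussian fitness is a toy stand-in for the RCWA light-propagation simulation, and the parameter names, bounds and GA settings are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy fitness standing in for the RCWA light-propagation simulation;
        # the parameter names and optimum are illustrative, not the paper's.
        def extraction(params):          # (ridge height, width, slope angle)
            h, w, a = params
            return np.exp(-((h - 1.2)**2 + (w - 0.8)**2 + (a - 50.0)**2 / 400.0))

        lo = np.array([0.1, 0.1, 10.0])
        hi = np.array([3.0, 3.0, 80.0])
        pop = rng.uniform(lo, hi, size=(40, 3))

        for generation in range(60):
            fitness = np.array([extraction(p) for p in pop])
            parents = pop[np.argsort(fitness)][-20:]                 # truncation selection
            rows = rng.integers(20, size=(20, 3))
            children = parents[rows, np.arange(3)]                   # uniform crossover
            children += rng.normal(scale=0.05, size=children.shape)  # mutation
            pop = np.vstack([parents, np.clip(children, lo, hi)])

        best = pop[np.argmax([extraction(p) for p in pop])]
        print(best)   # converges toward [1.2, 0.8, 50.0]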

  8. 40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2 Benzene...

  9. 40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2 Benzene...

  10. 40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2 Benzene...

  11. Solidification Based Grain Refinement in Steels

    DTIC Science & Technology

    2009-07-24

    pearlite (see Figure 1). No evidence of the as-cast austenite dendrite structure was observed. The gating system for this sample resides at the thermal ... possible nucleating compounds. 3) Extend grain refinement theory and solidification knowledge through experimental data. 4) Determine structure ... refine the structure of a casting through heat treatment. The energy required for grain refining via thermomechanical processes or heat treatment ...

  12. 40 CFR 80.1344 - What provisions are available to a non-small refiner that acquires one or more of a small refiner...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-small refiner that acquires one or more of a small refiner's refineries? 80.1344 Section 80.1344... available to a non-small refiner that acquires one or more of a small refiner's refineries? (a) In the case of a refiner that is not an approved small refiner under § 80.1340 and that acquires a refinery from...

  13. 40 CFR 80.555 - What provisions are available to a large refiner that acquires a small refiner or one or more of...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... large refiner that acquires a small refiner or one or more of its refineries? 80.555 Section 80.555... that acquires a small refiner or one or more of its refineries? (a) In the case of a refiner without approved small refiner status who acquires a refinery from a refiner with approved status as a motor...

  14. A dynamically adaptive multigrid algorithm for the incompressible Navier-Stokes equations: Validation and model problems

    NASA Technical Reports Server (NTRS)

    Thompson, C. P.; Leaf, G. K.; Vanrosendale, J.

    1991-01-01

    An algorithm is described for the solution of the laminar, incompressible Navier-Stokes equations. The basic algorithm is a multigrid based on a robust, box-based smoothing step. Its most important feature is the incorporation of automatic, dynamic mesh refinement. This algorithm supports generalized simple domains. The program is based on a standard staggered-grid formulation of the Navier-Stokes equations for robustness and efficiency. Special grid transfer operators were introduced at grid interfaces in the multigrid algorithm to ensure discrete mass conservation. Results are presented for three models: the driven-cavity, a backward-facing step, and a sudden expansion/contraction.
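
    For orientation, a V-cycle for the 1D Poisson problem is sketched below; the paper's solver is a box-smoothed multigrid for the full Navier-Stokes system with dynamic refinement, which this toy does not attempt. The damping factor, injection restriction and cycle count are illustrative choices.

        import numpy as np

        def smooth(u, f, h, sweeps=3):
            """Damped Jacobi smoothing for -u'' = f with zero boundary values."""
            for _ in range(sweeps):
                u[1:-1] += 0.8 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
            return u

        def v_cycle(u, f, h):
            """One V-cycle: smooth, restrict the residual, correct, smooth again."""
            u = smooth(u, f, h)
            if len(u) <= 3:
                return u
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)  # residual
            ec = v_cycle(np.zeros(len(r[::2])), r[::2].copy(), 2 * h)     # coarse solve
            u += np.interp(np.arange(len(u)), np.arange(len(u))[::2], ec) # prolongation
            return smooth(u, f, h)

        n = 129
        h = 1.0 / (n - 1)
        x = np.linspace(0.0, 1.0, n)
        f = np.pi**2 * np.sin(np.pi * x)     # exact solution: sin(pi*x)
        u = np.zeros(n)
        for _ in range(15):
            u = v_cycle(u, f, h)
        print(np.max(np.abs(u - np.sin(np.pi * x))))  # approaches the O(h^2) limit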

  15. Refining atmosphere light to improve the dark channel prior algorithm

    NASA Astrophysics Data System (ADS)

    Gan, Ling; Li, Dagang; Zhou, Can

    2017-05-01

    The defogged image obtained through the dark channel prior algorithm has some shortcomings, such as color distortion, dim light and loss of detail near the observer. The main reasons are that the atmospheric light is estimated as a single value and its variation with scene depth is not considered. We therefore model the atmospheric light, one parameter of the defogging model. First, we scatter the atmospheric light into equivalent points and build a discrete model of the light. Second, we build several rough candidate models by analyzing the relationship between the atmospheric light and the medium transmission. Finally, by analyzing the results of many experiments qualitatively and quantitatively, we select and optimize the model. Although using this method causes the running time to increase slightly, the evaluation metrics, histogram correlation coefficient and peak signal-to-noise ratio, are improved significantly, and the defogged result is more consistent with human visual perception. The color and the details near the observer in the defogged image are also better than those achieved by the original method.
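
    The quantity being remodeled is the atmospheric light of the dark channel prior. The sketch below computes the classic single-value estimate that the paper argues is insufficient; the patch size and percentile are the usual illustrative choices.

        import numpy as np
        from scipy.ndimage import minimum_filter

        def dark_channel(image, patch=15):
            """Per-pixel minimum over color channels and a local patch."""
            return minimum_filter(image.min(axis=2), size=patch)

        def atmosphere_light(image, patch=15, frac=0.001):
            """Classic single-value estimate: average the image pixels at the
            top 0.1% of the dark channel (the quantity the paper remodels)."""
            dc = dark_channel(image, patch)
            n = max(1, int(frac * dc.size))
            idx = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
            return image[idx].mean(axis=0)

        hazy = np.random.default_rng(4).random((120, 160, 3))
        print(atmosphere_light(hazy))   # one RGB triple for the whole image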

  16. Crash testing difference-smoothing algorithm on a large sample of simulated light curves from TDC1

    NASA Astrophysics Data System (ADS)

    Rathna Kumar, S.

    2017-09-01

    In this work, we propose refinements to the difference-smoothing algorithm for the measurement of time delay from the light curves of the images of a gravitationally lensed quasar. The refinements mainly consist of a more pragmatic approach to choosing the smoothing time-scale free parameter, generation of more realistic synthetic light curves for the estimation of time delay uncertainty, and use of a plot of normalized χ2 computed over a wide range of trial time delay values to assess the reliability of a measured time delay and to identify instances of catastrophic failure. We rigorously tested the difference-smoothing algorithm on a large sample of more than a thousand pairs of simulated light curves having known true time delays between them from the two most difficult 'rungs' - rung3 and rung4 - of the first edition of the Strong Lens Time Delay Challenge (TDC1) and found an inherent tendency of the algorithm to overestimate the magnitude of the time delay. However, we find that this systematic bias is eliminated by applying a correction to each measured time delay according to the magnitude and sign of the systematic error inferred by applying the time delay estimator to synthetic light curves simulating the measured time delay. Following these refinements, the TDC performance metrics for the difference-smoothing algorithm are found to be competitive with those of the best-performing submissions of TDC1 for both of the tested 'rungs'. The MATLAB codes used in this work and the detailed results are made publicly available.

  17. Adaptive Mesh Refinement for Microelectronic Device Design

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Lou, John; Norton, Charles

    1999-01-01

    Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchal elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smoothen and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of

  18. Parallel Cartesian grid refinement for 3D complex flow simulations

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2013-11-01

    A second-order-accurate method for discretizing the Navier-Stokes equations on 3D unstructured Cartesian grids is presented. Although the grid generator is based on the oct-tree hierarchical method, a fully unstructured data structure is adopted, enabling robust calculations for incompressible flows while avoiding both the need to synchronize the solution between different levels of refinement and the use of prolongation/restriction operators. The current solver implements a hybrid staggered/non-staggered grid layout, employing the implicit fractional step method to satisfy the continuity equation. The pressure-Poisson equation is discretized by using a novel second-order fully implicit scheme for unstructured Cartesian grids and solved using an efficient Krylov subspace solver. The momentum equations are also discretized with second-order accuracy, and the high-performance Newton-Krylov method is used for integrating them in time. Neumann and Dirichlet conditions are used to validate the Poisson solver against analytical functions, and grid refinement leads to a significant reduction of the solution error. The effectiveness of the fractional step method results in the stability of the overall algorithm and enables accurate multi-resolution real-life simulations. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482.

  19. Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.

    2011-12-01

    Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties, associated with simulating a wide range of time and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. Such computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts pollutant mixing and transport dynamics at typically used grid resolutions. These numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will be exacerbated over the next decade by the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding and removing finer levels of resolution in the locations of fine-scale development and in the locations of smooth solution behavior, respectively. The algorithm is based on mathematically well-established wavelet theory. This allows us to provide error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. The method has been tested on a variety of benchmark problems.
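
    The coefficient thresholding that drives such grid adaptation can be sketched with PyWavelets, as below: detail coefficients under a tolerance are discarded, and the survivors mark where fine resolution is needed. The signal, wavelet family and threshold are illustrative.

        import numpy as np
        import pywt

        # A 'tracer' profile: smooth background plus one sharp plume.
        x = np.linspace(0.0, 1.0, 1024)
        tracer = np.sin(2 * np.pi * x) + np.exp(-((x - 0.3) / 0.01) ** 2)

        coeffs = pywt.wavedec(tracer, "db4", level=6)
        details = np.concatenate([np.abs(c) for c in coeffs[1:]])
        threshold = 1e-3 * details.max()

        # Keep only significant detail coefficients; their locations mark
        # where the adaptive grid needs fine resolution.
        kept = [coeffs[0]] + [pywt.threshold(c, threshold, mode="hard") for c in coeffs[1:]]
        rebuilt = pywt.waverec(kept, "db4")[: tracer.size]
        active = sum(int(np.count_nonzero(c)) for c in kept[1:])
        print(active, "of", details.size, "detail coefficients retained;",
              "max error:", float(np.max(np.abs(rebuilt - tracer))))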

  20. Learning Cue Phrase Patterns from Radiology Reports Using a Genetic Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, Robert M; Beckerman, Barbara G; Potok, Thomas E

    2009-01-01

    Various computer-assisted technologies have been developed to assist radiologists in detecting cancer; however, the algorithms still lack high degrees of sensitivity and specificity and must be trained against a set of reports with known pathologies in order to refine them toward higher validity. This work describes an approach to learning cue phrase patterns in radiology reports that uses a genetic algorithm (GA) as the learning method. The approach described here successfully learned cue phrase patterns for two distinct classes of radiology reports. These patterns can then be used as a basis for automatically categorizing, clustering, or retrieving relevant data for the user.
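
    As a loose illustration of the idea, the following toy Python sketch evolves token-set patterns with a genetic algorithm; the vocabulary, reports and fitness function are hypothetical stand-ins and do not reproduce the paper's encoding.

        import random

        VOCAB = ["mass", "benign", "calcification", "no", "evidence", "suspicious"]
        POSITIVE = [{"suspicious", "mass"}, {"suspicious", "calcification", "mass"}]
        NEGATIVE = [{"no", "evidence", "mass"}, {"benign", "calcification"}]

        def fitness(pattern):
            """Reward patterns matching positive reports and missing negative ones."""
            hits = sum(pattern <= doc for doc in POSITIVE)
            misses = sum(pattern <= doc for doc in NEGATIVE)
            return hits - misses

        def evolve(pop_size=30, generations=50, seed=0):
            random.seed(seed)
            pop = [set(random.sample(VOCAB, 2)) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                survivors = pop[: pop_size // 2]          # truncation selection
                children = []
                while len(children) < pop_size - len(survivors):
                    a, b = random.sample(survivors, 2)
                    child = set(random.sample(sorted(a | b), 2))   # crossover
                    if random.random() < 0.2:                      # mutation
                        child.add(random.choice(VOCAB))
                    children.append(child)
                pop = survivors + children
            return max(pop, key=fitness)

        print(evolve())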

  1. 48 CFR 208.7304 - Refined precious metals.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP. [71 FR 39005, July 11, 2006] ...

  2. 48 CFR 208.7304 - Refined precious metals.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP. [71 FR 39005, July 11, 2006] ...

  3. Refined sugar intake in Australian children.

    PubMed

    Somerset, Shawn M

    2003-12-01

    To estimate the intake of refined sugar in Australian children and adolescents, aged 2-18 years. Foods contributing to total sugar intake were identified using data from the National Nutrition Survey 1995 (NNS95), the most recent national dietary survey of the Australian population. The top 100 foods represented means of 85% (range 79-91%) and 82% (range 78-85%) of total sugar intake for boys and girls, respectively. Using published Australian food composition data (NUTTAB95), the proportion of total sugar being refined sugar was estimated for each food. Where published food composition data were not available, calculations from ingredients and manufacturer's information were used. The NNS95 assessed the dietary intake of a random sample of the Australian population, aged 2-18 years (n=3007). Mean daily intakes of refined sugar ranged from 26.9 to 78.3 g for 2-18-year-old girls, representing 6.6-14.8% of total energy intake. Corresponding figures for boys were 27.0 to 81.6 g and 8.0-14.0%, respectively. Of the 10 highest sources of refined sugar for each age group, sweetened beverages, especially cola-type beverages, were the most prominent. Refined sugar is an important contributor to dietary energy in Australian children. Sweetened beverages such as soft drinks and cordials were substantial sources of refined sugar and represent a potential target for campaigns to reduce refined sugar intake. Better access to information on the amounts of sugar added to processed food is essential for appropriate monitoring of this important energy source.

  4. A revised partiality model and post-refinement algorithm for X-ray free-electron laser data

    DOE PAGES

    Ginn, Helen Mary; Brewster, Aaron S.; Hattne, Johan; ...

    2015-05-23

    Research towards using X-ray free-electron laser (XFEL) data to solve structures using experimental phasing methods such as sulfur single-wavelength anomalous dispersion (SAD) has been hampered by shortcomings in the diffraction models for X-ray diffraction from FELs. Owing to errors in the orientation matrix and overly simple partiality models, researchers have required large numbers of images to converge to reliable estimates for the structure-factor amplitudes, which may not be feasible for all biological systems. Here, data for cytoplasmic polyhedrosis virus type 17 (CPV17) collected at 1.3 Å wavelength at the Linac Coherent Light Source (LCLS) are revisited. A previously published definition of a partiality model for reflections illuminated by self-amplified spontaneous emission (SASE) pulses is built upon, which defines a fraction between 0 and 1 based on the intersection of a reflection with a spread of Ewald spheres modelled by a super-Gaussian wavelength distribution in the X-ray beam. A method of post-refinement to refine the parameters of this model is suggested. This has generated a merged data set with an overall discrepancy (by calculating the R_split value) of 3.15% to 1.46 Å resolution from a 7225-image data set. The atomic numbers of C, N and O atoms in the structure are distinguishable in the electron-density map. There are 13 S atoms within the 237 residues of CPV17, excluding the initial disordered methionine. These only possess 0.42 anomalous scattering electrons each at 1.3 Å wavelength, but the 12 that have single predominant positions are easily detectable in the anomalous difference Fourier map. It is hoped that these improvements will lead towards XFEL experimental phase determination and structure determination by sulfur SAD and will generally increase the utility of the method for difficult cases.

  5. An Automatic Registration Algorithm for 3D Maxillofacial Model

    NASA Astrophysics Data System (ADS)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including the facial surface model and the skull model. Our proposed registration algorithm achieves a good alignment between a partial and a whole maxillofacial model in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
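
    Step (3), the ICP refinement, is the most standard part of the pipeline and can be sketched in a few lines. A minimal point-to-point ICP in Python with a Kabsch (SVD) rotation estimate, assuming the SAC-IA stage has already provided a coarse alignment:

        import numpy as np
        from scipy.spatial import cKDTree

        def icp(src, dst, iters=30):
            """Refine the rigid transform aligning src (N,3) onto dst (M,3)."""
            R, t = np.eye(3), np.zeros(3)
            tree = cKDTree(dst)
            for _ in range(iters):
                moved = src @ R.T + t
                _, idx = tree.query(moved)              # nearest-neighbor matches
                p, q = moved, dst[idx]
                p0, q0 = p - p.mean(0), q - q.mean(0)
                U, _, Vt = np.linalg.svd(p0.T @ q0)     # Kabsch rotation estimate
                D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
                dR = Vt.T @ D @ U.T
                dt = q.mean(0) - p.mean(0) @ dR.T
                R, t = dR @ R, dR @ t + dt              # compose with running pose
            return R, t

        # Synthetic check: recover a small known rotation plus a translation.
        rng = np.random.default_rng(0)
        dst = rng.random((400, 3))
        c, s = np.cos(0.3), np.sin(0.3)
        R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        src = (dst - [0.1, 0.2, 0.0]) @ R_true          # inverse-transformed copy
        R_est, t_est = icp(src, dst)
        print(np.round(R_est, 3), np.round(t_est, 3))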

  6. AWARE - The Automated EUV Wave Analysis and REduction algorithm

    NASA Astrophysics Data System (ADS)

    Ireland, J.; Inglis, A. R.; Shih, A. Y.; Christe, S.; Mumford, S.; Hayes, L. A.; Thompson, B. J.

    2016-10-01

    Extreme ultraviolet (EUV) waves are large-scale propagating disturbances observed in the solar corona, frequently associated with coronal mass ejections and flares. Since their discovery, over two hundred papers discussing their properties, causes and physics have been published. However, their fundamental nature and the physics of their interactions with other solar phenomena are still not understood. To further the understanding of EUV waves and their relation to other solar phenomena, we have constructed the Automated Wave Analysis and REduction (AWARE) algorithm for the detection of EUV waves over the full Sun. The AWARE algorithm is based on a novel image-processing approach to isolating the bright wavefront of the EUV wave as it propagates across the corona. AWARE detects the presence of a wavefront and measures the distance, velocity and acceleration of that wavefront across the Sun. Results from AWARE are compared to results from other algorithms for some well-known EUV wave events. Suggestions are also given for further refinements to the basic algorithm presented here.

  7. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    PubMed

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.
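
    The recursive time-stepping procedure described above can be sketched as follows; Level, step and sync_with are placeholder names, and a refinement ratio of r = 2 between adjacent levels is assumed.

        class Level:
            """Placeholder for one refinement level's data and operators."""
            def __init__(self, name):
                self.name, self.t = name, 0.0
            def step(self, dt):
                self.t += dt                           # advance this level's solution
            def sync_with(self, finer):
                assert abs(self.t - finer.t) < 1e-12   # levels meet at the same time

        def advance_hierarchy(levels, lev, dt, r=2):
            """Advance level `lev` by dt; finer levels subcycle r steps each."""
            levels[lev].step(dt)
            if lev + 1 < len(levels):
                for _ in range(r):
                    advance_hierarchy(levels, lev + 1, dt / r, r)
                levels[lev].sync_with(levels[lev + 1])

        hierarchy = [Level("coarse"), Level("medium"), Level("fine")]
        advance_hierarchy(hierarchy, 0, dt=1.0)
        print([lvl.t for lvl in hierarchy])            # all levels reach t = 1.0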

  8. Numerical relativity simulations of neutron star merger remnants using conservative mesh refinement

    NASA Astrophysics Data System (ADS)

    Dietrich, Tim; Bernuzzi, Sebastiano; Ujevic, Maximiliano; Brügmann, Bernd

    2015-06-01

    We study equal- and unequal-mass neutron star mergers by means of new numerical relativity simulations in which the general relativistic hydrodynamics solver employs an algorithm that guarantees mass conservation across the refinement levels of the computational mesh. We consider eight binary configurations with total mass M = 2.7 M⊙, mass ratios q = 1 and q = 1.16, four different equations of state (EOSs), and one configuration with a stiff EOS, M = 2.5 M⊙ and q = 1.5, which is one of the largest mass ratios simulated in numerical relativity to date. We focus on the postmerger dynamics and study the merger remnant, the dynamical ejecta, and the postmerger gravitational wave spectrum. Although most of the merger remnants are a hypermassive neutron star collapsing to a black hole+disk system on dynamical time scales, stiff EOSs can eventually produce a stable massive neutron star. During the merger process and on very short time scales, about ~10^-3 to 10^-2 M⊙ of material becomes unbound with kinetic energies ~10^50 erg. Ejecta are mostly emitted around the orbital plane and favored by large mass ratios and softer EOSs. The postmerger wave spectrum is mainly characterized by the nonaxisymmetric oscillations of the remnant neutron star. The stiff EOS configuration consisting of a 1.5 M⊙ and a 1.0 M⊙ neutron star, simulated here for the first time, shows rather peculiar dynamics. During merger the companion star is strongly deformed; about ~0.03 M⊙ of the rest mass becomes unbound from the tidal tail due to the torque generated by the two-core inner structure. The merger remnant is a stable neutron star surrounded by a massive accretion disk of rest mass ~0.3 M⊙. This and similar configurations might be particularly interesting for electromagnetic counterparts. Comparing results obtained with and without the conservative mesh refinement algorithm, we find that postmerger simulations can be affected by systematic errors if mass conservation is not enforced in the

  9. A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1994-01-01

    A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: a gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.

  10. A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: A gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment and other accepted computational results for a series of low and moderate Reynolds number flows.
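
    A 2D quadtree analogue of the recursive Cartesian subdivision described in the two records above, with a stand-in refinement criterion (the actual solution-adaptive criteria are flow-based):

        class Cell:
            """One square cell of a quadtree grid over [0, 1] x [0, 1]."""
            def __init__(self, x, y, size, depth=0):
                self.x, self.y, self.size, self.depth = x, y, size, depth
                self.children = []

            def refine(self, needs_refinement, max_depth=8):
                """Recursively subdivide wherever the criterion flags this cell."""
                if self.depth < max_depth and needs_refinement(self):
                    h = self.size / 2.0
                    self.children = [Cell(self.x + i * h, self.y + j * h, h, self.depth + 1)
                                     for i in (0, 1) for j in (0, 1)]
                    for child in self.children:
                        child.refine(needs_refinement, max_depth)

        def count(cell):
            return 1 + sum(count(c) for c in cell.children)

        # Refine toward a 'body' at the domain center, as a stand-in criterion.
        root = Cell(0.0, 0.0, 1.0)
        near_center = lambda c: (abs(c.x + c.size / 2 - 0.5) < c.size and
                                 abs(c.y + c.size / 2 - 0.5) < c.size)
        root.refine(near_center)
        print("cells in tree:", count(root))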

  11. Refinement Of Hexahedral Cells In Euler Flow Computations

    NASA Technical Reports Server (NTRS)

    Melton, John E.; Cappuccio, Gelsomina; Thomas, Scott D.

    1996-01-01

    Topologically Independent Grid, Euler Refinement (TIGER) computer program solves Euler equations of three-dimensional, unsteady flow of inviscid, compressible fluid by numerical integration on unstructured hexahedral coordinate grid refined where necessary to resolve shocks and other details. Hexahedral cells subdivided, each into eight smaller cells, as needed to refine computational grid in regions of high flow gradients. Grid Interactive Refinement and Flow-Field Examination (GIRAFFE) computer program written in conjunction with TIGER program to display computed flow-field data and to assist researcher in verifying specified boundary conditions and refining grid.

  12. Friends-of-friends galaxy group finder with membership refinement. Application to the local Universe

    NASA Astrophysics Data System (ADS)

    Tempel, E.; Kipper, R.; Tamm, A.; Gramann, M.; Einasto, M.; Sepp, T.; Tuvikene, T.

    2016-04-01

    Context. Groups form the most abundant class of galaxy systems. They act as the principal drivers of galaxy evolution and can be used as tracers of the large-scale structure and the underlying cosmology. However, the detection of galaxy groups from galaxy redshift survey data is hampered by several observational limitations. Aims: We improve the widely used friends-of-friends (FoF) group finding algorithm with membership refinement procedures and apply the method to a combined dataset of galaxies in the local Universe. A major aim of the refinement is to detect subgroups within the FoF groups, enabling a more reliable suppression of the fingers-of-God effect. Methods: The FoF algorithm is often suspected of leaving subsystems of groups and clusters undetected. We used a galaxy sample built of the 2MRS, CF2, and 2M++ survey data comprising nearly 80 000 galaxies within the local volume of 430 Mpc radius to detect FoF groups. We conducted a multimodality check on the detected groups in search for subgroups. We furthermore refined group membership using the group virial radius and escape velocity to expose unbound galaxies. We used the virial theorem to estimate group masses. Results: The analysis results in a catalogue of 6282 galaxy groups in the 2MRS sample with two or more members, together with their mass estimates. About half of the initial FoF groups with ten or more members were split into smaller systems with the multimodality check. An interesting comparison to our detected groups is provided by another group catalogue that is based on similar data but a completely different methodology. Two thirds of the groups are identical or very similar. Differences mostly concern the smallest and largest of these other groups, the former sometimes missing and the latter being divided into subsystems in our catalogue. The catalogues are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http
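
    The core FoF step, before any membership refinement, links every pair of galaxies closer than a chosen linking length and takes connected chains as groups. A minimal sketch with a union-find structure, assuming a fixed linking length rather than the redshift-space linking used for the catalogue:

        import numpy as np
        from scipy.spatial import cKDTree

        def fof_groups(pos, b):
            """Label points (N,3) into FoF groups: chains of pairs closer than b."""
            tree = cKDTree(pos)
            pairs = tree.query_pairs(b)
            parent = list(range(len(pos)))
            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]     # path compression
                    i = parent[i]
                return i
            for i, j in pairs:
                parent[find(i)] = find(j)             # union the two chains
            return np.array([find(i) for i in range(len(pos))])

        rng = np.random.default_rng(1)
        pos = rng.random((500, 3))
        labels = fof_groups(pos, b=0.06)
        print("groups found:", len(set(labels)))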

  13. Parallel-Vector Algorithm For Rapid Structural Analysis

    NASA Technical Reports Server (NTRS)

    Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.

    1993-01-01

    New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.

  14. An adaptive mesh refinement-multiphase lattice Boltzmann flux solver for simulation of complex binary fluid flows

    NASA Astrophysics Data System (ADS)

    Yuan, H. Z.; Wang, Y.; Shu, C.

    2017-12-01

    This paper presents an adaptive mesh refinement-multiphase lattice Boltzmann flux solver (AMR-MLBFS) for effective simulation of complex binary fluid flows at large density ratios. In this method, an AMR algorithm is proposed by introducing a simple indicator on the root block for grid refinement and two possible statuses for each block. Unlike available block-structured AMR methods, which refine their mesh by spawning or removing four child blocks simultaneously, the present method is able to refine its mesh locally by spawning or removing one to four child blocks independently when the refinement indicator is triggered. As a result, the AMR mesh used in this work can be more focused on the flow region near the phase interface and its size is further reduced. In each block of mesh, the recently proposed MLBFS is applied for the solution of the flow field and the level-set method is used for capturing the fluid interface. As compared with existing AMR-lattice Boltzmann models, the present method avoids both spatial and temporal interpolations of density distribution functions so that converged solutions on different AMR meshes and uniform grids can be obtained. The proposed method has been successfully validated by simulating a static bubble immersed in another fluid, a falling droplet, instabilities of two-layered fluids, a bubble rising in a box, and a droplet splashing on a thin film with large density ratios and high Reynolds numbers. Good agreement with the theoretical solution, the uniform-grid result, and/or the published data has been achieved. Numerical results also show its effectiveness in saving computational time and virtual memory as compared with computations on uniform meshes.

  15. Evaluation of a Didactic Method for the Active Learning of Greedy Algorithms

    ERIC Educational Resources Information Center

    Esteban-Sánchez, Natalia; Pizarro, Celeste; Velázquez-Iturbide, J. Ángel

    2014-01-01

    An evaluation of the educational effectiveness of a didactic method for the active learning of greedy algorithms is presented. The didactic method sets students structured-inquiry challenges to be addressed with a specific experimental method, supported by the interactive system GreedEx. This didactic method has been refined over several years of…

  16. Refinement and evaluation of helicopter real-time self-adaptive active vibration controller algorithms

    NASA Technical Reports Server (NTRS)

    Davis, M. W.

    1984-01-01

    A Real-Time Self-Adaptive (RTSA) active vibration controller was used as the framework in developing a computer program for a generic controller that can be used to alleviate helicopter vibration. Based upon on-line identification of system parameters, the generic controller minimizes vibration in the fuselage by closed-loop implementation of higher harmonic control in the main rotor system. The new generic controller incorporates a set of improved algorithms that gives the capability to readily define many different configurations by selecting one of three different controller types (deterministic, cautious, and dual), one of two linear system models (local and global), and one or more of several methods of applying limits on control inputs (external and/or internal limits on higher harmonic pitch amplitude and rate). A helicopter rotor simulation analysis was used to evaluate the algorithms associated with the alternative controller types as applied to the four-bladed H-34 rotor mounted on the NASA Ames Rotor Test Apparatus (RTA), which represents the fuselage. After proper tuning, all three controllers provide more effective vibration reduction and converge more quickly and smoothly with smaller control inputs than the initial RTSA controller (deterministic with external pitch-rate limiting). It is demonstrated that internal limiting of the control inputs significantly improves the overall performance of the deterministic controller.

  17. Solving the scalability issue in quantum-based refinement: Q|R#1.

    PubMed

    Zheng, Min; Moriarty, Nigel W; Xu, Yanting; Reimers, Jeffrey R; Afonine, Pavel V; Waller, Mark P

    2017-12-01

    Accurately refining biomacromolecules using a quantum-chemical method is challenging because the cost of a quantum-chemical calculation scales approximately as n^m, where n is the number of atoms and m (≥3) is based on the quantum method of choice. This fundamental problem means that quantum-chemical calculations become intractable when the size of the system requires more computational resources than are available. In the development of the software package called Q|R, this issue is referred to as Q|R#1. A divide-and-conquer approach has been developed that fragments the atomic model into small manageable pieces in order to solve Q|R#1. Firstly, the atomic model of a crystal structure is analyzed to detect noncovalent interactions between residues, and the results of the analysis are represented as an interaction graph. Secondly, a graph-clustering algorithm is used to partition the interaction graph into a set of clusters in such a way as to minimize disruption to the noncovalent interaction network. Thirdly, the environment surrounding each individual cluster is analyzed and any residue that is interacting with a particular cluster is assigned to the buffer region of that particular cluster. A fragment is defined as a cluster plus its buffer region. The gradients for all atoms from each of the fragments are computed, and only the gradients from each cluster are combined to create the total gradients. A quantum-based refinement is carried out using the total gradients as chemical restraints. In order to validate this interaction graph-based fragmentation approach in Q|R, the entire atomic model of an amyloid cross-β spine crystal structure (PDB entry 2oNA) was refined.
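
    The fragmentation idea can be sketched as follows; the greedy clustering below is a stand-in for the paper's graph-clustering algorithm, but the buffer construction (every residue interacting with a cluster joins that cluster's buffer) follows the description above.

        import networkx as nx

        def fragments(G, max_size=6):
            """Greedily cluster an interaction graph, then attach buffer regions."""
            clusters, seen = [], set()
            for seed in G.nodes:
                if seed in seen:
                    continue
                cluster, frontier = {seed}, [seed]
                while frontier and len(cluster) < max_size:    # greedy BFS growth
                    for nb in G.neighbors(frontier.pop(0)):
                        if nb not in seen and nb not in cluster and len(cluster) < max_size:
                            cluster.add(nb)
                            frontier.append(nb)
                seen |= cluster
                clusters.append(cluster)
            # Buffer region: every node outside a cluster that interacts with it.
            return [(c, {nb for n in c for nb in G.neighbors(n)} - c) for c in clusters]

        G = nx.erdos_renyi_graph(20, 0.2, seed=3)       # stand-in interaction graph
        for cluster, buff in fragments(G):
            print("cluster:", sorted(cluster), "buffer:", sorted(buff))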

  18. Mesh refinement in finite element analysis by minimization of the stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1989-01-01

    Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the generated mesh is appropriate for the problem at hand. Since the accuracy of finite element results is mesh dependent, mesh selection is a very important step in the analysis. Indeed, in accurate analyses, meshes need to be refined or rezoned until the solution converges to a value such that the error is below a predetermined tolerance. A posteriori methods use error indicators, developed from interpolation and approximation theory, to drive mesh refinement. Some use other criteria, such as strain energy density variation or stress contours, to obtain near-optimal meshes. Although these methods are adaptive, they are expensive. The a priori methods available until now use geometrical parameters, for example the element aspect ratio, and are therefore not adaptive by nature. Here, an adaptive a priori method is developed. The criterion is that minimization of the trace of the stiffness matrix with respect to the nodal coordinates leads to a minimization of the potential energy and, as a consequence, provides a good starting mesh. In a few examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to the development of computer algorithms. When the procedure is used in conjunction with a posteriori methods of grid refinement, it is shown that fewer refinement iterations and fewer degrees of freedom are required for convergence than when the procedure is not used. The mesh obtained is shown to have a uniform distribution of stiffness among the nodes and elements, which leads to uniform error distribution. Thus the mesh obtained meets the optimality criterion of uniform error distribution.

  19. Optimization of Refining Craft for Vegetable Insulating Oil

    NASA Astrophysics Data System (ADS)

    Zhou, Zhu-Jun; Hu, Ting; Cheng, Lin; Tian, Kai; Wang, Xuan; Yang, Jun; Kong, Hai-Yang; Fang, Fu-Xin; Qian, Hang; Fu, Guang-Pan

    2016-05-01

    Vegetable insulating oil, because of its environmental friendliness, is considered an ideal replacement for mineral oil in the insulation and cooling of transformers. The main steps of the traditional refining process are alkali refining, bleaching and distillation. This traditional process gives satisfactory results for small batches of insulating oil but cannot be applied to a large-capacity reaction kettle. This paper uses rapeseed oil as the crude oil and optimizes the refining process for a large-capacity reaction kettle. The optimized refining process adds an acid degumming step. Sodium silicate is added to the alkali compound in the alkali refining step, and the ratio of each component is optimized. Activated clay and activated carbon are added in a 10:1 ratio in the decolorization step, which effectively reduces the acid value and dielectric loss of the oil. Using vacuum degassing instead of distillation further reduces the acid value. Comparing some of the performance parameters of the refined oil products with mineral insulating oil, the dielectric loss of the vegetable insulating oil is still high, and further optimization measures will be needed in the future.

  20. Impact of echinocandin on prognosis of proven invasive candidiasis in ICU: A post-hoc causal inference model using the AmarCAND2 study.

    PubMed

    Bailly, Sébastien; Leroy, Olivier; Azoulay, Elie; Montravers, Philippe; Constantin, Jean-Michel; Dupont, Hervé; Guillemot, Didier; Lortholary, Olivier; Mira, Jean-Paul; Perrigault, Pierre-François; Gangneux, Jean-Pierre; Timsit, Jean-François

    2017-04-01

    Guidelines recommend first-line systemic antifungal therapy (SAT) with echinocandins in invasive candidiasis (IC), especially in critically ill patients. This study aimed to assess the impact of echinocandins, compared to azoles, as initial SAT on the 28-day prognosis of adult ICU patients. From the prospective multicenter AmarCAND2 cohort (835 patients), we selected those with documented IC who were treated with echinocandins (ECH) or azoles (AZO). The average causal effect of echinocandins on 28-day mortality was assessed using an inverse probability of treatment weighting (IPTW) estimator. 397 patients were selected, treated with echinocandins (242 patients, 61%) or azoles (155 patients, 39%); 179 patients (45%) had septic shock. The median SAPS II was higher in the ECH group (48 [35; 62] vs. 43 [31; 58], p = 0.01). Crude mortality was 34% (ECH group) vs. 25% (AZO group). After adjustment for baseline confounders, no significant association emerged between initial SAT with echinocandins and 28-day mortality (HR: 0.95; 95% CI: [0.60; 1.49]; p = 0.82). However, echinocandins tended to benefit patients with septic shock (HR: 0.46 [0.19; 1.07]; p = 0.07). Patients who received echinocandins were more severely ill. Echinocandin use was associated with a non-significant 7% decrease in 28-day mortality and a trend toward a beneficial effect for patients with septic shock. Copyright © 2017 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
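
    For readers unfamiliar with IPTW, the following Python sketch shows the estimator's structure on hypothetical data: a propensity model for receiving echinocandins is fitted, and each patient is weighted by the inverse probability of the treatment actually received. The study used survival models for 28-day mortality, not the simple weighted risk difference shown here.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def iptw_risk_difference(X, treated, outcome):
            """Estimate an average treatment effect by inverse-probability weighting."""
            ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
            w = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))  # IPT weights
            mean1 = np.average(outcome[treated == 1], weights=w[treated == 1])
            mean0 = np.average(outcome[treated == 0], weights=w[treated == 0])
            return mean1 - mean0

        # Hypothetical data: one confounder drives both treatment and outcome,
        # so the unweighted comparison is biased while IPTW is close to zero.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 3))                   # baseline severity scores
        treated = (X[:, 0] + rng.normal(size=500) > 0).astype(int)
        outcome = (rng.random(500) < 0.25 + 0.1 * X[:, 0]).astype(int)
        print("IPTW risk difference:", round(iptw_risk_difference(X, treated, outcome), 3))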

  1. A novel highly parallel algorithm for linearly unmixing hyperspectral images

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; López, Sebastián.; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto

    2014-10-01

    Endmember extraction and abundance calculation are critical steps in linearly unmixing a hyperspectral image, for two main reasons. The first is the need to compute a set of accurate endmembers in order to obtain confident abundance maps; the second is the huge number of operations involved in these time-consuming processes. This work proposes an algorithm that estimates the endmembers of a hyperspectral image under analysis and their abundances at the same time. The main advantages of this algorithm are its high degree of parallelism and the mathematical simplicity of the operations involved. The algorithm estimates the endmembers as virtual pixels. In particular, it performs gradient descent to iteratively refine the endmembers and the abundances, reducing the mean square error according to the linear unmixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution; given the nature of the algorithm, these restrictions are easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
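
    A minimal sketch of the joint update, assuming plain projected gradient descent on ||Y - EA||^2 with nonnegativity and sum-to-one constraints; the paper's exact step sizes and restrictions are not reproduced.

        import numpy as np

        def unmix(Y, p, iters=2000, lr=0.05):
            """Jointly estimate endmembers E and abundances A with Y ~ E @ A."""
            bands, npix = Y.shape
            rng = np.random.default_rng(0)
            E = rng.random((bands, p))                 # endmembers as virtual pixels
            A = np.full((p, npix), 1.0 / p)            # start from uniform abundances
            for _ in range(iters):
                R = E @ A - Y                          # residual of the linear model
                E = np.clip(E - lr * (R @ A.T) / npix, 0.0, None)
                A = np.clip(A - lr * (E.T @ R) / bands, 0.0, None)
                A /= A.sum(axis=0, keepdims=True) + 1e-12   # sum-to-one constraint
            return E, A

        rng = np.random.default_rng(1)
        E_true = rng.random((50, 3))
        A_true = rng.dirichlet(np.ones(3), size=1000).T
        Y = E_true @ A_true                            # synthetic mixed pixels
        E, A = unmix(Y, p=3)
        print("relative error:", np.linalg.norm(E @ A - Y) / np.linalg.norm(Y))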

  2. On macromolecular refinement at subatomic resolution with interatomic scatterers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2007-11-09

    A study of the accurate electron density distribution in molecular crystals at subatomic resolution, better than ~1.0 Å, requires more detailed models than those based on independent spherical atoms. A tool conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model has been proposed, composed of conventional independent spherical atoms augmented by additional scatterers that model bonding effects. Refinement of these mixed models for several benchmark datasets gave results comparable in quality with those of multipolar refinement and superior to those of conventional models. Applications to several datasets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.

  3. A trace map comparison algorithm for the discrete fracture network models of rock masses

    NASA Astrophysics Data System (ADS)

    Han, Shuai; Wang, Gang; Li, Mingchao

    2018-06-01

    Discrete fracture networks (DFN) are widely used to build refined geological models. However, validating whether a refined model matches reality is a crucial problem, since it determines whether the model can be used for analysis. Current validation methods include numerical validation and graphical validation. Graphical validation, which estimates the similarity between a simulated trace map and the real trace map by visual observation, is subjective. In this paper, an algorithm for the graphical validation of DFN models is developed. Four main indicators, including total gray, the gray grade curve, the characteristic direction, and the gray density distribution curve, are presented to assess the similarity between two trace maps. A modified Radon transform and a loop cosine similarity are introduced, based on the Radon transform and cosine similarity respectively. In addition, the use of Bézier curves to reduce the edge effect is described. Finally, a case study shows that the new algorithm can effectively distinguish which simulated trace map is more similar to the real trace map.

  4. Gary Refining Company emerges from Chapter 11 bankruptcy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1986-09-01

    On July 24, 1986 Gary Refining Company, Inc. announced that the Reorganization Plan for Gary Refining Company, Inc., Gary Refining Company, and Mesa Refining, Inc. had been approved by the United States Bankruptcy Court (District of Colorado). The companies filed for protection from creditors on March 4, 1985 under Chapter 11 of the United States Bankruptcy Code. Payments to creditors are expected to begin upon start-up of the Gary Refining Company (GRC) refinery in Fruita, Colorado after delivery of shale oil from Union Oil's Parachute Creek plant. In the interim, GRC will continue to explore options for possible startup (on a full-scale or partial basis) prior to that time.

  5. Steel refining possibilities in LF

    NASA Astrophysics Data System (ADS)

    Dumitru, M. G.; Ioana, A.; Constantin, N.; Ciobanu, F.; Pollifroni, M.

    2018-01-01

    This article presents the main possibilities for steel refining in the ladle furnace (LF). The following are presented: steelmaking stages, steel refining through argon bottom stirring, online control of the bottom stirring, the bottom stirring diagram during LF treatment of a heat, the influence of the porous plug on argon stirring, the bottom stirring porous plug, an analysis of porous plug placement on the ladle bottom surface, bottom stirring simulation with ANSYS, and bottom stirring simulation with Autodesk CFD.

  6. On an adaptive preconditioned Crank-Nicolson MCMC algorithm for infinite dimensional Bayesian inference

    NASA Astrophysics Data System (ADS)

    Hu, Zixi; Yao, Zhewei; Li, Jinglai

    2017-03-01

    Many scientific and engineering problems require performing Bayesian inference for unknowns of infinite dimension. In such problems, many standard Markov chain Monte Carlo (MCMC) algorithms become arbitrarily slow under mesh refinement, which is referred to as being dimension dependent. To this end, a family of dimension-independent MCMC algorithms, known as the preconditioned Crank-Nicolson (pCN) methods, was proposed to sample the infinite-dimensional parameters. In this work we develop an adaptive version of the pCN algorithm in which the covariance operator of the proposal distribution is adjusted based on the sampling history to improve simulation efficiency. We show that the proposed algorithm satisfies an important ergodicity condition under some mild assumptions. Finally, we provide numerical examples to demonstrate the performance of the proposed method.
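
    A single pCN step is short enough to show in full. The sketch below assumes a toy Gaussian prior and misfit; the adaptive covariance adjustment proposed in the paper is omitted.

        import numpy as np

        def pcn_step(x, phi, sample_prior, beta=0.2, rng=None):
            """One pCN Metropolis step for prior N(0, C) and misfit functional phi."""
            rng = rng or np.random.default_rng()
            xi = sample_prior()                            # xi ~ N(0, C)
            prop = np.sqrt(1.0 - beta**2) * x + beta * xi  # pCN proposal
            # The acceptance ratio involves only the misfit phi, not the prior;
            # this is what keeps the method well defined in infinite dimensions.
            if np.log(rng.random()) < phi(x) - phi(prop):
                return prop
            return x

        rng = np.random.default_rng(0)
        phi = lambda x: 0.5 * np.sum((x - 1.0) ** 2)       # toy Gaussian misfit
        x, chain = np.zeros(50), []
        for _ in range(5000):
            x = pcn_step(x, phi, lambda: rng.standard_normal(50), beta=0.2, rng=rng)
            chain.append(x.mean())
        print("posterior mean estimate:", np.mean(chain[1000:]))  # approx 0.5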

  7. Cortical Feedback Regulates Feedforward Retinogeniculate Refinement

    PubMed Central

    Thompson, Andrew D; Picard, Nathalie; Min, Lia; Fagiolini, Michela; Chen, Chinfei

    2016-01-01

    According to the prevailing view of neural development, sensory pathways develop sequentially in a feedforward manner, whereby each local microcircuit refines and stabilizes before directing the wiring of its downstream target. In the visual system, retinal circuits are thought to mature first and direct refinement in the thalamus, after which cortical circuits refine with experience-dependent plasticity. In contrast, we now show that feedback from cortex to thalamus critically regulates refinement of the retinogeniculate projection during a discrete window in development, beginning at postnatal day 20 in mice. Disrupting cortical activity during this window, pharmacologically or chemogenetically, increases the number of retinal ganglion cells innervating each thalamic relay neuron. These results suggest that primary sensory structures develop through the concurrent and interdependent remodeling of subcortical and cortical circuits in response to sensory experience, rather than through a simple feedforward process. Our findings also highlight an unexpected function for the corticothalamic projection. PMID:27545712

  8. A Partitioning Algorithm for Block-Diagonal Matrices With Overlap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guy Antoine Atenekeng Kahou; Laura Grigori; Masha Sosonkina

    2008-02-02

    We present a graph partitioning algorithm that aims at partitioning a sparse matrix into a block-diagonal form such that any two consecutive blocks overlap. We denote this form of the matrix as the overlapped block-diagonal matrix. The partitioned matrix is suitable for applying the explicit formulation of the Multiplicative Schwarz preconditioner (EFMS) described in [3]. The graph partitioning algorithm partitions the graph of the input matrix into K partitions such that every partition Ω_i has at most two neighbors, Ω_(i-1) and Ω_(i+1). First, an ordering algorithm that reduces the matrix profile, such as the reverse Cuthill-McKee algorithm, is performed. An initial overlapped block-diagonal partition is obtained from the profile of the matrix. An iterative strategy is then used to further refine the partitioning by allowing nodes to be transferred between neighboring partitions. Experiments are performed on matrices arising from real-world applications to show the feasibility and usefulness of this approach.
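
    The first stage described above is easy to sketch with SciPy's reverse Cuthill-McKee routine: reorder to reduce the profile, then cut the ordering into K contiguous blocks. The overlap construction and the iterative node-transfer refinement are left out.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.csgraph import reverse_cuthill_mckee

        def initial_partition(A, K):
            """Profile-reducing ordering, cut into K contiguous blocks of rows."""
            perm = reverse_cuthill_mckee(sp.csr_matrix(A), symmetric_mode=True)
            return np.array_split(perm, K)

        A = sp.random(60, 60, density=0.05, random_state=0, format="csr")
        A = A + A.T + sp.eye(60)                    # symmetrize; nonzero diagonal
        for k, rows in enumerate(initial_partition(A, K=4)):
            print(f"block {k}: {len(rows)} rows")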

  9. 40 CFR 80.1344 - What provisions are available to a non-small refiner that acquires one or more of a small refiner...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1344 What provisions are... a small refiner approved under § 80.1340, the small refiner provisions of the gasoline benzene...

  10. 40 CFR 80.1344 - What provisions are available to a non-small refiner that acquires one or more of a small refiner...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1344 What provisions are... a small refiner approved under § 80.1340, the small refiner provisions of the gasoline benzene...

  11. 40 CFR 80.1344 - What provisions are available to a non-small refiner that acquires one or more of a small refiner...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1344 What provisions are... a small refiner approved under § 80.1340, the small refiner provisions of the gasoline benzene...

  12. Lung Segmentation Refinement based on Optimal Surface Finding Utilizing a Hybrid Desktop/Virtual Reality User Interface

    PubMed Central

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation on 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The performed experiments exhibited significant increase in performance in terms of mean absolute surface distance errors (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation

  13. Lung segmentation refinement based on optimal surface finding utilizing a hybrid desktop/virtual reality user interface.

    PubMed

    Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R

    2013-01-01

    Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The performed experiments exhibited significant increase in performance in terms of mean absolute surface distance errors (2.54±0.75 mm prior to refinement vs. 1.11±0.43 mm post-refinement, p≪0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction was about 2 min per case. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the

  14. Deterministic implementations of single-photon multi-qubit Deutsch-Jozsa algorithms with linear optics

    NASA Astrophysics Data System (ADS)

    Wei, Hai-Rui; Liu, Ji-Zhen

    2017-02-01

    It is very important to seek an efficient and robust quantum algorithm demanding fewer quantum resources. We propose one-photon three-qubit original and refined Deutsch-Jozsa algorithms with polarization and two linear-momentum degrees of freedom (DOFs). Our schemes are constructed solely from linear optics. Compared to the traditional ones with one DOF, our schemes are more economical and robust because the number of necessary photons is reduced from three to one. Our linear-optic schemes work in a deterministic way and are feasible with current experimental technology.

  15. Deterministic implementations of single-photon multi-qubit Deutsch–Jozsa algorithms with linear optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Hai-Rui, E-mail: hrwei@ustb.edu.cn; Liu, Ji-Zhen

    2017-02-15

    It is very important to seek an efficient and robust quantum algorithm demanding fewer quantum resources. We propose one-photon three-qubit original and refined Deutsch–Jozsa algorithms with polarization and two linear-momentum degrees of freedom (DOFs). Our schemes are constructed solely from linear optics. Compared to the traditional ones with one DOF, our schemes are more economical and robust because the number of necessary photons is reduced from three to one. Our linear-optic schemes work in a deterministic way and are feasible with current experimental technology.

  16. Joint refinement model for the spin resolved one-electron reduced density matrix of YTiO3 using magnetic structure factors and magnetic Compton profiles data.

    PubMed

    Gueddida, Saber; Yan, Zeyin; Kibalin, Iurii; Voufack, Ariste Bolivard; Claiser, Nicolas; Souhassou, Mohamed; Lecomte, Claude; Gillon, Béatrice; Gillet, Jean-Michel

    2018-04-28

    In this paper, we propose a simple cluster model with limited basis sets to reproduce the unpaired electron distributions in a YTiO3 ferromagnetic crystal. The spin-resolved one-electron reduced density matrix is reconstructed simultaneously from theoretical magnetic structure factors and directional magnetic Compton profiles using our joint refinement algorithm. This algorithm is guided by the rescaling of basis functions and the adjustment of the spin population matrix. The resulting spin electron density in both position and momentum spaces from the joint refinement model is in agreement with theoretical and experimental results. The benefits brought by magnetic Compton profiles to the entire spin density matrix are illustrated. We studied the magnetic properties of the YTiO3 crystal along the Ti-O1-Ti bonding. We found that the basis functions are mostly rescaled by means of the magnetic Compton profiles, while the molecular occupation numbers are mainly modified by the magnetic structure factors.

  17. CERA: Refiners can cope with CAA requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-02-17

    This paper reports on a study conducted for the Department of Energy which predicts that initial reformulated gasoline requirements in 1995 won't pose significant technical problems for U.S. refiners, although nearly all refiners will have to make added investments. Cambridge Energy Research Associates (CERA) prepared the study for DOE on critical issues affecting refiners and U.S. product supplies in the 1990s, particularly the effects of the 1990 Clean Air Act (CAA) amendments.

  18. Bauxite Mining and Alumina Refining

    PubMed Central

    Frisch, Neale; Olney, David

    2014-01-01

    Objective: To describe bauxite mining and alumina refining processes and to outline the relevant physical, chemical, biological, ergonomic, and psychosocial health risks. Methods: Review article. Results: The most important risks relate to noise, ergonomics, trauma, and caustic soda splashes of the skin/eyes. Other risks of note relate to fatigue, heat, and solar ultraviolet and for some operations tropical diseases, venomous/dangerous animals, and remote locations. Exposures to bauxite dust, alumina dust, and caustic mist in contemporary best-practice bauxite mining and alumina refining operations have not been demonstrated to be associated with clinically significant decrements in lung function. Exposures to bauxite dust and alumina dust at such operations are also not associated with the incidence of cancer. Conclusions: A range of occupational health risks in bauxite mining and alumina refining require the maintenance of effective control measures. PMID:24806720

  19. Implementation of an algorithm for cylindrical object identification using range data

    NASA Technical Reports Server (NTRS)

    Bozeman, Sylvia T.; Martin, Benjamin J.

    1989-01-01

    One of the problems in 3-D object identification and localization is addressed. In robotic and navigation applications the vision system must be able to distinguish cylindrical or spherical objects as well as those of other geometric shapes. An algorithm was developed to identify cylindrical objects in an image when range data is used. The algorithm incorporates the Hough transform for line detection using edge points which emerge from a Sobel mask. Slices of the data are examined to locate arcs of circles using the normal equations of an over-determined linear system. Current efforts are devoted to testing the computer implementation of the algorithm. Refinements are expected to continue in order to accommodate cylinders in various positions. A technique is sought which is robust in the presence of noise and partial occlusions.
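
    The slice-wise circle detection rests on a least-squares fit via an over-determined linear system. A minimal Python sketch (the Sobel/Hough preprocessing is not shown; np.linalg.lstsq solves the same least-squares problem the normal equations would):

        import numpy as np

        def fit_circle(x, y):
            """Fit x^2 + y^2 = 2ax + 2by + c in the least-squares sense."""
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            rhs = x**2 + y**2
            (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
            r = np.sqrt(c + a**2 + b**2)       # recover the radius from c
            return a, b, r

        # Noisy points on an arc of radius 2 centered at (1, -1), as one slice
        # through a cylinder might provide.
        rng = np.random.default_rng(0)
        th = rng.uniform(0, np.pi, 80)
        x = 1 + 2 * np.cos(th) + 0.01 * rng.standard_normal(80)
        y = -1 + 2 * np.sin(th) + 0.01 * rng.standard_normal(80)
        print(fit_circle(x, y))                # approx (1.0, -1.0, 2.0)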

  20. Assessment, Validation, and Refinement of the Atmospheric Correction Algorithm for the Ocean Color Sensors. Chapter 19

    NASA Technical Reports Server (NTRS)

    Wang, Menghua

    2003-01-01

    The primary focus of this proposed research is the evaluation and development of the atmospheric correction algorithm and satellite sensor calibration and characterization. It is well known that the atmospheric correction, which removes more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the derived water-leaving signals from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address issues of (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of this research is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has gone into (a) understanding and correcting the artifacts that appeared in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color remote sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) with the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Colour Coordinating Group (IOCCG) atmospheric correction working group. In this report, I briefly present and discuss these and some other research activities.

  1. Modeling NIF experimental designs with adaptive mesh refinement and Lagrangian hydrodynamics

    NASA Astrophysics Data System (ADS)

    Koniges, A. E.; Anderson, R. W.; Wang, P.; Gunney, B. T. N.; Becker, R.; Eder, D. C.; MacGowan, B. J.; Schneider, M. B.

    2006-06-01

    Incorporation of adaptive mesh refinement (AMR) into Lagrangian hydrodynamics algorithms allows for the creation of a highly powerful simulation tool effective for complex target designs with three-dimensional structure. We are developing an advanced modeling tool that includes AMR and traditional arbitrary Lagrangian-Eulerian (ALE) techniques. Our goal is the accurate prediction of vaporization, disintegration and fragmentation in National Ignition Facility (NIF) experimental target elements. Although our focus is on minimizing the generation of shrapnel in target designs and protecting the optics, the general techniques are applicable to modern advanced targets that include three-dimensional effects such as those associated with capsule fill tubes. Several essential computations in ordinary radiation hydrodynamics need to be redesigned in order to allow for AMR to work well with ALE, including algorithms associated with radiation transport. Additionally, for our goal of predicting fragmentation, we include elastic/plastic flow into our computations. We discuss the integration of these effects into a new ALE-AMR simulation code. Applications of this newly developed modeling tool as well as traditional ALE simulations in two and three dimensions are applied to NIF early-light target designs.

  2. Firing of pulverized solvent refined coal

    DOEpatents

    Lennon, Dennis R.; Snedden, Richard B.; Foster, Edward P.; Bellas, George T.

    1990-05-15

    A burner for the firing of pulverized solvent refined coal is constructed and operated such that the solvent refined coal can be fired successfully without any performance limitations and without coking of the solvent refined coal on the burner components. The burner is provided with a tangential inlet of primary air and pulverized fuel, a vaned diffusion swirler for the mixture of primary air and fuel, a center water-cooled conical diffuser shielding the incoming fuel from the heat radiation of the flame and deflecting the primary air and fuel stream into the secondary air, and a water-cooled annulus located between the primary and secondary air flows.

  3. Apparent consumption of refined sugar in Australia (1938-2011).

    PubMed

    McNeill, T J; Shrapnel, W S

    2015-11-01

    In Australia, the Australian Bureau of Statistics discontinued collection of apparent consumption data for refined sugars in 1998/1999. The objectives of this study were to update this data series to determine whether it is a reliable data series that reflects consumption of refined sugars, defined as sucrose in the forms of refined or raw sugar or liquified sugars manufactured for human consumption. The study used the same methodology as that used by the Australian Bureau of Statistics to derive a refined sugars consumption estimate each year until the collection was discontinued. Sales by Australian refiners, refined sugars imports and the net balance of refined sugars contained in foods imported into, and exported from, Australia were used to calculate total refined sugars use for each year up to 2011. Per capita consumption figures were then derived. During the period 1938-2011, apparent consumption of refined sugars in Australia fell 13.1% from 48.3 to 42.0 kg per head (R² = 0.74). Between the 1950s and the 1970s, apparent consumption was relatively stable at about 50 kg per person. In the shorter period 1970-2011, refined sugars consumption fell 16.5% from 50.3 to 42.0 kg per head, though greater variability was evident (R² = 0.53). An alternative data set showed greater volatility with no trend up or down. The limited variability of the extended apparent consumption series and its consistency with recent national dietary survey data and sugar-sweetened beverage sales data indicate that it is a reliable data set that reflects declining intake of refined sugars in Australia.

  4. Generation of Non-Homogeneous Poisson Processes by Thinning: Programming Considerations and Comparison with Competing Algorithms.

    DTIC Science & Technology

    1978-12-01

    … Poisson processes. The method is valid for Poisson processes with any given intensity function. The basic thinning algorithm is modified to exploit several refinements which reduce computer execution time by approximately one-third. The basic and modified thinning programs are compared with the Poisson decomposition and gap-statistics algorithm, which is easily implemented for Poisson processes with intensity functions of the form exp(a₀ + a₁t + a₂t²). The thinning programs are competitive in both execution …
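
    A minimal Python sketch of the basic (unmodified) thinning idea may help. The log-quadratic intensity mirrors the family mentioned above, while thin_nhpp and the chosen coefficients are illustrative assumptions rather than the report's implementation.

        import math
        import random

        def thin_nhpp(intensity, lam_max, t_end, seed=0):
            """Lewis-Shedler thinning: simulate a homogeneous Poisson process
            at the bounding rate lam_max, then accept a candidate time t with
            probability intensity(t) / lam_max."""
            rng = random.Random(seed)
            t, events = 0.0, []
            while True:
                t += rng.expovariate(lam_max)      # next candidate arrival
                if t > t_end:
                    return events
                if rng.random() <= intensity(t) / lam_max:
                    events.append(t)               # accepted NHPP event

        # The report's log-quadratic intensity family: exp(a0 + a1*t + a2*t^2).
        a0, a1, a2 = 0.5, 0.1, -0.01
        lam = lambda t: math.exp(a0 + a1 * t + a2 * t * t)
        peak = math.exp(a0 - a1 * a1 / (4 * a2))   # maximum at t = -a1/(2*a2)
        events = thin_nhpp(lam, lam_max=peak, t_end=10.0)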

  5. Contextual Refinement of Regulatory Targets Reveals Effects on Breast Cancer Prognosis of the Regulome

    PubMed Central

    Andrews, Erik; Wang, Yue; Xia, Tian; Cheng, Wenqing; Cheng, Chao

    2017-01-01

    Gene expression regulators, such as transcription factors (TFs) and microRNAs (miRNAs), have varying regulatory targets based on the tissue and physiological state (context) within which they are expressed. While regulator-characterizing experiments have inferred the target genes of many regulators across many contexts, methods for transferring regulator target genes across contexts are lacking. Further, regulator target gene lists frequently are not curated or have permissive inclusion criteria, impairing their use. Here, we present a method called iterative Contextual Transcriptional Activity Inference of Regulators (icTAIR) to resolve these issues. icTAIR takes a regulator’s previously-identified target gene list and combines it with gene expression data from a context, quantifying that regulator’s activity for that context. It then calculates the correlation between each listed target gene’s expression and the quantitative score of regulatory activity, removes the uncorrelated genes from the list, and iterates the process until it derives a stable list of refined target genes. To validate and demonstrate icTAIR’s power, we use it to refine the MSigDB c3 database of TF, miRNA and unclassified motif target gene lists for breast cancer. We then use its output for survival analysis with clinicopathological multivariable adjustment in 7 independent breast cancer datasets covering 3,430 patients. We uncover many novel prognostic regulators that were obscured prior to refinement, in particular NFY, and offer a detailed look at the composition and relationships among the breast cancer prognostic regulome. We anticipate icTAIR will be of general use in contextually refining regulator target genes for discoveries across many contexts. The icTAIR algorithm can be downloaded from https://github.com/icTAIR. PMID:28103241
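
    A schematic of the iterate/correlate/prune loop follows, under loudly labeled assumptions: the activity score is taken here as the mean target expression and the 0.3 correlation cutoff is arbitrary, whereas the published icTAIR derives activity differently.

        import numpy as np

        def ictair_sketch(expr, targets, corr_threshold=0.3, max_iter=50):
            """expr: dict mapping gene -> 1-D expression array over samples.
            targets: initial (permissive) target list for one regulator.
            Schematic only: the activity score is a simple mean of target
            expression, standing in for the published activity inference."""
            targets = [g for g in targets if g in expr]
            for _ in range(max_iter):
                if not targets:
                    return targets
                activity = np.mean([expr[g] for g in targets], axis=0)
                kept = [g for g in targets
                        if abs(np.corrcoef(expr[g], activity)[0, 1]) >= corr_threshold]
                if kept == targets:            # stable list -> converged
                    return targets
                targets = kept
            return targets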

  6. A Voxel-Based Filtering Algorithm for Mobile LiDAR Data

    NASA Astrophysics Data System (ADS)

    Qin, H.; Guan, G.; Yu, Y.; Zhong, L.

    2018-04-01

    This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points are partitioned, in the xy-plane, into a set of two-dimensional (2-D) blocks of a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward-growing process is performed to roughly separate terrain from non-terrain points using global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. The filtering algorithm is comprehensively discussed through analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
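
    A deliberately crude Python stand-in for the first (upward-growing) step is sketched below, assuming a flat 2-D grid instead of the paper's octree voxels; cell and h_thresh are illustrative parameters, and the curvature refinement of the second step is omitted.

        import numpy as np

        def ground_filter(points, cell=1.0, h_thresh=0.3):
            """points: (N, 3) array of x, y, z. A point is kept as terrain if
            it lies within h_thresh of the lowest point in its 2-D grid cell --
            a simplified stand-in for the paper's voxel-based upward growing."""
            ij = np.floor(points[:, :2] / cell).astype(int)
            ground = np.zeros(len(points), dtype=bool)
            for cell_id in np.unique(ij, axis=0):
                sel = np.all(ij == cell_id, axis=1)
                zmin = points[sel, 2].min()
                ground[sel] = points[sel, 2] <= zmin + h_thresh
            return ground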

  7. Accurate macromolecular crystallographic refinement: incorporation of the linear scaling, semiempirical quantum-mechanics program DivCon into the PHENIX refinement package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borbulevych, Oleg Y.; Plumley, Joshua A.; Martin, Roger I.

    2014-05-01

    Semiempirical quantum-chemical X-ray macromolecular refinement using the program DivCon integrated with PHENIX is described. Macromolecular crystallographic refinement relies on sometimes dubious stereochemical restraints and rudimentary energy functionals to ensure the correct geometry of the model of the macromolecule and any covalently bound ligand(s). The ligand stereochemical restraint file (CIF) requires a priori understanding of the ligand geometry within the active site, and creation of the CIF is often an error-prone process owing to the great variety of potential ligand chemistry and structure. Stereochemical restraints have been replaced with more robust functionals through the integration of the linear-scaling, semiempirical quantum-mechanics (SE-QM) program DivCon with the PHENIX X-ray refinement engine. The PHENIX/DivCon package has been thoroughly validated on a population of 50 protein–ligand Protein Data Bank (PDB) structures with a range of resolutions and chemistry. The PDB structures used for the validation were originally refined utilizing various refinement packages and were published within the past five years. PHENIX/DivCon does not utilize CIF(s), link restraints and other parameters for refinement and hence it does not make as many a priori assumptions about the model. Across the entire population, the method results in reasonable ligand geometries and low ligand strains, even when the original refinement exhibited difficulties, indicating that PHENIX/DivCon is applicable to both single-structure and high-throughput crystallography.

  8. Verification of fluid-structure-interaction algorithms through the method of manufactured solutions for actuator-line applications

    NASA Astrophysics Data System (ADS)

    Vijayakumar, Ganesh; Sprague, Michael

    2017-11-01

    Demonstrating expected convergence rates with spatial- and temporal-grid refinement is the ``gold standard'' of code and algorithm verification. However, the lack of analytical solutions and the difficulty of generating manufactured solutions present challenges for verifying codes for complex systems. The application of the method of manufactured solutions (MMS) for verification of coupled multi-physics phenomena like fluid-structure interaction (FSI) has only seen recent investigation. While many FSI algorithms for aeroelastic phenomena have focused on boundary-resolved CFD simulations, the actuator-line representation of the structure is widely used for FSI simulations in wind-energy research. In this work, we demonstrate the verification of an FSI algorithm using MMS for actuator-line CFD simulations with a simplified structural model. We use a manufactured solution for the fluid velocity field and the displacement of the spring-mass-damper (SMD) system. We demonstrate the convergence of both the fluid and structural solvers to second-order accuracy with grid and time-step refinement. This work was funded by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Wind Energy Technologies Office, under Contract No. DE-AC36-08-GO28308 with the National Renewable Energy Laboratory.
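
    The MMS workflow itself is easy to illustrate on a toy problem: manufacture u(x) = sin(x) for the Poisson equation, derive the forcing, and confirm second-order convergence of a finite-difference solver under grid refinement. This is a generic sketch of the technique, not the actuator-line FSI verification described above.

        import numpy as np

        def mms_error(n):
            """Solve -u'' = f on (0, pi) with u(0) = u(pi) = 0 by 2nd-order
            finite differences, where f is manufactured from the exact
            solution u(x) = sin(x), so f(x) = sin(x). Returns max error."""
            x = np.linspace(0, np.pi, n + 1)
            h = x[1] - x[0]
            f = np.sin(x[1:-1])
            A = (np.diag(2 * np.ones(n - 1))
                 - np.diag(np.ones(n - 2), 1)
                 - np.diag(np.ones(n - 2), -1)) / h**2
            u = np.linalg.solve(A, f)
            return np.max(np.abs(u - np.sin(x[1:-1])))

        errs = [mms_error(n) for n in (16, 32, 64)]
        rates = [np.log2(errs[i] / errs[i + 1]) for i in range(2)]  # ~2.0 expected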

  9. Reformulated Gasoline Market Affected Refiners Differently, 1995

    EIA Publications

    1996-01-01

    This article focuses on the costs of producing reformulated gasoline (RFG) as experienced by different types of refiners and on how these refiners fared this past summer, given the prices for RFG at the refinery gate.

  10. Dense soft tissue 3D reconstruction refined with super-pixel segmentation for robotic abdominal surgery.

    PubMed

    Penza, Veronica; Ortiz, Jesús; Mattos, Leonardo S; Forgione, Antonello; De Momi, Elena

    2016-02-01

    Single-incision laparoscopic surgery decreases postoperative infections, but introduces limitations in the surgeon's maneuverability and in the surgical field of view. This work aims at enhancing intra-operative surgical visualization by exploiting the 3D information about the surgical site. An interactive guidance system is proposed wherein the pose of preoperative tissue models is updated online. A critical process involves the intra-operative acquisition of tissue surfaces. It can be achieved using stereoscopic imaging and 3D reconstruction techniques. This work contributes to this process by proposing new methods for improved dense 3D reconstruction of soft tissues, which allows a more accurate deformation identification and facilitates the registration process. Two methods for soft tissue 3D reconstruction are proposed: Method 1 follows the traditional approach of the block matching algorithm. Method 2 performs a nonparametric modified census transform to be more robust to illumination variation. The simple linear iterative clustering (SLIC) super-pixel algorithm is exploited for disparity refinement by filling holes in the disparity images. The methods were validated using two video datasets from the Hamlyn Centre, achieving an accuracy of 2.95 and 1.66 mm, respectively. A comparison with ground-truth data demonstrated the disparity refinement procedure: (1) increases the number of reconstructed points by up to 43 % and (2) does not affect the accuracy of the 3D reconstructions significantly. Both methods give results that compare favorably with the state-of-the-art methods. The computational time constrains their applicability in real time, but it can be greatly improved by using a GPU implementation.

  11. Characterization and Evaluation of Re-Refined Engine Lubricating Oil.

    DTIC Science & Technology

    1981-12-01

    … performance of re-refined and virgin oils and to investigate the potential "substantial equivalence" of re-refined and virgin lubricating oils. … (1) Engine deposits derived from virgin and re-refined engine oils were compared; (2) the effects of virgin and re-refined oils on engine blowby composition and engine deposit generation were determined using a spark-ignition engine; and (3) virgin and re-refined basestock production …

  12. Molecular dynamics force-field refinement against quasi-elastic neutron scattering data

    DOE PAGES

    Borreguero Calvo, Jose M.; Lynch, Vickie E.

    2015-11-23

    Quasi-elastic neutron scattering (QENS) is one of the experimental techniques of choice for probing the dynamics at length and time scales that are also in the realm of full-atom molecular dynamics (MD) simulations. This overlap enables extension of current fitting methods that use time-independent equilibrium measurements to new methods fitting against dynamics data. We present an algorithm that fits simulation-derived incoherent dynamical structure factors against QENS data probing the diffusive dynamics of the system. We showcase the difficulties inherent to this type of fitting problem, namely, the disparity between simulation and experiment environment, as well as limitations in the simulation due to incomplete sampling of phase space. We discuss a methodology to overcome these difficulties and apply it to a set of full-atom MD simulations for the purpose of refining the force-field parameter governing the activation energy of methyl rotation in the octa-methyl polyhedral oligomeric silsesquioxane molecule. Our optimal simulated activation energy agrees with the experimentally derived value up to a 5% difference, well within experimental error. We believe the method will find applicability to other types of diffusive motions and other representation of the systems such as coarse-grain models where empirical fitting is essential. In addition, the refinement method can be extended to the coherent dynamic structure factor with no additional effort.

  13. [Research on non-rigid registration of multi-modal medical image based on Demons algorithm].

    PubMed

    Hao, Peibo; Chen, Zhen; Jiang, Shaofeng; Wang, Yang

    2014-02-01

    Non-rigid medical image registration is a popular subject in medical image research and has important clinical value. In this paper we propose an improved Demons algorithm that combines a gray-level conservation model with a local structure tensor conservation model to construct a new energy function for the multi-modal registration problem. We then applied the L-BFGS algorithm to optimize the energy function and solve the complex three-dimensional data optimization problem. Finally, we used a multi-scale hierarchical refinement approach to handle large-deformation registration. The experimental results showed that the proposed algorithm performed well for large-deformation and multi-modal three-dimensional medical image registration.
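
    A toy illustration of coarse-to-fine refinement with L-BFGS is sketched below, assuming a translation-only warp and a plain SSD energy in place of the paper's gray-conservation plus structure-tensor energy.

        import numpy as np
        from scipy import ndimage, optimize

        def register_multiscale(fixed, moving, levels=3):
            """Coarse-to-fine refinement: estimate a global 2-D translation
            at each pyramid level with L-BFGS-B (numerical gradients), using
            a sum-of-squared-differences energy as a toy stand-in."""
            shift = np.zeros(2)
            for lvl in reversed(range(levels)):          # coarse -> fine
                scale = 2 ** lvl
                f = fixed[::scale, ::scale].astype(float)
                m = moving[::scale, ::scale].astype(float)

                def energy(s):
                    return np.mean((ndimage.shift(m, s) - f) ** 2)

                res = optimize.minimize(energy, shift / scale, method='L-BFGS-B')
                shift = res.x * scale                    # refine at next level
            return shift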

  14. REFMAC5 for the refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murshudov, Garib N., E-mail: garib@ysbl.york.ac.uk; Skubák, Pavol; Lebedev, Andrey A.

    The general principles behind the macromolecular crystal structure refinement program REFMAC5 are described. This paper describes various components of the macromolecular crystallographic refinement program REFMAC5, which is distributed as part of the CCP4 suite. REFMAC5 utilizes different likelihood functions depending on the diffraction data employed (amplitudes or intensities), the presence of twinning and the availability of SAD/SIRAS experimental diffraction data. To ensure chemical and structural integrity of the refined model, REFMAC5 offers several classes of restraints and choices of model parameterization. Reliable models at resolutions at least as low as 4 Å can be achieved thanks to low-resolution refinement tools such as secondary-structure restraints, restraints to known homologous structures, automatic global and local NCS restraints, ‘jelly-body’ restraints and the use of novel long-range restraints on atomic displacement parameters (ADPs) based on the Kullback–Leibler divergence. REFMAC5 additionally offers TLS parameterization and, when high-resolution data are available, fast refinement of anisotropic ADPs. Refinement in the presence of twinning is performed in a fully automated fashion. REFMAC5 is a flexible and highly optimized refinement package that is ideally suited for refinement across the entire resolution spectrum encountered in macromolecular crystallography.

  15. Hemodynamic and oxygen transport patterns for outcome prediction, therapeutic goals, and clinical algorithms to improve outcome. Feasibility of artificial intelligence to customize algorithms.

    PubMed

    Shoemaker, W C; Patil, R; Appel, P L; Kram, H B

    1992-11-01

    A generalized decision tree or clinical algorithm for treatment of high-risk elective surgical patients was developed from a physiologic model based on empirical data. First, a large data bank was used to do the following: (1) describe temporal hemodynamic and oxygen transport patterns that interrelate cardiac, pulmonary, and tissue perfusion functions in survivors and nonsurvivors; (2) define optimal therapeutic goals based on the supranormal oxygen transport values of high-risk postoperative survivors; (3) compare the relative effectiveness of alternative therapies in a wide variety of clinical and physiologic conditions; and (4) develop criteria for titration of therapy to the endpoints of the supranormal optimal goals using cardiac index (CI), oxygen delivery (DO2), and oxygen consumption (VO2) as proxy outcome measures. Second, a general purpose algorithm was generated from these data and tested in preoperatively randomized clinical trials of high-risk surgical patients. Improved outcome was demonstrated with this generalized algorithm. The concept that the supranormal values represent compensations that have survival value has been corroborated by several other groups. We now propose a unique approach to refine the generalized algorithm to develop customized algorithms and individualized decision analysis for each patient's unique problems. The present article describes a preliminary evaluation of the feasibility of artificial intelligence techniques to accomplish individualized algorithms that may further improve patient care and outcome.

  16. An interactive medical image segmentation framework using iterative refinement.

    PubMed

    Kalshetti, Pratik; Bundele, Manas; Rahangdale, Parag; Jangra, Dinesh; Chattopadhyay, Chiranjoy; Harit, Gaurav; Elhence, Abhay

    2017-04-01

    Segmentation is often performed on medical images to identify diseases in clinical evaluation, and hence has become one of the major research areas. Conventional image segmentation techniques are unable to provide satisfactory results for medical images, which contain irregularities and need to be pre-processed before segmentation. To obtain the most suitable method for medical image segmentation, we propose MIST (Medical Image Segmentation Tool), a two-stage algorithm. The first stage automatically generates a binary marker image of the region of interest using mathematical morphology. This marker serves as the mask image for the second stage, which uses GrabCut to yield an efficient segmented result. The obtained result can be further refined by user interaction through the proposed Graphical User Interface (GUI). Experimental results show that the proposed method is accurate and provides satisfactory segmentation results with minimum user interaction on medical as well as natural images. Copyright © 2017 Elsevier Ltd. All rights reserved.
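
    A hedged OpenCV sketch of the two-stage idea follows, assuming Otsu thresholding plus morphological opening for the stage-1 marker (MIST's actual morphology pipeline differs) and seeding cv2.grabCut with that marker.

        import cv2
        import numpy as np

        def mist_sketch(image):
            """Stage 1: a morphological marker for the region of interest
            (illustrative Otsu + opening). Stage 2: GrabCut seeded with it."""
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            _, marker = cv2.threshold(gray, 0, 255,
                                      cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            kernel = np.ones((9, 9), np.uint8)
            marker = cv2.morphologyEx(marker, cv2.MORPH_OPEN, kernel)
            core = cv2.erode(marker, kernel)          # confident foreground

            mask = np.full(gray.shape, cv2.GC_PR_BGD, np.uint8)
            mask[marker > 0] = cv2.GC_PR_FGD
            mask[core > 0] = cv2.GC_FGD
            bgd = np.zeros((1, 65), np.float64)
            fgd = np.zeros((1, 65), np.float64)
            cv2.grabCut(image, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
            return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)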

  17. On macromolecular refinement at subatomic resolution with interatomic scatterers.

    PubMed

    Afonine, Pavel V; Grosse-Kunstleve, Ralf W; Adams, Paul D; Lunin, Vladimir Y; Urzhumtsev, Alexandre

    2007-11-01

    A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than approximately 1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.

  18. Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity

    USGS Publications Warehouse

    Louis, S.J.; Raines, G.L.

    2003-01-01

    We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks; the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool for calibrating cellular automata in this application. Experience gained during the calibration of this cellular automaton suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.
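
    A minimal sketch of such a calibration loop is given below; run_ca_model is a hypothetical user-supplied simulator, and the population size, crossover, and mutation settings are illustrative, not those of the paper.

        import random

        def calibrate(observed, run_ca_model, n_params=6, pop=40, gens=100, seed=1):
            """Truncation-selection GA over transition-rule parameters in
            [0, 1]. run_ca_model(params) -> predicted activity values is a
            hypothetical simulator; fitness is negative squared error."""
            rng = random.Random(seed)

            def fitness(c):
                pred = run_ca_model(c)
                return -sum((p - o) ** 2 for p, o in zip(pred, observed))

            P = [[rng.uniform(0, 1) for _ in range(n_params)] for _ in range(pop)]
            for _ in range(gens):
                P.sort(key=fitness, reverse=True)
                P = P[: pop // 2]                        # keep the fitter half
                while len(P) < pop:
                    a, b = rng.sample(P[: pop // 2], 2)  # parents from survivors
                    child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
                    i = rng.randrange(n_params)          # one-point mutation
                    child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
                    P.append(child)
            return max(P, key=fitness)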

  19. Accurate macromolecular crystallographic refinement: incorporation of the linear scaling, semiempirical quantum-mechanics program DivCon into the PHENIX refinement package.

    PubMed

    Borbulevych, Oleg Y; Plumley, Joshua A; Martin, Roger I; Merz, Kenneth M; Westerhoff, Lance M

    2014-05-01

    Macromolecular crystallographic refinement relies on sometimes dubious stereochemical restraints and rudimentary energy functionals to ensure the correct geometry of the model of the macromolecule and any covalently bound ligand(s). The ligand stereochemical restraint file (CIF) requires a priori understanding of the ligand geometry within the active site, and creation of the CIF is often an error-prone process owing to the great variety of potential ligand chemistry and structure. Stereochemical restraints have been replaced with more robust functionals through the integration of the linear-scaling, semiempirical quantum-mechanics (SE-QM) program DivCon with the PHENIX X-ray refinement engine. The PHENIX/DivCon package has been thoroughly validated on a population of 50 protein-ligand Protein Data Bank (PDB) structures with a range of resolutions and chemistry. The PDB structures used for the validation were originally refined utilizing various refinement packages and were published within the past five years. PHENIX/DivCon does not utilize CIF(s), link restraints and other parameters for refinement and hence it does not make as many a priori assumptions about the model. Across the entire population, the method results in reasonable ligand geometries and low ligand strains, even when the original refinement exhibited difficulties, indicating that PHENIX/DivCon is applicable to both single-structure and high-throughput crystallography.

  20. Finite element mesh refinement criteria for stress analysis

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1990-01-01

    This paper discusses procedures for finite-element mesh selection and refinement. The objective is to improve accuracy. The procedures are based on (1) the minimization of the stiffness matrix trace (optimizing node location); (2) the use of h-version refinement (rezoning, element size reduction, and increasing the number of elements); and (3) the use of p-version refinement (increasing the order of polynomial approximation of the elements). A step-by-step procedure of mesh selection, improvement, and refinement is presented. The criteria for 'goodness' of a mesh are based on strain energy, displacement, and stress values at selected critical points of a structure. An analysis of an aircraft lug problem is presented as an example.

  1. Multidataset Refinement, Resonant Diffraction, and Magnetic Structures

    PubMed Central

    Attfield, J. Paul

    2004-01-01

    The scope of Rietveld and other powder diffraction refinements continues to expand, driven by improvements in instrumentation, methodology and software. This will be illustrated by examples from our research in recent years. Multidataset refinement is now commonplace; the datasets may be from different detectors, e.g., in a time-of-flight experiment, or from separate experiments, such as at several x-ray energies giving resonant information. The complementary use of x rays and neutrons is exemplified by a recent combined refinement of the monoclinic superstructure of magnetite, Fe3O4, below the 122 K Verwey transition, which reveals evidence for Fe2+/Fe3+ charge ordering. Powder neutron diffraction data continue to be used for the solution and Rietveld refinement of magnetic structures. Time-of-flight instruments on cold neutron sources can produce data that have a high intensity and good resolution at high d-spacings. Such profiles have been used to study incommensurate magnetic structures such as FeAsO4 and β–CrPO4. A multiphase, multidataset refinement of the phase-separated perovskite (Pr0.35Y0.07Th0.04Ca0.04Sr0.5)MnO3 has been used to fit three components with different crystal and magnetic structures at low temperatures. PMID:27366599

  2. Structure refinement of membrane proteins via molecular dynamics simulations.

    PubMed

    Dutagaci, Bercem; Heo, Lim; Feig, Michael

    2018-07-01

    A refinement protocol based on physics-based techniques established for water-soluble proteins is tested for membrane protein structures. Initial structures were generated by homology modeling and sampled via molecular dynamics simulations in explicit lipid bilayer and aqueous solvent systems. Snapshots from the simulations were selected based on scoring with either knowledge-based or implicit membrane-based scoring functions and averaged to obtain refined models. The protocol resulted in consistent and significant refinement of the membrane protein structures, similar to the performance of refinement methods for soluble proteins. Refinement success was similar between sampling in the presence of lipid bilayers and in aqueous solvent, although the presence of lipid bilayers may benefit the refinement of lipid-facing residues. Scoring with knowledge-based functions (DFIRE and RWplus) was found to be as good as scoring using implicit membrane-based scoring functions, suggesting that differences in internal packing are more important than orientations relative to the membrane during the refinement of membrane protein homology models. © 2018 Wiley Periodicals, Inc.

  3. On macromolecular refinement at subatomic resolution with interatomic scatterers

    PubMed Central

    Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.; Lunin, Vladimir Y.; Urzhumtsev, Alexandre

    2007-01-01

    A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package. PMID:18007035

  4. Control algorithms for aerobraking in the Martian atmosphere

    NASA Technical Reports Server (NTRS)

    Ward, Donald T.; Shipley, Buford W., Jr.

    1991-01-01

    The Analytic Predictor Corrector (APC) and Energy Controller (EC) atmospheric guidance concepts were adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. Changes are made to the APC to improve its robustness to density variations. These changes include adaptation of a new exit phase algorithm, an adaptive transition velocity to initiate the exit phase, refinement of the reference dynamic pressure calculation, and two improved density estimation techniques. The modified controller with the hybrid density estimation technique is called the Mars Hybrid Predictor Corrector (MHPC), while the modified controller with a polynomial density estimator is called the Mars Predictor Corrector (MPC). A Lyapunov Steepest Descent Controller (LSDC) is adapted to control the vehicle. The LSDC lacked robustness, so a Lyapunov tracking exit phase algorithm is developed to guide the vehicle along a reference trajectory. This algorithm, when using the hybrid density estimation technique to define the reference path, is called the Lyapunov Hybrid Tracking Controller (LHTC). With the polynomial density estimator used to define the reference trajectory, the algorithm is called the Lyapunov Tracking Controller (LTC). These four new controllers are tested using a six-degree-of-freedom computer simulation to evaluate their robustness. The MHPC, MPC, LHTC, and LTC show dramatic improvements in robustness over the APC and EC.

  5. Processing α-mercuric iodide by zone refining

    NASA Astrophysics Data System (ADS)

    Burger, A.; Morgan, S. H.; Henderson, D. O.; Biao, Y.; Zhang, K.; Silberman, E.; Nason, D.; van den Berg, L.; Ortale-Baccash, C.; Cross, E.

    1993-03-01

    An investigation is being conducted on zone refining α-mercuric iodide. Analytical studies using differential scanning calorimetry and anion chromatography indicate that impurities are accumulated mainly at the end where zone travel terminates. Early results indicate that single crystals can be readily grown from zone-refined material.

  6. Meshfree truncated hierarchical refinement for isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Atri, H. R.; Shojaee, S.

    2018-05-01

    In this paper the truncated hierarchical B-spline (THB-spline) is coupled with the reproducing kernel particle method (RKPM) to blend the advantages of isogeometric analysis and meshfree methods. Since, under certain conditions, the isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, the recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can be easily defined, providing an authentic meshfree approach to refining the model locally in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method provides efficient approximation schemes for numerical simulations and shows promising performance in the adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach for adaptive local refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.

  7. Adaptive h-refinement for reduced-order models

    DOE PAGES

    Carlberg, Kevin T.

    2014-11-05

    Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
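
    The splitting step is compact enough to sketch: given offline k-means cluster labels over the state variables, one basis vector is divided into disjoint-support children that together span the parent. The function name and the normalization choice below are assumptions, not the paper's code.

        import numpy as np

        def split_basis_vector(v, state_clusters):
            """'h-refinement' of one reduced-basis vector: split v into
            children with disjoint support, one per precomputed k-means
            cluster of the state variables."""
            children = []
            for c in np.unique(state_clusters):
                child = np.where(state_clusters == c, v, 0.0)
                if np.linalg.norm(child) > 0:
                    children.append(child / np.linalg.norm(child))
            return children    # spans v; enriches the reduced space

        # toy usage: a 6-dof state split into two clusters
        v = np.ones(6) / np.sqrt(6)
        labels = np.array([0, 0, 0, 1, 1, 1])
        kids = split_basis_vector(v, labels)   # two disjoint-support vectors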

  8. Development and validation of an algorithm for identifying urinary retention in a cohort of patients with epilepsy in a large US administrative claims database.

    PubMed

    Quinlan, Scott C; Cheng, Wendy Y; Ishihara, Lianna; Irizarry, Michael C; Holick, Crystal N; Duh, Mei Sheng

    2016-04-01

    The aim of this study was to develop and validate an insurance claims-based algorithm for identifying urinary retention (UR) in epilepsy patients receiving antiepileptic drugs to facilitate safety monitoring. Data from the HealthCore Integrated Research Database(SM) in 2008-2011 (retrospective) and 2012-2013 (prospective) were used to identify epilepsy patients with UR. During the retrospective phase, three algorithms identified potential UR: (i) UR diagnosis code with a catheterization procedure code; (ii) UR diagnosis code alone; or (iii) diagnosis with UR-related symptoms. Medical records for 50 randomly selected patients satisfying ≥1 algorithm were reviewed by urologists to ascertain UR status. Positive predictive value (PPV) and 95% confidence intervals (CI) were calculated for the three component algorithms and the overall algorithm (defined as satisfying ≥1 component algorithms). Algorithms were refined using urologist review notes. In the prospective phase, the UR algorithm was refined using medical records for an additional 150 cases. In the retrospective phase, the PPV of the overall algorithm was 72.0% (95%CI: 57.5-83.8%). Algorithm 3 performed poorly and was dropped. Algorithm 1 was unchanged; urinary incontinence and cystitis were added as exclusionary diagnoses to Algorithm 2. The PPV for the modified overall algorithm was 89.2% (74.6-97.0%). In the prospective phase, the PPV for the modified overall algorithm was 76.0% (68.4-82.6%). Upon adding overactive bladder, nocturia and urinary frequency as exclusionary diagnoses, the PPV for the final overall algorithm was 81.9% (73.7-88.4%). The current UR algorithm yielded a PPV > 80% and could be used for more accurate identification of UR among epilepsy patients in a large claims database. Copyright © 2016 John Wiley & Sons, Ltd.
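
    For reference, a PPV and its confidence interval are a short computation; 72.0% of the 50 reviewed charts corresponds to 36 confirmed cases. Note the paper's quoted interval (57.5-83.8%) appears to be an exact binomial CI, so the normal approximation below differs slightly.

        import math

        def ppv_with_ci(true_pos, n_flagged, z=1.96):
            """PPV of a claims-based case-finding algorithm with a
            normal-approximation 95% confidence interval."""
            p = true_pos / n_flagged
            half = z * math.sqrt(p * (1 - p) / n_flagged)
            return p, (max(0.0, p - half), min(1.0, p + half))

        print(ppv_with_ci(36, 50))   # ~ (0.72, (0.60, 0.84))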

  9. Evaluation of the Liberian Petroleum Refining Company operations: crude oil refining vs product importation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samuels, G.; Barron, W.F.; Barnes, R.W.

    1985-02-01

    This report is one of a series of project papers providing background information for an assessment of energy options for Liberia, West Africa. It presents information on a controversial recommendation of the energy assessment - that the only refinery in the country be closed and refined products be imported for a savings of approximately $20 million per year. The report reviews refinery operations, discusses a number of related issues, and presents a detailed analysis of the economics of the refinery operations as of 1982. This analysis corroborates the initial estimate of savings to be gained from importing all refined products. 1 reference, 24 tables.

  10. Analysis of Online Composite Mirror Descent Algorithm.

    PubMed

    Lei, Yunwen; Zhou, Ding-Xuan

    2017-03-01

    We study the convergence of the online composite mirror descent algorithm, which involves a mirror map to reflect the geometry of the data and a convex objective function consisting of a loss and a regularizer possibly inducing sparsity. Our error analysis provides convergence rates in terms of properties of the strongly convex differentiable mirror map and the objective function. For a class of objective functions with Hölder continuous gradients, the convergence rates of the excess (regularized) risk under polynomially decaying step sizes have the order [Formula: see text] after [Formula: see text] iterates. Our results improve the existing error analysis for the online composite mirror descent algorithm by avoiding averaging and removing boundedness assumptions, and they sharpen the existing convergence rates of the last iterate for online gradient descent without any boundedness assumptions. Our methodology mainly depends on a novel error decomposition in terms of an excess Bregman distance, refined analysis of self-bounding properties of the objective function, and the resulting one-step progress bounds.
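
    A concrete instance may help: with the Euclidean mirror map ψ(w) = ||w||²/2, a squared loss, and an ℓ1 regularizer, the composite mirror descent update reduces to soft-thresholding, and the last iterate is returned. The paper's analysis covers general strongly convex mirror maps and Hölder-continuous gradients; this sketch fixes one simple case under illustrative step-size and regularization settings.

        import numpy as np

        def soft_threshold(x, tau):
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def online_comd(stream, dim, lam=0.01, theta=0.5):
            """Online composite mirror descent with the Euclidean mirror map,
            squared loss, l1 regularizer, and polynomially decaying steps
            eta_t = t**(-theta); the regularizer enters through its prox
            (soft-thresholding), not through the gradient."""
            w = np.zeros(dim)
            for t, (x, y) in enumerate(stream, start=1):
                eta = t ** (-theta)
                grad = (w @ x - y) * x                       # loss gradient only
                w = soft_threshold(w - eta * grad, eta * lam)
            return w                                          # last iterate

        # toy stream: y = <w*, x> + noise with a sparse w*
        rng = np.random.default_rng(0)
        w_star = np.array([1.0, 0.0, -2.0, 0.0])
        stream = [(x, x @ w_star + 0.01 * rng.standard_normal())
                  for x in rng.standard_normal((200, 4))]
        w_hat = online_comd(stream, dim=4)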

  11. Principles of minimum cost refining for optimum linerboard strength

    Treesearch

    Thomas J. Urbanik; Jong Myoung Won

    2006-01-01

    The mechanical properties of paper at a single basis weight and a single targeted refining freeness level have traditionally been used to compare papers. Understanding the economics of corrugated fiberboard requires a more global characterization of the variation of mechanical properties and refining energy consumption with freeness. The cost of refining energy to...

  12. Real-space refinement in PHENIX for cryo-EM and crystallography

    DOE PAGES

    Afonine, Pavel V.; Poon, Billy K.; Read, Randy J.; ...

    2018-06-01

    This work describes the implementation of real-space refinement in the phenix.real_space_refine program from the PHENIX suite. The use of a simplified refinement target function enables very fast calculation, which in turn makes it possible to identify optimal data-restraint weights as part of routine refinements with little runtime cost. Refinement of atomic models against low-resolution data benefits from the inclusion of as much additional information as is available. In addition to standard restraints on covalent geometry, phenix.real_space_refine makes use of extra information such as secondary-structure and rotamer-specific restraints, as well as restraints or constraints on internal molecular symmetry. The re-refinement of 385 cryo-EM-derived models available in the Protein Data Bank at resolutions of 6 Å or better shows significant improvement of the models and of the fit of these models to the target maps.

  13. Real-space refinement in PHENIX for cryo-EM and crystallography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afonine, Pavel V.; Poon, Billy K.; Read, Randy J.

    This work describes the implementation of real-space refinement in the phenix.real_space_refine program from the PHENIX suite. The use of a simplified refinement target function enables very fast calculation, which in turn makes it possible to identify optimal data-restraint weights as part of routine refinements with little runtime cost. Refinement of atomic models against low-resolution data benefits from the inclusion of as much additional information as is available. In addition to standard restraints on covalent geometry, phenix.real_space_refine makes use of extra information such as secondary-structure and rotamer-specific restraints, as well as restraints or constraints on internal molecular symmetry. The re-refinement of 385 cryo-EM-derived models available in the Protein Data Bank at resolutions of 6 Å or better shows significant improvement of the models and of the fit of these models to the target maps.

  14. An Algorithm for Converting Static Earth Sensor Measurements into Earth Observation Vectors

    NASA Technical Reports Server (NTRS)

    Harman, R.; Hashmall, Joseph A.; Sedlak, Joseph

    2004-01-01

    An algorithm has been developed that converts penetration angles reported by Static Earth Sensors (SESs) into Earth observation vectors. This algorithm allows compensation for variation in the horizon height including that caused by Earth oblateness. It also allows pitch and roll to be computed using any number (greater than 1) of simultaneous sensor penetration angles simplifying processing during periods of Sun and Moon interference. The algorithm computes body frame unit vectors through each SES cluster. It also computes GCI vectors from the spacecraft to the position on the Earth's limb where each cluster detects the Earth's limb. These body frame vectors are used as sensor observation vectors and the GCI vectors are used as reference vectors in an attitude solution. The attitude, with the unobservable yaw discarded, is iteratively refined to provide the Earth observation vector solution.

  15. Improved parallel image reconstruction using feature refinement.

    PubMed

    Cheng, Jing; Jia, Sen; Ying, Leslie; Liu, Yuanyuan; Wang, Shanshan; Zhu, Yanjie; Li, Ye; Zou, Chao; Liu, Xin; Liang, Dong

    2018-07-01

    The aim of this study was to develop a novel feature-refinement MR reconstruction method for highly undersampled multichannel acquisitions that improves image quality and preserves more detail. The feature refinement technique, which uses a feature descriptor to pick up useful features from the residual image discarded by sparsity constraints, is applied to preserve image details in compressed sensing and parallel imaging in MRI (CS-pMRI). A texture descriptor and a structure descriptor recognizing different types of features are required to form the feature descriptor. Feasibility of the feature refinement was validated using three different multicoil reconstruction methods on in vivo data. Experimental results show that reconstruction methods with feature refinement improve the quality of the reconstructed image and restore image details more accurately than the original methods, which is also verified by lower values of the root mean square error and high-frequency error norm. A simple and effective way to preserve more useful detailed information in CS-pMRI is proposed. This technique can effectively improve reconstruction quality and has superior performance in terms of detail preservation compared with the original version without feature refinement. Magn Reson Med 80:211-223, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  16. On macromolecular refinement at subatomic resolution with interatomic scatterers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afonine, Pavel V., E-mail: pafonine@lbl.gov; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2007-11-01

    Modelling deformation electron density using interatomic scatters is simpler than multipolar methods, produces comparable results at subatomic resolution and can easily be applied to macromolecules. A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.

  17. Hirshfeld atom refinement.

    PubMed

    Capelli, Silvia C; Bürgi, Hans-Beat; Dittrich, Birger; Grabowsky, Simon; Jayatilaka, Dylan

    2014-09-01

    Hirshfeld atom refinement (HAR) is a method which determines structural parameters from single-crystal X-ray diffraction data by using an aspherical atom partitioning of tailor-made ab initio quantum mechanical molecular electron densities without any further approximation. Here the original HAR method is extended by implementing an iterative procedure of successive cycles of electron density calculations, Hirshfeld atom scattering factor calculations and structural least-squares refinements, repeated until convergence. The importance of this iterative procedure is illustrated via the example of crystalline ammonia. The new HAR method is then applied to X-ray diffraction data of the dipeptide Gly-l-Ala measured at 12, 50, 100, 150, 220 and 295 K, using Hartree-Fock and BLYP density functional theory electron densities and three different basis sets. All positions and anisotropic displacement parameters (ADPs) are freely refined without constraints or restraints - even those for hydrogen atoms. The results are systematically compared with those from neutron diffraction experiments at the temperatures 12, 50, 150 and 295 K. Although non-hydrogen-atom ADPs differ by up to three combined standard uncertainties (csu's), all other structural parameters agree within less than 2 csu's. Using our best calculations (BLYP/cc-pVTZ, recommended for organic molecules), the accuracy of determining bond lengths involving hydrogen atoms from HAR is better than 0.009 Å for temperatures of 150 K or below; for hydrogen-atom ADPs it is better than 0.006 Å² as judged from the mean absolute X-ray minus neutron differences. These results are among the best ever obtained. Remarkably, the precision of determining bond lengths and ADPs for the hydrogen atoms from the HAR procedure is comparable with that from the neutron measurements - an outcome which is obtained with a routinely achievable resolution of the X-ray data of 0.65 Å.

  18. Classes of real-world 'small-world' networks: From the neural network of C. Elegans to the web of human sexual contacts

    NASA Astrophysics Data System (ADS)

    Nunes Amaral, Luis A.

    2002-03-01

    We study the statistical properties of a variety of diverse real-world networks including the neural network of C. Elegans, food webs for seven distinct environments, transportation and technological networks, and a number of distinct social networks [1-5]. We present evidence of the occurrence of three classes of small-world networks [2]: (a) scale-free networks, characterized by a vertex connectivity distribution that decays as a power law; (b) broad-scale networks, characterized by a connectivity distribution that has a power-law regime followed by a sharp cut-off; (c) single-scale networks, characterized by a connectivity distribution with a fast decaying tail. Moreover, we note for the classes of broad-scale and single-scale networks that there are constraints limiting the addition of new links. Our results suggest that the nature of such constraints may be the controlling factor for the emergence of different classes of networks. [See http://polymer.bu.edu/amaral/Networks.html for details and http://polymer.bu.edu/amaral/Professional.html for access to PDF files of articles.] 1. M. Barthélémy, L. A. N. Amaral, Phys. Rev. Lett. 82, 3180-3183 (1999). 2. L. A. N. Amaral, A. Scala, M. Barthélémy, H. E. Stanley, Proc. Nat. Acad. Sci. USA 97, 11149-11152 (2000). 3. F. Liljeros, C. R. Edling, L. A. N. Amaral, H. E. Stanley, and Y. Åberg, Nature 411, 907-908 (2001). 4. J. Camacho, R. Guimera, L.A.N. Amaral, Phys. Rev. E RC (to appear). 5. S. Mossa, M. Barthelemy, H.E. Stanley, L.A.N. Amaral (submitted).

  19. Generalization and refinement of an automatic landing system capable of curved trajectories

    NASA Technical Reports Server (NTRS)

    Sherman, W. L.

    1976-01-01

    Refinements in the lateral and longitudinal guidance for an automatic landing system capable of curved trajectories were studied. Wing flaps or drag flaps (speed brakes) were found to provide faster and more precise speed control than autothrottles. In the case of the lateral control it is shown that the use of the integral of the roll error in the roll command over the first 30 to 40 seconds of flight reduces the sensitivity of the lateral guidance to the gain on the azimuth guidance angle error in the roll command. Also, changes to the guidance algorithm are given that permit pi-radian approaches and constrain the airplane to fly in a specified plane defined by the position of the airplane at the start of letdown and the flare point.

  20. A novel algorithm for validating peptide identification from a shotgun proteomics search engine.

    PubMed

    Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Mu, Zheng; Jennings, Jennifer L; Hoek, Kristen L; Allos, Tara; Howard, Leigh M; Edwards, Kathryn M; Weil, P Anthony; Link, Andrew J

    2013-03-01

    Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from an LC-MS/MS experiment are assigned to a peptide by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide spectra matches are then used to infer a list of identified proteins in the original sample. However, the search engines often fail to distinguish between correct and incorrect peptide assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate using a minimal number of scoring outputs from the SEQUEST search engine. The novel algorithm uses a three-step process: data cleaning, data refining through a SVM-based decision function, and a final data refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm on the basis of the resolution and mass accuracy of the mass spectrometer employed in the LC-MS/MS experiment. Our results demonstrate De-Noise improves peptide identification compared to other methods used to process the peptide sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines.
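
    The SVM-based refining step (step 2) might look roughly like the following scikit-learn sketch; the feature set, labels, and LinearSVC choice are assumptions standing in for De-Noise's actual decision function.

        import numpy as np
        from sklearn.svm import LinearSVC

        def svm_refine(scores, labels, candidates):
            """Train a linear SVM on labeled peptide-spectrum matches
            (scores: n x d array of SEQUEST-style attributes such as XCorr
            and deltaCn; labels: 1/0 for correct/incorrect, e.g. from a
            target-decoy strategy), then keep candidate matches on the
            positive side of the decision function."""
            clf = LinearSVC(C=1.0).fit(scores, labels)
            return clf.decision_function(candidates) > 0.0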

  1. Automated Assume-Guarantee Reasoning by Abstraction Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Giannakopoulou, Dimitra

    2008-01-01

    Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.

  2. Development and Application of a Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane

    2007-01-01

    This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.

  3. Tree-based solvers for adaptive mesh refinement code FLASH - I: gravity and optical depths

    NASA Astrophysics Data System (ADS)

    Wünsch, R.; Walch, S.; Dinnbier, F.; Whitworth, A.

    2018-04-01

    We describe an OctTree algorithm for the MPI parallel, adaptive mesh refinement code FLASH, which can be used to calculate the gas self-gravity, and also the angle-averaged local optical depth, for treating ambient diffuse radiation. The algorithm communicates to the different processors only those parts of the tree that are needed to perform the tree-walk locally. The advantage of this approach is a relatively low memory requirement, important in particular for the optical depth calculation, which needs to process information from many different directions. This feature also enables a general tree-based radiation transport algorithm that will be described in a subsequent paper, and delivers excellent scaling up to at least 1500 cores. Boundary conditions for gravity can be either isolated or periodic, and they can be specified in each direction independently, using a newly developed generalization of the Ewald method. The gravity calculation can be accelerated with the adaptive block update technique by partially re-using the solution from the previous time-step. Comparison with the FLASH internal multigrid gravity solver shows that tree-based methods provide a competitive alternative, particularly for problems with isolated or mixed boundary conditions. We evaluate several multipole acceptance criteria (MACs) and identify a relatively simple approximate partial error MAC which provides high accuracy at low computational cost. The optical depth estimates are found to agree very well with those of the RADMC-3D radiation transport code, with the tree-solver being much faster. Our algorithm is available in the standard release of the FLASH code in version 4.0 and later.
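
    The heart of any such solver is the tree walk with a multipole acceptance criterion (MAC). Below is a generic Barnes-Hut-style sketch (monopole only, G = 1 units, and a made-up Node container), not FLASH's implementation, which also supports richer error-controlled MACs such as the approximate partial error criterion mentioned above.

        from dataclasses import dataclass, field
        import numpy as np

        @dataclass
        class Node:
            mass: float
            com: np.ndarray              # center of mass
            size: float                  # linear extent of the node
            children: list = field(default_factory=list)

        def potential(node, x, theta=0.5):
            """Recursive tree walk: accept a node as a single monopole when
            it subtends a small angle (size / distance < theta); otherwise
            open it and descend into its children."""
            d = max(np.linalg.norm(node.com - x), 1e-12)
            if not node.children or node.size / d < theta:
                return -node.mass / d    # monopole contribution
            return sum(potential(c, x, theta) for c in node.children)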

  4. Total antioxidant content of alternatives to refined sugar.

    PubMed

    Phillips, Katherine M; Carlsen, Monica H; Blomhoff, Rune

    2009-01-01

    Oxidative damage is implicated in the etiology of cancer, cardiovascular disease, and other degenerative disorders. Recent nutritional research has focused on the antioxidant potential of foods, while current dietary recommendations are to increase the intake of antioxidant-rich foods rather than supplement specific nutrients. Many alternatives to refined sugar are available, including raw cane sugar, plant saps/syrups (eg, maple syrup, agave nectar), molasses, honey, and fruit sugars (eg, date sugar). Unrefined sweeteners were hypothesized to contain higher levels of antioxidants, similar to the contrast between whole and refined grain products. To compare the total antioxidant content of natural sweeteners as alternatives to refined sugar. The ferric-reducing ability of plasma (FRAP) assay was used to estimate total antioxidant capacity. Major brands of 12 types of sweeteners as well as refined white sugar and corn syrup were sampled from retail outlets in the United States. Substantial differences in total antioxidant content of different sweeteners were found. Refined sugar, corn syrup, and agave nectar contained minimal antioxidant activity (<0.01 mmol FRAP/100 g); raw cane sugar had a higher FRAP (0.1 mmol/100 g). Dark and blackstrap molasses had the highest FRAP (4.6 to 4.9 mmol/100 g), while maple syrup, brown sugar, and honey showed intermediate antioxidant capacity (0.2 to 0.7 mmol FRAP/100 g). Based on an average intake of 130 g/day refined sugars and the antioxidant activity measured in typical diets, substituting alternative sweeteners could increase antioxidant intake an average of 2.6 mmol/day, similar to the amount found in a serving of berries or nuts. Many readily available alternatives to refined sugar offer the potential benefit of antioxidant activity.

  5. An Adaptive Mesh Algorithm: Mesh Structure and Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scannapieco, Anthony J.

    2016-06-21

    The purpose of Adaptive Mesh Refinement is to minimize spatial errors over the computational space, not to minimize the number of computational elements. The additional result of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years three types of AMR schemes have evolved: block, patch and locally refined AMR. In block and patch AMR logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time. The type of AMR described herein is a locally refined AMR. In the algorithm described, at any point in physical space only one zone exists at whatever level of mesh is appropriate for that physical location. The dynamic creation of a locally refined computational mesh is made practical by a judicious selection of mesh rules. With these rules the mesh is evolved via a mesh potential designed to concentrate the finest mesh in regions where the physics is modally dense, and coarsen zones in regions where the physics is …

  6. Intelligent Optimization Algorithm for a Large-Scale Structural Design (Algorithme intelligent d'optimisation d'un design structurel de grande envergure)

    NASA Astrophysics Data System (ADS)

    Dominique, Stephane

    The implementation of an automated decision-support system in the field of design and structural optimisation can give a significant advantage to any industry working on mechanical designs. Indeed, by providing solution ideas to a designer, or by upgrading existing design solutions while the designer is not at work, the system may reduce the project cycle time or allow more time to produce a better design. This thesis presents a new approach to automating a design process based on Case-Based Reasoning (CBR), in combination with a new genetic algorithm named Genetic Algorithm with Territorial core Evolution (GATE). This approach was developed in order to reduce the operating cost of the process. However, as the cost of implementing the system is quite high, the approach is better suited to large-scale design problems, and particularly to design problems that the designer expects to solve for many different specification sets. First, the CBR process uses a databank filled with every known solution to similar design problems. Then, the solutions closest to the current problem in terms of specifications are selected. After this, during the adaptation phase, an artificial neural network (ANN) interpolates amongst the known solutions to produce an additional solution to the current problem, using the current specifications as inputs. Each solution produced and selected by the CBR is then used to initialize the population of one island of the genetic algorithm, which optimises the solutions further during the refinement phase. Using progressive refinement, the algorithm starts with only the most important variables for the problem; then, as the optimisation progresses, the remaining variables are gradually introduced, layer by layer. The genetic algorithm used, GATE, is a new algorithm created specifically during this thesis to solve optimisation problems in the field of mechanical device structural design; it is essentially a real-number-coded genetic algorithm
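
    A compact sketch of the seeding stage may help: the k nearest stored cases initialize one genetic-algorithm island each, and an interpolated solution seeds one more. All names are illustrative, and the simple mean below stands in for the ANN interpolator; this is not the GATE implementation:

        import numpy as np

        # Seed GA islands from the k nearest stored cases (by specification
        # distance) plus one interpolated solution. Illustrative sketch only.
        rng = np.random.default_rng(0)
        case_specs = rng.random((20, 3))     # stored problem specifications
        case_designs = rng.random((20, 8))   # corresponding design vectors
        spec = np.array([0.4, 0.6, 0.2])     # current specification set

        k = 3
        nearest = np.argsort(np.linalg.norm(case_specs - spec, axis=1))[:k]
        interpolated = case_designs[nearest].mean(axis=0)   # stand-in for the ANN

        islands = []
        for seed in [*case_designs[nearest], interpolated]:
            # each island: the seed plus mutated copies clustered around it
            pop = seed + 0.05 * rng.standard_normal((30, seed.size))
            islands.append(np.clip(pop, 0.0, 1.0))
        print(len(islands), "islands of", islands[0].shape[0], "individuals")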

  7. A Cartesian grid approach with hierarchical refinement for compressible flows

    NASA Technical Reports Server (NTRS)

    Quirk, James J.

    1994-01-01

    Many numerical studies of flows that involve complex geometries are limited by the difficulties in generating suitable grids. We present a Cartesian boundary scheme for two-dimensional, compressible flows that is unfettered by the need to generate a computational grid and so it may be used, routinely, even for the most awkward of geometries. In essence, an arbitrary-shaped body is allowed to blank out some region of a background Cartesian mesh and the resultant cut-cells are singled out for special treatment. This is done within a finite-volume framework and so, in principle, any explicit flux-based integration scheme can take advantage of this method for enforcing solid boundary conditions. For best effect, the present Cartesian boundary scheme has been combined with a sophisticated, local mesh refinement scheme, and a number of examples are shown in order to demonstrate the efficacy of the combined algorithm for simulations of shock interaction phenomena.

  8. The blind leading the blind: Mutual refinement of approximate theories

    NASA Technical Reports Server (NTRS)

    Kedar, Smadar T.; Bresina, John L.; Dent, C. Lisa

    1991-01-01

    The mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models which cause the failures. The approach focuses on using one approximate model to refine another.

  9. Empirical Analysis and Refinement of Expert System Knowledge Bases

    DTIC Science & Technology

    1988-08-31

    refinement. Both a simulated case generation program and a random rule basher were developed to enhance rule-refinement experimentation. Substantial... the second fiscal year 88 objective was fully met. Rule Refinement System; Simulated Rule Basher; Case Generator; Stored Cases; Expert System Knowledge... generated until the rule is satisfied. Cases may be randomly generated for a given rule or hypothesis. Rule Basher: given that one has a correct

  10. Accelerating global optimization of aerodynamic shapes using a new surrogate-assisted parallel genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Mehdi; Jahangirian, Alireza

    2017-12-01

    An efficient strategy is presented for the global shape optimization of wing sections with a parallel genetic algorithm. Several computational techniques are applied to increase the convergence rate and the efficiency of the method. A variable-fidelity evaluation method is applied, in which the expensive Navier-Stokes flow solver is complemented by an inexpensive multi-layer perceptron neural network for the objective function evaluations. A population dispersion method consisting of two phases, exploration and refinement, is developed to improve the convergence rate and the robustness of the genetic algorithm. Owing to the nature of the optimization problem, a parallel framework based on the master/slave approach is used. The outcomes indicate that the method is able to find the global optimum with significantly lower computational time in comparison to the conventional genetic algorithm.
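
    The variable-fidelity idea can be sketched generically: a cheap surrogate fitted to archived true evaluations screens a large candidate pool, and only the most promising few are sent to the expensive solver. In the sketch below, the quadratic "cfd" function and the linear least-squares surrogate are stand-ins for the Navier-Stokes solver and the paper's multi-layer perceptron:

        import numpy as np

        rng = np.random.default_rng(1)
        def cfd(x):                    # expensive truth model (stand-in)
            return np.sum((x - 0.3) ** 2, axis=-1)

        X = rng.random((40, 5)); y = cfd(X)         # archive of true evaluations
        A = np.hstack([X, np.ones((len(X), 1))])
        w, *_ = np.linalg.lstsq(A, y, rcond=None)   # fit a cheap linear surrogate
        surrogate = lambda P: np.hstack([P, np.ones((len(P), 1))]) @ w

        pool = rng.random((200, 5))                 # offspring candidates
        best = np.argsort(surrogate(pool))[:10]     # screen with the surrogate
        true_scores = cfd(pool[best])               # pay for only 10 true solves
        print("best screened design:", pool[best][np.argmin(true_scores)])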

  11. A novel orthoimage mosaic method using the weighted A* algorithm for UAV imagery

    NASA Astrophysics Data System (ADS)

    Zheng, Maoteng; Zhou, Shunping; Xiong, Xiaodong; Zhu, Junfeng

    2017-12-01

    A weighted A* algorithm is proposed to select optimal seam-lines in orthoimage mosaics for UAV (Unmanned Aircraft Vehicle) imagery. The whole workflow includes four steps: the initial seam-line network is first generated by the standard Voronoi diagram algorithm; an edge diagram is then detected based on DSM (Digital Surface Model) data; the vertices (conjunction nodes) of the initial network are relocated if they lie on high objects (buildings, trees, and other artificial structures); and the initial seam-lines are finally refined using the weighted A* algorithm, based on the edge diagram and the relocated vertices. The method was tested with two real UAV datasets. Preliminary results show that the proposed method produces acceptable mosaic images in both urban and mountainous areas, and performs better than state-of-the-art methods on these datasets.
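
    Weighted A* itself is simple to state: the node priority is f = g + w*h with w > 1, which inflates the heuristic and biases the search toward the goal in exchange for bounded suboptimality. A minimal sketch on a toy cost grid follows, with the per-cell cost standing in for the paper's edge/DSM-based penalties:

        import heapq, itertools

        def weighted_astar(cost, start, goal, w=2.0):
            """Grid search with f = g + w*h; w > 1 trades optimality for speed."""
            rows, cols = len(cost), len(cost[0])
            h = lambda r, c: abs(r - goal[0]) + abs(c - goal[1])   # Manhattan
            tie = itertools.count()              # tiebreaker: never compare nodes
            frontier = [(w * h(*start), 0.0, next(tie), start, None)]
            g_best, parent = {start: 0.0}, {}
            while frontier:
                _, g, _, node, par = heapq.heappop(frontier)
                if node in parent:               # stale entry, already expanded
                    continue
                parent[node] = par
                if node == goal:
                    path = [node]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                r, c = node
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if 0 <= nr < rows and 0 <= nc < cols:
                        ng = g + cost[nr][nc]    # per-cell seam penalty
                        if ng < g_best.get((nr, nc), float("inf")):
                            g_best[(nr, nc)] = ng
                            heapq.heappush(frontier, (ng + w * h(nr, nc), ng,
                                                      next(tie), (nr, nc), node))

        cost = [[1, 9, 1, 1],
                [1, 9, 1, 9],
                [1, 1, 1, 1]]
        print(weighted_astar(cost, (0, 0), (2, 3)))   # hugs the low-cost cells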

  12. K-Means Subject Matter Expert Refined Topic Model Methodology

    DTIC Science & Technology

    2017-01-01

    K-means Subject Matter Expert Refined Topic Model Methodology: Topic Model Estimation via K-means. U.S. Army TRADOC Analysis Center-Monterey, 700 Dyer Road... January 2017. Theodore T. Allen, Ph.D.; Zhenhuan... Contract number W9124N-15-P-0022.

  13. Appearance-based representative samples refining method for palmprint recognition

    NASA Astrophysics Data System (ADS)

    Wen, Jiajun; Chen, Yan

    2012-07-01

    Sparse representation can deal with the small-sample problem because it utilizes all the training samples. However, the discrimination ability degrades when more training samples are used for representation. We propose a novel appearance-based palmprint recognition method that seeks a compromise between discrimination ability and the small-sample problem so as to obtain a proper representation scheme. Under the assumption that the test sample can be well represented by a linear combination of a certain number of training samples, we first select the representative training samples according to the contributions of the samples. Then we further refine the training samples by an iterative procedure, each time excluding the training sample with the least contribution to the test sample. Experiments on the PolyU multispectral palmprint database and the two-dimensional and three-dimensional palmprint database show that the proposed method outperforms conventional appearance-based palmprint recognition methods. Moreover, we explore the principles governing the usage of the key parameters in the proposed algorithm, which facilitates obtaining high recognition accuracy.
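
    The exclusion loop can be sketched generically: represent the test sample by least squares over the currently kept training samples, then repeatedly drop the sample contributing least to the reconstruction. The contribution measure used here (|coefficient| times column norm) is an illustrative choice, not necessarily the paper's exact criterion:

        import numpy as np

        rng = np.random.default_rng(3)
        train = rng.random((64, 12))        # 12 training samples as columns
        test = train[:, :3] @ np.array([0.5, 0.3, 0.2]) \
               + 0.01 * rng.standard_normal(64)

        keep = list(range(train.shape[1]))
        while len(keep) > 4:                # refine down to 4 representatives
            T = train[:, keep]
            coef, *_ = np.linalg.lstsq(T, test, rcond=None)
            contrib = np.abs(coef) * np.linalg.norm(T, axis=0)
            keep.pop(int(np.argmin(contrib)))   # exclude least-contributing
        print("representative training samples:", keep)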

  14. Purification of Germanium Crystals by Zone Refining

    NASA Astrophysics Data System (ADS)

    Kooi, Kyler; Yang, Gang; Mei, Dongming

    2016-09-01

    Germanium zone refining is one of the most important techniques used to produce high-purity germanium (HPGe) single crystals for the fabrication of nuclear radiation detectors. During zone refining, the impurities are segregated to different parts of the ingot. In practice, the effective segregation of an impurity depends on many parameters, including the molten-zone travel speed, the ratio of ingot length to molten-zone width, and the number of passes. By studying the theory of these influential factors, perfecting our cleaning and preparation procedures, and analyzing the origin and distribution of our impurities (aluminum, boron, gallium, and phosphorus), identified using photothermal ionization spectroscopy (PTIS), we have optimized these parameters to produce HPGe. We have achieved a net impurity level of 10^10/cm^3 for our zone-refined ingots, measured with van der Pauw and Hall-effect methods. Zone-refined ingots of this purity can be processed into detector-grade HPGe single crystals, which can be used to fabricate detectors for dark matter and neutrinoless double beta decay detection. This project was financially supported by DOE Grant (DE-FG02-10ER46709) and the State Governor's Research Center.
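
    For background, the textbook single-pass zone-refining profile (the Pfann relation) captures how one pass sweeps an impurity with effective segregation coefficient k < 1 toward the tail of the ingot. A short sketch, with illustrative k values rather than the group's measured coefficients:

        import numpy as np

        # Single-pass profile: C(x)/C0 = 1 - (1 - k) * exp(-k * x / l),
        # valid for 0 <= x <= L - l, where l is the molten-zone length.
        def single_pass_profile(x, k, zone_len):
            return 1.0 - (1.0 - k) * np.exp(-k * x / zone_len)

        L, l = 10.0, 1.0                  # ingot length / zone length (arbitrary units)
        x = np.linspace(0.0, L - l, 6)
        for k in (0.1, 0.3):              # two illustrative segregation coefficients
            print(f"k={k}:", np.round(single_pass_profile(x, k, l), 3))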

  15. Detection and Tracking Algorithm Refinement.

    DTIC Science & Technology

    1981-10-01

    [Scanned-report excerpt; the recoverable content is a set of record-length tables for the Doppler data formats (record sizes from 32 blocks/2,096 bytes up to 2,048 blocks/131,120 bytes, with an additional 3-11 unused bytes always appended to each record) and the field layout of the integrator/PPP records (position 1: 158, with bits 8, 4, 1 on; positions 2-46: same as time series records; positions 47-808). The remaining text is unrecoverable OCR residue.]

  16. Integrated process for the solvent refining of coal

    DOEpatents

    Garg, Diwakar

    1983-01-01

    A process is set forth for the integrated liquefaction of coal by the catalytic solvent refining of a feed coal in a first stage to liquid and solid products and the catalytic hydrogenation of the solid product in a second stage to produce additional liquid product. A fresh inexpensive, throw-away catalyst is utilized in the second stage hydrogenation of the solid product and this catalyst is recovered and recycled for catalyst duty in the solvent refining stage without any activation steps performed on the used catalyst prior to its use in the solvent refining of feed coal.

  17. A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding

    NASA Astrophysics Data System (ADS)

    Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae

    2017-12-01

    High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it effectively finds a good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement in hardware. This paper proposes a new integer motion estimation algorithm designed for hardware execution, obtained by modifying the conventional TZ search to allow parallel motion estimation of all prediction unit (PU) partitions. The algorithm consists of three phases: zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). All redundant search points are then removed prior to the estimation of the motion costs, and the best search points are selected for all PUs. Compared to the conventional TZ search algorithm, experimental results show that the proposed algorithm decreases the Bjøntegaard Delta bitrate (BD-BR) by 0.84% and reduces the computational complexity by 54.54%.
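
    The point-sharing idea can be sketched as follows: collect the zonal search points for every PU in the CU, drop duplicates, and evaluate each surviving point once. The diamond geometry below is a toy version, not the HEVC reference pattern:

        # Toy sketch of redundant-search-point removal across PU partitions.
        def diamond_points(center, stride):
            cx, cy = center
            return {(cx + stride, cy), (cx - stride, cy),
                    (cx, cy + stride), (cx, cy - stride)}

        pu_predictors = [(8, 4), (8, 4), (6, 4), (8, 2)]   # per-PU start MVs
        strides = (1, 2, 4, 8)

        naive, shared = 0, set()
        for center in pu_predictors:
            for s in strides:
                pts = diamond_points(center, s)
                naive += len(pts)
                shared |= pts               # redundant points collapse here

        print("points evaluated naively:", naive)
        print("points after de-duplication:", len(shared))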

  18. Dilemma for high-tech refiners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The price difference between lighter and heavier crude oils, and between light and heavy refined products, amounts to the incentive for refiners to upgrade processing facilities. When that differential widens, the incentive to utilize lower price, lower quality crude is enhanced; when it narrows, the desirability of relying on light oil prices and supplies is intensified. The incentive to upgrade has been eroded ever since 1981 ushered in world-wide overproduction of crude oil. Lower demand due to recession met with increased pressure on producers to compete for market shares to maintain vital revenue levels - for private and national oil companies alike. Light crude prices suffered, while heavy crude prices improved. As of mid-1984, the shrinkage of the price differential went into dormancy (see Energy Detente 8/8/84, A Hey-Day for Heavy Crudes) after both Mexico and Venezuela raised heavy oil prices by US $0.50 per barrel (bbl). Energy Detente refining netback data for the first half of October are presented for the US Gulf Coast and the US West Coast. The fuel price/tax series and the industrial fuel prices for October 1984 are included for countries of the Eastern Hemisphere.

  19. Research On Vehicle-Based Driver Status/Performance Monitoring; Development, Validation, And Refinement Of Algorithms For Detection Of Driver Drowsiness, Final Report

    DOT National Transportation Integrated Search

    1994-12-01

    This report summarizes the results of a 3-year research project to develop reliable algorithms for the detection of motor vehicle driver impairment due to drowsiness. These algorithms are based on driving performance measures that can potentially be ...

  20. Solving radiative transfer with line overlaps using Gauss-Seidel algorithms

    NASA Astrophysics Data System (ADS)

    Daniel, F.; Cernicharo, J.

    2008-09-01

    Context: The improvement in observational facilities requires refining the modelling of the geometrical structures of astrophysical objects. Nevertheless, for complex problems such as line overlap in molecules showing hyperfine structure, a detailed analysis still requires a large amount of computing time, and thus misinterpretation cannot be dismissed due to an undersampling of the whole space of parameters. Aims: We extend the discussion of the implementation of the Gauss-Seidel algorithm in spherical geometry and include the case of hyperfine line overlap. Methods: We first review the basics of the short characteristics method that is used to solve the radiative transfer equations. Details are given on the determination of the Lambda operator in spherical geometry. The Gauss-Seidel algorithm is then described and, by analogy to the plane-parallel case, we see how to introduce it in spherical geometry. Doing so requires some approximations in order to keep the algorithm competitive. Finally, line overlap effects are included. Results: The convergence speed of the algorithm is compared to the usual Jacobi iterative schemes. The gain in the number of iterations is typically a factor of 2 to 4 for the two implementations of the Gauss-Seidel algorithm. This is obtained despite the introduction of approximations in the algorithm. A comparison of results obtained with and without line overlaps for N2H^+, HCN, and HNC shows that the J=3-2 line intensities are significantly underestimated in models where line overlap is neglected.
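
    The source of the speed-up can be seen in a generic comparison: Gauss-Seidel reuses components updated earlier in the same sweep, whereas Jacobi uses only the previous iterate. A small diagonally dominant linear system (a stand-in for the Lambda-iteration update, not the authors' radiative-transfer code) illustrates the reduced sweep count:

        import numpy as np

        rng = np.random.default_rng(2)
        n = 50
        A = rng.random((n, n)) + n * np.eye(n)   # diagonally dominant system
        b = rng.random(n)

        def sweeps(gauss_seidel, tol=1e-10, max_it=10_000):
            x = np.zeros(n)
            for it in range(1, max_it + 1):
                x_old = x.copy()
                for i in range(n):
                    src = x if gauss_seidel else x_old   # GS reuses fresh values
                    s = A[i] @ src - A[i, i] * src[i]
                    x[i] = (b[i] - s) / A[i, i]
                if np.linalg.norm(x - x_old) < tol:
                    return it
            return max_it

        print("Jacobi sweeps:      ", sweeps(False))
        print("Gauss-Seidel sweeps:", sweeps(True))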

  1. REFINE WETLAND REGULATORY PROGRAM

    EPA Science Inventory

    The Tribes will work toward refining a regulatory program by taking a draft wetland conservation code, with permitting incorporated, to TEB for review. Progress will then proceed in developing a permit tracking system that will track both Tribal and fee land sites within reservati...

  2. Choices, Frameworks and Refinement

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Islam, Nayeem; Johnson, Ralph; Kougiouris, Panos; Madany, Peter

    1991-01-01

    In this paper we present a method for designing operating systems using object-oriented frameworks. A framework can be refined into subframeworks. Constraints specify the interactions between the subframeworks. We describe how we used object-oriented frameworks to design Choices, an object-oriented operating system.

  3. 30 CFR 208.4 - Royalty oil sales to eligible refiners.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    30 Mineral Resources 2 (2010-07-01 ed.): ...MANAGEMENT SALE OF FEDERAL ROYALTY OIL, General Provisions, § 208.4 Royalty oil sales to eligible refiners. (a)... and defense. The Secretary will review these items and will determine whether eligible refiners have...

  4. Hirshfeld atom refinement

    PubMed Central

    Capelli, Silvia C.; Bürgi, Hans-Beat; Dittrich, Birger; Grabowsky, Simon; Jayatilaka, Dylan

    2014-01-01

    Hirshfeld atom refinement (HAR) is a method which determines structural parameters from single-crystal X-ray diffraction data by using an aspherical atom partitioning of tailor-made ab initio quantum mechanical molecular electron densities without any further approximation. Here the original HAR method is extended by implementing an iterative procedure of successive cycles of electron density calculations, Hirshfeld atom scattering factor calculations and structural least-squares refinements, repeated until convergence. The importance of this iterative procedure is illustrated via the example of crystalline ammonia. The new HAR method is then applied to X-ray diffraction data of the dipeptide Gly–l-Ala measured at 12, 50, 100, 150, 220 and 295 K, using Hartree–Fock and BLYP density functional theory electron densities and three different basis sets. All positions and anisotropic displacement parameters (ADPs) are freely refined without constraints or restraints – even those for hydrogen atoms. The results are systematically compared with those from neutron diffraction experiments at the temperatures 12, 50, 150 and 295 K. Although non-hydrogen-atom ADPs differ by up to three combined standard uncertainties (csu’s), all other structural parameters agree within less than 2 csu’s. Using our best calculations (BLYP/cc-pVTZ, recommended for organic molecules), the accuracy of determining bond lengths involving hydrogen atoms from HAR is better than 0.009 Å for temperatures of 150 K or below; for hydrogen-atom ADPs it is better than 0.006 Å2 as judged from the mean absolute X-ray minus neutron differences. These results are among the best ever obtained. Remarkably, the precision of determining bond lengths and ADPs for the hydrogen atoms from the HAR procedure is comparable with that from the neutron measurements – an outcome which is obtained with a routinely achievable resolution of the X-ray data of 0.65 Å. PMID:25295177

  5. HALOE Algorithm Improvements for Upper Tropospheric Sounding

    NASA Technical Reports Server (NTRS)

    McHugh, Martin J.; Gordley, Larry L.; Russell, James M., III; Hervig, Mark E.

    1999-01-01

    This report details the ongoing efforts by GATS, Inc., in conjunction with Hampton University and University of Wyoming, in NASA's Mission to Planet Earth UARS Science Investigator Program entitled "HALOE Algorithm Improvements for Upper Tropospheric Soundings." The goal of this effort is to develop and implement major inversion and processing improvements that will extend HALOE measurements further into the troposphere. In particular, O3, H2O, and CH4 retrievals may be extended into the middle troposphere, and NO, HCl and possibly HF into the upper troposphere. Key areas of research being carried out to accomplish this include: pointing/tracking analysis; cloud identification and modeling; simultaneous multichannel retrieval capability; forward model improvements; high vertical-resolution gas filter channel retrievals; a refined temperature retrieval; robust error analyses; long-term trend reliability studies; and data validation. The current (first-year) effort concentrates on the pointer/tracker correction algorithms, cloud filtering and validation, and multi-channel retrieval development. However, these areas are all highly coupled, so progress in one area benefits from and sometimes depends on work in others.

  6. HALOE Algorithm Improvements for Upper Tropospheric Sounding

    NASA Technical Reports Server (NTRS)

    Thompson, Robert Earl; McHugh, Martin J.; Gordley, Larry L.; Hervig, Mark E.; Russell, James M., III; Douglass, Anne (Technical Monitor)

    2001-01-01

    This report details the ongoing efforts by GATS, Inc., in conjunction with Hampton University and University of Wyoming, in NASA's Mission to Planet Earth Upper Atmospheric Research Satellite (UARS) Science Investigator Program entitled 'HALOE Algorithm Improvements for Upper Tropospheric Sounding.' The goal of this effort is to develop and implement major inversion and processing improvements that will extend Halogen Occultation Experiment (HALOE) measurements further into the troposphere. In particular, O3, H2O, and CH4 retrievals may be extended into the middle troposphere, and NO, HCl and possibly HF into the upper troposphere. Key areas of research being carried out to accomplish this include: pointing/tracking analysis; cloud identification and modeling; simultaneous multichannel retrieval capability; forward model improvements; high vertical-resolution gas filter channel retrievals; a refined temperature retrieval; robust error analyses; long-term trend reliability studies; and data validation. The current (first year) effort concentrates on the pointer/tracker correction algorithms, cloud filtering and validation, and multichannel retrieval development. However, these areas are all highly coupled, so progress in one area benefits from and sometimes depends on work in others.

  7. Refining glass structure in two dimensions

    NASA Astrophysics Data System (ADS)

    Sadjadi, Mahdi; Bhattarai, Bishal; Drabold, D. A.; Thorpe, M. F.; Wilson, Mark

    2017-11-01

    Recently determined atomistic scale structures of near-two-dimensional bilayers of vitreous silica (using scanning probe and electron microscopy) allow us to refine the experimentally determined coordinates to incorporate the known local chemistry more precisely. Further refinement is achieved by using classical potentials of varying complexity: one using harmonic potentials and the second employing an electrostatic description incorporating polarization effects. These are benchmarked against density functional calculations. Our main findings are that (a) there is a symmetry plane between the two disordered layers, a nice example of an emergent phenomenon; (b) the layers are slightly tilted, so that the Si-O-Si angle between the two layers is not 180° as originally thought but rather 175 ± 2°; and (c) while interior areas that are not completely imaged can be reliably reconstructed, surface areas are more problematic. It is shown that the small crystallites that appear are just as expected statistically in a continuous random network. This provides a good example of the value that can be added to disordered structures imaged at the atomic level by implementing computer refinement.

  8. Coloured Petri Net Refinement Specification and Correctness Proof with Coq

    NASA Technical Reports Server (NTRS)

    Choppy, Christine; Mayero, Micaela; Petrucci, Laure

    2009-01-01

    In this work, we address the formalisation in COQ of refinement for symmetric nets, a subclass of coloured Petri nets. We first provide a formalisation of the net models and of their type refinement in COQ. Then the COQ proof assistant is used to prove the refinement correctness lemma. An example adapted from a protocol illustrates our work.

  9. 40 CFR 80.235 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... commercial delivery: U.S. EPA, Attn: Sulfur Program (6406J), 501 3rd Street, NW, Washington, DC 20001. (c.... The information submitted must show that the refiner employed an average of no more than 1500 people...

  10. Satellite SAR geocoding with refined RPC model

    NASA Astrophysics Data System (ADS)

    Zhang, Lu; Balz, Timo; Liao, Mingsheng

    2012-04-01

    Recent studies have proved that the Rational Polynomial Camera (RPC) model is able to act as a reliable replacement for the rigorous Range-Doppler (RD) model in the geometric processing of satellite SAR datasets. However, its capability for the absolute geolocation of SAR images has not been evaluated quantitatively. Therefore, in this article the error analysis and refinement of the SAR RPC model are investigated with the aim of improving the absolute accuracy of SAR geolocation. Range propagation delay and azimuth timing error are identified as the two major error sources for SAR geolocation. An approach based on SAR image simulation and real-to-simulated image matching is developed to estimate and correct these two errors. A refined RPC model can then be built from the error-corrected RD model and used in satellite SAR geocoding. Three experiments with different settings were designed and conducted to comprehensively evaluate the accuracy of SAR geolocation with both the ordinary and the refined RPC models. All the experimental results demonstrate that with RPC model refinement the absolute location accuracy of geocoded SAR images can be improved significantly, particularly in the easting direction. In another experiment, the computational efficiencies of SAR geocoding with the RD and RPC models were compared quantitatively; the results show that using the RPC model improves geocoding efficiency by a factor of at least 16. In addition, the problem of DEM data selection for SAR image simulation in RPC model refinement was studied in a comparative experiment. The results reveal that the best choice is a DEM dataset with spatial resolution comparable to that of the SAR images.

  11. 2D photonic crystal complete band gap search using a cyclic cellular automaton refination

    NASA Astrophysics Data System (ADS)

    González-García, R.; Castañón, G.; Hernández-Figueroa, H. E.

    2014-11-01

    We present a refinement method based on a cyclic cellular automaton (CCA) that simulates a crystallization-like process, aided by a heuristic evolutionary method, differential evolution (DE), which is used to perform an ordered search for full photonic band gaps (FPBGs) in a 2D photonic crystal (PC). The solution is posed as a combinatorial optimization over the elements of a binary array. These elements represent the existence or absence of a dielectric material surrounded by air, thus representing a general geometry whose search space is defined by the number of elements in the array. A block-iterative frequency-domain method was used to compute the FPBGs of a PC, when present. DE has proved to be useful in combinatorial problems, and we also present an implementation feature that takes advantage of the periodic nature of PCs to enhance the convergence of the algorithm. Finally, we used this methodology to find a PC structure with a 19% bandgap-to-midgap ratio without requiring prior information on suboptimal configurations, and we made a statistical study of how the structure is affected by disorder at its borders, compared with a previous work that used a genetic algorithm.

  12. U.S. Refining Capacity Utilization

    EIA Publications

    1995-01-01

    This article briefly reviews recent trends in domestic refining capacity utilization and examines in detail the differences in reported crude oil distillation capacities and utilization rates among different classes of refineries.

  13. Using supercritical fluids to refine hydrocarbons

    DOEpatents

    Yarbro, Stephen Lee

    2015-06-09

    A system and method for reactively refining hydrocarbons, such as heavy oils with API gravities of less than 20 degrees and bitumen-like hydrocarbons with viscosities greater than 1000 cP at standard temperature and pressure, using a selected fluid at supercritical conditions. A reaction portion of the system and method delivers lightweight, volatile hydrocarbons to an associated contacting unit which operates in mixed subcritical/supercritical or supercritical modes. Using thermal diffusion, multiphase contact, or a momentum-generating pressure gradient, the contacting unit separates the reaction products into portions that are viable for use or sale without further conventional refining and hydro-processing techniques.

  14. Carpet: Adaptive Mesh Refinement for the Cactus Framework

    NASA Astrophysics Data System (ADS)

    Schnetter, Erik; Hawley, Scott; Hawke, Ian

    2016-11-01

    Carpet is an adaptive mesh refinement and multi-patch driver for the Cactus Framework (ascl:1102.013). Cactus is a software framework for solving time-dependent partial differential equations on block-structured grids, and Carpet acts as driver layer providing adaptive mesh refinement, multi-patch capability, as well as parallelization and efficient I/O.

  15. An Evolutionary Algorithm for Fast Intensity Based Image Matching Between Optical and SAR Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Fischer, Peter; Schuegraf, Philipp; Merkle, Nina; Storch, Tobias

    2018-04-01

    This paper presents a hybrid evolutionary algorithm for fast intensity-based matching between satellite imagery from SAR and very high-resolution (VHR) optical sensor systems. The precise and accurate co-registration of image time series and images of different sensors is a key task in multi-sensor image processing scenarios. The necessary preprocessing step of image matching and tie-point detection is divided into a search problem and a similarity measurement. Within this paper we evaluate the use of an evolutionary search strategy for establishing the spatial correspondence between satellite imagery of optical and radar sensors. The aim of the proposed algorithm is to decrease the computational costs during the search process by formulating the search as an optimization problem. Based upon the canonical evolutionary algorithm, the proposed algorithm is adapted for SAR/optical imagery intensity-based matching. Extensions are drawn using techniques like hybridization (e.g. local search) and others to lower the number of objective function calls and refine the result. The algorithm significantly decreases the computational costs whilst finding the optimal solution in a reliable way.

  16. A refined Frequency Domain Decomposition tool for structural modal monitoring in earthquake engineering

    NASA Astrophysics Data System (ADS)

    Pioldi, Fabio; Rizzi, Egidio

    2017-07-01

    Output-only structural identification is developed by a refined Frequency Domain Decomposition (rFDD) approach, towards assessing the current modal properties of heavily damped buildings (a challenging identification case) under strong ground motions. Structural responses from earthquake excitations are taken as input signals for the identification algorithm. A new dedicated computational procedure, based on coupled Chebyshev Type II bandpass filters, is outlined for the effective estimation of natural frequencies, mode shapes and modal damping ratios. The identification technique is also coupled with a Gabor Wavelet Transform, resulting in an effective and self-contained time-frequency analysis framework. Simulated response signals generated by shear-type frames (with variable structural features) are used as a necessary validation condition. In this context use is made of a complete set of seismic records taken from the FEMA P695 database, i.e. all 44 "Far-Field" (22 NS, 22 WE) earthquake signals. The modal estimates are statistically compared to their target values, proving the accuracy of the developed algorithm in providing prompt and accurate estimates of all current strong-ground-motion modal parameters. At this stage, such an analysis tool may be employed for convenient application in the realm of Earthquake Engineering, towards potential Structural Health Monitoring and damage detection purposes.

  17. Refining Linear Fuzzy Rules by Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Khedkar, Pratap S.; Malkani, Anil

    1996-01-01

    Linear fuzzy rules are increasingly being used in the development of fuzzy logic systems. Radial basis functions have also been used in the antecedents of the rules for clustering in product space which can automatically generate a set of linear fuzzy rules from an input/output data set. Manual methods are usually used in refining these rules. This paper presents a method for refining the parameters of these rules using reinforcement learning which can be applied in domains where supervised input-output data is not available and reinforcements are received only after a long sequence of actions. This is shown for a generalization of radial basis functions. The formation of fuzzy rules from data and their automatic refinement is an important step in closing the gap between the application of reinforcement learning methods in the domains where only some limited input-output data is available.

  18. Kodiak: An Implementation Framework for Branch and Bound Algorithms

    NASA Technical Reports Server (NTRS)

    Smith, Andrew P.; Munoz, Cesar A.; Narkawicz, Anthony J.; Markevicius, Mantas

    2015-01-01

    Recursive branch and bound algorithms are often used to refine and isolate solutions to several classes of global optimization problems. A rigorous computation framework for the solution of systems of equations and inequalities involving nonlinear real arithmetic over hyper-rectangular variable and parameter domains is presented. It is derived from a generic branch and bound algorithm that has been formally verified, and utilizes self-validating enclosure methods, namely interval arithmetic and, for polynomials and rational functions, Bernstein expansion. Since bounds computed by these enclosure methods are sound, this approach may be used reliably in software verification tools. Advantage is taken of the partial derivatives of the constraint functions involved in the system, firstly to reduce the branching factor by the use of bisection heuristics and secondly to permit the computation of bifurcation sets for systems of ordinary differential equations. The associated software development, Kodiak, is presented, along with examples of three different branch and bound problem types it implements.
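
    The recursive skeleton of such a solver is short: bound the function over a box with an interval enclosure, prune boxes whose enclosure excludes zero, and bisect the rest. The sketch below isolates the zeros of x^2 - 2 with hand-written interval bounds; Kodiak additionally uses Bernstein enclosures, derivative information, and formally verified pruning:

        # Minimal interval branch-and-bound sketch (illustrative only).
        def f_range(lo, hi):
            """Interval enclosure of f(x) = x^2 - 2 on [lo, hi]."""
            cands = [lo * lo, hi * hi] + ([0.0] if lo <= 0.0 <= hi else [])
            return min(cands) - 2.0, max(cands) - 2.0

        def isolate(lo, hi, eps=1e-6, out=None):
            out = [] if out is None else out
            flo, fhi = f_range(lo, hi)
            if flo > 0.0 or fhi < 0.0:    # 0 not in enclosure: prune this box
                return out
            if hi - lo < eps:             # small enough: report enclosing box
                out.append((lo, hi))
                return out
            mid = 0.5 * (lo + hi)         # otherwise branch (bisect)
            isolate(lo, mid, eps, out)
            isolate(mid, hi, eps, out)
            return out

        boxes = isolate(0.0, 4.0)
        print(len(boxes), "boxes near sqrt(2):", boxes[0])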

  19. A feature-preserving hair removal algorithm for dermoscopy images.

    PubMed

    Abbas, Qaisar; Garcia, Irene Fondón; Emre Celebi, M; Ahmad, Waqar

    2013-02-01

    Accurate segmentation and repair of hair-occluded information in dermoscopy images are challenging tasks for computer-aided detection (CAD) of melanoma. Many hair-restoration algorithms have been developed, but most fail to identify hairs accurately, and their removal technique is slow and disturbs the lesion's pattern. In this article, a novel hair-restoration algorithm is presented which preserves skin lesion features such as color and texture and is able to segment both dark and light hairs. Our algorithm is based on three major steps: (i) rough hairs are segmented using matched filtering with the first derivative of Gaussian (MF-FDOG) and thresholding, which generates strong responses for both dark and light hairs; (ii) the hair masks are refined by morphological edge-based techniques; and (iii) the hair-occluded pixels are repaired through a fast marching inpainting method. Diagnostic accuracy (DA) and texture-quality measure (TQM) metrics, based on dermatologist-drawn manual hair masks used as ground truth, are utilized to evaluate the performance of the system. The hair-restoration algorithm was tested on 100 dermoscopy images. Comparisons were made with (i) linear interpolation, (ii) inpainting by a non-linear partial differential equation (PDE), and (iii) exemplar-based repairing techniques. Among the different hair detection and removal techniques, our proposed algorithm obtained the highest values of DA (93.3%) and TQM (90%). The experimental results indicate that the proposed algorithm is highly accurate and robust and able to restore hair pixels without damaging the lesion texture. This method is fully automatic and can be easily integrated into a CAD system. © 2011 John Wiley & Sons A/S.

  20. Three-Dimensional Stable Nonorthogonal FDTD Algorithm with Adaptive Mesh Refinement for Solving Maxwell’s Equations

    DTIC Science & Technology

    2013-03-01

    Räisänen. An efficient FDTD algorithm for the analysis of microstrip patch antennas printed on a general anisotropic dielectric substrate. IEEE... applications [3, 21, 22], including antennas, microwave circuits, geophysics, optics, etc. The Ground Penetrating Radar (GPR) is a popular and... IEEE Trans. Antennas Propag., 41:994-999, 1993. 16 [6] S. G. Garcia, T. M. Hung-Bao, R. G. Martin, and B. G. Olmedo. On the application of finite

  1. Humanoid Mobile Manipulation Using Controller Refinement

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric; Brock, Oliver

    2006-01-01

    An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. This paper proposes that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.

  2. Humanoid Mobile Manipulation Using Controller Refinement

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric

    2006-01-01

    An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. In this paper, it is proposed that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.

  3. A Refinement of the McMillen (1988) Recursive Digital Filter for the Analysis of Atmospheric Turbulence

    NASA Astrophysics Data System (ADS)

    Falocchi, Marco; Giovannini, Lorenzo; Franceschi, Massimiliano de; Zardi, Dino

    2018-05-01

    We present a refinement of the recursive digital filter proposed by McMillen (Boundary-Layer Meteorol 43:231-245, 1988) for separating surface-layer turbulence from low-frequency fluctuations affecting the mean flow, especially over complex terrain. A straightforward application of the filter causes both an amplitude attenuation and a forward phase shift in the filtered signal. As a consequence, turbulence fluctuations, evaluated as the difference between the original series and the filtered one, as well as higher-order moments calculated from them, may be affected by serious inaccuracies. The new algorithm (i) produces a rigorous zero-phase filter, (ii) restores the amplitude of the low-frequency signal, and (iii) corrects all filter-induced signal distortions.
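
    The standard zero-phase remedy that such refinements build on is to run the recursive filter forward and then backward over the series, so the two phase shifts cancel. A first-order low-pass sketch follows (illustrative only; the published algorithm also restores low-frequency amplitude exactly):

        import numpy as np

        def lowpass(x, alpha):
            """First-order recursive low-pass; output lags input (phase shift)."""
            y = np.empty_like(x)
            y[0] = x[0]
            for i in range(1, len(x)):
                y[i] = alpha * y[i - 1] + (1.0 - alpha) * x[i]
            return y

        def zero_phase_lowpass(x, alpha):
            # forward pass, then backward pass: the phase shifts cancel
            return lowpass(lowpass(x, alpha)[::-1], alpha)[::-1]

        t = np.linspace(0.0, 10.0, 2000)
        slow = np.sin(2 * np.pi * 0.2 * t)                  # mean-flow trend
        x = slow + 0.3 * np.sin(2 * np.pi * 8.0 * t)        # plus "turbulence"
        fluct = x - zero_phase_lowpass(x, alpha=0.98)       # turbulent part
        print("residual trend in fluctuations:", np.round(fluct.mean(), 4))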

  4. ADAPTIVE TETRAHEDRAL GRID REFINEMENT AND COARSENING IN MESSAGE-PASSING ENVIRONMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hallberg, J.; Stagg, A.

    2000-10-01

    A grid refinement and coarsening scheme has been developed for tetrahedral and triangular grid-based calculations in message-passing environments. The element adaption scheme is based on an edge bisection of elements marked for refinement by an appropriate error indicator. Hash-table/linked-list data structures are used to store nodal and element information. The grid along inter-processor boundaries is refined and coarsened consistently with the update of these data structures via MPI calls. The parallel adaption scheme has been applied to the solution of a transient, three-dimensional, nonlinear, groundwater flow problem. Timings indicate efficiency of the grid refinement process relative to the flow solver calculations.

  5. A freestream-preserving fourth-order finite-volume method in mapped coordinates with adaptive-mesh refinement

    DOE PAGES

    Guzik, Stephen M.; Gao, Xinfeng; Owen, Landon D.; ...

    2015-12-20

    We present a fourth-order accurate finite-volume method for solving time-dependent hyperbolic systems of conservation laws on mapped grids that are adaptively refined in space and time. Novel considerations for formulating the semi-discrete system of equations in computational space are combined with detailed mechanisms for accommodating the adapting grids. These considerations ensure that conservation is maintained and that the divergence of a constant vector field is always zero (the freestream-preservation property). The solution in time is advanced with a fourth-order Runge-Kutta method. A series of tests verifies that the expected accuracy is achieved in smooth flows, and the solution of a Mach reflection problem demonstrates the effectiveness of the algorithm in resolving strong discontinuities.

  6. REFINING FLUORINATED COMPOUNDS

    DOEpatents

    Linch, A.L.

    1963-01-01

    This invention relates to a method of refining a liquid perfluorinated hydrocarbon oil containing fluorocarbons of 12 to 28 carbon atoms per molecule by distilling between 150 deg C and 300 deg C at 10 mm Hg absolute pressure. The perfluorinated oil is washed with a chlorinated lower aliphatic hydrocarbon, which maintains a separate liquid phase when mixed with the oil. Impurities detrimental to the stability of the oil are extracted by the chlorinated lower aliphatic hydrocarbon. (AEC)

  7. A novel orthoimage mosaic method using a weighted A∗ algorithm - Implementation and evaluation

    NASA Astrophysics Data System (ADS)

    Zheng, Maoteng; Xiong, Xiaodong; Zhu, Junfeng

    2018-04-01

    The implementation and evaluation of a weighted A∗ algorithm for orthoimage mosaic with UAV (Unmanned Aircraft Vehicle) imagery is proposed. The initial seam-line network is firstly generated by standard Voronoi Diagram algorithm; an edge diagram is generated based on DSM (Digital Surface Model) data; the vertices (conjunction nodes of seam-lines) of the initial network are relocated if they are on high objects (buildings, trees and other artificial structures); and the initial seam-lines are refined using the weighted A∗ algorithm based on the edge diagram and the relocated vertices. Our method was tested with three real UAV datasets. Two quantitative terms are introduced to evaluate the results of the proposed method. Preliminary results show that the method is suitable for regular and irregular aligned UAV images for most terrain types (flat or mountainous areas), and is better than the state-of-the-art method in both quality and efficiency based on the test datasets.

  8. Influence of Mg on Grain Refinement of Near Eutectic Al-Si Alloys

    NASA Astrophysics Data System (ADS)

    Ravi, K. R.; Manivannan, S.; Phanikumar, G.; Murty, B. S.; Sundarraj, Suresh

    2011-07-01

    Although the grain-refinement practice is well established for wrought Al alloys, in the case of foundry alloys such as near eutectic Al-Si alloys, the underlying mechanisms and the use of grain refiners need better understanding. Conventional grain refiners such as Al-5Ti-1B are not effective in grain refining the Al-Si alloys due to the poisoning effect of Si. In this work, we report the results of a newly developed grain refiner, which can effectively grain refine as well as modify eutectic and primary Si in near eutectic Al-Si alloys. Among the material choices, the grain refining response with Al-1Ti-3B master alloy is found to be superior compared to the conventional Al-5Ti-1B master alloy. It was also found that magnesium additions of 0.2 wt pct along with the Al-1Ti-3B master alloy further enhance the near eutectic Al-Si alloy's grain refining efficiency, thus leading to improved bulk mechanical properties. We have found that magnesium essentially scavenges the oxygen present on the surface of nucleant particles, improves wettability, and reduces the agglomeration tendency of boride particles, thereby enhancing grain refining efficiency. It allows the nucleant particles to act as potent and active nucleation sites even at levels as low as 0.2 pct in the Al-1Ti-3B master alloy.

  9. Range pattern matching with layer operations and continuous refinements

    NASA Astrophysics Data System (ADS)

    Tseng, I.-Lun; Lee, Zhao Chuan; Li, Yongfu; Perez, Valerio; Tripathi, Vikas; Ong, Jonathan Yoong Seang

    2018-03-01

    At advanced and mainstream process nodes (e.g., the 7 nm, 14 nm, 22 nm, and 55 nm process nodes), lithography hotspots can exist in integrated-circuit layouts even if the layouts pass design rule checking (DRC). Lithography hotspots in a layout can cause manufacturability issues, which can result in yield losses of manufactured integrated circuits. In order to detect lithography hotspots in physical layouts, pattern matching (PM) algorithms and commercial PM tools have been developed. However, there is still a need to use DRC tools to perform PM operations. In this paper, we propose a PM synthesis methodology, using a continuous refinement technique, for automatically synthesizing a given lithography hotspot pattern into a DRC deck consisting of layer operation commands, so that an equivalent PM operation can be performed by executing the synthesized deck with a DRC tool. Note that the proposed methodology can deal not only with exact patterns but also with range patterns. Lithography hotspot patterns containing multiple layers can also be processed. Experimental results show that the proposed methodology can accurately and efficiently detect lithography hotspots in physical layouts.

  10. Creation of parallel algorithms for the solution of problems of gas dynamics on multi-core computers and GPU

    NASA Astrophysics Data System (ADS)

    Rybakin, B.; Bogatencov, P.; Secrieru, G.; Iliuha, N.

    2013-10-01

    The paper deals with a parallel algorithm for calculations on multiprocessor computers and GPU accelerators. Results are presented for the interaction of shock waves with a low-density bubble and for the problem of gas flow under gravity. The algorithm combines the ability to capture shock waves at high resolution, second-order accuracy for TVD schemes, and low diffusion of the advection scheme. Many complex problems of continuum mechanics are numerically solved on structured or unstructured grids. Improving the accuracy of the calculations requires choosing a sufficiently fine grid (with a small cell size), which has the drawback of substantially increasing computation time. Therefore, for calculations of complex problems it is reasonable to use the method of Adaptive Mesh Refinement: the grid refinement is performed only in the areas of interest, where, e.g., shock waves are generated, or where complex geometry or other such features exist. Thus, the computing time is greatly reduced. In addition, the execution of the application on the resulting sequence of nested, progressively finer grids can be parallelized. The proposed algorithm is based on the AMR method. Utilization of the AMR method can significantly improve the resolution of the difference grid in areas of high interest and, at the same time, accelerate the calculation of multi-dimensional problems. Parallel algorithms for the analyzed difference models were implemented for calculations on graphics processors using CUDA technology [1].

  11. Implementations of back propagation algorithm in ecosystems applications

    NASA Astrophysics Data System (ADS)

    Ali, Khalda F.; Sulaiman, Riza; Elamir, Amir Mohamed

    2015-05-01

    Artificial Neural Networks (ANNs) have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is in solving problems that are too complex for conventional technologies, problems that do not have an algorithmic solution or whose algorithmic solution is too complex to be found. In general, because of their abstraction from the biological brain, ANNs developed from concepts that evolved from late-twentieth-century neurophysiological experiments on the cells of the human brain, to overcome the perceived inadequacies of conventional ecological data-analysis methods. ANNs have gained increasing attention in ecosystem applications because of their capacity to detect patterns in data through non-linear relationships, a characteristic that confers on them a superior predictive ability. In this research, ANNs are applied to the analysis of an ecological system. The neural networks use the well-known Back Propagation (BP) algorithm with the delta rule for adaptation of the system. The BP training algorithm is an effective analytical method for adaptation in ecosystem applications, mainly because of its capacity to detect patterns in data through non-linear relationships. The BP algorithm uses supervised learning, which means that we provide the algorithm with examples of the inputs and outputs we want the network to compute, and the error is then calculated. The idea of the back propagation algorithm is to reduce this error until the ANN learns the training data. The training begins with random weights, and the goal is to adjust them so that the error is minimal. This research evaluated the use of artificial neural network (ANN) techniques in ecological system analysis and modeling. The experimental results demonstrate that an artificial neural network system can be trained to act as an expert
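
    A minimal concrete instance of the described training loop, using one hidden layer and the delta rule on the XOR problem (a generic sketch, not the ecosystem models of the study):

        import numpy as np

        # Back-propagation with the delta rule: forward pass, output error,
        # error back-propagated through one hidden layer, gradient updates.
        rng = np.random.default_rng(4)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
        y = np.array([[0], [1], [1], [0]], float)
        W1 = rng.standard_normal((2, 8)); b1 = np.zeros(8)
        W2 = rng.standard_normal((8, 1)); b2 = np.zeros(1)
        sig = lambda z: 1.0 / (1.0 + np.exp(-z))

        lr = 1.0
        for _ in range(5000):                       # training starts from random weights
            h = sig(X @ W1 + b1)                    # forward pass
            out = sig(h @ W2 + b2)
            d_out = (out - y) * out * (1 - out)     # delta rule at the output
            d_h = (d_out @ W2.T) * h * (1 - h)      # back-propagated deltas
            W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
            W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

        print(np.round(out.ravel(), 2))   # approaches [0, 1, 1, 0]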

  12. Improving the accuracy of macromolecular structure refinement at 7 Å resolution.

    PubMed

    Brunger, Axel T; Adams, Paul D; Fromme, Petra; Fromme, Raimund; Levitt, Michael; Schröder, Gunnar F

    2012-06-06

    In X-ray crystallography, molecular replacement and subsequent refinement is challenging at low resolution. We compared refinement methods using synchrotron diffraction data of photosystem I at 7.4 Å resolution, starting from different initial models with increasing deviations from the known high-resolution structure. Standard refinement spoiled the initial models, moving them further away from the true structure and leading to high R(free)-values. In contrast, DEN refinement improved even the most distant starting model as judged by R(free), atomic root-mean-square differences to the true structure, significance of features not included in the initial model, and connectivity of electron density. The best protocol was DEN refinement with initial segmented rigid-body refinement. For the most distant initial model, the fraction of atoms within 2 Å of the true structure improved from 24% to 60%. We also found a significant correlation between R(free) values and the accuracy of the model, suggesting that R(free) is useful even at low resolution. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.; Van Meter, James R.

    2005-01-01

    A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite-differencing error generated across low- and high-resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem by presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.

  14. Method for refining contaminated iridium

    DOEpatents

    Heshmatpour, B.; Heestand, R.L.

    1982-08-31

    Contaminated iridium is refined by alloying it with an alloying agent selected from the group consisting of manganese and an alloy of manganese and copper, and then dissolving the alloying agent from the formed alloy to provide a purified iridium powder.

  15. Evolving spiking neural networks: a novel growth algorithm exhibits unintelligent design

    NASA Astrophysics Data System (ADS)

    Schaffer, J. David

    2015-06-01

    Spiking neural networks (SNNs) have drawn considerable excitement because of their computational properties, believed to be superior to conventional von Neumann machines, and sharing properties with living brains. Yet progress building these systems has been limited because we lack a design methodology. We present a gene-driven network growth algorithm that enables a genetic algorithm (evolutionary computation) to generate and test SNNs. The genome for this algorithm grows O(n) where n is the number of neurons; n is also evolved. The genome not only specifies the network topology, but all its parameters as well. Experiments show the algorithm producing SNNs that effectively produce a robust spike bursting behavior given tonic inputs, an application suitable for central pattern generators. Even though evolution did not include perturbations of the input spike trains, the evolved networks showed remarkable robustness to such perturbations. In addition, the output spike patterns retain evidence of the specific perturbation of the inputs, a feature that could be exploited by network additions that could use this information for refined decision making if required. On a second task, a sequence detector, a discriminating design was found that might be considered an example of "unintelligent design"; extra non-functional neurons were included that, while inefficient, did not hamper its proper functioning.

  16. The Influence of Grain Refiners on the Efficiency of Ceramic Foam Filters

    NASA Astrophysics Data System (ADS)

    Towsey, Nicholas; Schneider, Wolfgang; Krug, Hans-Peter; Hardman, Angela; Keegan, Neil J.

    An extensive program of work has been carried out to evaluate the efficiency of ceramic foam filters under carefully controlled conditions. Work reported at previous TMS meetings showed that in the absence of grain refiners, ceramic foam filters have the capacity for high filtration efficiency and consistent, reliable performance. The current phase of the investigation focuses on the impact grain refiner additions have on filter performance. The high filtration efficiencies obtained using 50 or 80 ppi CFFs in the absence of grain refiners diminish when Al-3%Ti-1%B grain refiners are added. This, together with the impact of incoming inclusion loading on filter performance and the level of grain refiner addition, is considered in detail. The new generation Al-3%Ti-0.15%C grain refiner has also been included. At typical addition levels (1 kg/tonne) the effect on filter efficiency is similar to that for TiB2-based grain refiners. The work was again conducted on a production scale using AA1050 alloy. Metal quality was determined using LiMCA and PoDFA. Spent filters were also analysed.

  17. CoFlame: A refined and validated numerical algorithm for modeling sooting laminar coflow diffusion flames

    NASA Astrophysics Data System (ADS)

    Eaves, Nick A.; Zhang, Qingan; Liu, Fengshan; Guo, Hongsheng; Dworkin, Seth B.; Thomson, Murray J.

    2016-10-01

    Mitigation of soot emissions from combustion devices is a global concern. For example, recent EURO 6 regulations for vehicles have placed stringent limits on soot emissions. In order to allow design engineers to achieve the goal of reduced soot emissions, they must have the tools to do so. Due to the complex nature of soot formation, which includes growth and oxidation, detailed numerical models are required to gain fundamental insights into the mechanisms of soot formation. A detailed description of the CoFlame FORTRAN code, which models sooting laminar coflow diffusion flames, is given. The code solves axial and radial velocity, temperature, species conservation, and soot aggregate and primary particle number density equations. The sectional particle dynamics model includes nucleation, PAH condensation and HACA surface growth, surface oxidation, coagulation, fragmentation, particle diffusion, and thermophoresis. The code utilizes a distributed memory parallelization scheme with strip-domain decomposition. The public release to the research community of the CoFlame code, which has been refined in terms of coding structure, accompanies this paper. CoFlame is validated against experimental data for reattachment length in an axi-symmetric pipe with a sudden expansion, and ethylene-air and methane-air diffusion flames for multiple soot morphological parameters and gas-phase species. Finally, the parallel performance and computational costs of the code are investigated.

  18. Accelerating scientific computations with mixed precision algorithms

    NASA Astrophysics Data System (ADS)

    Baboulin, Marc; Buttari, Alfredo; Dongarra, Jack; Kurzak, Jakub; Langou, Julie; Langou, Julien; Luszczek, Piotr; Tomov, Stanimire

    2009-12-01

    On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here applies not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphical Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented.
    Program summary
    Program title: ITER-REF
    Catalogue identifier: AECO_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECO_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 7211
    No. of bytes in distributed program, including test data, etc.: 41 862
    Distribution format: tar.gz
    Programming language: FORTRAN 77
    Computer: desktop, server
    Operating system: Unix/Linux
    RAM: 512 Mbytes
    Classification: 4.8
    External routines: BLAS (optional)
    Nature of problem: On modern architectures, 32-bit operations are often at least twice as fast as 64-bit operations; the challenge is to exploit this speed advantage while maintaining the 64-bit accuracy of the resulting solution.
    Solution method: Mixed precision algorithms stem from the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved. A common approach to the solution of linear systems, either dense or sparse, is to perform the LU factorization in single precision and then refine the solution iteratively, with residuals computed in double precision, until double precision accuracy is reached.
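
    The core mixed-precision idea can be sketched in a few lines; the following NumPy/SciPy version of iterative refinement for a dense system is a minimal illustration of the technique, not the distributed FORTRAN 77 code:

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        def mixed_precision_solve(A, b, max_iter=10, tol=1e-12):
            """LU-factorize in float32, refine the solution to float64 accuracy."""
            A64, b64 = A.astype(np.float64), b.astype(np.float64)
            lu, piv = lu_factor(A.astype(np.float32))   # O(n^3) work in single
            x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
            for _ in range(max_iter):
                r = b64 - A64 @ x                       # residual in double
                if np.linalg.norm(r) <= tol * np.linalg.norm(b64):
                    break
                d = lu_solve((lu, piv), r.astype(np.float32))  # O(n^2) correction
                x += d.astype(np.float64)
            return x

        rng = np.random.default_rng(0)
        A, b = rng.standard_normal((200, 200)), rng.standard_normal(200)
        print(np.linalg.norm(A @ mixed_precision_solve(A, b) - b))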

  19. Improved ligand geometries in crystallographic refinement using AFITT in PHENIX

    DOE PAGES

    Janowski, Pawel A.; Moriarty, Nigel W.; Kelley, Brian P.; ...

    2016-08-31

    Modern crystal structure refinement programs rely on geometry restraints to overcome the challenge of a low data-to-parameter ratio. While the classical Engh and Huber restraints work well for standard amino-acid residues, the chemical complexity of small-molecule ligands presents a particular challenge. Most current approaches either limit ligand restraints to those that can be readily described in the Crystallographic Information File (CIF) format, thus sacrificing chemical flexibility and energetic accuracy, or they employ protocols that substantially lengthen the refinement time, potentially hindering rapid automated refinement workflows. PHENIX–AFITT refinement uses a full molecular-mechanics force field for user-selected small-molecule ligands during refinement, eliminating the potentially difficult problem of finding or generating high-quality geometry restraints. It is fully integrated with a standard refinement protocol and requires practically no additional steps from the user, making it ideal for high-throughput workflows. PHENIX–AFITT refinements also handle multiple ligands in a single model, alternate conformations and covalently bound ligands. Here, the results of combining AFITT and the PHENIX software suite on a data set of 189 protein–ligand PDB structures are presented. Refinements using PHENIX–AFITT significantly reduce ligand conformational energy and lead to improved geometries without detriment to the fit to the experimental data. Finally, for the data presented, PHENIX–AFITT refinements result in more chemically accurate models for small-molecule ligands.

  20. 40 CFR 80.1622 - Approval for small refiner and small volume refinery status.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... appropriate data to correct the record when the company submits its application. (ii) Foreign small refiners... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Approval for small refiner and small... Approval for small refiner and small volume refinery status. (a) Applications for small refiner or small...

  1. PheKB: a catalog and workflow for creating electronic phenotype algorithms for transportability

    PubMed Central

    Kirby, Jacqueline C; Speltz, Peter; Rasmussen, Luke V; Basford, Melissa; Gottesman, Omri; Peissig, Peggy L; Pacheco, Jennifer A; Tromp, Gerard; Pathak, Jyotishman; Carrell, David S; Ellis, Stephen B; Lingren, Todd; Thompson, Will K; Savova, Guergana; Haines, Jonathan; Roden, Dan M; Harris, Paul A

    2016-01-01

    Objective Health care generated data have become an important source for clinical and genomic research. Often, investigators create and iteratively refine phenotype algorithms to achieve high positive predictive values (PPVs) or sensitivity, thereby identifying valid cases and controls. These algorithms achieve the greatest utility when validated and shared by multiple health care systems. Materials and Methods We report the current status and impact of the Phenotype KnowledgeBase (PheKB, http://phekb.org), an online environment supporting the workflow of building, sharing, and validating electronic phenotype algorithms. We analyze the most frequent components used in algorithms and their performance at authoring institutions and secondary implementation sites. Results As of June 2015, PheKB contained 30 finalized phenotype algorithms and 62 algorithms in development spanning a range of traits and diseases. Phenotypes have had over 3500 unique views in a 6-month period and have been reused by other institutions. International Classification of Disease codes were the most frequently used component, followed by medications and natural language processing. Among algorithms with published performance data, the median PPV was nearly identical when evaluated at the authoring institutions (n = 44; case 96.0%, control 100%) compared to implementation sites (n = 40; case 97.5%, control 100%). Discussion These results demonstrate that a broad range of algorithms to mine electronic health record data from different health systems can be developed with high PPV, and algorithms developed at one site are generally transportable to others. Conclusion By providing a central repository, PheKB enables improved development, transportability, and validity of algorithms for research-grade phenotypes using health care generated data. PMID:27026615

  2. Algorithm for automatic forced spirometry quality assessment: technological developments.

    PubMed

    Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere

    2014-01-01

    We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of a FS program in the community. Recent studies have demonstrated that the application of quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society) in commercially available equipment with automatic quality assessment can be markedly improved. To this end, an algorithm for assessing quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by ATS/ERS, was done. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and, (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterize the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community.

  3. Germs and Hygiene - Multiple Languages

    MedlinePlus

    ... አማርኛ ) Cleaning to Prevent the Flu - English PDF Cleaning to Prevent the Flu - Amarɨñña / አማርኛ ( ... Disease Control and Prevention Fight the Flu Poster - English PDF Fight the Flu Poster - Amarɨñña / አማርኛ (Amharic) ...

  4. Mesh refinement strategy for optimal control problems

    NASA Astrophysics Data System (ADS)

    Paiva, L. T.; Fontes, F. A. C. C.

    2013-10-01

    Direct methods are becoming the most widely used technique for solving nonlinear optimal control problems. Regular time meshes having equidistant spacing are frequently used. However, in some cases these meshes cannot cope accurately with nonlinear behavior. One way to improve the solution is to select a new mesh with a greater number of nodes. Another way involves adaptive mesh refinement, in which the mesh nodes have non-equidistant spacing, allowing non-uniform node collocation. In the method presented in this paper, a time-mesh refinement strategy based on the local error is developed. After computing a solution on a coarse mesh, the local error is evaluated, which gives information about the subintervals of the time domain where refinement is needed. This procedure is repeated until the local error reaches a user-specified threshold. The technique is applied to solve the car-like vehicle problem aiming at minimum consumption. The approach developed in this paper leads to results with greater accuracy and yet with lower overall computational time as compared to using a time mesh with equidistant spacing.
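
    A minimal sketch of the refinement loop described above, assuming some external solver supplies per-subinterval local-error estimates (the bisection rule and tolerance handling are generic choices, not necessarily the authors' exact strategy):

        import numpy as np

        def refine_mesh(mesh, local_error, tol):
            """Bisect every subinterval whose local error exceeds tol."""
            new_mesh = [mesh[0]]
            for t0, t1, err in zip(mesh[:-1], mesh[1:], local_error):
                if err > tol:
                    new_mesh.append(0.5 * (t0 + t1))  # insert midpoint node
                new_mesh.append(t1)
            return np.array(new_mesh)

        mesh = np.linspace(0.0, 1.0, 6)
        err = np.array([1e-4, 5e-3, 2e-2, 3e-3, 8e-5])  # from a coarse solve
        print(refine_mesh(mesh, err, tol=1e-3))
        # The outer loop would re-solve on the new mesh and repeat until
        # every subinterval's local error falls below the threshold.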

  5. A new memetic algorithm for mitigating tandem automated guided vehicle system partitioning problem

    NASA Astrophysics Data System (ADS)

    Pourrahimian, Parinaz

    2017-11-01

    Automated Guided Vehicle Systems (AGVS) provide the flexibility and automation demanded by Flexible Manufacturing Systems (FMS). However, with the growing concern over responsible management of resource use, it is crucial to manage these vehicles in an efficient way in order to reduce travel time and control conflicts and congestion. This paper presents the development process of a new Memetic Algorithm (MA) for optimizing the partitioning problem of tandem AGVS. MAs employ a Genetic Algorithm (GA) as a global search and apply a local search to bring the solutions to a local optimum point. A new Tabu Search (TS) has been developed and combined with a GA to refine the individuals newly generated by the GA. The aim of the proposed algorithm is to minimize the maximum workload of the system. Finally, the performance of the proposed algorithm is evaluated using Matlab. This study also compared the objective function of the proposed MA with that of the GA. The results showed that the TS, as a local search, significantly improves the objective function of the GA for different system sizes with large and small numbers of zones, by 1.26 on average.
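
    The overall structure of such a memetic algorithm is easy to sketch; in the toy Python version below, a simple hill climber stands in for the paper's tabu search, and the objective and operators are placeholders rather than the tandem-AGVS workload model:

        import random

        def local_search(x, f, neighbors, steps=20):
            """Greedy refinement of one individual (stand-in for tabu search)."""
            best = x
            for _ in range(steps):
                cand = min(neighbors(best), key=f)
                if f(cand) >= f(best):
                    break
                best = cand
            return best

        def memetic(f, init, crossover, mutate, neighbors, pop=20, gens=50):
            P = sorted((init() for _ in range(pop)), key=f)
            for _ in range(gens):
                children = [mutate(crossover(*random.sample(P[:10], 2)))
                            for _ in range(pop)]
                refined = [local_search(c, f, neighbors) for c in children]
                P = sorted(refined + P[:2], key=f)[:pop]  # keep two elites
            return P[0]

        # Toy usage: minimize a sum of squares over small integer vectors.
        f = lambda x: sum(v * v for v in x)
        init = lambda: [random.randint(-10, 10) for _ in range(5)]
        crossover = lambda a, b: [random.choice(p) for p in zip(a, b)]
        mutate = lambda x: [v + random.choice([-1, 0, 1]) for v in x]
        neighbors = lambda x: [x[:i] + [x[i] + d] + x[i + 1:]
                               for i in range(len(x)) for d in (-1, 1)]
        print(memetic(f, init, crossover, mutate, neighbors))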

  6. Classification-Assisted Memetic Algorithms for Equality-Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Handoko, Stephanus Daniel; Kwoh, Chee Keong; Ong, Yew Soon

    Regression has successfully been incorporated into memetic algorithms (MAs) to build surrogate models for the objective or constraint landscape of optimization problems. This helps to alleviate the need for expensive fitness function evaluations by performing local refinements on the approximated landscape. Classification can alternatively be used to assist an MA in the choice of individuals that would experience refinement. Support-vector-assisted MAs were recently proposed to alleviate the need for function evaluations in inequality-constrained optimization problems by distinguishing regions of feasible solutions from those of infeasible ones, based on past solutions, such that search efforts can be focussed on potential regions only. For problems having equality constraints, however, the feasible space would obviously be extremely small. It is thus extremely difficult for the global search component of the MA to produce feasible solutions, and the classification of feasible and infeasible space would become ineffective. In this paper, a novel strategy to overcome this limitation is proposed, particularly for problems having one and only one equality constraint. The raw constraint value of an individual, instead of its feasibility class, is utilized in this work.

  7. Optimization of Melt Treatment for Austenitic Steel Grain Refinement

    NASA Astrophysics Data System (ADS)

    Lekakh, Simon N.; Ge, Jun; Richards, Von; O'Malley, Ron; TerBush, Jessica R.

    2017-02-01

    Refinement of the as-cast grain structure of austenitic steels requires the presence of active solid nuclei during solidification. These nuclei can be formed in situ in the liquid alloy by promoting reactions between transition metals (Ti, Zr, Nb, and Hf) and metalloid elements (C, S, O, and N) dissolved in the melt. Using thermodynamic simulations, experiments were designed to evaluate the effectiveness of a predicted sequence of reactions targeted to form precipitates that could act as active nuclei for grain refinement in austenitic steel castings. Melt additions performed to promote the sequential precipitation of titanium nitride (TiN) onto previously formed spinel (Al2MgO4) inclusions in the melt resulted in a significant refinement of the as-cast grain structure in heavy section Cr-Ni-Mo stainless steel castings. A refined as-cast structure consisting of an inner fine-equiaxed grain structure and outer columnar dendrite zone structure of limited length was achieved in experimental castings. The sequential precipitation of TiN onto Al2MgO4 was confirmed using automated SEM/EDX and TEM analyses.

  8. Implications for Child Bilingual Acquisition, Optionality and Transfer

    ERIC Educational Resources Information Center

    Serratrice, Ludovica

    2014-01-01

    Amaral & Roeper's Multiple Grammars (MG) proposal offers an appealingly simple way of thinking about the linguistic representations of bilingual speakers. This article presents a commentary on the MG language acquisition theory proposed by Luiz Amaral and Tom Roeper in this issue, focusing on the theory's implications for child…

  9. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    PubMed

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
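
    The central relations of such a pipeline can be sketched with the standard haze model (this is the generic formulation, not the authors' exact estimator; the attenuation coefficient beta, focal length and baseline below are made-up values):

        import numpy as np

        def transmission_from_disparity(disparity, focal, baseline, beta=0.05):
            """Depth from stereo disparity, then Beer-Lambert transmission."""
            depth = focal * baseline / np.maximum(disparity, 1e-6)
            return np.exp(-beta * depth)

        def defog(I, t, A, t_min=0.1):
            """Invert the haze model I = J*t + A*(1 - t) for scene radiance J."""
            t = np.clip(t, t_min, 1.0)[..., None]
            return (I - A) / t + A

        disparity = np.random.uniform(1.0, 30.0, (4, 4))   # mock disparity map
        I = np.random.uniform(0.0, 1.0, (4, 4, 3))         # mock foggy image
        A = np.array([0.9, 0.9, 0.92])                     # atmospheric light
        t = transmission_from_disparity(disparity, focal=700.0, baseline=0.1)
        print(defog(I, t, A).shape)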

  10. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    PubMed Central

    Park, Jinho; Park, Hasil

    2017-01-01

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system. PMID:29232826

  11. Refining As-cast β-Ti Grains Through ZrN Inoculation

    NASA Astrophysics Data System (ADS)

    Qiu, Dong; Zhang, Duyao; Easton, Mark A.; St John, David H.; Gibson, Mark A.

    2018-03-01

    The columnar-to-equiaxed transition and remarkable refinement of β-Ti grains occurred in an as-cast Ti-13Mo alloy when a new grain refiner, ZrN, was inoculated at a nitrogen level as low as 0.4 wt pct. The grain-refining effect is attributed to in situ-formed TiN particles that provide active nucleation sites and to solute Zr that promotes constitutional supercooling. Reproducible orientation relationships were identified between the TiN nucleants and the β-Ti matrix, and are well explained by the edge-to-edge matching model.

  12. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.

  13. A grid-enabled web service for low-resolution crystal structure refinement.

    PubMed

    O'Donovan, Daniel J; Stokes-Rees, Ian; Nam, Yunsun; Blacklow, Stephen C; Schröder, Gunnar F; Brunger, Axel T; Sliz, Piotr

    2012-03-01

    Deformable elastic network (DEN) restraints have proved to be a powerful tool for refining structures from low-resolution X-ray crystallographic data sets. Unfortunately, optimal refinement using DEN restraints requires extensive calculations and is often hindered by a lack of access to sufficient computational resources. The DEN web service presented here intends to provide structural biologists with access to resources for running computationally intensive DEN refinements in parallel on the Open Science Grid, the US cyberinfrastructure. Access to the grid is provided through a simple and intuitive web interface integrated into the SBGrid Science Portal. Using this portal, refinements combined with full parameter optimization that would take many thousands of hours on standard computational resources can now be completed in several hours. An example of the successful application of DEN restraints to the human Notch1 transcriptional complex using the grid resource, and summaries of all submitted refinements, are presented as justification.

  14. Noninvasive evaluation of mental stress using by a refined rough set technique based on biomedical signals.

    PubMed

    Liu, Tung-Kuan; Chen, Yeh-Peng; Hou, Zone-Yuan; Wang, Chao-Chih; Chou, Jyh-Horng

    2014-06-01

    Evaluating and treating stress can substantially benefit people with health problems. Currently, mental stress is evaluated using medical questionnaires. However, the accuracy of this evaluation method is questionable because of variations caused by factors such as cultural differences and individual subjectivity. Measuring biomedical signals is an effective method for estimating mental stress that enables this problem to be overcome; however, the relationship between levels of mental stress and biomedical signals remains poorly understood. A refined rough set algorithm is proposed to determine the relationship between mental stress and biomedical signals; this algorithm, called RS-HTGA, combines rough set theory with a hybrid Taguchi-genetic algorithm. Two parameters were used for evaluating the performance of the proposed RS-HTGA method. A dataset obtained from a practice clinic comprising 362 cases (196 male, 166 female) was adopted to evaluate the performance of the proposed approach. The empirical results indicate that the proposed method can achieve acceptable accuracy in medical practice. Furthermore, the proposed method was successfully used to identify the relationship between mental stress levels and biomedical signals. In addition, a comparison between the RS-HTGA and a support vector machine (SVM) method indicated that both methods yield good results. The total averages for sensitivity, specificity, and precision were greater than 96%, indicating that both algorithms produce highly accurate results, but a substantial difference in discrimination existed among people with Phase 0 stress: the SVM algorithm achieved 89%, whereas the RS-HTGA achieved 96%. Therefore, the RS-HTGA is superior to the SVM algorithm. The kappa test results for both algorithms were greater than 0.936, indicating high accuracy and consistency. The areas under the receiver operating characteristic curve for both the RS-HTGA and the SVM method were greater than 0.77, indicating acceptable discriminative ability.

  15. Initiating technical refinements in high-level golfers: Evidence for contradictory procedures.

    PubMed

    Carson, Howie J; Collins, Dave; Richards, Jim

    2016-01-01

    When developing motor skills there are several outcomes available to an athlete depending on their skill status and needs. Whereas the skill acquisition and performance literature is abundant, an under-researched outcome relates to the refinement of already acquired and well-established skills. Contrary to current recommendations for athletes to employ an external focus of attention and a representative practice design, Carson and Collins' (2011) [Refining and regaining skills in fixation/diversification stage performers: The Five-A Model. International Review of Sport and Exercise Psychology, 4, 146-167. doi: 10.1080/1750984x.2011.613682] Five-A Model requires an initial narrowed internal focus on the technical aspect needing refinement: the implication being that environments which limit external sources of information would be beneficial to achieving this task. Therefore, the purpose of this paper was to (1) provide a literature-based explanation for why techniques counter to current recommendations may be (temporarily) appropriate within the skill refinement process and (2) provide empirical evidence for such efficacy. Kinematic data and self-perception reports are provided from high-level golfers attempting to consciously initiate technical refinements while executing shots onto a driving range and into a close proximity net (i.e. with limited knowledge of results). It was hypothesised that greater control over intended refinements would occur when environmental stimuli were reduced in the most unrepresentative practice condition (i.e. hitting into a net). Results confirmed this, as evidenced by reduced intra-individual movement variability for all participants' individual refinements, despite little or no difference in mental effort reported. This research offers coaches guidance when working with performers who may find conscious recall difficult during the skill refinement process.

  16. Hirshfeld atom refinement for modelling strong hydrogen bonds.

    PubMed

    Woińska, Magdalena; Jayatilaka, Dylan; Spackman, Mark A; Edwards, Alison J; Dominiak, Paulina M; Woźniak, Krzysztof; Nishibori, Eiji; Sugimoto, Kunihisa; Grabowsky, Simon

    2014-09-01

    High-resolution low-temperature synchrotron X-ray diffraction data of the salt L-phenylalaninium hydrogen maleate are used to test the new automated iterative Hirshfeld atom refinement (HAR) procedure for the modelling of strong hydrogen bonds. The HAR models used present the first examples of Z' > 1 treatments in the framework of wavefunction-based refinement methods. L-Phenylalaninium hydrogen maleate exhibits several hydrogen bonds in its crystal structure, of which the shortest and the most challenging to model is the O-H...O intramolecular hydrogen bond present in the hydrogen maleate anion (O...O distance is about 2.41 Å). In particular, the reconstruction of the electron density in the hydrogen maleate moiety and the determination of hydrogen-atom properties [positions, bond distances and anisotropic displacement parameters (ADPs)] are the focus of the study. For comparison to the HAR results, different spherical (independent atom model, IAM) and aspherical (free multipole model, MM; transferable aspherical atom model, TAAM) X-ray refinement techniques as well as results from a low-temperature neutron-diffraction experiment are employed. Hydrogen-atom ADPs are furthermore compared to those derived from a TLS/rigid-body (SHADE) treatment of the X-ray structures. The reference neutron-diffraction experiment reveals a truly symmetric hydrogen bond in the hydrogen maleate anion. Only with HAR is it possible to freely refine hydrogen-atom positions and ADPs from the X-ray data, which leads to the best electron-density model and the closest agreement with the structural parameters derived from the neutron-diffraction experiment, e.g. the symmetric hydrogen position can be reproduced. The multipole-based refinement techniques (MM and TAAM) yield slightly asymmetric positions, whereas the IAM yields a significantly asymmetric position.

  17. An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.

    1993-01-01

    We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to quantify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
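
    The Richardson-extrapolation device used for the discretization error estimate is compact enough to show directly; for a scheme of order p, comparing solutions on meshes of spacing h and h/2 gives an estimate of the fine-mesh error (a generic sketch with mock data, not the MacCormack solver itself):

        import numpy as np

        def richardson_error(u_h, u_h2, p=2):
            """Estimate the error of the h/2 solution at the coarse nodes."""
            # u_h2 must be restricted to the coarse-mesh points first.
            return (u_h2 - u_h) / (2 ** p - 1)

        x = np.linspace(0.0, 1.0, 11)
        u_exact = np.sin(np.pi * x)
        u_h = u_exact + 4e-3    # coarse-mesh solution (mock offset)
        u_h2 = u_exact + 1e-3   # fine-mesh solution at coarse nodes (mock)
        print(np.max(np.abs(richardson_error(u_h, u_h2))))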

  18. Corporate Entrepreneurship Assessment Instrument (CEAI): Refinement and Validation of a Survey Measure

    DTIC Science & Technology

    2007-03-01

    Corporate Entrepreneurship Assessment Instrument (CEAI): Refinement and Validation of a Survey Measure (AFIT/GIR/ENV/07-M7).

  19. Quadtree of TIN: a new algorithm of dynamic LOD

    NASA Astrophysics Data System (ADS)

    Zhang, Junfeng; Fei, Lifan; Chen, Zhen

    2009-10-01

    Currently, real-time visualization of large-scale digital elevation models mainly employs either regular GRID structures based on quadtrees or triangle-simplification methods based on triangulated irregular networks (TIN). Compared with GRID, TIN is a refined means of expressing the terrain surface in the computer, but its data structure is complex and it is difficult to realize view-dependent level-of-detail (LOD) representation quickly. GRID is a simple method for realizing terrain LOD, but it contains a higher triangle count. A new algorithm that takes full advantage of the merits of both methods is presented in this paper. This algorithm combines TIN with a quadtree structure to realize view-dependent LOD control over irregular sampling point sets, and it retains detail according to viewpoint distance and the geometric error of the terrain. Experiments indicate that this approach can generate an efficient quadtree triangulation hierarchy over any irregular sampling point set and achieves dynamic, visual multi-resolution performance for large-scale terrain in real time.
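
    A view-dependent LOD test of the kind described can be sketched as a recursive quadtree walk; the projected-error rule and the constant k below are illustrative assumptions, not the paper's exact criterion:

        import math

        def should_split(center, geom_error, viewpoint, tau, k=1000.0):
            """Split when the distance-projected geometric error exceeds tau."""
            d = math.dist(center, viewpoint)
            return k * geom_error / max(d, 1e-9) > tau

        def select_lod(node, viewpoint, tau):
            """Collect the nodes at the resolution this viewpoint requires."""
            if node["children"] and should_split(node["center"], node["error"],
                                                 viewpoint, tau):
                out = []
                for child in node["children"]:
                    out.extend(select_lod(child, viewpoint, tau))
                return out
            return [node]

        leaf = lambda cx, cy, e: {"center": (cx, cy), "error": e, "children": []}
        root = {"center": (0.5, 0.5), "error": 8.0,
                "children": [leaf(0.25, 0.25, 2.0), leaf(0.75, 0.25, 2.0),
                             leaf(0.25, 0.75, 2.0), leaf(0.75, 0.75, 2.0)]}
        print(len(select_lod(root, viewpoint=(0.3, 0.3), tau=1.0)))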

  20. A hybrid multiview stereo algorithm for modeling urban scenes.

    PubMed

    Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu; Vu, Hoang-Hiep

    2013-01-01

    We present an original multiview stereo reconstruction algorithm which allows the 3D-modeling of urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: Irregular elements such as statues and ornaments are described by meshes, whereas regular structures such as columns and walls are described by primitives (planes, spheres, cylinders, cones, and tori). We adopt a two-step strategy consisting first in segmenting the initial mesh-based surface using a multilabel Markov Random Field-based model and second in sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process. The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e., geometry and shape layout). The segmentation and sampling steps are embedded into an iterative refinement procedure which provides an increasingly accurate hybrid representation. Experimental results on complex urban structures and large scenes are presented and compared to state-of-the-art multiview stereo meshing algorithms.

  1. PheKB: a catalog and workflow for creating electronic phenotype algorithms for transportability.

    PubMed

    Kirby, Jacqueline C; Speltz, Peter; Rasmussen, Luke V; Basford, Melissa; Gottesman, Omri; Peissig, Peggy L; Pacheco, Jennifer A; Tromp, Gerard; Pathak, Jyotishman; Carrell, David S; Ellis, Stephen B; Lingren, Todd; Thompson, Will K; Savova, Guergana; Haines, Jonathan; Roden, Dan M; Harris, Paul A; Denny, Joshua C

    2016-11-01

    Health care generated data have become an important source for clinical and genomic research. Often, investigators create and iteratively refine phenotype algorithms to achieve high positive predictive values (PPVs) or sensitivity, thereby identifying valid cases and controls. These algorithms achieve the greatest utility when validated and shared by multiple health care systems. Materials and Methods: We report the current status and impact of the Phenotype KnowledgeBase (PheKB, http://phekb.org), an online environment supporting the workflow of building, sharing, and validating electronic phenotype algorithms. We analyze the most frequent components used in algorithms and their performance at authoring institutions and secondary implementation sites. As of June 2015, PheKB contained 30 finalized phenotype algorithms and 62 algorithms in development spanning a range of traits and diseases. Phenotypes have had over 3500 unique views in a 6-month period and have been reused by other institutions. International Classification of Disease codes were the most frequently used component, followed by medications and natural language processing. Among algorithms with published performance data, the median PPV was nearly identical when evaluated at the authoring institutions (n = 44; case 96.0%, control 100%) compared to implementation sites (n = 40; case 97.5%, control 100%). These results demonstrate that a broad range of algorithms to mine electronic health record data from different health systems can be developed with high PPV, and algorithms developed at one site are generally transportable to others. By providing a central repository, PheKB enables improved development, transportability, and validity of algorithms for research-grade phenotypes using health care generated data. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. Construction and Application of a Refined Hospital Management Chain.

    PubMed

    Lihua, Yi

    2016-01-01

    Large-scale development was quite common in the later period of hospital industrialization in China. Today, Chinese hospital management faces such problems as service inefficiency, high human-resources costs, and a low rate of capital use. This study analyzes the refined management chain of Wuxi No.2 People's Hospital, which consists of six gears, namely organizational structure, clinical practice, outpatient service, medical technology, nursing care, and logistics. The gears are based on "flat management system targets, chief of medical staff, centralized outpatient service, intensified medical examinations, vertical nursing management and socialized logistics." The core concepts of refined hospital management are optimizing process flow, reducing waste, improving efficiency, saving costs, and, most importantly, taking good care of patients. Keywords: hospital, refined, management chain.

  3. Using supercritical fluids to refine hydrocarbons

    DOEpatents

    Yarbro, Stephen Lee

    2014-11-25

    This is a method to reactively refine hydrocarbons, such as heavy oils with API gravities of less than 20 degrees and bitumen-like hydrocarbons with viscosities greater than 1000 cP at standard temperature and pressure, using a selected fluid at supercritical conditions. The reaction portion of the method delivers lighter weight, more volatile hydrocarbons to an attached contacting device that operates in mixed subcritical or supercritical modes. This separates the reaction products into portions that are viable for use or sale without further conventional refining and hydro-processing techniques. This method produces valuable products with fewer processing steps and lower costs, increases worker safety due to less processing and handling, allows greater opportunity for new oil field development and subsequent positive economic impact, and reduces the carbon dioxide emissions and wastes typical of conventional refineries.

  4. Refined 3d-3d correspondence

    NASA Astrophysics Data System (ADS)

    Alday, Luis F.; Genolini, Pietro Benetti; Bullimore, Mathew; van Loon, Mark

    2017-04-01

    We explore aspects of the correspondence between Seifert 3-manifolds and 3d N = 2 supersymmetric theories with a distinguished abelian flavour symmetry. We give a prescription for computing the squashed three-sphere partition functions of such 3d N = 2 theories constructed from boundary conditions and interfaces in a 4d N = 2∗ theory, mirroring the construction of Seifert manifold invariants via Dehn surgery. This is extended to include links in the Seifert manifold by the insertion of supersymmetric Wilson-'t Hooft loops in the 4d N = 2∗ theory. In the presence of a mass parameter c for the distinguished flavour symmetry, we recover aspects of refined Chern-Simons theory with complex gauge group, and in particular construct an analytic continuation of the S-matrix of refined Chern-Simons theory.

  5. Aerodynamic design optimization via reduced Hessian SQP with solution refining

    NASA Technical Reports Server (NTRS)

    Feng, Dan; Pulliam, Thomas H.

    1995-01-01

    An all-at-once reduced Hessian Successive Quadratic Programming (SQP) scheme has been shown to be efficient for solving aerodynamic design optimization problems with a moderate number of design variables. This paper extends this scheme to allow solution refining. In particular, we introduce a reduced Hessian refining technique that is critical for making a smooth transition of the Hessian information from coarse grids to fine grids. Test results on a nozzle design using quasi-one-dimensional Euler equations show that through solution refining the efficiency and the robustness of the all-at-once reduced Hessian SQP scheme are significantly improved.

  6. Refined numerical solution of the transonic flow past a wedge

    NASA Technical Reports Server (NTRS)

    Liang, S.-M.; Fung, K.-Y.

    1985-01-01

    A numerical procedure combining the ideas of solving a modified difference equation and of adaptive mesh refinement is introduced. The numerical solution on a fixed grid is improved by using better approximations of the truncation error computed from local subdomain grid refinements. This technique is used to obtain refined solutions of steady, inviscid, transonic flow past a wedge. The effects of truncation error on the pressure distribution, wave drag, sonic line, and shock position are investigated. By comparing the pressure drag on the wedge and wave drag due to the shocks, a supersonic-to-supersonic shock originating from the wedge shoulder is confirmed.

  7. Improved algorithm for computerized detection and quantification of pulmonary emphysema at high-resolution computed tomography (HRCT)

    NASA Astrophysics Data System (ADS)

    Tylen, Ulf; Friman, Ola; Borga, Magnus; Angelhed, Jan-Erik

    2001-05-01

    Emphysema is characterized by destruction of lung tissue with development of small or large holes within the lung. These areas will have Hounsfield values (HU) approaching -1000. It is possible to detect and quantify such areas using a simple density-mask technique. However, the edge-enhancement reconstruction algorithm, gravity, and motion of the heart and vessels during scanning cause artefacts. The purpose of our work was to construct an algorithm that detects such image artefacts and corrects them. The first step is to apply inverse filtering to the image, removing much of the effect of the edge-enhancement reconstruction algorithm. The next step involves computation of the antero-posterior density gradient caused by gravity and correction for it. Motion artefacts are corrected for in a third step by use of normalized averaging, thresholding, and region growing. Twenty volunteers were investigated, 10 with slight emphysema and 10 without. Using the simple density-mask technique it was not possible to separate persons with disease from those without. Our algorithm improved separation of the two groups considerably. Our algorithm needs further refinement, but may form a basis for further development of methods for computerized diagnosis and quantification of emphysema by HRCT.
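
    The underlying density-mask measurement itself is a one-liner; the sketch below computes the fraction of lung voxels darker than a threshold (the -950 HU value is a common choice in the emphysema literature, not a number taken from this paper):

        import numpy as np

        def emphysema_index(hu, lung_mask, threshold=-950):
            """Fraction of lung voxels below the HU threshold (density mask)."""
            lung = hu[lung_mask]
            return np.count_nonzero(lung < threshold) / lung.size

        hu = np.random.normal(-850, 80, (64, 64)).astype(np.int16)  # mock slice
        lung_mask = np.ones_like(hu, dtype=bool)                    # mock mask
        print(f"emphysema index: {emphysema_index(hu, lung_mask):.3f}")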

  8. Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.

    NASA Astrophysics Data System (ADS)

    Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.

    2016-12-01

    Image reconstruction algorithms for electrical resistivity tomography (ERT) are highly non-linear, sparse, and ill-posed. The inverse problem is much more severe when dealing with 3-D datasets that result in large matrices. Conventional gradient-based techniques using L2-norm minimization with some form of regularization can impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that takes advantage of the inherent sparsity of the parameter space in one form or another. If favorable conditions are met, CS has been shown to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open-source 3-D resistivity inversion tool using the CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007), with CS as the inverse code. A discrete cosine transformation (DCT) function was used to induce model sparsity in orthogonal form. Two CS-based algorithms, viz. the interior-point method and two-step iterative shrinkage/thresholding (IST), were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS was shown to reconstruct the sub-surface image effectively at less computational cost. This was observed as a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.
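
    For orientation, basic iterative shrinkage/thresholding (the simplest member of the IST family named above) solves min_x 0.5*||y - Ax||^2 + lam*||x||_1; the sketch below uses a random matrix in place of the ERT forward operator:

        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def ista(A, y, lam, n_iter=200):
            L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
            return x

        rng = np.random.default_rng(1)
        A = rng.standard_normal((50, 120))
        x_true = np.zeros(120)
        x_true[[5, 40, 90]] = [1.0, -2.0, 0.5]
        print(np.round(ista(A, A @ x_true, lam=0.1)[[5, 40, 90]], 2))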

  9. Assume-Guarantee Abstraction Refinement Meets Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Bogomolov, Sergiy; Frehse, Goran; Greitschus, Marius; Grosu, Radu; Pasareanu, Corina S.; Podelski, Andreas; Strump, Thomas

    2014-01-01

    Compositional verification techniques in the assume-guarantee style have been successfully applied to transition systems to efficiently reduce the search space by leveraging the compositional nature of the systems under consideration. We adapt these techniques to the domain of hybrid systems with affine dynamics. To build assumptions we introduce an abstraction based on location merging. We integrate the assume-guarantee style analysis with automatic abstraction refinement. We have implemented our approach in the symbolic hybrid model checker SpaceEx. The evaluation shows its practical potential. To the best of our knowledge, this is the first work combining assume-guarantee reasoning with automatic abstraction-refinement in the context of hybrid automata.

  10. Diffraction-geometry refinement in the DIALS framework

    DOE PAGES

    Waterman, David G.; Winter, Graeme; Gildea, Richard J.; ...

    2016-03-30

    Rapid data collection and modern computing resources provide the opportunity to revisit the task of optimizing the model of diffraction geometry prior to integration. A comprehensive description is given of new software that builds upon established methods by performing a single global refinement procedure, utilizing a smoothly varying model of the crystal lattice where appropriate. This global refinement technique extends to multiple data sets, providing useful constraints to handle the problem of correlated parameters, particularly for small wedges of data. Examples of advanced uses of the software are given and the design is explained in detail, with particular emphasis on the flexibility and extensibility it entails.

  11. Macromolecular refinement by model morphing using non-atomic parameterizations.

    PubMed

    Cowtan, Kevin; Agirre, Jon

    2018-02-01

    Refinement is a critical step in the determination of a model which explains the crystallographic observations and thus best accounts for the missing phase components. The scattering density is usually described in terms of atomic parameters; however, in macromolecular crystallography the resolution of the data is generally insufficient to determine the values of these parameters for individual atoms. Stereochemical and geometric restraints are used to provide additional information, but produce interrelationships between parameters which slow convergence, resulting in longer refinement times. An alternative approach is proposed in which parameters are not attached to atoms, but to regions of the electron-density map. These parameters can move the density or change the local temperature factor to better explain the structure factors. Varying the size of the region which determines the parameters at a particular position in the map allows the method to be applied at different resolutions without the use of restraints. Potential applications include initial refinement of molecular-replacement models with domain motions, and potentially the use of electron density from other sources such as electron cryo-microscopy (cryo-EM) as the refinement model.

  12. Mesh-free data transfer algorithms for partitioned multiphysics problems: Conservation, accuracy, and parallelism

    DOE PAGES

    Slattery, Stuart R.

    2015-12-02

    In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary, with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. Finally, these scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
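
    The spline-interpolation transfer named above can be sketched with a compactly supported Wendland kernel and a dense solve (the parallel, sparse version in the paper is far more elaborate; the radius and point clouds here are made up):

        import numpy as np

        def wendland_c2(r):
            """Wendland C2 kernel, compactly supported on r in [0, 1]."""
            r = np.clip(r, 0.0, 1.0)
            return (1.0 - r) ** 4 * (4.0 * r + 1.0)

        def transfer(src_pts, src_vals, dst_pts, radius):
            dist = lambda P, Q: np.linalg.norm(P[:, None] - Q[None, :], axis=-1)
            Phi = wendland_c2(dist(src_pts, src_pts) / radius)
            coeff = np.linalg.solve(Phi, src_vals)   # interpolation weights
            return wendland_c2(dist(dst_pts, src_pts) / radius) @ coeff

        rng = np.random.default_rng(2)
        src = rng.uniform(0.0, 1.0, (80, 3))   # source point cloud
        dst = rng.uniform(0.0, 1.0, (40, 3))   # target point cloud
        vals = np.sin(np.pi * src[:, 0])
        err = transfer(src, vals, dst, radius=0.5) - np.sin(np.pi * dst[:, 0])
        print(np.abs(err).max())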

  13. 40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply to...

  14. 40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply to...

  15. 40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...

  16. 40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...

  17. 40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...

  18. 40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...

  19. 40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining...

  20. Programming Deep Brain Stimulation for Parkinson's Disease: The Toronto Western Hospital Algorithms.

    PubMed

    Picillo, Marina; Lozano, Andres M; Kou, Nancy; Puppi Munhoz, Renato; Fasano, Alfonso

    2016-01-01

    Deep brain stimulation (DBS) is an established and effective treatment for Parkinson's disease (PD). After surgery, a number of extensive programming sessions are performed to define the optimal stimulation parameters. Programming sessions rely mainly on the neurologist's experience. As a result, patients often undergo inconsistent and inefficient stimulation changes, as well as unnecessary visits. We reviewed the literature on initial and follow-up DBS programming procedures and integrated our current practice at Toronto Western Hospital (TWH) to develop standardized DBS programming protocols. We propose four algorithms including the initial programming and specific algorithms tailored to symptoms experienced by patients following DBS: speech disturbances, stimulation-induced dyskinesia and gait impairment. We conducted a literature search of PubMed from inception to July 2014 with the keywords "deep brain stimulation", "festination", "freezing", "initial programming", "Parkinson's disease", "postural instability", "speech disturbances", and "stimulation induced dyskinesia". Seventy papers were considered for this review. Based on the literature review and our experience at TWH, we refined four algorithms for: (1) the initial programming stage, and management of symptoms following DBS, particularly addressing (2) speech disturbances, (3) stimulation-induced dyskinesia, and (4) gait impairment. We propose four algorithms tailored to an individualized approach to managing symptoms associated with DBS and disease progression in patients with PD. We encourage established as well as new DBS centers to test the clinical usefulness of these algorithms in supplementing the current standards of care. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGES

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
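
    A deliberately simplified sketch of an adjoint-weighted refinement indicator of the kind described (the paper's estimates are more sophisticated; the surplus and adjoint values below are mock numbers):

        import numpy as np

        def select_refinement(candidates, surplus, adjoint_weight, tol):
            """Keep candidate points whose weighted surplus exceeds tol."""
            indicator = np.abs(surplus) * np.abs(adjoint_weight)
            return [c for c, ind in zip(candidates, indicator) if ind > tol]

        candidates = ["pt_a", "pt_b", "pt_c", "pt_d"]
        surplus = np.array([1e-2, 3e-5, 4e-3, 2e-6])   # hierarchical surpluses
        adjoint = np.array([0.1, 5.0, 2.0, 0.3])       # adjoint sensitivities
        print(select_refinement(candidates, surplus, adjoint, tol=1e-3))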

  2. A refined orbit for the satellite of asteroid (107) Camilla

    NASA Astrophysics Data System (ADS)

    Pajuelo, Myriam Virginia; Carry, Benoit; Vachier, Frederic; Berthier, Jerome; Descamp, Pascal; Merline, William J.; Tamblyn, Peter M.; Conrad, Al; Storrs, Alex; Margot, Jean-Luc; Marchis, Frank; Kervella, Pierre; Girard, Julien H.

    2015-11-01

    The satellite of the Cybele asteroid (107) Camilla was discovered in March 2001 using the Hubble Space Telescope (Storrs et al., 2001, IAUC 7599). From a set of 23 positions derived from adaptive optics observations obtained over three years with the ESO VLT, Keck-II and Gemini-North telescopes, Marchis et al. (2008, Icarus 196) determined its orbit to be nearly circular. In the new work reported here, we compiled, reduced, and analyzed observations at 39 epochs (including the 23 positions previously analyzed) by adding additional observations taken from data archives: HST in 2001; Keck in 2002, 2003, and 2009; Gemini in 2010; and VLT in 2011. The present dataset hence contains twice as many epochs as the prior analysis and covers a time span that is three times longer (more than a decade). We use our orbit determination algorithm Genoid (GENetic Orbit IDentification), a genetic algorithm that relies on a metaheuristic method and a dynamical model of the Solar System (Vachier et al., 2012, A&A 543). The method uses two models: a simple Keplerian model to minimize the search time for an orbital solution, exploring a wide space of solutions; and a full N-body problem that includes the gravitational field of the primary asteroid up to 4th order. The orbit we derive fits all 39 observed positions of the satellite with an RMS residual of only milli-arcseconds, which corresponds to sub-pixel accuracy. We found the orbit of the satellite to be circular and roughly aligned with the equatorial plane of Camilla. The refined mass of the system is (12 ± 1) × 10^18 kg, for an orbital period of 3.71 days. We will present this improved orbital solution of the satellite of Camilla, as well as predictions for upcoming stellar occultation events.

  3. A short note on the use of the red-black tree in Cartesian adaptive mesh refinement algorithms

    NASA Astrophysics Data System (ADS)

    Hasbestan, Jaber J.; Senocak, Inanc

    2017-12-01

    Mesh adaptivity is an indispensable capability to tackle multiphysics problems with large disparity in time and length scales. With the availability of powerful supercomputers, there is a pressing need to extend time-proven computational techniques to extreme-scale problems. Cartesian adaptive mesh refinement (AMR) is one such method that enables simulation of multiscale, multiphysics problems. AMR is based on construction of octrees. Originally, an explicit tree data structure was used to generate and manipulate an adaptive Cartesian mesh. At least eight pointers are required in an explicit approach to construct an octree. Parent-child relationships are then used to traverse the tree. An explicit octree, however, is expensive in terms of memory usage and the time it takes to traverse the tree to access a specific node. For these reasons, implicit pointerless methods have been pioneered within the computer graphics community, motivated by applications requiring interactivity and realistic three-dimensional visualization. Lewiner et al. [1] provide a concise review of pointerless approaches to generate an octree. Use of a hash table and a Z-order curve are two key concepts in pointerless methods that we briefly discuss next.
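    To make the two concepts concrete, the sketch below (our own illustration, not the paper's implementation) stores an octree pointerlessly: cell indices are interleaved into a Z-order (Morton) key with a level prefix, and a hash table maps keys to cell payloads, so parents and children are reached by bit arithmetic instead of stored pointers.

```python
def morton3(x, y, z, level):
    """Interleave the bits of integer cell indices (x, y, z) into a
    Z-order (Morton) key, prefixed by the refinement level so that keys
    are unique across levels and one hash table can hold the whole tree."""
    key = 0
    for i in range(level):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return (1 << (3 * level)) | key

class PointerlessOctree:
    """Octree stored as {morton_key: payload} -- no parent/child pointers."""
    def __init__(self):
        self.cells = {morton3(0, 0, 0, 0): None}   # root cell

    def insert(self, x, y, z, level, payload=None):
        self.cells[morton3(x, y, z, level)] = payload

    @staticmethod
    def parent(key):
        return key >> 3            # drop the last interleaved bit triple

    @staticmethod
    def children(key):
        return [(key << 3) | c for c in range(8)]

tree = PointerlessOctree()
root = morton3(0, 0, 0, 0)
for child in tree.children(root):   # "refine" the root with local updates only
    tree.cells[child] = None
print(tree.parent(morton3(1, 0, 1, 1)) == root)   # True: pure bit arithmetic
```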

  4. Reaching extended length-scales with accelerated dynamics

    NASA Astrophysics Data System (ADS)

    Hubartt, Bradley; Shim, Yunsic; Amar, Jacques

    2012-02-01

    While temperature-accelerated dynamics (TAD) has been quite successful in extending the time-scales for non-equilibrium simulations of small systems, the computational time increases rapidly with system size. One possible solution to this problem, which we refer to as parTAD [1], is to use spatial decomposition combined with our previously developed semi-rigorous synchronous sublattice algorithm [2]. However, while such an approach leads to significantly better scaling as a function of system size, it also artificially limits the size of activated events and is not completely rigorous. Here we discuss progress we have made in developing an alternative approach in which localized saddle-point searches are combined with parallel GPU-based molecular dynamics in order to improve the scaling behavior. By using this method, along with the use of an adaptive method to determine the optimal high temperature [3], we have been able to significantly increase the range of time- and length-scales over which accelerated dynamics simulations may be carried out. [1] Y. Shim et al., Phys. Rev. B 76, 205439 (2007); ibid., Phys. Rev. Lett. 101, 116101 (2008). [2] Y. Shim and J.G. Amar, Phys. Rev. B 71, 125432 (2005). [3] Y. Shim and J.G. Amar, J. Chem. Phys. 134, 054127 (2011).

  5. Refining mass formulas for astrophysical applications: A Bayesian neural network approach

    NASA Astrophysics Data System (ADS)

    Utama, R.; Piekarewicz, J.

    2017-10-01

    Background: Exotic nuclei, particularly those near the drip lines, are at the core of one of the fundamental questions driving nuclear structure and astrophysics today: What are the limits of nuclear binding? Exotic nuclei play a critical role both in informing theoretical models and in our understanding of the origin of the heavy elements. Purpose: Our aim is to refine existing mass models through the training of an artificial neural network that will mitigate the large model discrepancies far away from stability. Methods: The basic paradigm of our two-pronged approach is an existing mass model that captures as much as possible of the underlying physics, followed by the implementation of a Bayesian neural network (BNN) refinement to account for the missing physics. Bayesian inference is employed to determine the parameters of the neural network so that model predictions may be accompanied by theoretical uncertainties. Results: Despite the undeniable quality of the mass models adopted in this work, we observe a significant improvement (of about 40%) after the BNN refinement is implemented. Indeed, in the specific case of the Duflo-Zuker mass formula, we find that the rms deviation relative to experiment is reduced from σrms=0.503 MeV to σrms=0.286 MeV. These newly refined mass tables are used to map the neutron drip lines (or rather "drip bands") and to study a few critical r-process nuclei. Conclusions: The BNN approach is highly successful in refining the predictions of existing mass models. In particular, the large discrepancy displayed by the original "bare" models in regions where experimental data are unavailable is considerably quenched after the BNN refinement. This lends credence to our approach and has motivated us to publish refined mass tables that we trust will be helpful for future astrophysical applications.
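    The two-pronged structure — a physics-based base model plus a learned correction for its residuals — can be sketched compactly. The stand-in below uses scikit-learn's plain MLPRegressor rather than a Bayesian network (so, unlike the paper's BNN, it provides no uncertainties), and every number is an invented placeholder, not real mass data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Invented placeholder data: (Z, N) pairs, "experimental" masses, and a
# base-model prediction with ~0.5 MeV scatter (none of these are real values).
ZN = np.array([[28, 30], [50, 70], [82, 126], [26, 30], [40, 56]], float)
m_exp = np.array([-56.1, -88.3, -21.8, -60.6, -83.5])               # MeV, illustrative
m_base = m_exp + np.random.default_rng(0).normal(0.0, 0.5, size=5)  # imperfect base model

# Train the network on the residuals -- the "missing physics"
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(ZN, m_exp - m_base)

def refined_mass(Z, N, base_prediction):
    """Two-pronged prediction: physics-based base model + learned correction."""
    return base_prediction + net.predict([[Z, N]])[0]

print(refined_mass(28, 30, m_base[0]))   # pulled back toward the "experimental" value
```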

  6. Segregation Coefficients of Impurities in Selenium by Zone Refining

    NASA Technical Reports Server (NTRS)

    Su, Ching-Hua; Sha, Yi-Gao

    1998-01-01

    The purification of Se by a zone refining process was studied. The impurity solute levels along the length of a zone-refined Se sample were measured by spark source mass spectrographic analysis. By comparing the experimental concentration levels with theoretical curves, the segregation coefficient, defined as the ratio of the equilibrium concentration of a given solute in the solid to that in the liquid, k = x_s/x_l, was found to be close to unity for most of the impurities in Se, i.e., between 0.85 and 1.15, with the k value for Si, Zn, Fe, Na and Al greater than 1 and that for S, Cl, Ca, P, As, Mn and Cr less than 1. This implies that a large number of passes is needed for the successful implementation of zone refining in the purification of Se.
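    The practical consequence of k ≈ 1 follows from Pfann's single-pass zone-refining relation, C_s(x)/C_0 = 1 − (1 − k)·exp(−kx/l), where x is the distance from the starting end and l the molten-zone length (valid outside the final zone, which solidifies by normal freezing). A small sketch with illustrative values only:

```python
import numpy as np

def single_pass_profile(k, x_over_l):
    """Relative solute concentration C(x)/C0 after one molten-zone pass,
    per Pfann: 1 - (1 - k) * exp(-k * x / l). Valid except in the final
    zone, which solidifies by normal freezing."""
    return 1.0 - (1.0 - k) * np.exp(-k * np.asarray(x_over_l, dtype=float))

x = np.linspace(0.0, 8.0, 5)          # distance in units of the zone length l
print(single_pass_profile(0.9, x))    # k near 1: profile stays near 1.0 -> many passes
print(single_pass_profile(0.1, x))    # small k: strong segregation per pass
```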

  7. Child Nutrition - Multiple Languages

    MedlinePlus

    ... Foods For Healthy Teeth - English PDF; Foods For Healthy Teeth - Amarɨñña / አማርኛ (Amharic) PDF; Foods For Healthy Teeth - العربية (Arabic) PDF ...

  8. Impact of Environmental Compliance Costs on U.S. Refining Profitability 1995-2001

    EIA Publications

    2003-01-01

    This report assesses the effects of pollution abatement requirements on the financial performance of U.S. petroleum refining and marketing operations during the 1995 to 2001 period. This study is a follow-up to the October 1997 publication entitled The Impact of Environmental Compliance Costs on U.S. Refining Profitability, which focused on the financial impacts of U.S. refining pollution abatement investment requirements in the 1988 to 1995 period.

  9. 19 CFR 19.18 - Smelting and refining; allowance for wastage; withdrawal for consumption.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Smelting and refining; allowance for wastage... OF MERCHANDISE THEREIN Smelting and Refining Warehouses § 19.18 Smelting and refining; allowance for... liquidation of the entry for losses on copper, lead, and zinc content of metal-bearing materials, pursuant to...

  10. 19 CFR 19.18 - Smelting and refining; allowance for wastage; withdrawal for consumption.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 19 Customs Duties 1 2011-04-01 2011-04-01 false Smelting and refining; allowance for wastage... OF MERCHANDISE THEREIN Smelting and Refining Warehouses § 19.18 Smelting and refining; allowance for... liquidation of the entry for losses on copper, lead, and zinc content of metal-bearing materials, pursuant to...

  11. A scalable, fully implicit algorithm for the reduced two-field low-β extended MHD model

    DOE PAGES

    Chacon, Luis; Stanier, Adam John

    2016-12-01

    Here, we demonstrate a scalable fully implicit algorithm for the two-field low-β extended MHD model. This reduced model describes plasma behavior in the presence of strong guide fields, and is of significant practical impact both in nature and in laboratory plasmas. The model displays strong hyperbolic behavior, as manifested by the presence of fast dispersive waves, which make a fully implicit treatment very challenging. In this study, we employ a Jacobian-free Newton–Krylov nonlinear solver, for which we propose a physics-based preconditioner that renders the linearized set of equations suitable for inversion with multigrid methods. As a result, the algorithm is shown to scale both algorithmically (i.e., the iteration count is insensitive to grid refinement and timestep size) and in parallel in a weak-scaling sense, with the wall-clock time scaling weakly with the number of cores for up to 4096 cores. For a 4096 × 4096 mesh, we demonstrate a wall-clock-time speedup of ~6700 with respect to explicit algorithms. The model is validated linearly (against linear theory predictions) and nonlinearly (against fully kinetic simulations), demonstrating excellent agreement.
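    The Jacobian-free Newton–Krylov machinery itself can be illustrated generically: the Krylov method only ever needs residual evaluations, never an assembled Jacobian. The sketch below applies SciPy's newton_krylov to one backward-Euler step of a stiff 1D diffusion problem — a stand-in, not the paper's two-field MHD model — and omits the physics-based preconditioner, which would be supplied through the inner_M argument.

```python
import numpy as np
from scipy.optimize import newton_krylov

# One backward-Euler step of a stiff 1D diffusion equation, solved
# Jacobian-free: the Krylov solver only calls residual(), and the
# Jacobian matrix is never assembled.
n, dt, nu = 200, 1e-2, 1.0
dx = 1.0 / (n - 1)
u_old = np.sin(np.pi * np.linspace(0.0, 1.0, n))

def residual(u):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return u - u_old - dt * nu * lap      # F(u) = 0 defines the implicit step

u_new = newton_krylov(residual, u_old, method='lgmres', f_tol=1e-10)
print(np.max(np.abs(residual(u_new))))    # ~1e-10: converged implicit step
```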

  12. Process for solvent refining of coal using a denitrogenated and dephenolated solvent

    DOEpatents

    Garg, Diwakar; Givens, Edwin N.; Schweighardt, Frank K.

    1984-01-01

    A process is disclosed for the solvent refining of non-anthracitic coal at elevated temperatures and pressure in a hydrogen atmosphere using a hydrocarbon solvent which before being recycled in the solvent refining process is subjected to chemical treatment to extract substantially all nitrogenous and phenolic constituents from the solvent so as to improve the conversion of coal and the production of oil in the solvent refining process. The solvent refining process can be either thermal or catalytic. The extraction of nitrogenous compounds can be performed by acid contact such as hydrogen chloride or fluoride treatment, while phenolic extraction can be performed by caustic contact or contact with a mixture of silica and alumina.

  13. Dinosaurs can fly -- High performance refining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Treat, J.E.

    1995-09-01

    High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the products markets one serves; and the most likely drivers and direction of future market forces. The author discussed all three points, then described measuring performance of the company. To become a true high performance refiner often involves redesigning the organization as well as the business processes. The author discusses such redesigning. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.

  14. Non-Markovianity-assisted high-fidelity Deutsch-Jozsa algorithm in diamond

    NASA Astrophysics Data System (ADS)

    Dong, Yang; Zheng, Yu; Li, Shen; Li, Cong-Cong; Chen, Xiang-Dong; Guo, Guang-Can; Sun, Fang-Wen

    2018-01-01

    The memory effects in non-Markovian quantum dynamics can induce the revival of quantum coherence, which is believed to provide important physical resources for quantum information processing (QIP). However, no real quantum algorithms have been demonstrated with the help of such memory effects. Here, we experimentally implemented a non-Markovianity-assisted high-fidelity refined Deutsch-Jozsa algorithm (RDJA) with a solid spin in diamond. The memory effects can induce pronounced non-monotonic variations in the RDJA results, which were confirmed to follow a non-Markovian quantum process by measuring the non-Markovianity of the spin system. By applying the memory effects as physical resources with the assistance of dynamical decoupling, the probability of success of RDJA was elevated above 97% in the open quantum system. This study not only demonstrates that the non-Markovianity is an important physical resource but also presents a feasible way to employ this physical resource. It will stimulate the application of the memory effects in non-Markovian quantum dynamics to improve the performance of practical QIP.
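    For reference, the noiseless textbook Deutsch-Jozsa algorithm is small enough to simulate directly. The sketch below is our own illustration — it models neither the refined variant (RDJA) nor the non-Markovian dynamics of the experiment — and checks the defining property: the all-zero outcome on the query register has probability 1 for a constant oracle and 0 for a balanced one.

```python
import numpy as np

def deutsch_jozsa(f, n):
    """Noiseless state-vector simulation of the textbook Deutsch-Jozsa
    algorithm on n query qubits plus one ancilla. Returns the probability
    of reading all zeros on the query register: 1 if f is constant,
    0 if f is balanced."""
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    def hadamards(k):
        M = np.array([[1.0]])
        for _ in range(k):
            M = np.kron(M, H)
        return M
    # basis index = (x << 1) | y : query register x in high bits, ancilla y low
    state = np.zeros(2 ** (n + 1)); state[1] = 1.0        # |0...0>|1>
    state = hadamards(n + 1) @ state
    # oracle |x>|y> -> |x>|y XOR f(x)> as a permutation matrix
    U = np.zeros((2 ** (n + 1), 2 ** (n + 1)))
    for x in range(2 ** n):
        for y in (0, 1):
            U[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1.0
    state = np.kron(hadamards(n), np.eye(2)) @ (U @ state)
    return state[0] ** 2 + state[1] ** 2   # query register all-zero probability

print(deutsch_jozsa(lambda x: 0, 3))       # constant oracle -> 1.0
print(deutsch_jozsa(lambda x: x & 1, 3))   # balanced oracle -> 0.0
```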

  15. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis of accuracy or computational efficiency. The prominence of the key signal features required for the proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the classical MPD algorithm.
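    The classical MPD loop, together with the correlation-threshold pruning idea described above, fits in a few lines. This is a minimal sketch with a random dictionary; the Coarse-Fine Grids and Multiple Atom Extraction refinements are not modeled.

```python
import numpy as np

def matching_pursuit(signal, dictionary, max_atoms=10, corr_threshold=0.0):
    """Classical MPD loop with a correlation-threshold stopping rule.

    `dictionary` holds unit-norm atoms as columns. Each iteration picks
    the atom best correlated with the residual, subtracts its projection,
    and repeats until max_atoms is reached or the best correlation falls
    below corr_threshold (a simple stand-in for the pruning idea above)."""
    residual = np.asarray(signal, dtype=float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(max_atoms):
        corr = dictionary.T @ residual          # cross-correlate with all atoms
        k = int(np.argmax(np.abs(corr)))
        if abs(corr[k]) <= corr_threshold:
            break                               # nothing significant left
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]  # peel off the selected component
    return coeffs, residual

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
x = 2.0 * D[:, 10] - 1.5 * D[:, 99]             # sparse synthetic signal
c, r = matching_pursuit(x, D, max_atoms=20)
print(np.linalg.norm(r))                        # residual energy after extraction
```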

  16. Assessing food allergy risks from residual peanut protein in highly refined vegetable oil.

    PubMed

    Blom, W Marty; Kruizinga, Astrid G; Rubingh, Carina M; Remington, Ben C; Crevel, René W R; Houben, Geert F

    2017-08-01

    Refined vegetable oils including refined peanut oil are widely used in foods. Due to shared production processes, refined non-peanut vegetable oils can contain residual peanut proteins. We estimated the predicted number of allergic reactions to residual peanut proteins using probabilistic risk assessment applied to several scenarios involving food products made with vegetable oils. Variables considered were: (a) the estimated production scale of refined peanut oil, (b) estimated cross-contact between refined vegetable oils during production, (c) the proportion of fat in representative food products and (d) the peanut protein concentration in refined peanut oil. For all products examined the predicted risk of objective allergic reactions in peanut-allergic users of the food products was extremely low. Depending on the model, the number of predicted reactions ranged from a high of 3 per 1000 eating occasions (Weibull) to no reactions (LogNormal). Significantly, all reactions were predicted for allergen intakes well below the amounts reported for the most sensitive individual described in the clinical literature. We conclude that the health risk from cross-contact between vegetable oils and refined peanut oil is negligible. None of the food products would warrant precautionary labelling for peanut according to the VITAL® programme of the Allergen Bureau. Copyright © 2017 Elsevier Ltd. All rights reserved.
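    The shape of such a probabilistic risk calculation — sample an exposure chain, then compare each sampled protein intake against a population threshold-dose distribution — can be sketched as a Monte Carlo loop. Every parameter value below is invented for illustration and does not come from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical exposure chain (all values illustrative only)
serving_g = rng.lognormal(mean=np.log(30.0), sigma=0.4, size=n)  # grams of food eaten
fat_frac = 0.2                                                   # fat share of the product
cross_contact = rng.uniform(0.0, 0.05, size=n)                   # peanut-oil share of the oil
protein_mg_per_kg = 100.0                                        # residual protein in the oil

# g of oil -> kg, times mg/kg, gives protein intake in mg
intake_mg = serving_g * fat_frac * cross_contact * 1e-3 * protein_mg_per_kg

# Population threshold doses (mg protein), Weibull-shaped as in one of the models
thresholds = 100.0 * rng.weibull(1.5, size=n)
print(f"predicted reactions per eating occasion: {np.mean(intake_mg > thresholds):.2e}")
```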

  17. CRISPRDetect: A flexible algorithm to define CRISPR arrays.

    PubMed

    Biswas, Ambarish; Staals, Raymond H J; Morales, Sergio E; Fineran, Peter C; Brown, Chris M

    2016-05-17

    CRISPR (clustered regularly interspaced short palindromic repeats) RNAs provide the specificity for noncoding RNA-guided adaptive immune defence systems in prokaryotes. CRISPR arrays consist of repeat sequences separated by specific spacer sequences. CRISPR arrays have previously been identified in a large proportion of prokaryotic genomes. However, currently available detection algorithms do not utilise recently discovered features regarding CRISPR loci. We have developed a new approach to automatically detect, predict and interactively refine CRISPR arrays. It is available as a web program and a command-line tool from bioanalysis.otago.ac.nz/CRISPRDetect. CRISPRDetect discovers putative arrays, extends the array by detecting additional variant repeats, corrects the direction of arrays, refines the repeat/spacer boundaries, and annotates different types of sequence variations (e.g. insertion/deletion) in near identical repeats. Due to these features, CRISPRDetect has significant advantages when compared to existing identification tools. As well as further support for small, medium and large repeats, CRISPRDetect identified a class of arrays with 'extra-large' repeats in bacteria (repeats 44-50 nt). The CRISPRDetect output is integrated with other analysis tools. Notably, the predicted spacers can be directly utilised by CRISPRTarget to predict targets. CRISPRDetect enables more accurate detection of arrays and spacers, and its GFF output is suitable for inclusion in genome annotation pipelines and visualisation. It has been used to analyse all complete bacterial and archaeal reference genomes.

  18. Refinement of the ICRF

    NASA Technical Reports Server (NTRS)

    Ma, Chopo

    2004-01-01

    Since the ICRF was generated in 1995, VLBI modeling and estimation, data quality, source position stability analysis, and supporting observational programs have improved markedly. There are developing and potential applications in the areas of space navigation, Earth orientation monitoring, and optical astrometry from space that would benefit from a refined ICRF with enhanced accuracy, stability and spatial distribution. The convergence of analysis, focused observations, and astrometric needs should drive the production of a new realization in the next few years.

  19. GRAIN REFINEMENT OF URANIUM BILLETS

    DOEpatents

    Lewis, L.

    1964-02-25

    A method of refining the grain structure of massive uranium billets without resort to forging is described. The method consists in the steps of beta- quenching the billets, annealing the quenched billets in the upper alpha temperature range, and extrusion upset of the billets to an extent sufficient to increase the cross sectional area by at least 5 per cent. (AEC)

  20. NMRe: a web server for NMR protein structure refinement with high-quality structure validation scores.

    PubMed

    Ryu, Hyojung; Lim, GyuTae; Sung, Bong Hyun; Lee, Jinhyuk

    2016-02-15

    Protein structure refinement is a necessary step for the study of protein function. In particular, some nuclear magnetic resonance (NMR) structures are of lower quality than X-ray crystallographic structures. Here, we present NMRe, a web-based server for NMR structure refinement. The previously developed knowledge-based energy function STAP (Statistical Torsion Angle Potential) was used for NMRe refinement. With STAP, NMRe provides two refinement protocols using two types of distance restraints. If a user provides NOE (Nuclear Overhauser Effect) data, the refinement is performed with the NOE distance restraints as a conventional NMR structure refinement. Additionally, NMRe generates NOE-like distance restraints based on the inter-hydrogen distances derived from the input structure. The efficiency of NMRe refinement was validated on 20 NMR structures. Most of the quality assessment scores of the refined NMR structures were better than those of the original structures. The refinement results are provided as a three-dimensional structure view, a secondary structure scheme, and numerical and graphical structure validation scores. NMRe is available at http://psb.kobic.re.kr/nmre/. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. High-resolution multi-code implementation of unsteady Navier-Stokes flow solver based on paralleled overset adaptive mesh refinement and high-order low-dissipation hybrid schemes

    NASA Astrophysics Data System (ADS)

    Li, Gaohua; Fu, Xiang; Wang, Fuxin

    2017-10-01

    The low-dissipation high-order accurate hybrid up-winding/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model, and the flow feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolutions. The overset grid assembly (OGA) process based on collection detection theory and an implicit hole-cutting algorithm achieves an automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used for obtaining a globally balanced load distribution among the composed multiple codes. The results of flows over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of the high-order hybrid scheme, an advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution for the simulation of turbulent wake eddies.

  2. A refined definition improves the measurement reliability of the tip-apex distance.

    PubMed

    Sakagoshi, Daigo; Sawaguchi, Takeshi; Shima, Yosuke; Inoue, Daisuke; Oshima, Takeshi; Goldhahn, Sabine

    2016-07-01

    Tip-apex distance (TAD) is reported as a good predictor for cut-outs of lag screws and spiral blades in the treatment of intertrochanteric fractures, and surgeons are advised to strive for a TAD within 20 mm. However, the femoral neck axis and the position of the lower limb in the lateral radiograph are not clearly defined and can lead to measurement errors. We propose a refined TAD by defining these factors. The objective of this study was to analyze the reliability of this refined TAD. The radiographs of 130 prospective cases with unstable trochanteric fractures were used for the analysis of the refined TAD. The refined TAD was independently measured by 2 raters with clinical experience of more than 10 years (raters 1 and 2) and 2 raters with much less clinical experience (raters 3 and 4) after they received training in the new measurement method. The intraclass correlation coefficient (ICC [2,4]) was calculated to assess the interrater reliability. The mean refined TADs were 18.2, 18.4, 18.2, and 18.2 mm for raters 1, 2, 3, and 4, respectively. There was a strong correlation among all four raters (ICC 0.998; 95% CI: 0.998, 0.999). Regardless of the clinical experience of raters, the refined TAD is a reliable tool and can be used to develop new TAD recommendations for predicting failure of fixation. Future studies with larger samples are needed to evaluate the predictive value of the refined TAD. Copyright © 2016 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.

  3. US refining sector still a whipping-boy: what will it take

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1988-01-27

    The fast moving US product markets are exerting a powerful pull on crude oil prices. This has meant unalleviated downward pressure on refining margins for most of the past year. Downstream of refining, product marketers want the lower rack and spot prices from refineries. Upstream, independent and major-integrated producers want the highest crude prices they can obtain, with the latter producers also wanting the highest product value realizations. Refiners, especially the major-integrated ones, are rooting for OPEC discipline louder than anybody else. This issue also contains the following: (1) weighted dollar values by product for total product barrel at various sites around the globe; (2) ED refining netback data for the US Gulf and West Coasts, Rotterdam, and Singapore for late January 1988; and (3) ED fuel price/tax series for both the Western and Eastern Hemispheres, Jan. 1988 edition. 5 figures, 18 tables.

  4. Profex: a graphical user interface for the Rietveld refinement program BGMN.

    PubMed

    Doebelin, Nicola; Kleeberg, Reinhard

    2015-10-01

    Profex is a graphical user interface for the Rietveld refinement program BGMN. Its interface focuses on preserving BGMN's powerful and flexible scripting features by giving direct access to BGMN input files. Very efficient workflows for single or batch refinements are achieved by managing refinement control files and structure files, by providing dialogues and shortcuts for many operations, by performing operations in the background, and by providing import filters for CIF and XML crystal structure files. Refinement results can be easily exported for further processing. State-of-the-art graphical export of diffraction patterns to pixel and vector graphics formats allows the creation of publication-quality graphs with minimum effort. Profex reads and converts a variety of proprietary raw data formats and is thus largely instrument independent. Profex and BGMN are available under an open-source license for Windows, Linux and OS X operating systems.

  5. Profex: a graphical user interface for the Rietveld refinement program BGMN

    PubMed Central

    Doebelin, Nicola; Kleeberg, Reinhard

    2015-01-01

    Profex is a graphical user interface for the Rietveld refinement program BGMN. Its interface focuses on preserving BGMN’s powerful and flexible scripting features by giving direct access to BGMN input files. Very efficient workflows for single or batch refinements are achieved by managing refinement control files and structure files, by providing dialogues and shortcuts for many operations, by performing operations in the background, and by providing import filters for CIF and XML crystal structure files. Refinement results can be easily exported for further processing. State-of-the-art graphical export of diffraction patterns to pixel and vector graphics formats allows the creation of publication-quality graphs with minimum effort. Profex reads and converts a variety of proprietary raw data formats and is thus largely instrument independent. Profex and BGMN are available under an open-source license for Windows, Linux and OS X operating systems. PMID:26500466

  6. The indirect electrochemical refining of lunar ores

    NASA Technical Reports Server (NTRS)

    Semkow, Krystyna W.; Sammells, Anthony F.

    1987-01-01

    Recent work performed on an electrolytic cell is reported which addresses the implicit limitations in various approaches to refining lunar ores. The cell uses an oxygen vacancy conducting stabilized zirconia solid electrolyte to effect separation between a molten salt catholyte compartment where alkali metals are deposited, and an oxygen-evolving anode of composition La(0.89)Sr(0.1)MnO3. The cell configuration is shown and discussed along with a polarization curve and a steady-state current-voltage curve. In a practical cell, cathodically deposited liquid lithium would be continuously removed from the electrolytic cell and used as a valuable reducing agent for ore refining under lunar conditions. Oxygen would be indirectly electrochemically extracted from lunar ores for breathing purposes.

  7. Gross wood characteristics affecting properties of handsheets made from loblolly pine refiner groundwood

    Treesearch

    Charles W. McMillin

    1968-01-01

    Specific refining energy and gross wood properties accounted for as much as 90% of the total variation in strength of handsheets made from 96 pulps disk-refined from chips of varying characteristics. Burst, tear, and breaking length were increased by applying high specific refining energy and using fast-grown wood of high latewood content but of relatively low density...

  8. A new adaptive mesh refinement strategy for numerically solving evolutionary PDE's

    NASA Astrophysics Data System (ADS)

    Burgarelli, Denise; Kischinhevsky, Mauricio; Biezuner, Rodney Josue

    2006-11-01

    A graph-based implementation of quadtree meshes for dealing with adaptive mesh refinement (AMR) in the numerical solution of evolutionary partial differential equations is discussed using finite volume methods. The technique displays a plug-in feature that allows replacement of a group of cells in any region of interest for another one with arbitrary refinement, and with only local changes occurring in the data structure. The data structure is also specially designed to minimize the number of operations needed in the AMR. Implementation of the new scheme allows flexibility in the levels of refinement of adjacent regions. Moreover, storage requirements and computational cost compare competitively with mesh refinement schemes based on hierarchical trees. Low storage is achieved because only the children nodes are stored when a refinement takes place. These nodes become part of a graph structure, thus motivating the denomination autonomous leaves graph (ALG) for the new scheme. Neighbors can then be reached without accessing their parent nodes. Additionally, linear-system solvers based on the minimization of functionals can be easily employed. ALG was not conceived with any particular problem or geometry in mind and can thus be applied to the study of several phenomena. Some test problems are used to illustrate the effectiveness of the technique.
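    A minimal leaf-only quadtree conveys the flavor of the scheme. The sketch below is our own illustration, not ALG itself: ALG links leaves into a graph so neighbors are reached directly, while here a dictionary keyed by (level, i, j) plays that role, with coarser neighbors found by walking up the key hierarchy.

```python
class LeafQuadtree:
    """Minimal leaf-only quadtree: refining a cell deletes it and inserts
    its four children, so all updates are local and no parents are kept.
    A dict keyed by (level, i, j) stands in for ALG's neighbor graph."""

    def __init__(self):
        self.leaves = {(0, 0, 0): None}               # root cell payload

    def refine(self, level, i, j):
        del self.leaves[(level, i, j)]                # parent is not stored
        for di in (0, 1):
            for dj in (0, 1):
                self.leaves[(level + 1, 2 * i + di, 2 * j + dj)] = None

    def neighbor(self, level, i, j, di, dj):
        """Face neighbor of a leaf: same level if it exists, otherwise
        walk up to the coarser leaf covering that location."""
        ni, nj, l = i + di, j + dj, level
        while l >= 0:
            if (l, ni, nj) in self.leaves:
                return (l, ni, nj)
            ni, nj, l = ni // 2, nj // 2, l - 1
        return None                                   # outside the domain

q = LeafQuadtree()
q.refine(0, 0, 0)                 # root -> four leaves
q.refine(1, 0, 0)                 # refine one of them again
print(q.neighbor(2, 1, 0, 1, 0))  # (1, 1, 0): fine leaf's east neighbor is coarser
```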

  9. Microstructures and Grain Refinement of Additive-Manufactured Ti- xW Alloys

    NASA Astrophysics Data System (ADS)

    Mendoza, Michael Y.; Samimi, Peyman; Brice, David A.; Martin, Brian W.; Rolchigo, Matt R.; LeSar, Richard; Collins, Peter C.

    2017-07-01

    It is necessary to better understand the composition-processing-microstructure relationships that exist for materials produced by additive manufacturing. To this end, Laser Engineered Net Shaping (LENS™), a type of additive manufacturing, was used to produce a compositionally graded titanium binary model alloy specimen (Ti-xW, 0 ≤ x ≤ 30 wt pct), so that relationships could be made between composition, processing, and the prior beta grain size. Importantly, the thermophysical properties of Ti-xW, specifically its supercooling parameter (P) and growth restriction factor (Q), are such that grain refinement is expected and was observed. The systematic, combinatorial study of this binary system provides an opportunity to assess the mechanisms by which grain refinement occurs in Ti-based alloys in general, and for additive manufacturing in particular. The operating mechanisms that govern the relationship between composition and grain size are interpreted using a model originally developed for aluminum and magnesium alloys and subsequently applied to titanium alloys. The prior beta grain sizes observed, and the interpretation of their correlations, indicate that tungsten is a good grain refiner and that such models are valid to explain the grain-refinement process. By extension, other binary elements or higher-order alloy systems with similar thermophysical properties should exhibit similar grain refinement.
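    For a dilute binary alloy the two thermophysical quantities have simple closed forms: Q = m·c0·(k − 1) and P = Q/k, with m the liquidus slope, c0 the solute content and k the partition coefficient. The sketch below uses illustrative values only, not measured Ti-W data, to show how Q grows along the composition gradient:

```python
def growth_restriction(m, c0, k):
    """Growth restriction factor Q = m * c0 * (k - 1) and supercooling
    parameter P = Q / k for a dilute binary alloy, with m the liquidus
    slope, c0 the solute content and k the partition coefficient.
    Larger Q means more constitutional undercooling ahead of the
    solidification front, hence more nucleation and finer grains."""
    Q = m * c0 * (k - 1.0)
    return Q, Q / k

# Illustrative values only (not measured Ti-W thermophysical data):
for c0 in (5.0, 15.0, 30.0):               # wt pct solute, as in the graded specimen
    Q, P = growth_restriction(m=2.0, c0=c0, k=2.5)
    print(f"c0 = {c0:4.1f} wt pct   Q = {Q:6.1f}   P = {P:6.1f}")
```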

  10. 3Drefine: an interactive web server for efficient protein structure refinement

    PubMed Central

    Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin

    2016-01-01

    3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of hydrogen bonding network combined with atomic-level energy minimization on the optimized model using a composite physics and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, provided example submission and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet that is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. PMID:27131371

  11. Crystallization in lactose refining-a review.

    PubMed

    Wong, Shin Yee; Hartel, Richard W

    2014-03-01

    In the dairy industry, crystallization is an important separation process used in the refining of lactose from whey solutions. In the refining operation, lactose crystals are separated from the whey solution through nucleation, growth, and/or aggregation. The rate of crystallization is determined by the combined effect of crystallizer design, processing parameters, and impurities on the kinetics of the process. This review summarizes studies on lactose crystallization, including the mechanism, theory of crystallization, and the impact of various factors affecting the crystallization kinetics. In addition, an overview of the industrial crystallization operation highlights the problems faced by the lactose manufacturer. The approaches that are beneficial to the lactose manufacturer for process optimization or improvement are summarized in this review. Over the years, much knowledge has been acquired through extensive research. However, the industrial crystallization process is still far from optimized. Therefore, future effort should focus on transferring the new knowledge and technology to the dairy industry. © 2014 Institute of Food Technologists®

  12. Piezoelectric Resonance Defined High Performance Sensors and Modulators

    DTIC Science & Technology

    2016-05-30

    The report's documentation lists related publications by Lopez-Ribot, Amar S. Bhalla, Melissa Montes, and Ruyan Guo, including "Properties of Silver and Copper Nanoparticle-Containing Aqueous Suspensions and Evaluation of Their In Vitro Activity against Candida albicans" and "Properties of Silver and Copper Nanoparticle-Containing Aqueous Solutions and Their Anti-Biofilm Effects" (2015, symposium contributions).

  13. 40 CFR 80.1603 - Gasoline sulfur standards for refiners and importers.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Gasoline sulfur standards for refiners... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur § 80.1603 Gasoline sulfur standards for refiners and importers. (a) Sulfur standards—(1) Annual average standard. (i...

  14. Comparison of Antioxidant Properties of Refined and Whole Wheat Flour and Bread

    PubMed Central

    Yu, Lilei; Nanguet, Anne-Laure; Beta, Trust

    2013-01-01

    Antioxidant properties of refined and whole wheat flour and their resultant bread were investigated to document the effects of baking. Total phenolic content (TPC), 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging activity and oxygen radical absorbance capacity (ORAC) were employed to determine the content of ethanol extractable phenolic compounds. HPLC was used to detect the presence of phenolic acids prior to their confirmation using LC-MS/MS. Whole wheat flour showed significantly higher antioxidant activity than refined flour (p < 0.05). There was a significant effect of the bread-making process with the TPC of whole wheat bread (1.50–1.65 mg/g) and white bread (0.79–1.03 mg/g) showing a respective reduction of 28% and 33% of the levels found in whole wheat and refined flour. Similarly, baking decreased DPPH radical scavenging capacity by 32% and 30%. ORAC values, however, indicated that baking increased the antioxidant activities of whole wheat and refined flour by 1.8 and 2.9 times, respectively. HPLC analysis showed an increase of 18% to 35% in ferulic acid after baking to obtain whole and refined wheat bread containing 330.1 and 25.3 µg/g (average), respectively. Whole wheat flour and bread were superior to refined flour and bread in in vitro antioxidant properties. PMID:26784470

  15. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, J.D., E-mail: jdjakem@sandia.gov; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  16. CT liver volumetry using geodesic active contour segmentation with a level-set algorithm

    NASA Astrophysics Data System (ADS)

    Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard

    2010-03-01

    Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of a similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a five-step schema. First, an anisotropic smoothing filter was applied to portal-venous phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. By using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour evolution refined the initial surface so as to more precisely fit the liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. Automated liver volumes obtained were compared with those manually traced by a radiologist, used as the "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetrics based on an automated scheme agreed excellently with "gold-standard" manual volumetrics (intra-class correlation coefficient was 0.95) with no statistically significant difference (p(F<=f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.
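    The segmentation core of such a pipeline can be sketched with scikit-image's morphological geodesic active contour, a level-set-style variant of the method named above. This is a generic illustration, not the authors' scheme: a synthetic image and a box initialization replace their anisotropic smoothing, edge enhancement and fast-marching initialization.

```python
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

# Synthetic stand-in for a smoothed, edge-enhanced CT slice: a bright
# disk (the "organ") plus noise.
rr, cc = np.ogrid[:128, :128]
img = ((rr - 64) ** 2 + (cc - 64) ** 2 < 40 ** 2).astype(float)
img += 0.2 * np.random.default_rng(0).normal(size=img.shape)

speed = inverse_gaussian_gradient(img)   # near zero on edges: stops the front
init = np.zeros(img.shape, dtype=np.int8)
init[20:108, 20:108] = 1                 # rough initial surface (a box)

seg = morphological_geodesic_active_contour(speed, 200, init_level_set=init,
                                            smoothing=2, balloon=-1)
print(int(seg.sum()), "pixels segmented")   # area here; voxel volume in 3D
```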

  17. Effects of varying refiner pressure on the mechanical properties of loblolly pine fibres

    Treesearch

    Les Groom; Timothy Rials; Rebecca Snell

    2000-01-01

    Loblolly pine chips, separated into mature and juvenile portions, were refined at three pressures (4, 8, and 12 bar) in a single disc refiner at the BioComposites Centre. Fibres were dried in a flash drier to a moisture content of approximately 12 percent. The mechanical properties of single fibres from each refining pressure were determined using a tensile strength...

  18. A Crowd-Sourcing Indoor Localization Algorithm via Optical Camera on a Smartphone Assisted by Wi-Fi Fingerprint RSSI.

    PubMed

    Chen, Wei; Wang, Weiping; Li, Qun; Chang, Qiang; Hou, Hongtao

    2016-03-19

    Indoor positioning based on existing Wi-Fi fingerprints is becoming more and more common. Unfortunately, the Wi-Fi fingerprint is susceptible to multiple path interferences, signal attenuation, and environmental changes, which leads to low accuracy. Meanwhile, with the recent advances in charge-coupled device (CCD) technologies and the processing speed of smartphones, indoor positioning using the optical camera on a smartphone has become an attractive research topic; however, the major challenge is its high computational complexity; as a result, real-time positioning cannot be achieved. In this paper we introduce a crowd-sourcing indoor localization algorithm via an optical camera and orientation sensor on a smartphone to address these issues. First, we use Wi-Fi fingerprint based on the K Weighted Nearest Neighbor (KWNN) algorithm to make a coarse estimation. Second, we adopt a mean-weighted exponent algorithm to fuse optical image features and orientation sensor data as well as KWNN in the smartphone to refine the result. Furthermore, a crowd-sourcing approach is utilized to update and supplement the positioning database. We perform several experiments comparing our approach with other positioning algorithms on a common smartphone to evaluate the performance of the proposed sensor-calibrated algorithm, and the results demonstrate that the proposed algorithm could significantly improve accuracy, stability, and applicability of positioning.
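    The coarse KWNN stage is straightforward: match the live RSSI vector against a stored radio map and return the inverse-distance-weighted mean of the k nearest reference locations. The sketch below uses invented fingerprint data; the camera/orientation fusion and the crowd-sourced database updates are not modeled.

```python
import numpy as np

def kwnn_locate(rssi, fingerprints, positions, k=4, eps=1e-6):
    """K Weighted Nearest Neighbor positioning over a Wi-Fi radio map.

    fingerprints: (M, A) stored RSSI vectors for M reference points and
    A access points; positions: matching (M, 2) coordinates. Returns the
    inverse-distance-weighted mean of the k nearest reference points in
    signal space -- the coarse estimate the paper then refines with
    camera features and orientation data."""
    d = np.linalg.norm(fingerprints - rssi, axis=1)   # signal-space distances
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)                          # closer -> heavier weight
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()

# Toy radio map (all RSSI values invented): 4 reference points, 3 APs
fp = np.array([[-40, -70, -60], [-55, -50, -65],
               [-70, -45, -55], [-60, -60, -40]], dtype=float)
pos = np.array([[0, 0], [5, 0], [5, 5], [0, 5]], dtype=float)
print(kwnn_locate(np.array([-50.0, -55.0, -62.0]), fp, pos, k=3))
```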

  19. A heuristic re-mapping algorithm reducing inter-level communication in SAMR applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steensland, Johan; Ray, Jaideep

    2003-07-01

    This paper aims at decreasing execution time for large-scale structured adaptive mesh refinement (SAMR) applications by proposing a new heuristic re-mapping algorithm and experimentally showing its effectiveness in reducing inter-level communication. Tests were done for five different SAMR applications. The overall goal is to engineer a dynamically adaptive meta-partitioner capable of selecting and configuring the most appropriate partitioning strategy at run-time based on current system and application state. Such a meta-partitioner can significantly reduce execution times for general SAMR applications. Computer simulations of physical phenomena are becoming increasingly popular as they constitute an important complement to real-life testing. In many cases, such simulations are based on solving partial differential equations by numerical methods. Adaptive methods are crucial to efficiently utilize computer resources such as memory and CPU. But even with adaption, the simulations are computationally demanding and yield huge data sets. Thus parallelization and the efficient partitioning of data become issues of utmost importance. Adaption causes the workload to change dynamically, calling for dynamic (re-)partitioning to maintain efficient resource utilization. The proposed heuristic algorithm reduced inter-level communication substantially. Since the complexity of the proposed algorithm is low, this decrease comes at a relatively low cost. As a consequence, we draw the conclusion that the proposed re-mapping algorithm would be useful to lower overall execution times for many large SAMR applications. Due to its usefulness and its parameterization, the proposed algorithm would constitute a natural and important component of the meta-partitioner.

  20. Refined carbohydrate intake in relation to non-verbal intelligence among Tehrani schoolchildren.

    PubMed

    Abargouei, Amin Salehi; Kalantari, Naser; Omidvar, Nasrin; Rashidkhani, Bahram; Rad, Anahita Houshiar; Ebrahimi, Azizeh Afkham; Khosravi-Boroujeni, Hossein; Esmaillzadeh, Ahmad

    2012-10-01

    Nutrition has long been considered one of the most important environmental factors affecting human intelligence. Although carbohydrates are the most widely studied nutrient for their possible effects on cognition, limited data are available linking usual refined carbohydrate intake and intelligence. The present study was conducted to examine the relationship between long-term refined carbohydrate intake and non-verbal intelligence among schoolchildren. Cross-sectional study. Tehran, Iran. In this cross-sectional study, 245 students aged 6-7 years were selected from 129 elementary schools in two western regions of Tehran. Anthropometric measurements were carried out. Non-verbal intelligence and refined carbohydrate consumption were determined using Raven's Standard Progressive Matrices test and a modified sixty-seven-item FFQ, respectively. Data about potential confounding variables were collected. Linear regression analysis was applied to examine the relationship between non-verbal intelligence scores and refined carbohydrate consumption. Individuals in the top tertile of refined carbohydrate intake had lower mean non-verbal intelligence scores in the crude model (P < 0.038). This association remained significant after controlling for age, gender, birth date, birth order and breast-feeding pattern (P = 0.045). However, further adjustments for mother's age, mother's education, father's education, parental occupation and BMI made the association statistically non-significant. We found a significant inverse association between refined carbohydrate consumption and non-verbal intelligence scores in regression models (β = -11.359, P < 0.001). This relationship remained significant in multivariate analysis after controlling for potential confounders (β = -8.495, P = 0.038). The study provides evidence indicating an inverse relationship between refined carbohydrate consumption and non-verbal intelligence among Tehrani children aged 6-7 years. Prospective studies are needed.

  1. Validating neural-network refinements of nuclear mass models

    NASA Astrophysics Data System (ADS)

    Utama, R.; Piekarewicz, J.

    2018-01-01

    Background: Nuclear astrophysics centers on the role of nuclear physics in the cosmos. In particular, nuclear masses at the limits of stability are critical in the development of stellar structure and the origin of the elements. Purpose: We aim to test and validate the predictions of recently refined nuclear mass models against the newly published AME2016 compilation. Methods: The basic paradigm underlying the recently refined nuclear mass models is based on existing state-of-the-art models that are subsequently refined through the training of an artificial neural network. Bayesian inference is used to determine the parameters of the neural network so that statistical uncertainties are provided for all model predictions. Results: We observe a significant improvement in the Bayesian neural network (BNN) predictions relative to the corresponding "bare" models when compared to the nearly 50 new masses reported in the AME2016 compilation. Further, AME2016 estimates for the handful of impactful isotopes in the determination of r-process abundances are found to be in fairly good agreement with our theoretical predictions. Indeed, the BNN-improved Duflo-Zuker model predicts a root-mean-square deviation relative to experiment of σrms≃400 keV. Conclusions: Given the excellent performance of the BNN refinement in confronting the recently published AME2016 compilation, we are confident of its critical role in our quest for mass models of the highest quality. Moreover, as uncertainty quantification is at the core of the BNN approach, the improved mass models are in a unique position to identify those nuclei that will have the strongest impact in resolving some of the outstanding questions in nuclear astrophysics.

  2. Theory of a refined earth model

    NASA Technical Reports Server (NTRS)

    Krause, H. G. L.

    1968-01-01

    Refined equations are derived relating the variations of the Earth's gravity and radius as functions of longitude and latitude. They particularly relate the oblateness coefficients of the odd harmonics to the difference of the polar radii (respectively, ellipticities and polar gravity accelerations) in the Northern and Southern Hemispheres.

  3. 76 FR 61472 - Revised Fiscal Year 2011 Tariff-Rate Quota Allocations for Refined Sugar

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-04

    ... Allocations for Refined Sugar AGENCY: Office of the United States Trade Representative. ACTION: Notice... (TRQ) for imported refined sugar for entry through November 30, 2011. DATES: Effective Date: October 4... States maintains a tariff-rate quota for imports of refined sugar. Section 404(d)(3) of the Uruguay Round...

  4. 3Drefine: an interactive web server for efficient protein structure refinement.

    PubMed

    Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin

    2016-07-08

    3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of hydrogen bonding network combined with atomic-level energy minimization on the optimized model using a composite physics and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, provided example submission and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet that is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. Sampling-Based Motion Planning Algorithms for Replanning and Spatial Load Balancing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boardman, Beth Leigh

    The common theme of this dissertation is sampling-based motion planning, with the two key contributions being in the area of replanning and spatial load balancing for robotic systems. Here, we begin by recalling two sampling-based motion planners: the asymptotically optimal rapidly-exploring random tree (RRT*), and the asymptotically optimal probabilistic roadmap (PRM*). We also provide a brief background on collision cones and the Distributed Reactive Collision Avoidance (DRCA) algorithm. The next four chapters detail novel contributions for motion replanning in environments with unexpected static obstacles, for multi-agent collision avoidance, and spatial load balancing. First, we show improved performance of the RRT* when using the proposed Grandparent-Connection (GP) or Focused-Refinement (FR) algorithms. Next, the Goal Tree algorithm for replanning with unexpected static obstacles is detailed and proven to be asymptotically optimal. A multi-agent collision avoidance problem in obstacle environments is approached via the RRT*, leading to the novel Sampling-Based Collision Avoidance (SBCA) algorithm. The SBCA algorithm is proven to guarantee collision free trajectories for all of the agents, even when subject to uncertainties in the knowledge of the other agents' positions and velocities. Given that a solution exists, we prove that livelocks and deadlock will lead to the cost to the goal being decreased. We introduce a new deconfliction maneuver that decreases the cost-to-come at each step. This new maneuver removes the possibility of livelocks and allows a result to be formed that proves convergence to the goal configurations. Finally, we present a limited range Graph-based Spatial Load Balancing (GSLB) algorithm which fairly divides a non-convex space among multiple agents that are subject to differential constraints and have a limited travel distance. The GSLB is proven to converge to a solution when maximizing the area covered by the agents.

  6. An Improved Snake Model for Refinement of Lidar-Derived Building Roof Contours Using Aerial Images

    NASA Astrophysics Data System (ADS)

    Chen, Qi; Wang, Shugen; Liu, Xiuguo

    2016-06-01

    Building roof contours are considered as very important geometric data, which have been widely applied in many fields, including but not limited to urban planning, land investigation, change detection and military reconnaissance. Currently, the demand for building contours at a finer scale (especially in urban areas) has been raised in a growing number of studies such as urban environment quality assessment, urban sprawl monitoring and urban air pollution modelling. LiDAR is known as an effective means of acquiring 3D roof points with high elevation accuracy. However, the precision of the building contour obtained from LiDAR data is restricted by its relatively low scanning resolution. With the use of the texture information from high-resolution imagery, the precision can be improved. In this study, an improved snake model is proposed to refine the initial building contours extracted from LiDAR. First, an improved snake model is constructed with the constraints of the deviation angle, image gradient, and area. Then, the nodes of the contour are moved within a certain range to find the best optimized result using a greedy algorithm. Considering both precision and efficiency, the candidate shift positions of the contour nodes are constrained, and the searching strategy for the candidate nodes is explicitly designed. The experiments on three datasets indicate that the proposed method for building contour refinement is effective and feasible. The average quality index is improved from 91.66% to 93.34%. The statistics of the evaluation results for every single building demonstrate that 77.0% of the total number of contours are updated with a higher quality index.
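    The greedy optimization step can be sketched as follows: each contour node tests a small window of candidate shifts and keeps the one minimizing a combined internal (smoothness) and external (image-gradient) energy. This is a generic snake sketch, not the paper's model; the deviation-angle and area constraints are omitted.

```python
import numpy as np

def greedy_snake(contour, grad_mag, n_pass=5, radius=2, alpha=0.1):
    """Greedy snake refinement: each node tests a (2*radius+1)^2 window
    and keeps the shift minimizing internal (curvature) plus external
    (negative gradient magnitude) energy. contour is an (N, 2) array of
    (row, col) nodes; grad_mag an image of gradient magnitudes."""
    c = np.asarray(contour, dtype=float).copy()
    for _ in range(n_pass):
        for i in range(len(c)):
            prev_pt, next_pt = c[i - 1], c[(i + 1) % len(c)]
            best, best_e = c[i], np.inf
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    p = c[i] + (dy, dx)
                    y, x = int(p[0]), int(p[1])
                    if not (0 <= y < grad_mag.shape[0] and 0 <= x < grad_mag.shape[1]):
                        continue
                    e = alpha * np.sum((prev_pt - 2 * p + next_pt) ** 2) - grad_mag[y, x]
                    if e < best_e:
                        best, best_e = p, e
            c[i] = best
    return c

# Toy usage: pull a 4-node contour toward the edges of a bright square
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
gy, gx = np.gradient(img)
print(greedy_snake(np.array([[14, 32], [32, 14], [50, 32], [32, 50]]),
                   np.hypot(gy, gx), n_pass=10, radius=3))
```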

  7. 40 CFR 80.1339 - Who is not eligible for the provisions for small refiners?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... eligible for the hardship provisions for small refiners: (a) A refiner with one or more refineries built... employees or crude capacity is due to operational changes at the refinery or a company sale or... refinery processing units. (e)(1) A small refiner approved under § 80.1340 that subsequently ceases...

  8. SFESA: a web server for pairwise alignment refinement by secondary structure shifts.

    PubMed

    Tong, Jing; Pei, Jimin; Grishin, Nick V

    2015-09-03

    Protein sequence alignment is essential for a variety of tasks such as homology modeling and active site prediction. Alignment errors remain the main cause of low-quality structure models. A bioinformatics tool to refine alignments is needed to make protein alignments more accurate. We developed the SFESA web server to refine pairwise protein sequence alignments. Compared to the previous version of SFESA, which required a set of 3D coordinates for a protein, the new server will search a sequence database for the closest homolog with an available 3D structure to be used as a template. For each alignment block defined by secondary structure elements in the template, SFESA evaluates alignment variants generated by local shifts and selects the best-scoring alignment variant. A scoring function that combines the sequence score of profile-profile comparison and the structure score of template-derived contact energy is used for evaluation of alignments. PROMALS pairwise alignments refined by SFESA are more accurate than those produced by current advanced alignment methods such as HHpred and CNFpred. In addition, SFESA also improves alignments generated by other software. SFESA is a web-based tool for alignment refinement, designed for researchers to compute, refine, and evaluate pairwise alignments with a combined sequence and structure scoring of alignment blocks. To our knowledge, the SFESA web server is the only tool that refines alignments by evaluating local shifts of secondary structure elements. The SFESA web server is available at http://prodata.swmed.edu/sfesa.

  9. Reintroducing electrostatics into macromolecular crystallographic refinement: application to neutron crystallography and DNA hydration.

    PubMed

    Fenn, Timothy D; Schnieders, Michael J; Mustyakimov, Marat; Wu, Chuanjie; Langan, Paul; Pande, Vijay S; Brunger, Axel T

    2011-04-13

    Most current crystallographic structure refinements augment the diffraction data with a priori information consisting of bond, angle, dihedral, planarity restraints, and atomic repulsion based on the Pauli exclusion principle. Yet, electrostatics and van der Waals attraction are physical forces that provide additional a priori information. Here, we assess the inclusion of electrostatics for the force field used for all-atom (including hydrogen) joint neutron/X-ray refinement. Two DNA and a protein crystal structure were refined against joint neutron/X-ray diffraction data sets using force fields without electrostatics or with electrostatics. Hydrogen-bond orientation/geometry favors the inclusion of electrostatics. Refinement of Z-DNA with electrostatics leads to a hypothesis for the entropic stabilization of Z-DNA that may partly explain the thermodynamics of converting the B form of DNA to its Z form. Thus, inclusion of electrostatics assists joint neutron/X-ray refinements, especially for placing and orienting hydrogen atoms. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Reintroducing Electrostatics into Macromolecular Crystallographic Refinement: Application to Neutron Crystallography and DNA Hydration

    PubMed Central

    Fenn, Timothy D.; Schnieders, Michael J.; Mustyakimov, Marat; Wu, Chuanjie; Langan, Paul; Pande, Vijay S.; Brunger, Axel T.

    2011-01-01

    Summary Most current crystallographic structure refinements augment the diffraction data with a priori information consisting of bond, angle, dihedral, planarity restraints and atomic repulsion based on the Pauli exclusion principle. Yet, electrostatics and van der Waals attraction are physical forces that provide additional a priori information. Here we assess the inclusion of electrostatics for the force field used for all-atom (including hydrogen) joint neutron/X-ray refinement. Two DNA and a protein crystal structure were refined against joint neutron/X-ray diffraction data sets using force fields without electrostatics or with electrostatics. Hydrogen bond orientation/geometry favors the inclusion of electrostatics. Refinement of Z-DNA with electrostatics leads to a hypothesis for the entropic stabilization of Z-DNA that may partly explain the thermodynamics of converting the B form of DNA to its Z form. Thus, inclusion of electrostatics assists joint neutron/X-ray refinements, especially for placing and orienting hydrogen atoms. PMID:21481775

  11. Laser Vacuum Furnace for Zone Refining

    NASA Technical Reports Server (NTRS)

    Griner, D. B.; Zurburg, F. W.; Penn, W. M.

    1986-01-01

    Laser beam scanned to produce a moving melt zone. The experimental laser vacuum furnace scans a crystalline wafer with a high-power CO2-laser beam to generate a precise melt zone with precise control of the temperature gradients around the zone. Although intended for zone refining of silicon or other semiconductors in low gravity, the apparatus has been used in normal gravity.

  12. Stereo matching using census cost over cross window and segmentation-based disparity refinement

    NASA Astrophysics Data System (ADS)

    Li, Qingwu; Ni, Jinyan; Ma, Yunpeng; Xu, Jinxin

    2018-03-01

    Stereo matching is a vital requirement for many applications, such as three-dimensional (3-D) reconstruction, robot navigation, object detection, and industrial measurement. To improve the practicability of stereo matching, a method using census cost over a cross window and segmentation-based disparity refinement is proposed. First, a cross window is obtained using distance difference and intensity similarity in binocular images. Census cost over the cross window and color cost are combined as the matching cost, which is aggregated by a guided filter. Then, the winner-takes-all strategy is used to calculate the initial disparities. Second, a graph-based segmentation method is combined with color and edge information to achieve moderate under-segmentation. The segmented regions are classified into reliable and unreliable regions by consistency checking. Finally, the two kinds of regions are optimized by plane fitting and propagation, respectively, to match the ambiguous pixels. Experimental results on the Middlebury Stereo Datasets show that the proposed method has good performance in occluded and discontinuous regions and obtains smoother disparity maps with a lower average matching error rate than other algorithms.
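
    As an illustration of the census component of the matching cost, the sketch below computes census signatures over a fixed square window and Hamming-distance costs at one disparity. The paper's cross-shaped adaptive window, color-cost term, and guided-filter aggregation are omitted, and the wrap-around edge handling is an implementation shortcut.

        import numpy as np

        def census_transform(img, win=3):
            """Per-pixel binary census signature: each bit records whether
            a neighbor is darker than the window's center pixel."""
            r = win // 2
            sig = np.zeros(img.shape, dtype=np.uint64)
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dx == 0 and dy == 0:
                        continue
                    shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                    sig = (sig << np.uint64(1)) | (shifted < img).astype(np.uint64)
            return sig

        def census_cost(left, right, d):
            """Hamming distance between left/right census signatures at
            disparity d (right image shifted d pixels to the right)."""
            x = census_transform(left) ^ np.roll(census_transform(right), d, axis=1)
            cost = np.zeros(x.shape, dtype=np.uint32)
            while np.any(x):  # popcount, one bit per pass
                cost += (x & np.uint64(1)).astype(np.uint32)
                x >>= np.uint64(1)
            return cost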

  13. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy and convergence rate with discretization refinement are quantified in several error norms through a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic, and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of the truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at non-modest Reynolds numbers. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined to be intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
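
    Richardson extrapolation, used above to isolate truncation error, combines solutions computed on grids of spacing h and h/2 for a method of formal order p. The helpers below are a generic sketch of that standard device, not the paper's code, including the usual estimate of the observed convergence order from three nested grids.

        import math

        def richardson(u_h, u_h2, p):
            """Cancel the leading O(h^p) error term of two nested-grid
            solutions, returning a higher-order estimate and an error
            indicator for the fine-grid solution."""
            factor = 2 ** p
            u_extrap = (factor * u_h2 - u_h) / (factor - 1)
            err_h2 = (u_h2 - u_h) / (factor - 1)
            return u_extrap, err_h2

        def observed_order(u_h, u_h2, u_h4):
            """Estimate the convergence rate from solutions on grids of
            spacing h, h/2 and h/4."""
            return math.log2(abs(u_h - u_h2) / abs(u_h2 - u_h4))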

  14. Sublattice parallel replica dynamics.

    PubMed

    Martínez, Enrique; Uberuaga, Blas P; Voter, Arthur F

    2014-06-01

    Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to use the whole machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998)] by combining it with the synchronous sublattice approach of Shim and Amar [Y. Shim and J. G. Amar, Phys. Rev. B 71, 125432 (2005)], thereby exploiting event locality to improve the algorithm's scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers.

  15. Refined open intersection numbers and the Kontsevich-Penner matrix model

    NASA Astrophysics Data System (ADS)

    Alexandrov, Alexander; Buryak, Alexandr; Tessler, Ran J.

    2017-03-01

    A study of the intersection theory on the moduli space of Riemann surfaces with boundary was recently initiated in a work of R. Pandharipande, J.P. Solomon and the third author, where they introduced open intersection numbers in genus 0. Their construction was later generalized to all genera by J.P. Solomon and the third author. In this paper we consider a refinement of the open intersection numbers by distinguishing contributions from surfaces with different numbers of boundary components, and we calculate all these numbers. We then construct a matrix model for the generating series of the refined open intersection numbers and conjecture that it is equivalent to the Kontsevich-Penner matrix model. An evidence for the conjecture is presented. Another refinement of the open intersection numbers, which describes the distribution of the boundary marked points on the boundary components, is also discussed.

  16. Separation of Lead from Crude Antimony by Pyro-Refining Process with NaPO3 Addition

    NASA Astrophysics Data System (ADS)

    Ye, Longgang; Hu, Yuejie; Xia, Zhimei; Chen, Yongming

    2016-06-01

    The main purpose of this study was to separate lead from crude antimony through an oxidation pyro-refining process using sodium metaphosphate as the lead elimination reagent. The process parameters that affect the refining results, such as the sodium metaphosphate dosage, the refining temperature and duration, and the air flow rate, were optimized experimentally under controlled conditions to determine their effect on the lead content of the refined antimony and on the lead removal rate. A minimum lead content of 0.0522 wt.% and a 98.6% lead removal rate were obtained under the following optimal conditions: a reagent dosage of W_{NaPO3} = 15% W_{Sb} (where W denotes weight), a refining temperature of 800°C, a refining time of 30 min, and an air flow rate of 3 L/min. X-ray diffractometry and scanning electron microscopy showed that high-purity antimony was obtained. Unlike refining with monobasic sodium phosphate or ammonium dihydrogen phosphate as the lead elimination reagent, the smelting operation is free from smoke and ammonia pollution. This refining process can also remove a certain amount of sulfur, cobalt, and silicon simultaneously, and the smelting results suggest that sodium metaphosphate can be used as a potential lead elimination reagent for bismuth and copper refining as well.

  17. An Efficient Means of Adaptive Refinement Within Systems of Overset Grids

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    1996-01-01

    An efficient means of adaptive refinement within systems of overset grids is presented. Problem domains are segregated into near-body and off-body fields. Near-body fields are discretized via overlapping body-fitted grids that extend only a short distance from body surfaces. Off-body fields are discretized via systems of overlapping uniform Cartesian grids of varying levels of refinement. A novel off-body grid generation and management scheme provides the mechanism for carrying out adaptive refinement of off-body flow dynamics and solid body motion. The scheme allows for very efficient use of memory resources, and for flow solvers and domain connectivity routines that can exploit the structure inherent in uniform Cartesian grids.

  18. VIEW OF RBC (REFINED BICARBONATE) BUILDING LOOKING NORTHEAST. DEMOLITION IN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF RBC (REFINED BICARBONATE) BUILDING LOOKING NORTHEAST. DEMOLITION IN PROGRESS. "ARM & HAMMER BAKING SODA WAS MADE HERE FOR OVER 50 YEARS AND THEN SHIPPED ACROSS THE STREET TO THE CHURCH & DWIGHT PLANT ON WILLIS AVE. (ON THE RIGHT IN THIS PHOTO). LAYING ON THE GROUND IN FRONT OF C&D BUILDING IS PART OF AN RBC DRYING TOWER." - Solvay Process Company, Refined Bicarbonate Building, Between Willis & Milton Avenues, Solvay, Onondaga County, NY

  19. Refined gradient theory of scale-dependent superthin rods

    NASA Astrophysics Data System (ADS)

    Lurie, S. A.; Kuznetsova, E. L.; Rabinskii, L. N.; Popova, E. I.

    2015-03-01

    A version of the refined nonclassical theory of thin beams whose thickness is comparable with the characteristic scale of the material structure is constructed on the basis of the gradient theory of elasticity, which, in contrast to the classical theory, contains additional physical characteristics depending on the structure scale parameters and is therefore most appropriate for modeling the strains of scale-dependent systems. The fundamental conditions for the well-posedness of gradient theories are obtained for the first time, and it is shown that some of the known applied gradient theories do not generally satisfy the well-posedness criterion. A version of the well-posed gradient strain theory which satisfies the symmetry condition is proposed. The well-posed gradient theory is then used to implement the method of kinematic hypotheses for constructing a refined theory of scale-dependent beams. The equilibrium equations of the refined theory of scale-dependent Timoshenko and Bernoulli beams are obtained. It is shown that the scale effects are localized near the beam ends; therefore, taking the scale effects into account does not give any correction to the bending rigidity of long beams, as noted in previously published papers dealing with scale-dependent beams.

  20. A cellular automaton - finite volume method for the simulation of dendritic and eutectic growth in binary alloys using an adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Dobravec, Tadej; Mavrič, Boštjan; Šarler, Božidar

    2017-11-01

    A two-dimensional model to simulate the dendritic and eutectic growth in binary alloys is developed. A cellular automaton method is adopted to track the movement of the solid-liquid interface. The diffusion equation is solved in the solid and liquid phases by using an explicit finite volume method. The computational domain is divided into square cells that can be hierarchically refined or coarsened using an adaptive mesh based on the quadtree algorithm. Such a mesh refines the regions of the domain near the solid-liquid interface, where the highest concentration gradients are observed. In the regions where the lowest concentration gradients are observed the cells are coarsened. The originality of the work is in the novel, adaptive approach to the efficient and accurate solution of the posed multiscale problem. The model is verified and assessed by comparison with the analytical results of the Lipton-Glicksman-Kurz model for the steady growth of a dendrite tip and the Jackson-Hunt model for regular eutectic growth. Several examples of typical microstructures are simulated and the features of the method as well as further developments are discussed.
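
    A minimal sketch of the gradient-driven quadtree adaptation described above follows. The scalar indicator callback, the tolerances, and the maximum depth are assumptions for illustration; the cellular-automaton interface tracking and the finite volume solve that drive the adaptation in the paper are omitted.

        class Cell:
            """Minimal quadtree cell for gradient-driven mesh adaptation."""
            def __init__(self, x, y, size, level=0):
                self.x, self.y, self.size, self.level = x, y, size, level
                self.children = []

            def adapt(self, grad, refine_tol, coarsen_tol, max_level):
                """Refine where the concentration-gradient indicator is
                large (near the solid-liquid interface); coarsen where it
                is small. `grad(x, y, size)` is a user-supplied callback."""
                g = grad(self.x, self.y, self.size)
                if not self.children and g > refine_tol and self.level < max_level:
                    h = self.size / 2
                    self.children = [Cell(self.x + i * h, self.y + j * h, h,
                                          self.level + 1)
                                     for i in (0, 1) for j in (0, 1)]
                elif self.children and g < coarsen_tol:
                    self.children = []  # merge the four children back
                for c in self.children:
                    c.adapt(grad, refine_tol, coarsen_tol, max_level)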

  1. Refinement of NMR structures using implicit solvent and advanced sampling techniques.

    PubMed

    Chen, Jianhan; Im, Wonpil; Brooks, Charles L

    2004-12-15

    NMR biomolecular structure calculations exploit simulated annealing methods for conformational sampling and require a relatively high level of redundancy in the experimental restraints to determine quality three-dimensional structures. Recent advances in generalized Born (GB) implicit solvent models should make it possible to combine information from both experimental measurements and accurate empirical force fields to improve the quality of NMR-derived structures. In this paper, we study the influence of implicit solvent on the refinement of protein NMR structures and identify an optimal protocol for utilizing these improved force fields. To do so, we carry out structure refinement experiments for model proteins with published NMR structures using full NMR restraints and subsets of them. We also investigate the application of advanced sampling techniques to NMR structure refinement. Similar to the observations of Xia et al. (J. Biomol. NMR 2002, 22, 317-331), we find that the impact of implicit solvent is rather small when there is a sufficient number of experimental restraints (such as in the final stage of NMR structure determination), whether implicit solvent is used throughout the calculation or only in the final refinement step. The application of advanced sampling techniques also seems to have minimal impact in this case. However, when the experimental data are limited, we demonstrate that refinement with implicit solvent can substantially improve the quality of the structures. In particular, when combined with an advanced sampling technique, the replica exchange (REX) method, near-native structures can be rapidly moved toward the native basin. The REX method provides both enhanced sampling and automatic selection of the most native-like (lowest energy) structures. An optimal protocol based on our studies first generates an ensemble of initial structures that maximally satisfy the available experimental data with conventional NMR software using a simplified
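
    The replica exchange (REX) step mentioned above uses the standard Metropolis swap criterion between two replicas at neighboring temperatures; a minimal sketch (the choice of units for the Boltzmann constant is illustrative):

        import math, random

        def rex_swap(E_i, E_j, T_i, T_j, kB=0.0019872):  # kcal/(mol K), assumed units
            """Accept or reject an exchange of replicas i and j with
            potential energies E_i, E_j at temperatures T_i, T_j."""
            delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_j - E_i)
            return delta <= 0 or random.random() < math.exp(-delta)

    When the test passes, the two replicas exchange temperatures, which lets low-energy, near-native conformations migrate toward the low-temperature replicas that are ultimately selected.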

  2. Unconventional protein sources: apricot seed kernels.

    PubMed

    Gabrial, G N; El-Nahry, F I; Awadalla, M Z; Girgis, S M

    1981-09-01

    Hamawy apricot seed kernels (sweet), Amar apricot seed kernels (bitter), and treated Amar apricot kernels (bitterness removed) were evaluated biochemically. All kernels were found to be high in fat (42.2-50.91%), protein (23.74-25.70%), and fiber (15.08-18.02%). Phosphorus, calcium, and iron were determined in all experimental samples. The three different apricot seed kernels were used for an extensive study including the qualitative determination of the amino acid constituents by acid hydrolysis, quantitative determination of some amino acids, and biological evaluation of the kernel proteins in order to assess them as new protein sources. Weanling albino rats failed to grow on diets containing the Amar apricot seed kernels owing to low food consumption caused by the kernels' bitterness; there was, however, no loss in weight. The Protein Efficiency Ratio data and blood analysis results showed the Hamawy apricot seed kernels to be higher in biological value than the treated apricot seed kernels. The Net Protein Ratio data, which account for both weight maintenance and growth, showed the treated apricot seed kernels to be higher in biological value than both the Hamawy and Amar kernels, whose Net Protein Ratios were nearly equal.

  3. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    PubMed Central

    Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye

    2014-01-01

    This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and the side slip angle, which are the two most important parameters describing the motion of a wheeled vehicle. Additionally, the pitch angle is also considered, since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters, which works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme to reduce the complexity of solving equations involving trigonometric functions. All inliers found are then used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109
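
    The hypothesize-and-verify structure described above (minimal-sample fits feeding a RANSAC inlier test, then refinement on all inliers) can be sketched generically as follows; the callbacks and threshold are placeholders, and the final refit stands in for the paper's reprojection-error minimization.

        import random

        def ransac(data, fit, error, n_min, threshold, iters=200):
            """Generic RANSAC: fit models on minimal samples, keep the
            hypothesis with the most inliers, then refit on its inliers."""
            best_model, best_inliers = None, []
            for _ in range(iters):
                model = fit(random.sample(data, n_min))
                if model is None:
                    continue
                inliers = [d for d in data if error(model, d) < threshold]
                if len(inliers) > len(best_inliers):
                    best_model, best_inliers = model, inliers
            if best_inliers:
                best_model = fit(best_inliers)  # final refinement step
            return best_model, best_inliers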

  4. An interaction algorithm for prediction of mean and fluctuating velocities in two-dimensional aerodynamic wake flows

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Orzechowski, J. A.

    1980-01-01

    A theoretical analysis is presented yielding sets of partial differential equations for determination of turbulent aerodynamic flowfields in the vicinity of an airfoil trailing edge. A four-phase interaction algorithm is derived to complete the analysis. Following input, the first computational phase is an elementary viscous-corrected two-dimensional potential flow solution yielding an estimate of the inviscid-flow-induced pressure distribution. Phase C involves solution of the turbulent two-dimensional boundary layer equations over the trailing edge, with transition to a two-dimensional parabolic Navier-Stokes equation system describing the near-wake merging of the upper and lower surface boundary layers. An iteration provides refinement of the potential-flow-induced pressure coupling to the viscous flow solutions. The final phase is a complete two-dimensional Navier-Stokes analysis of the wake flow in the vicinity of a blunt-based airfoil. A finite element numerical algorithm is presented which is applicable to the solution of all partial differential equation sets of the inviscid-viscous aerodynamic interaction algorithm. Numerical results are discussed.

  5. Minimally refined biomass fuel

    DOEpatents

    Pearson, Richard K.; Hirschfeld, Tomas B.

    1984-01-01

    A minimally refined fluid composition, suitable as a fuel mixture and derived from biomass material, is comprised of one or more water-soluble carbohydrates such as sucrose, one or more alcohols having fewer than four carbons, and water. The carbohydrate provides the fuel source; water solubilizes the carbohydrates; and the alcohol aids in the combustion of the carbohydrate and reduces the viscosity of the carbohydrate/water solution. Because less energy is required to obtain the carbohydrate from the raw biomass than to obtain alcohol, an overall energy savings is realized compared to fuels employing alcohol as the primary fuel.

  6. 75 FR 15725 - Termination of Royalty-in-Kind (RIK) Eligible Refiner Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-30

    ... DEPARTMENT OF THE INTERIOR Minerals Management Service [Docket No. MMS-2009-MRM-0014] Termination of Royalty-in-Kind (RIK) Eligible Refiner Program AGENCY: Minerals Management Service, Interior. ACTION: Advance notice for the termination of the RIK Eligible Refiner Program. SUMMARY: On behalf of the...

  7. 40 CFR 80.225 - What is the definition of a small refiner?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... refiner? 80.225 Section 80.225 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR..., the refiner shall include the employees and crude capacity of any subsidiary companies, any parent company and subsidiaries of the parent company, and any joint venture partners. A subsidiary under this...

  8. 40 CFR 80.225 - What is the definition of a small refiner?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... refiner? 80.225 Section 80.225 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR..., the refiner shall include the employees and crude capacity of any subsidiary companies, any parent company and subsidiaries of the parent company, and any joint venture partners. A subsidiary under this...

  9. An object-oriented approach for parallel self adaptive mesh refinement on block structured grids

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Witsch, Kristian; Quinlan, Daniel

    1993-01-01

    Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library to permit efficient development of architecture independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmers' work is greatly simplified to primarily specifying the serial single grid application and obtaining the parallel and self-adaptive mesh refinement code with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), being implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.

  10. Decadal climate prediction with a refined anomaly initialisation approach

    NASA Astrophysics Data System (ADS)

    Volpi, Danila; Guemas, Virginie; Doblas-Reyes, Francisco J.; Hawkins, Ed; Nichols, Nancy K.

    2017-03-01

    In decadal prediction, the objective is to exploit the sources of predictability from both the external radiative forcings and the internal variability to provide the best possible climate information for the next decade. Predicting the climate system's internal variability relies on initialising the climate model from observational estimates. We present a refined method of anomaly initialisation (AI) applied to the ocean and sea ice components of the global climate forecast model EC-Earth, with the following key innovations: (1) the use of a weight applied to the observed anomalies, in order to avoid the risk of introducing anomalies recorded in the observed climate whose amplitude does not fit within the range of the internal variability generated by the model; (2) the AI of the ocean density, instead of calculating it from the anomaly-initialised state of temperature and salinity. An experiment initialised with this refined AI method has been compared with full-field and standard AI experiments. Results show that the use of such refinements enhances the surface temperature skill over part of the North and South Atlantic, part of the South Pacific, and the Mediterranean Sea for the first forecast year. However, part of this improvement is lost in the following forecast years. For the tropical Pacific surface temperature, the full-field initialised experiment performs best. The prediction of the Arctic sea-ice volume is improved by the refined AI method for the first three forecast years, and the skill of the Atlantic multidecadal oscillation is significantly increased compared to a non-initialised forecast along the whole forecast time.
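
    Innovation (1) amounts to damping the observed anomaly before adding it to the model climatology. The sketch below assumes one plausible weighting choice, the ratio of model to observed interannual standard deviations capped at one; the actual weight used in the paper may differ.

        import numpy as np

        def weighted_anomaly_init(obs, obs_clim, model_clim, sigma_model, sigma_obs):
            """Initial state = model climatology + weighted observed anomaly.
            The weight shrinks anomalies whose amplitude exceeds the range
            of the model's internal variability (assumed weighting)."""
            w = np.minimum(1.0, sigma_model / np.maximum(sigma_obs, 1e-12))
            return model_clim + w * (obs - obs_clim)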

  11. Fixed mesh refinement in the characteristic formulation of general relativity

    NASA Astrophysics Data System (ADS)

    Barreto, W.; de Oliveira, H. P.; Rodriguez-Mueller, B.

    2017-08-01

    We implement a spatially fixed mesh refinement under spherical symmetry for the characteristic formulation of General Relativity. The Courant-Friedrichs-Lewy condition lets us deploy an adaptive resolution in (retarded-like) time, even for the nonlinear regime. As test cases, we replicate the main features of the gravitational critical behavior and the spacetime structure at null infinity using the Bondi mass and the News function. Additionally, we obtain the global energy conservation for an extreme situation, i.e., at the threshold of black hole formation. In principle, the calibrated code can be used in conjunction with an ADM 3+1 code to confirm the critical behavior recently reported in the gravitational collapse of a massless scalar field in an asymptotically anti-de Sitter spacetime. For the scenarios studied, the fixed mesh refinement offers improved runtime and results comparable to code without mesh refinement.

  12. Overview: Application of heterogeneous nucleation in grain-refining of metals.

    PubMed

    Greer, A L

    2016-12-07

    In all of metallurgical processing, probably the most prominent example of nucleation control is the "inoculation" of melts to suppress columnar solidification and to obtain fine equiaxed grain structures in the as-cast solid. In inoculation, a master alloy is added to the melt to increase its solute content and to add stable particles that can act as nucleants for solid grains. This is important for alloys of many metals, and in other cases such as ice nucleation in living systems, but inoculation of aluminum alloys using Al-5Ti-1B (wt.%) master alloy is the exemplar. The key elements are (i) that the chemical interactions between nucleant TiB2 particles and the melt ensure that the solid phase (α-Al) exists on the surface of the particles even above the liquidus temperature of the melt, (ii) that these perfect nucleants can initiate grains only when the barrier for free growth of α-Al is surmounted, and (iii) that (depending on whether the melt is spatially isothermal or not) the release of latent heat, or the limited extent of constitutional supercooling, can act to limit the number of grains that is initiated and therefore the degree of grain refinement that can be achieved. We review recent studies that contribute to better understanding, and improvement, of grain refinement in general. We also identify priorities for future research. These include the study of the effects of nanophase dispersions in melts. Preliminary studies show that such dispersions may be especially effective in achieving grain refinement, and raise many questions about the underlying mechanisms. The stimulation of icosahedral short-range ordering in the liquid has been shown to lead to grain refinement, and is a further priority for study, especially as the refinement can be achieved with only minor additions of solute.
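
    For orientation, the free-growth barrier in (ii) is commonly written, following Greer's earlier free-growth model, as

        \Delta T_{\mathrm{fg}} = \frac{4\,\gamma_{\mathrm{sl}}}{\Delta S_V \, d}

    where \gamma_{\mathrm{sl}} is the solid-liquid interfacial energy, \Delta S_V the entropy of fusion per unit volume, and d the diameter of the nucleant particle. A grain grows free of its particle only once the local undercooling exceeds \Delta T_{\mathrm{fg}}, so the largest particles initiate grains first. The notation here follows the free-growth literature and is given for orientation, not quoted from this abstract.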

  13. Overview: Application of heterogeneous nucleation in grain-refining of metals

    NASA Astrophysics Data System (ADS)

    Greer, A. L.

    2016-12-01

    In all of metallurgical processing, probably the most prominent example of nucleation control is the "inoculation" of melts to suppress columnar solidification and to obtain fine equiaxed grain structures in the as-cast solid. In inoculation, a master alloy is added to the melt to increase its solute content and to add stable particles that can act as nucleants for solid grains. This is important for alloys of many metals, and in other cases such as ice nucleation in living systems, but inoculation of aluminum alloys using Al-5Ti-1B (wt.%) master alloy is the exemplar. The key elements are (i) that the chemical interactions between nucleant TiB2 particles and the melt ensure that the solid phase (α-Al) exists on the surface of the particles even above the liquidus temperature of the melt, (ii) that these perfect nucleants can initiate grains only when the barrier for free growth of α-Al is surmounted, and (iii) that (depending on whether the melt is spatially isothermal or not) the release of latent heat, or the limited extent of constitutional supercooling, can act to limit the number of grains that is initiated and therefore the degree of grain refinement that can be achieved. We review recent studies that contribute to better understanding, and improvement, of grain refinement in general. We also identify priorities for future research. These include the study of the effects of nanophase dispersions in melts. Preliminary studies show that such dispersions may be especially effective in achieving grain refinement, and raise many questions about the underlying mechanisms. The stimulation of icosahedral short-range ordering in the liquid has been shown to lead to grain refinement, and is a further priority for study, especially as the refinement can be achieved with only minor additions of solute.

  14. COMPUTATIONAL METHODOLOGIES for REAL-SPACE STRUCTURAL REFINEMENT of LARGE MACROMOLECULAR COMPLEXES

    PubMed Central

    Goh, Boon Chong; Hadden, Jodi A.; Bernardi, Rafael C.; Singharoy, Abhishek; McGreevy, Ryan; Rudack, Till; Cassidy, C. Keith; Schulten, Klaus

    2017-01-01

    The rise of the computer as a powerful tool for model building and refinement has revolutionized the field of structure determination for large biomolecular systems. Despite the wide availability of robust experimental methods capable of resolving structural details across a range of spatiotemporal resolutions, computational hybrid methods have the unique ability to integrate the diverse data from multimodal techniques such as X-ray crystallography and electron microscopy into consistent, fully atomistic structures. Here, commonly employed strategies for computational real-space structural refinement are reviewed, and their specific applications are illustrated for several large macromolecular complexes: ribosome, virus capsids, chemosensory array, and photosynthetic chromatophore. The increasingly important role of computational methods in large-scale structural refinement, along with current and future challenges, is discussed. PMID:27145875

  15. Shadow Detection from Very High Resolution Satellite Image Using Grabcut Segmentation and Ratio-Band Algorithms

    NASA Astrophysics Data System (ADS)

    Kadhim, N. M. S. M.; Mourshed, M.; Bray, M. T.

    2015-03-01

    Very-High-Resolution (VHR) satellite imagery is a powerful source of data for detecting and extracting information about urban constructions. Shadow in VHR satellite imagery provides vital information on urban construction forms, illumination direction, and the spatial distribution of objects, which can further the understanding of the built environment. However, to extract shadows, the automated detection of shadows from images must be accurate. This paper reviews current automatic approaches that have been used for shadow detection from VHR satellite images and comprises two main parts. In the first part, shadow concepts are presented in terms of shadow appearance in VHR satellite imagery, current shadow detection methods, and the usefulness of shadow detection in urban environments. In the second part, we adopt two approaches considered state-of-the-art for shadow detection and segmentation, applied to WorldView-3 and Quickbird images. In the first approach, the ratios between the NIR and visible bands were computed on a pixel-by-pixel basis, which allows for disambiguation between shadows and dark objects. To obtain an accurate shadow candidate map, we further refine the shadow map after applying the ratio algorithm to the Quickbird image. The second selected approach is the GrabCut segmentation approach, whose performance in detecting the shadow regions of urban objects is examined using the true-colour image from WorldView-3. Further refinement was applied to attain a segmented shadow map. Although the detection of shadow regions is a very difficult task when they are derived from a VHR satellite image that comprises only the visible spectrum range (RGB true colour), the results demonstrate that the GrabCut algorithm achieves a reasonable separation of shadow regions from other objects in the WorldView-3 image. In addition, the derived shadow map from the Quickbird image indicates significant performance of
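
    The band-ratio test in the first approach can be sketched per pixel as follows. The convention that shadows score a low NIR/visible ratio (dark vegetation stays bright in the NIR band and therefore scores high) and the threshold value are illustrative assumptions; the paper's subsequent refinement of the candidate map is not shown.

        import numpy as np

        def shadow_candidates(nir, vis, ratio_thresh=1.0):
            """Flag pixels with a low NIR/visible ratio as shadow
            candidates, separating them from dark objects such as
            vegetation that remain bright in the NIR band."""
            ratio = nir.astype(float) / (vis.astype(float) + 1e-6)
            return ratio < ratio_thresh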

  16. KoBaMIN: a knowledge-based minimization web server for protein structure refinement.

    PubMed

    Rodrigues, João P G L M; Levitt, Michael; Chopra, Gaurav

    2012-07-01

    The KoBaMIN web server provides an online interface to a simple, consistent and computationally efficient protein structure refinement protocol based on minimization of a knowledge-based potential of mean force. The server can be used to refine either a single protein structure or an ensemble of proteins starting from their unrefined coordinates in PDB format. The refinement method is particularly fast and accurate due to the underlying knowledge-based potential derived from structures deposited in the PDB; as such, the energy function implicitly includes the effects of solvent and the crystal environment. Our server allows for an optional but recommended step that optimizes stereochemistry using the MESHI software. The KoBaMIN server also allows comparison of the refined structures with a provided reference structure to assess the changes brought about by the refinement protocol. The performance of KoBaMIN has been benchmarked widely on a large set of decoys, namely all models generated at the seventh worldwide experiment on the critical assessment of techniques for protein structure prediction (CASP7), and it was also shown to produce top-ranking predictions in the refinement category at both CASP8 and CASP9, yielding consistently good results across a broad range of model quality values. The web server is fully functional and freely available at http://csb.stanford.edu/kobamin.

  17. A Crowd-Sourcing Indoor Localization Algorithm via Optical Camera on a Smartphone Assisted by Wi-Fi Fingerprint RSSI

    PubMed Central

    Chen, Wei; Wang, Weiping; Li, Qun; Chang, Qiang; Hou, Hongtao

    2016-01-01

    Indoor positioning based on existing Wi-Fi fingerprints is becoming more and more common. Unfortunately, the Wi-Fi fingerprint is susceptible to multipath interference, signal attenuation, and environmental changes, which leads to low accuracy. Meanwhile, with the recent advances in charge-coupled device (CCD) technologies and the processing speed of smartphones, indoor positioning using the optical camera on a smartphone has become an attractive research topic; however, the major challenge is its high computational complexity, and as a result real-time positioning cannot be achieved. In this paper we introduce a crowd-sourcing indoor localization algorithm via an optical camera and orientation sensor on a smartphone to address these issues. First, we use the Wi-Fi fingerprint based on the K Weighted Nearest Neighbor (KWNN) algorithm to make a coarse estimation. Second, we adopt a mean-weighted exponent algorithm on the smartphone to fuse optical image features and orientation sensor data with the KWNN estimate to refine the result. Furthermore, a crowd-sourcing approach is utilized to update and supplement the positioning database. We perform several experiments comparing our approach with other positioning algorithms on a common smartphone to evaluate the performance of the proposed sensor-calibrated algorithm, and the results demonstrate that the proposed algorithm can significantly improve the accuracy, stability, and applicability of positioning. PMID:27007379
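
    The KWNN coarse stage is an inverse-distance-weighted average of the positions of the k radio-nearest fingerprints; a minimal sketch, assuming a database that maps (x, y) reference points to RSSI vectors:

        import math

        def kwnn_locate(rssi, fingerprints, k=4):
            """Coarse position estimate from a Wi-Fi fingerprint database
            (`fingerprints`: dict mapping (x, y) to an RSSI vector)."""
            def rssi_dist(a, b):
                return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
            nearest = sorted(fingerprints.items(),
                             key=lambda kv: rssi_dist(rssi, kv[1]))[:k]
            weights = [1.0 / (rssi_dist(rssi, v) + 1e-6) for _, v in nearest]
            wsum = sum(weights)
            x = sum(w * p[0] for w, (p, _) in zip(weights, nearest)) / wsum
            y = sum(w * p[1] for w, (p, _) in zip(weights, nearest)) / wsum
            return x, y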

  18. Refinements in the Los Alamos model of the prompt fission neutron spectrum

    DOE PAGES

    Madland, D. G.; Kahler, A. C.

    2017-01-01

    This paper presents a number of refinements to the original Los Alamos model of the prompt fission neutron spectrum and average prompt neutron multiplicity as derived in 1982. The four refinements are due to new measurements of the spectrum and related fission observables, many of which were not available in 1982, and to a number of detailed studies and comparisons of the model with previous and present experimental results, including not only the differential spectrum but also integral cross sections measured in the field of the differential spectrum. The four refinements are (a) separate neutron contributions in binary fission, (b) departure from statistical equilibrium at scission, (c) fission-fragment nuclear level-density models, and (d) center-of-mass anisotropy. With these refinements, for the first time, good agreement has been obtained for both differential and integral measurements using the same Los Alamos model spectrum.

  19. A Comparison of the Behaviour of AlTiB and AlTiC Grain Refiners

    NASA Astrophysics Data System (ADS)

    Schneider, W.; Kearns, M. A.; McGarry, M. J.; Whitehead, A. J.

    AlTiC master alloys present a new alternative to the AlTiB grain refiners which have enjoyed pre-eminence in cast houses for several decades. Recent investigations have shown that, under defined casting conditions, AlTiC is a more efficient grain refiner than AlTiB, is less prone to agglomeration, and is more resistant to poisoning by Zr and Cr. Moreover, it is observed that there are differences in the mechanism of grain refinement between the two alloy types. This paper describes the influence of melt temperature and addition rate on the performance of both types of grain refiner in DC casting tests on different wrought alloys. Furthermore, the effects of combined additions of the grain refiners and the recycling behaviour of the treated alloys are presented. Results are compared with laboratory test data. Finally, mechanisms of grain refinement are discussed which are consistent with the observed differences in behaviour between AlTiC and AlTiB.

  20. An Adaptively-Refined, Cartesian, Cell-Based Scheme for the Euler and Navier-Stokes Equations. Ph.D. Thesis - Michigan Univ.

    NASA Technical Reports Server (NTRS)

    Coirier, William John

    1994-01-01

    A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to that of a structured solver formulation. A non-smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other, and to experimental results and/or theory, for a series of low and moderate Reynolds number flow fields. The most suitable viscous discretization is demonstrated for geometrically-complicated internal flows. For flows at high Reynolds numbers, both an altered grid-generation procedure and a

  1. Segmental Refinement: A Multigrid Technique for Data Locality

    DOE PAGES

    Adams, Mark F.; Brown, Jed; Knepley, Matt; ...

    2016-08-04

    In this paper, we investigate a domain decomposed multigrid technique, termed segmental refinement, for solving general nonlinear elliptic boundary value problems. We extend the method first proposed in 1994 by analytically and experimentally investigating its complexity. We confirm that communication of traditional parallel multigrid is eliminated on fine grids, with modest amounts of extra work and storage, while maintaining the asymptotic exactness of full multigrid. We observe an accuracy dependence on the segmental refinement subdomain size, which was not considered in the original analysis. Finally, we present a communication complexity analysis that quantifies the communication costs ameliorated by segmental refinement, and we report performance results with up to 64K cores on a Cray XC30.

  2. 40 CFR 80.553 - Under what conditions may the small refiner gasoline sulfur standards be extended for a small...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... refiner gasoline sulfur standards be extended for a small refiner of motor vehicle diesel fuel? 80.553... small refiner gasoline sulfur standards be extended for a small refiner of motor vehicle diesel fuel? (a) A refiner that has been approved by EPA for small refiner gasoline sulfur standards under § 80.240...

  3. 40 CFR 80.553 - Under what conditions may the small refiner gasoline sulfur standards be extended for a small...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... refiner gasoline sulfur standards be extended for a small refiner of motor vehicle diesel fuel? 80.553... small refiner gasoline sulfur standards be extended for a small refiner of motor vehicle diesel fuel? (a) A refiner that has been approved by EPA for small refiner gasoline sulfur standards under § 80.240...

  4. 40 CFR 80.240 - What are the small refiner gasoline sulfur standards?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 17 2013-07-01 2013-07-01 false What are the small refiner gasoline... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Hardship Provisions § 80.240 What are the small refiner gasoline sulfur standards? (a) The gasoline sulfur standards...

  5. 40 CFR 80.240 - What are the small refiner gasoline sulfur standards?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false What are the small refiner gasoline... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Hardship Provisions § 80.240 What are the small refiner gasoline sulfur standards? (a) The gasoline sulfur standards...

  6. 40 CFR 80.240 - What are the small refiner gasoline sulfur standards?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 17 2012-07-01 2012-07-01 false What are the small refiner gasoline... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Hardship Provisions § 80.240 What are the small refiner gasoline sulfur standards? (a) The gasoline sulfur standards...

  7. 40 CFR 80.240 - What are the small refiner gasoline sulfur standards?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 17 2014-07-01 2014-07-01 false What are the small refiner gasoline... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Hardship Provisions § 80.240 What are the small refiner gasoline sulfur standards? (a) The gasoline sulfur standards...

  8. 40 CFR 80.240 - What are the small refiner gasoline sulfur standards?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 16 2011-07-01 2011-07-01 false What are the small refiner gasoline... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Hardship Provisions § 80.240 What are the small refiner gasoline sulfur standards? (a) The gasoline sulfur standards...

  9. 40 CFR 80.554 - What compliance options are available to NRLM diesel fuel small refiners?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... to NRLM diesel fuel small refiners? 80.554 Section 80.554 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship Provisions § 80.554 What compliance options are available to NRLM diesel fuel small refiners? (a) Option 1: A...

  10. 40 CFR 80.554 - What compliance options are available to NRLM diesel fuel small refiners?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... to NRLM diesel fuel small refiners? 80.554 Section 80.554 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship Provisions § 80.554 What compliance options are available to NRLM diesel fuel small refiners? (a) Option 1: A...

  11. 76 FR 61074 - USDA Increases the Fiscal Year 2011 Tariff-Rate Quota for Refined Sugar

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-03

    ... Quota for Refined Sugar AGENCY: Office of the Secretary, USDA. ACTION: Notice. SUMMARY: The Secretary of Agriculture today announced an increase in the fiscal year (FY) 2011 refined sugar tariff-rate quota (TRQ) of... INFORMATION: A quantity of 22,000 MTRV for sugars, syrups, and molasses (collectively referred to as refined...

  12. PREFMD: a web server for protein structure refinement via molecular dynamics simulations.

    PubMed

    Heo, Lim; Feig, Michael

    2018-03-15

    Refinement of protein structure models is a long-standing problem in structural bioinformatics. Molecular dynamics-based methods have emerged as an avenue to achieve consistent refinement. The PREFMD web server implements an optimized protocol based on the method successfully tested in CASP11. Validation with recent CASP refinement targets shows consistent and more significant improvement in global structure accuracy over other state-of-the-art servers. PREFMD is freely available as a web server at http://feiglab.org/prefmd. Scripts for running PREFMD as a stand-alone package are available at https://github.com/feiglab/prefmd.git. feig@msu.edu. Supplementary data are available at Bioinformatics online.

  13. Pilot Assessment of Brain Metabolism in Perinatally HIV-Infected Youths Using Accelerated 5D Echo Planar J-Resolved Spectroscopic Imaging.

    PubMed

    Iqbal, Zohaib; Wilson, Neil E; Keller, Margaret A; Michalik, David E; Church, Joseph A; Nielsen-Saines, Karin; Deville, Jaime; Souza, Raissa; Brecht, Mary-Lynn; Thomas, M Albert

    2016-01-01

    To measure cerebral metabolite levels in perinatally HIV-infected youths and healthy controls using the accelerated five-dimensional (5D) echo planar J-resolved spectroscopic imaging (EP-JRESI) sequence, which is capable of obtaining two-dimensional (2D) J-resolved spectra from three spatial dimensions (3D). After acquisition and reconstruction of the 5D EP-JRESI data, T1-weighted MRIs were used to classify brain regions of interest for HIV patients and healthy controls: right frontal white (FW), medial frontal gray (FG), right basal ganglia (BG), right occipital white (OW), and medial occipital gray (OG). From these locations, respective J-resolved and TE-averaged spectra were extracted and fit using two different quantitation methods. The J-resolved spectra were fit using prior knowledge fitting (ProFit), while the TE-averaged spectra were fit using the advanced method for accurate, robust and efficient spectral fitting (AMARES). Quantitation of the 5D EP-JRESI data using the ProFit algorithm yielded significant metabolic differences in two spatial locations of the perinatally HIV-infected youths compared to controls: elevated NAA/(Cr+Ch) in the FW and elevated Asp/(Cr+Ch) in the BG. Using the TE-averaged data quantified by AMARES, an increase of Glu/(Cr+Ch) was shown in the FW region. A strong negative correlation (r < -0.6) was shown between tCh/(Cr+Ch) quantified using ProFit in the FW and CD4 counts. Also, strong positive correlations (r > 0.6) were shown between Asp/(Cr+Ch) and CD4 counts in the FG and BG. The complementary results using ProFit fitting of J-resolved spectra and AMARES fitting of TE-averaged spectra, which are a subset of the 5D EP-JRESI acquisition, demonstrate an abnormal energy metabolism in the brains of perinatally HIV-infected youths. This may be a result of the HIV pathology and long-term combination antiretroviral therapy (cART). Further studies of larger perinatally HIV-infected cohorts are necessary to confirm these findings.

  14. A Hierarchical Algorithm for Fast Debye Summation with Applications to Small Angle Scattering

    PubMed Central

    Gumerov, Nail A.; Berlin, Konstantin; Fushman, David; Duraiswami, Ramani

    2012-01-01

    Debye summation, which involves the summation of sinc functions of distances between all pairs of atoms in three-dimensional space, arises in computations performed in crystallography, small/wide angle X-ray scattering (SAXS/WAXS) and small angle neutron scattering (SANS). Direct evaluation of the Debye summation has quadratic complexity, which results in a computational bottleneck when determining crystal properties or running structure refinement protocols that involve SAXS or SANS, even for moderately sized molecules. We present a fast approximation algorithm that efficiently computes the summation to any prescribed accuracy ε in linear time. The algorithm is similar to the fast multipole method (FMM), and is based on a hierarchical spatial decomposition of the molecule coupled with local harmonic expansions and translation of these expansions. An even more efficient implementation is possible when the scattering profile is all that is required, as in small angle scattering (SAS) reconstruction of macromolecules. We examine the relationship of the proposed algorithm to existing approximate methods for profile computations, and show that these methods may result in inaccurate profile computations unless an error bound derived in this paper is used. Our theoretical and computational results show orders of magnitude improvement in computational complexity over existing methods, while maintaining prescribed accuracy. PMID:22707386
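
    The direct quadratic-cost Debye summation that the hierarchical algorithm accelerates can be written compactly. The sketch below is the O(N^2) reference evaluation of I(q) = sum_{i,j} f_i f_j sinc(q r_ij), with q-independent form factors assumed for brevity (in practice f depends on q).

        import numpy as np

        def debye_profile(coords, f, q):
            """Direct Debye sum over all atom pairs; O(N^2) per q value."""
            coords = np.asarray(coords, dtype=float)
            diff = coords[:, None, :] - coords[None, :, :]
            r = np.sqrt((diff ** 2).sum(axis=-1))  # pairwise distances
            ff = np.outer(f, f)  # form-factor products (assumed constant in q)
            q = np.atleast_1d(np.asarray(q, dtype=float))
            I = np.empty(q.shape)
            for k, qk in enumerate(q):
                # np.sinc(x) = sin(pi x)/(pi x), so sinc(q r / pi) = sin(q r)/(q r),
                # with the r = 0 diagonal handled correctly (limit 1).
                I[k] = (ff * np.sinc(qk * r / np.pi)).sum()
            return I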

  15. Experiment on a three-beam adaptive array for EHF frequency-hopped signals using a fast algorithm, phase-D

    NASA Astrophysics Data System (ADS)

    Yen, J. L.; Kremer, P.; Amin, N.; Fung, J.

    1989-05-01

    The Department of National Defence (Canada) has been conducting studies into multi-beam adaptive arrays for extremely-high-frequency (EHF) frequency-hopped signals. A three-beam 43 GHz adaptive antenna and a beam control processor are under development. An interactive software package for the operation of the array, capable of applying different control algorithms, is being written. A maximum signal-to-jammer-plus-noise-ratio (SJNR) criterion was found to provide superior performance in preventing degradation of user signals in the presence of nearby jammers. A new fast algorithm using a modified conjugate-gradient approach was found to be a very efficient way to implement anti-jamming arrays based on the maximum-SJNR criterion. The present study was intended to refine and simplify this algorithm and to implement it on an experimental array for real-time evaluation of anti-jamming performance. A three-beam adaptive array was used. A simulation package was used in the evaluation of multi-beam systems using more than three beams and different user-jammer scenarios. An attempt to further reduce the computational burden through continued analysis of maximum SJNR met with limited success. A method to acquire and track an incoming laser beam is proposed.
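
    For reference, the maximum-SJNR weights that the fast conjugate-gradient algorithm approaches iteratively have the classical closed form w proportional to R^{-1} s, where R is the jammer-plus-noise covariance and s is the user steering vector; a direct-solve sketch (dimensions and normalization are illustrative):

        import numpy as np

        def max_sjnr_weights(R_jn, steering):
            """Closed-form maximum-SJNR beamforming weights for one beam;
            the fast algorithm in the study replaces this direct solve
            with a modified conjugate-gradient iteration."""
            w = np.linalg.solve(R_jn, steering)
            return w / np.linalg.norm(w)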

  16. A quality-refinement process for medical imaging applications.

    PubMed

    Neuhaus, J; Maleike, D; Nolden, M; Kenngott, H-G; Meinzer, H-P; Wolf, I

    2009-01-01

    To introduce and evaluate a process for the refinement of software quality that is suitable for research groups. In order to avoid constraining researchers too much, the quality improvement process has to be designed carefully. The scope of this paper is to present and evaluate a process for advancing the quality of existing research prototypes so that they are ready for initial clinical studies. The proposed process is tailored to research environments and is therefore more lightweight than traditional quality management processes: it focuses on the quality criteria that matter at the given stage of the software life cycle and emphasizes tools that automate aspects of the process. To evaluate the additional effort the process entails, it was applied, as a worked example, to eight prototypical software modules for medical image processing. The process improved the quality of all prototypes so that they could be successfully used in clinical studies. The quality refinement required an average of 13 person-days of additional effort per project. Overall, 107 bugs were found and resolved by applying the process. Careful selection of quality criteria and the use of automated process tools yield a lightweight quality-refinement process, suitable for scientific research groups, that can ensure a successful transfer of technical software prototypes into clinical research workflows.

  17. Bauxite mining and alumina refining: process description and occupational health risks.

    PubMed

    Donoghue, A Michael; Frisch, Neale; Olney, David

    2014-05-01

    To describe bauxite mining and alumina refining processes and to outline the relevant physical, chemical, biological, ergonomic, and psychosocial health risks. Review article. The most important risks relate to noise, ergonomics, trauma, and caustic soda splashes to the skin and eyes. Other notable risks relate to fatigue, heat, and solar ultraviolet exposure and, for some operations, tropical diseases, venomous/dangerous animals, and remote locations. Exposures to bauxite dust, alumina dust, and caustic mist in contemporary best-practice bauxite mining and alumina refining operations have not been demonstrated to be associated with clinically significant decrements in lung function. Exposures to bauxite dust and alumina dust at such operations are also not associated with the incidence of cancer. A range of occupational health risks in bauxite mining and alumina refining require the maintenance of effective control measures.

  18. Predicting the Occurrence of Haze Events in Southeast Asia using Machine Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Lee, H. H.; Chulakadabba, A.; Tonks, A.; Yang, Z.; Wang, C.

    2017-12-01

    Severe local- and regional-scale air pollution episodes typically originate from (1) high emissions of air pollutants, (2) poor dispersion conditions, and (3) trans-boundary pollutant transport. Biomass burning activities have become more frequent in Southeast Asia, especially in Sumatra, Borneo, and the Southeast Asian mainland. Trans-boundary transport of biomass burning aerosols often leads to air quality problems in the region. Furthermore, particulate pollutants from human activities other than biomass burning also play an important role in the air quality of Southeast Asia. Singapore, for example, has a dynamic industrial sector including chemical, electric and metallurgic industries, and is the region's major petroleum-refining center. In addition, natural gas and oil power plants, waste incinerators, active port traffic, and a major regional airport further complicate Singapore's air quality issues. In this study, we compare five machine learning algorithms: k-Nearest Neighbors, Linear Support Vector Machine, Decision Tree, Random Forest and Artificial Neural Network, to identify haze patterns and determine variable importance. The algorithms were trained using local atmospheric data (i.e., months, atmospheric conditions, wind direction and relative humidity) from three observation stations in Singapore (Changi, Seletar and Paya Lebar). We find that the algorithms reveal the associations in the data within and between the stations, and provide in-depth interpretation of the haze sources. The algorithms also allow us to predict the probability of haze episodes in Singapore and to determine the correlation between this probability and atmospheric conditions.
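
    The comparison can be reproduced in outline with scikit-learn; the sketch below uses a synthetic stand-in for the station data (the actual features are months, atmospheric conditions, wind direction and relative humidity), so the dataset shape and model settings are assumptions.

    ```python
    # Hedged sketch: cross-validated comparison of the five classifiers named
    # above on synthetic data standing in for the Singapore station records.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import LinearSVC
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=500, n_features=8, random_state=0)  # toy data
    models = {
        "k-Nearest Neighbors": KNeighborsClassifier(),
        "Linear SVM": LinearSVC(),
        "Decision Tree": DecisionTreeClassifier(random_state=0),
        "Random Forest": RandomForestClassifier(random_state=0),
        "Neural Network": MLPClassifier(max_iter=2000, random_state=0),
    }
    for name, model in models.items():
        print(f"{name}: {cross_val_score(model, X, y, cv=5).mean():.3f}")
    ```

    Haze probabilities of the kind mentioned in the abstract's final sentence would come from calling predict_proba on a fitted probabilistic model such as the random forest.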

  19. Evolution of abdominal wall reconstruction: development of a unified algorithm with improved outcomes.

    PubMed

    Koltz, Peter F; Frey, Jordan D; Bell, Derek E; Girotto, John A; Christiano, Jose G; Langstein, Howard N

    2013-11-01

    Ventral hernia repair (VHR) continues to evolve and now frequently includes some form of component separation (CS) for large defects. To determine the optimal technique for VHR, we evaluated our outcomes before and after we refined and simplified our algorithm for repair. One hundred five consecutive patients undergoing VHR for large midline hernias over 9 years were examined. Patients were divided into those operated on after (group 1) and before (group 2) the institution of our simplified algorithm. Our algorithm emphasizes careful patient selection and a stepwise approach including, but not limited to, bilateral CS if appropriate, preservation of large perforators, retrorectus mesh placement as appropriate, linea alba or midline fascial closure, and vertical panniculectomy. Primary outcomes evaluated included wound infection, dehiscence, and hernia recurrence. Seventy-eight (74.3%) patients underwent repair using our algorithm (group 1), whereas 27 (25.7%) underwent repair before utilization of this algorithm (group 2). Ninety-eight (93.3%) underwent CS, whereas 7 (6.7%) underwent another form of VHR. There was no significant difference in patient age or defect size. The mean follow-up periods for group 1 and group 2 were 184.02 and 526.06 days, respectively (P < 0.001). Hernia recurrence in group 1 was 2.6% versus 29.6% in group 2 (P < 0.001). The incidence of wound infection in group 1 was 10.3%, whereas that in group 2 was 33.3% (P < 0.001). The rate of wound dehiscence in group 1 was 17.9% versus 25.9% in group 2 (P < 0.001). Simplifying and unifying our algorithm for VHR, notably with utilization of CS, has yielded improved results. Recurrence and wound-healing complications using this approach compare favorably with published outcomes.

  20. Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Leng, W.; Zhong, S.

    2008-12-01

    In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. Adaptive mesh refinement (AMR) techniques allow local mesh refinement wherever high resolution is needed, while leaving other regions at relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implemented AMR techniques in 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (e.g., 7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (e.g., 14620 AMR elements vs. 65536 uniform elements). We further implemented a tracer method in the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results of van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (e.g., 7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers concentrate near the chemical boundaries to trace their evolution precisely. Our AMR code is thus well suited to thermal-chemical convection problems that require high resolution to resolve evolving chemical boundaries, such as entrainment problems [Sleep, 1988].
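
    The refinement decision in such codes reduces to a per-cell criterion; the sketch below flags cells whose temperature gradient exceeds a threshold, one common choice. The threshold and toy field are assumptions, and the octree bookkeeping and tracer redistribution are omitted.

    ```python
    # Minimal AMR-style refinement criterion: flag cells with steep temperature
    # gradients (e.g. plume edges, chemical boundaries) for subdivision.
    import numpy as np

    def refine_flags(T, threshold):
        """T: 2-D temperature field on the current level; returns a boolean mask
        of cells to split into children (4 in 2-D, 8 with an octree in 3-D)."""
        gy, gx = np.gradient(T)
        return np.hypot(gx, gy) > threshold

    x = np.arange(64)
    T = np.add.outer(np.linspace(0.0, 1.0, 64), np.zeros(64))  # conductive profile
    T += 0.2 * np.exp(-((x - 32) ** 2) / 20.0)                 # plume-like anomaly
    flags = refine_flags(T, threshold=0.03)
    print("cells flagged for refinement:", int(flags.sum()), "of", flags.size)
    ```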

  1. A conservation and biophysics guided stochastic approach to refining docked multimeric proteins.

    PubMed

    Akbal-Delibas, Bahar; Haspel, Nurit

    2013-01-01

    We introduce a protein docking refinement method that accepts complexes consisting of any number of monomeric units. The method uses a scoring function based on a tight coupling between evolutionary conservation, geometry and physico-chemical interactions. Understanding the role of protein complexes in the basic biology of organisms relies heavily on the detection of protein complexes and their structures. Different computational docking methods have been developed for this purpose; however, these methods are often not accurate, and their results need to be refined further to improve the geometry and energy of the resulting complexes. Also, although complexes in nature often have more than two monomers, most docking methods focus on dimers, since the computational complexity grows exponentially with each additional monomeric unit. Our results show that the refinement scheme can efficiently handle complexes with more than two monomers by biasing the results toward complexes with native interactions and filtering out false positives. Our refined complexes have better iRMSDs with respect to the known complexes, and lower energies, than the initial docked structures. Evolutionary conservation information allows us to bias our results toward possible functional interfaces, and the probabilistic selection scheme helps us escape local energy minima. We aim to incorporate our refinement method into a larger framework that also enables docking of multimeric complexes given only monomeric structures.
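
    The abstract does not spell out its probabilistic selection scheme; a common concrete choice in refinement protocols of this kind is a Metropolis-style acceptance rule, sketched below as an assumed stand-in rather than the authors' exact procedure.

    ```python
    # Metropolis-style acceptance: one standard way a probabilistic selection
    # scheme lets a refinement search escape local energy minima.
    import math
    import random

    def accept(e_old, e_new, temperature=1.0):
        """Always accept downhill moves; accept uphill moves with Boltzmann probability."""
        if e_new <= e_old:
            return True
        return random.random() < math.exp(-(e_new - e_old) / temperature)

    random.seed(0)
    energy = 10.0                                       # toy starting score
    for _ in range(100):
        proposal = energy + random.uniform(-1.0, 1.0)   # perturb complex, rescore
        if accept(energy, proposal):
            energy = proposal
    print("energy after probabilistic refinement:", round(energy, 2))
    ```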

  2. A dual-adaptive support-based stereo matching algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yin; Zhang, Yun

    2017-07-01

    Many stereo matching algorithms use fixed color thresholds and a rigid cross skeleton to segment supports (the Cross method), which, however, does not work well across different images. To address this issue, this paper proposes a novel dual-adaptive support (DAS)-based stereo matching method, which uses both appearance and shape information of a local region to segment supports automatically, and then integrates DAS-based cost aggregation with an absolute-difference-plus-census-transform cost, scanline optimization and disparity refinement to form a complete stereo matching system. The performance of the DAS method is evaluated on the Middlebury benchmark and by comparison with the Cross method. The results show that the average error for the DAS method is 25.06% lower than that for the Cross method, indicating that the proposed method is more accurate, has fewer parameters and is suitable for parallel computing.
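
    The matching cost named above (absolute difference plus census transform) can be sketched as follows; the window radius and blend weight are illustrative assumptions, not the paper's settings.

    ```python
    # Sketch of an AD-plus-census matching cost of the kind used in such pipelines.
    import numpy as np

    def census(img, r=2):
        """Bit-pack each pixel's neighborhood comparisons against its center."""
        h, w = img.shape
        out = np.zeros((h, w), dtype=np.uint64)
        pad = np.pad(img, r, mode='edge')
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if dy == 0 and dx == 0:
                    continue
                bit = pad[r + dy:r + dy + h, r + dx:r + dx + w] < img
                out = (out << np.uint64(1)) | bit.astype(np.uint64)
        return out

    def matching_cost(left, right_shifted, lam=0.3):
        """Blend absolute intensity difference with census Hamming distance."""
        ad = np.abs(left.astype(float) - right_shifted.astype(float))
        xor = census(left) ^ census(right_shifted)
        hamming = np.vectorize(lambda v: bin(int(v)).count('1'))(xor).astype(float)
        return lam * ad + (1.0 - lam) * hamming
    ```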

  3. Effect of Solutes on Grain Refinement of As-Cast Fe-4Si Alloy

    NASA Astrophysics Data System (ADS)

    Li, Ming; Li, Jian-Min; Zheng, Qing; Wang, Geoff; Zhang, Ming-Xing

    2018-06-01

    Grain size is one of the key microstructural factors that control the mechanical properties of steels. The present work aims to extend the theories of grain refinement established for cast light alloys to steel systems. Using a designed Fe-4 wt pct Si alloy (an all-ferrite structure throughout solidification), the effect of solutes on grain refinement and grain coarsening in ferritic systems was comprehensively investigated. Experimental results showed that boron (B), which is associated with the highest Q value (growth restriction factor) in ferrite, significantly refined the as-cast structure of the Fe-4 wt pct Si alloy. Cu and Mo, with low Q values, had no effect on grain refinement. However, although Y and Zr have relatively high Q values, the addition of these two solutes led to grain coarsening in the Fe-4Si alloy. Interpreting the results in terms of the growth restriction factor and the driving force for solidification led to the conclusion that, in addition to the grain-growth restriction effect, changes in the thermodynamic driving force for solidification due to solute addition also play a key role in grain refinement in ferritic alloys.

  5. Adaptive temporal refinement in injection molding

    NASA Astrophysics Data System (ADS)

    Karyofylli, Violeta; Schmitz, Mauritius; Hopmann, Christian; Behr, Marek

    2018-05-01

    Mold filling is an injection molding stage of great significance, because many defects of plastic components (e.g., weld lines, burrs or insufficient filling) can occur during this process step. It therefore plays an important role in determining the quality of the produced parts. Our goal is temporal refinement in the vicinity of the evolving melt front, in the context of 4D simplex-type space-time grids [1, 2]. This novel discretization method has an inherent flexibility to employ completely unstructured meshes with varying levels of resolution in both the spatial dimensions and the time dimension, thus allowing local time-stepping during the simulations. This can lead to higher simulation precision while preserving computational efficiency. A 3D benchmark case concerning the filling of a plate-shaped geometry is used to verify our numerical approach [3]. The simulation results obtained with the fully unstructured space-time discretization are compared with those obtained with the standard space-time method and with Moldflow simulation results. This example also provides reliable timing measurements and demonstrates the efficiency of filling simulations of complex 3D molds under adaptive temporal refinement.

  6. Aspects of Western Refining, Inc.'s Proposed Acquisition of Giant Industries, Inc.

    EIA Publications

    2006-01-01

    Presentation of company-level, non-proprietary data and relevant aggregate data for U.S. refinery capacity and gasoline marketing of Western Refining and Giant Industries to inform discussions of Western Refining Inc.'s proposed acquisition of Giant Industries Inc. for a total of $1.5 billion, which was announced August 28, 2006.

  7. A robust and efficient polyhedron subdivision and intersection algorithm for three-dimensional MMALE remapping

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong; Jia, Zupeng

    2017-06-01

    The Multi-Material Arbitrary Lagrangian Eulerian (MMALE) method is an effective way to simulate multi-material flow with severe surface deformation. Compared with the traditional Arbitrary Lagrangian Eulerian (ALE) method, the MMALE method allows multiple materials in a single cell, which overcomes the difficulties of the grid refinement process. In recent decades, much research has addressed the Lagrangian, rezoning and surface-reconstruction phases, but less attention has been paid to the multi-material remapping phase, especially for three-dimensional problems, owing to two complex geometric subproblems: polyhedron subdivision and polyhedron intersection. In this paper, we propose a "Clipping and Projecting" algorithm for polyhedron intersection whose basic idea comes from the commonly used method of Grandy (1999) [29] and Jia et al. (2013) [34]. Our new algorithm solves the geometric problem by an incremental modification of the topology based on segment-plane intersections. A comparison with Jia et al. (2013) [34] shows that our new method improves efficiency by 55% to 65% when calculating polyhedron intersections. Moreover, the instability caused by geometric degeneracy is thoroughly avoided because geometric integrity is preserved in the new algorithm. We also address the polyhedron subdivision process and describe an algorithm that automatically and precisely handles the various situations, including convex, non-convex and multiple subdivisions. Numerical studies indicate that with our polyhedron subdivision and intersection algorithms, volume conservation is exactly preserved in the remapping phase of the MMALE simulation.
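
    The segment-plane intersection kernel at the heart of both geometric subproblems can be illustrated on a single polygon face. The sketch below is a textbook clipping step under an assumed convention (keep the half-space n·x + d ≤ 0), not the paper's incremental topology update over full polyhedra.

    ```python
    # Clip a polygon face against a plane via segment-plane intersections.
    import numpy as np

    def clip_polygon(vertices, n, d):
        """Keep the part of the polygon satisfying n.x + d <= 0; vertices: (k, 3)."""
        out = []
        k = len(vertices)
        for i in range(k):
            a, b = vertices[i], vertices[(i + 1) % k]
            da, db = np.dot(n, a) + d, np.dot(n, b) + d
            if da <= 0:
                out.append(a)                  # vertex on the kept side
            if da * db < 0:                    # edge crosses the plane
                t = da / (da - db)             # intersection parameter along [a, b]
                out.append(a + t * (b - a))
        return np.array(out)

    square = np.array([[0, 0, 0], [2, 0, 0], [2, 2, 0], [0, 2, 0]], dtype=float)
    left_half = clip_polygon(square, n=np.array([1.0, 0.0, 0.0]), d=-1.0)  # keep x <= 1
    print(left_half)
    ```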

  8. Contactless heater floating zone refining and crystal growth

    NASA Technical Reports Server (NTRS)

    Lan, Chung-Wen (Inventor); Kou, Sindo (Inventor)

    1993-01-01

    Floating zone refining or crystal growth is carried out by providing rapid relative rotation of a feed rod and finish rod while providing heat to the junction between the two rods so that significant forced convection occurs in the melt zone between the two rods. The forced convection distributes heat in the melt zone to allow the rods to be melted through with a much shorter melt zone length than possible utilizing conventional floating zone processes. One of the rods can be rotated with respect to the other, or both rods can be counter-rotated, with typical relative rotational speeds of the rods ranging from 200 revolutions per minute (RPM) to 400 RPM or greater. Zone refining or crystal growth is carried out by traversing the melt zone through the feed rod.

  9. 40 CFR 409.20 - Applicability; description of the crystalline cane sugar refining subcategory.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... crystalline cane sugar refining subcategory. 409.20 Section 409.20 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409.20 Applicability; description of the crystalline cane sugar...

  10. 40 CFR 409.20 - Applicability; description of the crystalline cane sugar refining subcategory.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... crystalline cane sugar refining subcategory. 409.20 Section 409.20 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409.20 Applicability; description of the crystalline cane sugar...

  11. 40 CFR 409.20 - Applicability; description of the crystalline cane sugar refining subcategory.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... crystalline cane sugar refining subcategory. 409.20 Section 409.20 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409.20 Applicability; description of the crystalline cane sugar...

  12. 40 CFR 409.20 - Applicability; description of the crystalline cane sugar refining subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... crystalline cane sugar refining subcategory. 409.20 Section 409.20 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409.20 Applicability; description of the crystalline cane sugar...

  13. 40 CFR 409.20 - Applicability; description of the crystalline cane sugar refining subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... crystalline cane sugar refining subcategory. 409.20 Section 409.20 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining Subcategory § 409.20 Applicability; description of the crystalline cane sugar...

  14. What is a clinical pathway? Refinement of an operational definition to identify clinical pathway studies for a Cochrane systematic review.

    PubMed

    Lawal, Adegboyega K; Rotter, Thomas; Kinsman, Leigh; Machotta, Andreas; Ronellenfitsch, Ulrich; Scott, Shannon D; Goodridge, Donna; Plishka, Christopher; Groot, Gary

    2016-02-23

    Clinical pathways (CPWs) are a common component in the quest to improve the quality of healthcare. CPWs are used to reduce variation, improve quality of care, and maximize outcomes for specific groups of patients. An ongoing challenge is the operationalization of a definition of CPW in healthcare; this may be attributable both to differences in definition and to a lack of conceptualization in the field of clinical pathways. This correspondence article describes a process of refining an operational definition for CPW research and proposes an operational definition for future syntheses of the CPW literature. Following the approach proposed by Kinsman et al. (BMC Medicine 8(1):31, 2010) and Wieland et al. (Alternative Therapies in Health and Medicine 17(2):50, 2011), we used a four-stage process to generate a five-criteria checklist for the definition of CPWs. We then refined the operational definition through consensus, merging two of the checklist's criteria into a more inclusive criterion that accommodates CPW studies conducted in various healthcare settings. The four criteria of the resulting operational definition are: (1) the intervention was a structured multidisciplinary plan of care; (2) the intervention was used to translate guidelines or evidence into local structures; (3) the intervention detailed the steps in a course of treatment or care in a plan, pathway, algorithm, guideline, protocol or other 'inventory of actions' (i.e., the intervention had time-frames or criteria-based progression); and (4) the intervention aimed to standardize care for a specific population. An intervention meeting all four criteria was considered to be a CPW. The development of operational definitions for complex interventions is a useful approach to appraising and synthesizing evidence for policy development and quality improvement.

  15. Advanced Computational Methods for High-accuracy Refinement of Protein Low-quality Models

    NASA Astrophysics Data System (ADS)

    Zang, Tianwu

    Predicting the three-dimensional structure of proteins has been a major interest in modern computational biology. While many successful methods can generate models within 3-5 Å root-mean-square deviation (RMSD) of the experimentally solved structure, progress in refining these models has been slow. Effective methods are therefore urgently needed to bring low-quality models into higher-accuracy ranges (e.g., below 2 Å RMSD). In this thesis, I present several novel computational methods to address the high-accuracy refinement problem. First, an enhanced sampling method, named parallel continuous simulated tempering (PCST), is developed to accelerate molecular dynamics (MD) simulation. Second, two energy biasing methods, the Structure-Based Model (SBM) and the Ensemble-Based Model (EBM), are introduced to perform targeted sampling around important conformations. Third, a three-step method is developed to blindly select high-quality models along the MD trajectory. These methods work together to achieve significant refinement of low-quality models without any knowledge of the solved structure. Their effectiveness is examined in different applications. Using the PCST-SBM method, models with higher global distance test scores (GDT_TS) are generated and selected in MD simulations of 18 targets from the refinement category of the 10th Critical Assessment of Structure Prediction (CASP10). In addition, in the refinement test of two CASP10 targets using the PCST-EBM method, the results indicate that EBM may bring the initial model to even higher-quality levels. Furthermore, a multi-round PCST-SBM refinement protocol improves the model quality of a protein to a level sufficiently high for molecular replacement in X-ray crystallography. Our results underscore the crucial role of enhanced sampling in protein structure prediction and demonstrate that considerable improvement of low-accuracy structures is still achievable with current force fields.
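
    GDT_TS, the selection metric named above, averages the fraction of Cα atoms within fixed distance cutoffs of the reference structure; the sketch below assumes the structures are already optimally superposed and uses synthetic coordinates.

    ```python
    # GDT_TS-style model-quality score: mean fraction of C-alpha atoms within
    # 1, 2, 4 and 8 angstroms of the reference (superposition step omitted).
    import numpy as np

    def gdt_ts(model_ca, ref_ca):
        d = np.linalg.norm(model_ca - ref_ca, axis=1)      # per-residue distances
        return 100.0 * np.mean([(d <= c).mean() for c in (1.0, 2.0, 4.0, 8.0)])

    rng = np.random.default_rng(0)
    ref = 10.0 * rng.normal(size=(150, 3))                 # toy reference CA trace
    model = ref + rng.normal(scale=1.5, size=ref.shape)    # ~2.6 A RMSD perturbation
    print("GDT_TS ~", round(gdt_ts(model, ref), 1))
    ```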

  16. Comparison of lab, pilot, and industrial scale low consistency mechanical refining for improvements in enzymatic digestibility of pretreated hardwood.

    PubMed

    Jones, Brandon W; Venditti, Richard; Park, Sunkyu; Jameel, Hasan

    2014-09-01

    Mechanical refining has been shown to improve biomass enzymatic digestibility. In this study, industrial high-yield sodium carbonate hardwood pulp was subjected to lab, pilot and industrial refining to determine whether mechanical refining improves the enzymatic hydrolysis sugar conversion efficiency differently at different refining scales. Lab, pilot and industrial refining all increased the enzymatic digestibility of the lignocellulosic biomass relative to the unrefined material. Sugar conversion increased from 36% to 65% at 5 FPU/g of biomass with industrial refining at 67.0 kWh/t, which was more energy efficient than lab- and pilot-scale refining. There is a maximum in sugar conversion with respect to the amount of refining energy. Water retention value is a good predictor of improvements in sugar conversion for a given fiber source and composition; improvements in biomass digestibility from lab, pilot and industrial refining were similar when compared at equal water retention values. Published by Elsevier Ltd.

  17. Border-oriented post-processing refinement on detected vehicle bounding box for ADAS

    NASA Astrophysics Data System (ADS)

    Chen, Xinyuan; Zhang, Zhaoning; Li, Minne; Li, Dongsheng

    2018-04-01

    We investigate a new approach for improving the localization accuracy of detected vehicles in object detection for advanced driver assistance systems (ADAS). Specifically, we implement bounding box refinement as a post-processing step for state-of-the-art object detectors (Faster R-CNN, YOLOv2, etc.). The refinement is achieved by individually adjusting each border of the detected bounding box to its target location using a regression method. We train the regressor on HOG features, which perform well at capturing vehicle edges, and the regressor is independent of the CNN-based object detectors. Experimental results on the KITTI 2012 benchmark show improvements of up to 6% over the YOLOv2 and Faster R-CNN detectors at an IoU threshold of 0.8. The proposed refinement framework is also computationally light, processing one bounding box within a few milliseconds on a CPU. Further, this refinement method can be added to any object detector, especially fast but less accurate ones.
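
    The reported gains are measured against intersection-over-union (IoU), the standard localization metric, computed as below for axis-aligned boxes given as (x1, y1, x2, y2).

    ```python
    # Intersection-over-Union for axis-aligned boxes (x1, y1, x2, y2).
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    print(round(iou((0, 0, 10, 10), (2, 2, 12, 12)), 3))   # 0.471
    ```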

  18. Automatic brain tumor segmentation with a fast Mumford-Shah algorithm

    NASA Astrophysics Data System (ADS)

    Müller, Sabine; Weickert, Joachim; Graf, Norbert

    2016-03-01

    We propose a fully-automatic method for brain tumor segmentation that does not require any training phase. Our approach is based on a sequence of segmentations using the Mumford-Shah cartoon model with varying parameters. In order to come up with a very fast implementation, we extend the recent primal-dual algorithm of Strekalovskiy et al. (2014) from the 2D to the medically relevant 3D setting. Moreover, we suggest a new confidence refinement and show that it can increase the precision of our segmentations substantially. Our method is evaluated on 188 data sets with high-grade gliomas and 25 with low-grade gliomas from the BraTS14 database. Within a computation time of only three minutes, we achieve Dice scores that are comparable to state-of-the-art methods.
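
    The Dice score used in the evaluation measures the overlap between a predicted and a reference segmentation mask; a minimal version on boolean masks:

    ```python
    # Dice overlap between two boolean segmentation masks.
    import numpy as np

    def dice(a, b):
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True
    truth = np.zeros((8, 8), bool); truth[3:7, 3:7] = True
    print(round(dice(pred, truth), 3))   # 0.562
    ```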

  19. Preliminary application of a novel algorithm to monitor changes in pre-flight total peripheral resistance for prediction of post-flight orthostatic intolerance in astronauts

    NASA Astrophysics Data System (ADS)

    Arai, Tatsuya; Lee, Kichang; Stenger, Michael B.; Platts, Steven H.; Meck, Janice V.; Cohen, Richard J.

    2011-04-01

    Orthostatic intolerance (OI) is a significant challenge for astronauts after long-duration spaceflight. Depending on flight duration, 20-80% of astronauts suffer from post-flight OI, which is associated with reduced vascular resistance. This paper introduces a novel algorithm for continuously monitoring changes in total peripheral resistance (TPR) by processing the peripheral arterial blood pressure (ABP). To validate it, we applied the algorithm to pre-flight ABP data previously recorded from twelve astronauts ten days before launch. The TPR changes calculated by our algorithm were compared with TPR values estimated from cardiac output/heart rate before and after phenylephrine administration. The astronauts in the post-flight presyncopal group had smaller pre-flight TPR increases (1.66-fold) than those in the non-presyncopal group (2.15-fold). The trend in TPR changes calculated with our algorithm agreed with the TPR trend calculated using measured cardiac output in the previous study. Further data collection and algorithm refinement are needed for pre-flight detection of OI and for continuous TPR monitoring by analysis of peripheral arterial blood pressure.
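
    The reference TPR value against which the algorithm's trend was checked follows from the standard hemodynamic relation TPR = MAP / CO with CO = SV × HR (central venous pressure neglected); the values below are illustrative, not study data.

    ```python
    # Reference total peripheral resistance estimate: TPR = MAP / CO, CO = SV * HR.
    def tpr_mmhg_min_per_l(map_mmhg, stroke_volume_ml, heart_rate_bpm):
        co_l_per_min = stroke_volume_ml * heart_rate_bpm / 1000.0   # cardiac output
        return map_mmhg / co_l_per_min

    # Illustrative resting values: MAP 90 mmHg, SV 70 mL, HR 60 bpm -> ~21.4
    print(round(tpr_mmhg_min_per_l(90.0, 70.0, 60.0), 1))
    ```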

  20. Effect of Grain Refining on Defect Formation in DC Cast Al-Zn-Mg-Cu Alloy Billet

    NASA Astrophysics Data System (ADS)

    Nadella, Ravi; Eskin, Dmitry; Katgerman, Laurens

    In direct chill (DC) casting, the effect of grain refining on prominent defects such as hot cracking and macrosegregation remains poorly understood, especially for multi-component commercial aluminum alloys. In this work, DC casting experiments were conducted on a 7075 alloy with and without grain refining at two casting speeds. The grain refiner was introduced either in the launder or in the furnace. Concentration profiles of Zn, Cu and Mg, measured along the billet diameter, showed that increasing the casting speed raises segregation levels, whereas grain refining does not have a noticeable effect. However, the hot cracking tendency is significantly reduced by grain refining, and at the lower casting speed the crack was observed to terminate upon introduction of the grain refiner. These experimental results are correlated with microstructural observations such as grain size and morphology, and the occurrence of floating grains.

  1. Systemic inflammatory response syndrome-based severe sepsis screening algorithms in emergency department patients with suspected sepsis.

    PubMed

    Shetty, Amith L; Brown, Tristam; Booth, Tarra; Van, Kim Linh; Dor-Shiffer, Daphna E; Vaghasiya, Milan R; Eccleston, Cassanne E; Iredell, Jonathan

    2016-06-01

    Systemic inflammatory response syndrome (SIRS)-based severe sepsis screening algorithms have been used to stratify patients presenting to EDs with suspected sepsis and to initiate early broad-spectrum antibiotics. We aimed to investigate the performance of several of these algorithms on a cohort of suspected sepsis patients. We conducted a retrospective analysis of an ED-based prospective sepsis registry at a tertiary Sydney hospital, Australia. Definitions for sepsis were based on the 2012 Surviving Sepsis Campaign guidelines. Numerical values for SIRS criteria and ED investigation results were recorded in the registry at the trigger of the sepsis pathway. The performance of specific SIRS-based screening algorithms used at health institutions in the USA, Canada, UK, Australia and Ireland was investigated. Screening performance was measured on 747 patients presenting with suspected sepsis (401 with severe sepsis, prevalence 53.7%). The sensitivity and specificity of the algorithms for flagging severe sepsis ranged from 20.2% (95% CI 16.4-24.5%) to 82.3% (95% CI 78.2-85.9%) and from 57.8% (95% CI 52.4-63.1%) to 94.8% (95% CI 91.9-96.9%), respectively. Variations in SIRS values between the uncomplicated and severe sepsis cohorts were only minor, except for a higher mean lactate (>1.6 mmol/L, P < 0.01). We found that the Ireland and JFK Medical Center sepsis algorithms performed modestly in stratifying suspected sepsis patients into high-risk groups. Algorithms with lactate thresholds of >2 mmol/L rather than >4 mmol/L performed better. ED sepsis registry-based characterisation of patients may help refine future sepsis definitions. © 2016 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
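
    The sensitivity and specificity figures quoted above come straight from the screening confusion matrix. The counts in the sketch are illustrative assumptions, chosen only to land near one reported operating point (82.3% sensitivity, 57.8% specificity) given 401 severe and 346 uncomplicated cases; they are not the study's actual tallies.

    ```python
    # Screening-test metrics from a 2x2 confusion matrix.
    def screening_metrics(tp, fn, tn, fp):
        sensitivity = tp / (tp + fn)     # severe-sepsis cases correctly flagged
        specificity = tn / (tn + fp)     # uncomplicated cases correctly passed
        return sensitivity, specificity

    # Illustrative counts consistent with 401 severe / 346 uncomplicated patients.
    sens, spec = screening_metrics(tp=330, fn=71, tn=200, fp=146)
    print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
    ```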

  2. Programming Deep Brain Stimulation for Tremor and Dystonia: The Toronto Western Hospital Algorithms.

    PubMed

    Picillo, Marina; Lozano, Andres M; Kou, Nancy; Munhoz, Renato Puppi; Fasano, Alfonso

    2016-01-01

    Deep brain stimulation (DBS) is an effective treatment for essential tremor (ET) and dystonia. After surgery, a number of extensive programming sessions are performed, relying mainly on the neurologist's personal experience, as no programming guidelines have been provided so far beyond recommendations from groups of experts. Moreover, less information is available on the management of DBS in ET and dystonia than in Parkinson's disease. Our aim is to review the literature on initial and follow-up DBS programming procedures for ET and dystonia and to integrate the results with our current practice at Toronto Western Hospital (TWH) to develop standardized DBS programming protocols. We conducted a literature search of PubMed from inception to July 2014 with the keywords "balance", "bradykinesia", "deep brain stimulation", "dysarthria", "dystonia", "gait disturbances", "initial programming", "loss of benefit", "micrographia", "speech", "speech difficulties" and "tremor". Seventy-six papers were considered for this review. Based on the literature review and our experience at TWH, we refined three algorithms for the management of ET: (1) initial programming, (2) management of balance and speech issues, and (3) loss of stimulation benefit. We also present algorithms for the management of dystonia: (1) initial programming and (2) management of stimulation-induced hypokinesia (shuffling gait, micrographia and speech impairment). We propose five algorithms tailored to an individualized approach to managing ET and dystonia patients with DBS, and we encourage established as well as new DBS centers to apply these algorithms and test their clinical usefulness in supplementing current standards of care. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Refining the Eye: Dermatology and Visual Literacy

    ERIC Educational Resources Information Center

    Zimmermann, Corinne; Huang, Jennifer T.; Buzney, Elizabeth A.

    2016-01-01

    In 2014 the Museum of Fine Arts Boston and Harvard Medical School began a partnership focused on building visual literacy skills for dermatology residents in the Harvard Combined Dermatology Residency Program. "Refining the Eye: Art and Dermatology", a four-session workshop, took place in the museum's galleries and utilized the Visual…

  4. Knowledge Acquisition, Knowledge Programming, and Knowledge Refinement.

    ERIC Educational Resources Information Center

    Hayes-Roth, Frederick; And Others

    This report describes the principal findings and recommendations of a 2-year Rand research project on machine-aided knowledge acquisition and discusses the transfer of expertise from humans to machines, as well as the functions of planning, debugging, knowledge refinement, and autonomous machine learning. The relative advantages of humans and…

  5. Refinement of the Iowa Self-Assessment Inventory.

    ERIC Educational Resources Information Center

    Morris, Woodrow W.; And Others

    1990-01-01

    Used two samples of older adults (N=1,153; N=420) in refinement of Iowa Self-Assessment Inventory (ISAI). Factor analyses resulted in modification of original 6-scale inventory to inventory of 7 scales: economic resources, anxiety/depression, physical health, alienation, mobility, cognitive status, and social support. The original ISAI was…

  6. Re-refinement from deposited X-ray data can deliver improved models for most PDB entries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joosten, Robbie P.; Womack, Thomas; Vriend, Gert, E-mail: vriend@cmbi.ru.nl

    2009-02-01

    An evaluation of validation and real-space intervention possibilities for improving existing automated (re-)refinement methods. The deposition of X-ray data along with the customary structural models defining PDB entries makes it possible to apply large-scale re-refinement protocols to these entries, thus giving users the benefit of improvements in X-ray methods that have occurred since the structure was deposited. Automated gradient refinement is an effective method to achieve this goal, but real-space intervention is most often required in order to adequately address problems detected by structure-validation software. In order to improve the existing protocol, automated re-refinement was combined with structure validation and difference-density peak analysis to produce a catalogue of problems in PDB entries that are amenable to automatic correction. It is shown that re-refinement can be effective in producing improvements, which are often associated with the systematic use of the TLS parameterization of B factors, even for relatively new and high-resolution PDB entries, while the accompanying manual or semi-manual map analysis and fitting steps show good prospects for eventual automation. It is proposed that the potential for simultaneous improvements in methods and in re-refinement results be further encouraged by broadening the scope of depositions to include refinement metadata and ultimately primary rather than reduced X-ray data.

  7. 77 FR 52021 - Proposed CERCLA Administrative Settlement Agreement and Order on Consent for the Mercury Refining...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-28

    ... and Order on Consent for the Mercury Refining Superfund Site, Towns of Guilderland and Colonie, Albany... Management of Michigan, Inc. (hereafter ``Settling Parties'') pertaining to the Mercury Refining Superfund... Superfund Mercury Refining Superfund Site Special Account, which combined total $79,028.49. Each Settling...

  8. Solution of nonlinear multivariable constrained systems using a gradient projection digital algorithm that is insensitive to the initial state

    NASA Technical Reports Server (NTRS)

    Hargrove, A.

    1982-01-01

    Optimal digital control of nonlinear multivariable constrained systems was studied. The optimal controller in the form of an algorithm was improved and refined by reducing running time and storage requirements. A particularly difficult system of nine nonlinear state variable equations was chosen as a test problem for analyzing and improving the controller. Lengthy analysis, modeling, computing and optimization were accomplished. A remote interactive teletype terminal was installed. Analysis requiring computer usage of short duration was accomplished using Tuskegee's VAX 11/750 system.
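
    The gradient projection idea behind the controller can be shown on a toy box-constrained problem: step along the negative gradient, then project back onto the feasible set. The objective and constraints below are stand-ins for illustration, not the nine-equation test system.

    ```python
    # Gradient projection on a box-constrained toy problem:
    # x_{k+1} = P_C(x_k - step * grad f(x_k)), with P_C a clip onto the box.
    import numpy as np

    def gradient_projection(grad, x0, lo, hi, step=0.1, iters=200):
        x = np.clip(x0, lo, hi)
        for _ in range(iters):
            x = np.clip(x - step * grad(x), lo, hi)   # descend, then project
        return x

    # minimize ||x - c||^2 over the box [0, 1]^3, with c outside the box
    c = np.array([1.5, -0.3, 0.4])
    x_star = gradient_projection(lambda x: 2.0 * (x - c), np.zeros(3), 0.0, 1.0)
    print(x_star)    # -> [1.0, 0.0, 0.4]
    ```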

  9. Implementation of a parallel protein structure alignment service on cloud.

    PubMed

    Hung, Che-Lun; Lin, Yaw-Ling

    2013-01-01

    Protein structure alignment has become an important strategy by which to identify evolutionary relationships between protein sequences. Several alignment tools are currently available for online comparison of protein structures. In this paper, we propose a parallel protein structure alignment service based on the Hadoop distributed framework. This service includes a protein structure alignment algorithm, a refinement algorithm, and a MapReduce programming model; the refinement algorithm improves the accuracy of the initial alignments. To process vast numbers of protein structures in parallel, the alignment and refinement algorithms are implemented using MapReduce. We analyzed and compared the structure alignments produced by different methods using a dataset randomly selected from the PDB database. The experimental results verify that the proposed algorithm refines the resulting alignments more accurately than existing algorithms. Meanwhile, the computational performance of the proposed service scales with the number of processors used in our cloud platform.

  11. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  12. Implementing technical refinement in high-level athletics: exploring the knowledge schemas of coaches.

    PubMed

    Kearney, Philip E; Carson, Howie J; Collins, Dave

    2018-05-01

    This paper explores the approaches adopted by high-level field athletics coaches when attempting to refine an athlete's already well-established technique (long jump, triple jump and javelin throwing). Six coaches, who had all coached multiple athletes to multiple major championships, took part in semi-structured interviews focused upon a recent example of technique refinement. Data were analysed using a thematic content analysis. The coaching tools reported were generally consistent with those advised by the existing literature, focusing on attaining "buy-in", utilising part-practice, restoring movement automaticity and securing performance under pressure. Five of the six coaches reported using a systematic sequence of stages to implement the refinement, although the number and content of these stages varied between them. Notably, however, no formal sources of knowledge (e.g., coach education or training) informed the coaches' decision making; their decisions were largely based on experience both within and outside the sporting domain. The data offer a useful stimulus for reflection amongst sport practitioners confronted by the problem of technique refinement. Certainly, the limited awareness of existing guidelines on technique refinement expressed by the coaches emphasises a need for further collaborative work by researchers and coach educators to disseminate best practice.

  13. Atomic modeling of cryo-electron microscopy reconstructions--joint refinement of model and imaging parameters.

    PubMed

    Chapman, Michael S; Trzynka, Andrew; Chapman, Brynmor K

    2013-04-01

    When refining the fit of component atomic structures into electron microscopic reconstructions, use of a resolution-dependent atomic density function makes it possible to jointly optimize the atomic model and imaging parameters of the microscope. Atomic density is calculated by one-dimensional Fourier transform of atomic form factors convoluted with a microscope envelope correction and a low-pass filter, allowing refinement of imaging parameters such as resolution, by optimizing the agreement of calculated and experimental maps. A similar approach allows refinement of atomic displacement parameters, providing indications of molecular flexibility even at low resolution. A modest improvement in atomic coordinates is possible following optimization of these additional parameters. Methods have been implemented in a Python program that can be used in stand-alone mode for rigid-group refinement, or embedded in other optimizers for flexible refinement with stereochemical restraints. The approach is demonstrated with refinements of virus and chaperonin structures at resolutions of 9 through 4.5 Å, representing regimes where rigid-group and fully flexible parameterizations are appropriate. Through comparisons to known crystal structures, flexible fitting by RSRef is shown to be an improvement relative to other methods and to generate models with all-atom rms accuracies of 1.5-2.5 Å at resolutions of 4.5-6 Å. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Kinetics of Sub-Micron Grain Size Refinement in 9310 Steel

    NASA Astrophysics Data System (ADS)

    Kozmel, Thomas; Chen, Edward Y.; Chen, Charlie C.; Tin, Sammy

    2014-05-01

    Recent efforts have focused on the development of novel manufacturing processes capable of producing microstructures dominated by sub-micron grains. For structural applications, grain refinement has been shown to enhance mechanical properties such as strength, fatigue resistance, and fracture toughness. Through control of the thermo-mechanical processing parameters, dynamic recrystallization mechanisms were used to produce microstructures consisting of sub-micron grains in 9310 steel. Starting with initial bainitic grain sizes of 40 to 50 μm, various levels of grain refinement were observed following hot deformation of 9310 steel samples at temperatures of 755 K to 922 K (482 °C to 649 °C) and strain rates of 1 to 0.001/s. The resulting deformation microstructures were characterized using scanning electron microscopy and electron backscatter diffraction techniques to quantify the extent of carbide coarsening and grain refinement occurring during deformation. Microstructural models based on the Zener-Hollomon parameter were developed and modified to include the effect of ferrite/carbide interactions within the system. These models were shown to effectively correlate microstructural attributes with the thermo-mechanical processing parameters.
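
    For reference, the Zener-Hollomon (temperature-compensated strain rate) parameter on which such microstructural models are built is, in LaTeX form:

    ```latex
    Z = \dot{\varepsilon}\,\exp\!\left(\frac{Q}{RT}\right)
    ```

    where \dot{\varepsilon} is the strain rate, Q the activation energy for deformation, R the gas constant and T the absolute temperature; higher Z (colder, faster deformation) generally yields finer dynamically recrystallized grains.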

  15. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.

  16. 40 CFR 80.1339 - Who is not eligible for the provisions for small refiners?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... section, the refiner may not generate gasoline benzene credits under § 80.1275(b)(3) for any of its...

  17. 40 CFR 80.1339 - Who is not eligible for the provisions for small refiners?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... section, the refiner may not generate gasoline benzene credits under § 80.1275(b)(3) for any of its...

  18. 40 CFR 80.1339 - Who is not eligible for the provisions for small refiners?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... section, the refiner may not generate gasoline benzene credits under § 80.1275(b)(3) for any of its...

  19. On Correspondence of BRST-BFV, Dirac, and Refined Algebraic Quantizations of Constrained Systems

    NASA Astrophysics Data System (ADS)

    Shvedov, O. Yu.

    2002-11-01

    The correspondence between BRST-BFV, Dirac, and refined algebraic (group averaging, projection operator) approaches to quantizing constrained systems is analyzed. For the closed-algebra case, it is shown that the component of the BFV wave function corresponding to the maximal (minimal) value of the number of ghosts and antighosts in the Schrödinger representation may be viewed as a wave function in the refined algebraic (Dirac) quantization approach. The Giulini-Marolf group averaging formula for the inner product in the refined algebraic quantization approach is obtained from the Batalin-Marnelius prescription for the BRST-BFV inner product, which should generally be modified due to topological problems. The considered prescription for the correspondence of states is observed to be applicable to the open-algebra case. The refined algebraic quantization approach is then generalized to the case of nontrivial structure functions. A simple example is discussed. The correspondence of observables for different quantization methods is also investigated.

  20. Application of Al-2La-1B Grain Refiner to Al-10Si-0.3Mg Casting Alloy

    NASA Astrophysics Data System (ADS)

    Jing, Lijun; Pan, Ye; Lu, Tao; Li, Chenlin; Pi, Jinhong; Sheng, Ningyue

    2018-05-01

    This paper reports the application and microstructure-refining effect of an Al-2La-1B grain refiner in an Al-10Si-0.3Mg casting alloy. Compared with the traditional Al-5Ti-1B refiner, the Al-2La-1B refiner shows better performance in refining the grains of the Al-10Si-0.3Mg alloy. Transmission electron microscopy analysis suggests that the crystal structure features of LaB6 are beneficial to the heterogeneous nucleation of α-Al grains. Regarding mechanical performance, the tensile properties of the Al-10Si-0.3Mg casting alloy are markedly improved owing to the refined microstructures.

  1. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    NASA Astrophysics Data System (ADS)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms (EAs) are inspired by nature, and within little more than a decade hundreds of papers have reported their successful application. This paper reviews the Selfish Gene Algorithm (SFGA), one of the more recent EAs, inspired by the selfish gene theory, an interpretation of Darwinian ideas proposed by the biologist Richard Dawkins in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to give an overview of the concepts behind the SFGA as well as its opportunities and challenges. Accordingly, the history of the algorithm and the steps involved in it are discussed, and its different applications are evaluated together with an analysis of these applications.
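
    A minimal sketch of the SFGA's core loop, under its usual formulation: the population exists only "virtually" as per-locus allele frequencies, two sampled individuals compete, and the winner's alleles are reinforced. The fitness function, learning rates and bounds below are illustrative assumptions (toy OneMax problem), not a definitive implementation.

    ```python
    # Selfish Gene Algorithm sketch: evolve the allele frequencies of a virtual
    # population via pairwise tournaments; fitness is the toy OneMax objective.
    import numpy as np

    rng = np.random.default_rng(0)
    n_loci, reward, penalty = 20, 0.05, 0.025
    p = np.full(n_loci, 0.5)                   # P(allele = 1) at each locus

    def sample_individual():
        return (rng.random(n_loci) < p).astype(float)

    def fitness(genome):
        return genome.sum()                    # OneMax: maximize the number of 1s

    for _ in range(300):
        a, b = sample_individual(), sample_individual()
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        p += reward * (winner - p)             # reinforce the winner's alleles
        p -= penalty * (loser - p)             # weaken the loser's alleles
        p = np.clip(p, 0.02, 0.98)             # keep every allele reachable

    print("best-of-10 sampled fitness:", max(fitness(sample_individual()) for _ in range(10)))
    ```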

  2. Ice surface temperature retrieval from AVHRR, ATSR, and passive microwave satellite data: Algorithm development and application

    NASA Technical Reports Server (NTRS)

    Key, Jeff; Maslanik, James; Steffen, Konrad

    1994-01-01

    During the first half of our second project year we have accomplished the following: (1) acquired a new AVHRR data set for the Beaufort Sea area spanning an entire year; (2) acquired additional ATSR data for the Arctic and Antarctic now totaling over seven months; (3) refined our AVHRR Arctic and Antarctic ice surface temperature (IST) retrieval algorithm, including work specific to Greenland; (4) developed ATSR retrieval algorithms for the Arctic and Antarctic, including work specific to Greenland; (5) investigated the effects of clouds and the atmosphere on passive microwave 'surface' temperature retrieval algorithms; (6) generated surface temperatures for the Beaufort Sea data set, both from AVHRR and SSM/I; and (7) continued work on compositing GAC data for coverage of the entire Arctic and Antarctic. During the second half of the year we will continue along these same lines, and will undertake a detailed validation study of the AVHRR and ATSR retrievals using LEADEX and the Beaufort Sea year-long data. Cloud masking methods used for the AVHRR will be modified for use with the ATSR. Methods of blending in situ and satellite-derived surface temperature data sets will be investigated.

  3. Comprehensive Data Collected from the Petroleum Refining Sector

    EPA Pesticide Factsheets

    On April 1, 2011 EPA sent a comprehensive industry-wide information collection request (ICR) to all facilities in the U.S. petroleum refining industry. EPA has received this ICR data and compiled these data into databases and spreadsheets for the web

  4. Modeling Island-Growth Capture Zone Distributions (CZD) with the Generalized Wigner Distribution (GWD): New Developments in Theory and Experiment

    NASA Astrophysics Data System (ADS)

    Pimpinelli, Alberto; Einstein, T. L.; González, Diego Luis; Sathiyanarayanan, Rajesh; Hamouda, Ajmi Bh.

    2011-03-01

    Earlier we showed [PRL 99, 226102 (2007)] that the CZD in growth could be well described by P(s) = a s^β exp(−b s²), where s is the CZ area divided by its average value. Painstaking simulations by Amar's [PRE 79, 011602 (2009)] and Evans's [PRL 104, 149601 (2010)] groups showed inadequacies in our mean-field Fokker-Planck argument relating β to the critical nucleus size. We refine our derivation to retrieve their β ~ i + 2 [PRL 104, 149602 (2010)]. We discuss applications of this formula and methodology to experiments on Ge/Si(001) and on various organics on SiO2, as well as to kinetic Monte Carlo studies of homoepitaxial growth on Cu(100) with codeposited impurities of different sorts. In contrast to theory, there can be significant changes to β with coverage. Some experiments also show temperature dependence. Supported by NSF-MRSEC at UMD, Grant DMR 05-20471.
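
    For readers who want to evaluate the distribution quoted above, the sketch below computes P(s) = a s^β exp(−b s²) with a and b fixed by the standard constraints of unit normalization and unit mean capture-zone size; these closed forms follow from Gaussian integrals and are a common convention in the CZD literature, though the abstract itself does not spell them out.

    ```python
    # Generalized Wigner distribution P(s) = a * s**beta * exp(-b * s**2),
    # with a and b determined by  integral P(s) ds = 1  and  <s> = 1.
    import math

    def gwd(s, beta):
        g1 = math.gamma((beta + 1) / 2)
        g2 = math.gamma((beta + 2) / 2)
        b = (g2 / g1) ** 2                     # from the unit-mean constraint
        a = 2 * b ** ((beta + 1) / 2) / g1     # from normalization
        return a * s ** beta * math.exp(-b * s ** 2)

    # e.g. critical nucleus size i = 1  ->  beta ~ i + 2 = 3
    print([round(gwd(s, beta=3), 4) for s in (0.5, 1.0, 1.5)])
    ```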

  5. Abstraction of complex concepts with a refined partial-area taxonomy of SNOMED

    PubMed Central

    Wang, Yue; Halper, Michael; Wei, Duo; Perl, Yehoshua; Geller, James

    2012-01-01

    An algorithmically derived abstraction network, called the partial-area taxonomy, for a SNOMED hierarchy has led to the identification of concepts considered complex. The designation "complex" is arrived at automatically on the basis of structural analyses of overlap among the constituent concept groups of the partial-area taxonomy. Such complex concepts, called overlapping concepts, constitute a tangled portion of a hierarchy and can be obstacles to users trying to gain an understanding of the hierarchy's content. A new methodology for partitioning the entire collection of overlapping concepts into singly-rooted groups that are more manageable to work with and comprehend is presented. Different kinds of overlapping concepts with varying degrees of complexity are identified. This leads to an abstract model of the overlapping concepts called the disjoint partial-area taxonomy, which serves as a vehicle for enhanced, high-level display. The methodology is demonstrated with an application to SNOMED's Specimen hierarchy. Overall, the resulting disjoint partial-area taxonomy offers a refined view of the hierarchy's structural organization and conceptual content that can aid users, such as maintenance personnel, working with SNOMED. The utility of the disjoint partial-area taxonomy as the basis for a SNOMED auditing regimen is presented in a companion paper. PMID:21878396
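
    As a schematic illustration only (the paper's actual partitioning methodology is more involved), the sketch below groups overlapping concepts by the set of partial-area roots from which they descend and then finds each group's local roots; the toy concepts and hierarchy are invented.

    ```python
    # Schematic: overlapping concepts with identical root sets form one group;
    # a group's local roots are its members with no parent inside the group.
    from collections import defaultdict

    # toy input: concept -> set of partial-area roots reaching it (assumed given)
    roots_of = {
        "c1": frozenset({"A", "B"}), "c2": frozenset({"A", "B"}),
        "c3": frozenset({"A", "B", "C"}),
    }
    parents = {"c1": set(), "c2": {"c1"}, "c3": {"c2"}}

    groups = defaultdict(set)
    for concept, roots in roots_of.items():
        groups[roots].add(concept)             # same root set -> same group

    for roots, members in groups.items():
        local_roots = {c for c in members if not (parents[c] & members)}
        print(sorted(roots), sorted(members), "roots:", sorted(local_roots))
    ```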

  6. Refinement of ground reference data with segmented image data

    NASA Technical Reports Server (NTRS)

    Robinson, Jon W.; Tilton, James C.

    1991-01-01

    One way to determine ground reference data (GRD) for satellite remote sensing data is to photo-interpret low-altitude aerial photographs, digitize the cover types on a digitizing tablet, and register them to 7.5-minute U.S.G.S. maps (which were themselves digitized). The resulting GRD can be registered to the satellite image, or vice versa. Unfortunately, there are many opportunities for error when using a digitizing tablet, and the resolution of the edges in the GRD depends on the spacing of the points selected on the tablet. One consequence is that, when overlaid on the image, errors and missed detail in the GRD become evident. An approach is discussed for correcting these errors and adding detail to the GRD through a highly interactive, visually oriented process. This process involves overlaid visual displays of the satellite image data, the GRD, and a segmentation of the satellite image data. Several prototype programs were implemented which provide a means of taking a segmented image and using the edges from the reference data to mask out those segment edges that are beyond a certain distance from the reference-data edges. Then, using the reference-data edges as a guide, those remaining segment edges that are judged not to be image versions of the reference edges are manually marked and removed. The prototype programs that were developed and the algorithmic refinements that facilitate execution of this task are described.
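
    The distance-based masking step described above can be sketched with a Euclidean distance transform: segment edges farther than a tolerance from any reference edge are discarded. Array contents and the tolerance below are illustrative.

    ```python
    # Keep only segment edges within max_dist pixels of a reference edge.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    seg_edges = np.zeros((8, 8), bool)
    ref_edges = np.zeros((8, 8), bool)
    seg_edges[2, :] = True          # a candidate segment edge
    ref_edges[3, :] = True          # the digitized reference edge nearby
    seg_edges[6, 1] = True          # a spurious edge far from any reference

    # distance (in pixels) from each pixel to the nearest reference-edge pixel
    dist_to_ref = distance_transform_edt(~ref_edges)

    max_dist = 2.0                  # tolerance; would be tuned interactively
    kept = seg_edges & (dist_to_ref <= max_dist)
    print(np.argwhere(kept))        # the far-away edge pixel is masked out
    ```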

  7. Overview of refinement procedures within REFMAC5: utilizing data from different sources.

    PubMed

    Kovalevskiy, Oleg; Nicholls, Robert A; Long, Fei; Carlon, Azzurra; Murshudov, Garib N

    2018-03-01

    Refinement is a process that involves bringing into agreement the structural model, available prior knowledge and experimental data. To achieve this, the refinement procedure optimizes a posterior conditional probability distribution of model parameters, including atomic coordinates, atomic displacement parameters (B factors), scale factors, parameters of the solvent model and twin fractions in the case of twinned crystals, given observed data such as observed amplitudes or intensities of structure factors. A library of chemical restraints is typically used to ensure consistency between the model and the prior knowledge of stereochemistry. If the observation-to-parameter ratio is small, for example when diffraction data only extend to low resolution, the Bayesian framework implemented in REFMAC5 uses external restraints to inject additional information extracted from structures of homologous proteins, prior knowledge about secondary-structure formation and even data obtained using different experimental methods, for example NMR. The refinement procedure also generates the "best" weighted electron-density maps, which are useful for further model (re)building. Here, the refinement of macromolecular structures using REFMAC5 and related tools distributed as part of the CCP4 suite is discussed.
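
    Conceptually, the optimization described above is maximum a posteriori estimation: minimizing −log P(model|data) = −log L(data|model) − log P(model), i.e. a data term plus restraint (prior) terms. The toy sketch below is not REFMAC5 code; it is a one-parameter Gaussian stand-in showing how the weighted target's minimizer sits between the observation and the restraint's ideal value.

    ```python
    # Toy MAP refinement target: weighted data term + stereochemical restraint.
    from scipy.optimize import minimize_scalar

    obs, sigma_obs = 1.30, 0.05          # an "observed" quantity (toy data)
    ideal, sigma_restraint = 1.23, 0.02  # restraint value (e.g. a bond length)

    def neg_log_posterior(x, w=1.0):
        data_term = w * 0.5 * ((x - obs) / sigma_obs) ** 2        # -log likelihood
        restraint = 0.5 * ((x - ideal) / sigma_restraint) ** 2    # -log prior
        return data_term + restraint

    res = minimize_scalar(neg_log_posterior)
    print(res.x)   # lands between obs and ideal, pulled toward the tighter sigma
    ```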

  8. Chiral pathways in DNA dinucleotides using gradient optimized refinement along metastable borders

    NASA Astrophysics Data System (ADS)

    Romano, Pablo; Guenza, Marina

    We present a study of DNA breathing fluctuations using Markov state models (MSMs) with our novel refinement procedure. MSMs have become a favored method of building kinetic models; however, their accuracy has always depended on using a significant number of microstates, making the method costly. We present a method that optimizes macrostates by refining their borders with respect to the gradient along the free-energy surface. As the separation between macrostates contains the highest discretization errors, this method corrects for errors produced by limited microstate sampling. Using our refined MSM methods, we investigate DNA breathing fluctuations, the thermally induced conformational changes in native B-form DNA. We ran several microsecond MD simulations of DNA dinucleotides of varying sequences, to include sequence and polarity effects, and analyzed them with our refined MSMs to investigate the conformational pathways inherent in the unstacking of DNA bases. Our kinetic analysis has shown preferential chirality in unstacking pathways that may be critical to how proteins interact with single-stranded regions of DNA. These breathing dynamics can help elucidate the connection between conformational changes and key mechanisms within protein-DNA recognition. Supported by the NSF Chemistry Division (Theoretical Chemistry), the Division of Physics (Condensed Matter: Material Theory), and XSEDE.
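
    The kinetic analysis above rests on standard MSM machinery: discretize the trajectory into states, count transitions at a lag time, row-normalize, and read relaxation timescales off the eigenvalues. The sketch below shows only that generic estimation step; the gradient-based border refinement itself is not reproduced, and the toy trajectory is invented for illustration.

    ```python
    # Minimal MSM estimation for a discretized trajectory.
    import numpy as np

    traj = np.array([0, 0, 1, 1, 2, 1, 0, 0, 1, 2, 2, 1, 0])  # microstate labels
    n_states, tau = 3, 1

    C = np.zeros((n_states, n_states))
    for a, b in zip(traj[:-tau], traj[tau:]):
        C[a, b] += 1                       # transition counts at lag tau

    T = C / C.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    evals = np.sort(np.linalg.eigvals(T).real)[::-1]
    timescales = -tau / np.log(np.clip(evals[1:], 1e-12, 1 - 1e-12))
    print(T.round(2), timescales.round(2))
    ```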

  9. A Review of Depth and Normal Fusion Algorithms

    PubMed Central

    Štolc, Svorad; Pock, Thomas

    2018-01-01

    Geometric surface information such as depth maps and surface normals can be acquired by various methods such as stereo, light fields, shape from shading and photometric stereo techniques. We compare several algorithms which combine depth with surface normal information in order to reconstruct a refined depth map. The reasons for performance differences are examined from the perspective of alternative formulations of surface normals for depth reconstruction. We review and analyze methods in a systematic way. Based on our findings, we introduce a new generalized fusion method, which is formulated as a least squares problem and outperforms previous methods in the depth error domain by introducing a novel normal weighting that performs closer to the geodesic distance measure. Furthermore, a novel method is introduced based on Total Generalized Variation (TGV) which further outperforms previous approaches in terms of the geodesic normal distance error and maintains comparable quality in the depth error domain. PMID:29389903
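
    The least-squares formulation mentioned above can be illustrated compactly in one dimension: stack a depth-fidelity term and a normal-derived slope term into a single linear system, min_z λ||z − z0||² + ||Dz − g||². The weighting, data, and discretization below are illustrative, not the paper's formulation, and the TGV variant is not shown.

    ```python
    # 1-D depth/normal fusion as linear least squares.
    import numpy as np

    n = 50
    x = np.linspace(0, 1, n)
    z_true = np.sin(2 * np.pi * x)
    z0 = z_true + 0.3 * np.random.default_rng(0).normal(size=n)   # noisy depth
    g = np.gradient(z_true, x)[:-1]              # slopes implied by the normals

    h = x[1] - x[0]
    D = (np.eye(n - 1, n, 1) - np.eye(n - 1, n)) / h   # forward differences
    lam = 0.1                                          # depth-fidelity weight

    A = np.vstack([np.sqrt(lam) * np.eye(n), D])
    b = np.concatenate([np.sqrt(lam) * z0, g])
    z = np.linalg.lstsq(A, b, rcond=None)[0]
    print(np.abs(z - z_true).mean(), "<", np.abs(z0 - z_true).mean())
    ```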

  10. Striving for Normalcy after Lower Extremity Reconstruction with Free Tissue: The Role of Secondary Esthetic Refinements.

    PubMed

    Nelson, Jonas A; Fischer, John P; Haddock, Nicholas T; Mackay, Duncan; Wink, Jason D; Newman, Andrew S; Levin, L Scott; Kovach, Stephen J

    2016-02-01

    Many patients with successful lower extremity salvage have postoperative functional and esthetic concerns. Such concerns range from contour irregularity preventing proper shoe-fitting to esthetic concerns involving color, contour, and texture match. The purpose of this study is to determine the overall incidence as well as factors associated with an increased likelihood of undergoing secondary, esthetic refinements of lower extremity free flaps and to review current revision techniques. All patients undergoing lower extremity soft tissue coverage for limb salvage procedures between January 2007 and June 2013 at a single institution were included in the analysis. Patients who underwent secondary refinements for lower extremity free flaps were compared with patients not undergoing secondary procedures. During the study period, 152 patients underwent reconstruction and were eligible for inclusion. Of these, 32 (21.1%) patients underwent secondary, esthetic revisions. Few differences in patient or case characteristics were noted, although revision patients trended toward being younger, having lower body mass index, with defects secondary to acute trauma located below the ankle. The most common revision was complex soft tissue rearrangement or surgical flap debulking/direct excision (87.5% of patients), followed by scar revision (12.5%), suction-assisted lipectomy (3.1%), laser scar revision (3.1%), and tissue expansion with local tissue rearrangement (3.1%). A significant portion of patients desire secondary revisions following the initial procedure. This is especially true of younger patients with below ankle reconstruction. In many patients, an esthetic consideration should not be of secondary concern, but should be part of the ultimate reconstructive algorithm for lower extremity limb salvage.

  11. Evaluating and Refining High Throughput Tools for Toxicokinetics

    EPA Science Inventory

    This poster summarizes efforts of the Chemical Safety for Sustainability's Rapid Exposure and Dosimetry (RED) team to facilitate the development and refinement of toxicokinetics (TK) tools to be used in conjunction with the high throughput toxicity testing data generated as a par...

  12. Precipitation process in a Mg–Gd–Y alloy grain-refined by Al addition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Jichun; CAST Cooperative Research Centre, Department of Materials Engineering, Monash University, Victoria 3800; Zhu, Suming, E-mail: suming.zhu@monash.edu

    2014-02-15

    The precipitation process in Mg–10Gd–3Y (wt.%) alloy grain-refined by 0.8 wt.% Al addition has been investigated by transmission electron microscopy. The alloy was given a solution treatment at 520 °C for 6 h plus 550 °C for 7 h before ageing at 250 °C. Plate-shaped intermetallic particles with the 18R-type long-period stacking ordered structure were observed in the solution-treated state. Upon isothermal ageing at 250 °C, the following precipitation sequence was identified for the α-Mg supersaturated solution: β″ (D019) → β′ (bco) → β1 (fcc) → β (fcc). The observed precipitation process and age-hardening response in the Al grain-refined Mg–10Gd–3Y alloy are compared with those reported in the Zr grain-refined counterpart.
    Highlights:
    • The precipitation process in Mg–10Gd–3Y–0.8Al (wt.%) alloy has been investigated.
    • Particles with the 18R-type LPSO structure were observed in the solution state.
    • Upon ageing at 250 °C, the precipitation sequence is β″ → β′ → β1 (fcc) → β.
    • The Al grain-refined alloy has a lower hardness than the Zr grain-refined counterpart.

  13. Unsupervised motion-based object segmentation refined by color

    NASA Astrophysics Data System (ADS)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    ... chance of the wrong position producing a good match. Consequently, a number of methods exist which combine motion and colour segmentation. These methods use colour segmentation as a base for the motion segmentation and estimation, or perform an independent colour segmentation in parallel which is in some way combined with the motion segmentation. The presented method uses both techniques to complement each other by first segmenting on motion cues and then refining the segmentation with colour. To our knowledge, few methods exist which adopt this approach. One example is [meshrefine]. That method uses an irregular mesh, which hinders its efficient implementation in consumer electronics devices. Furthermore, it produces a foreground/background segmentation, while our applications call for the segmentation of multiple objects.
    NEW METHOD: As mentioned above, we start with motion segmentation and afterwards refine the edges of this segmentation with a pixel-resolution colour segmentation method. There are several reasons for this approach:
    + Motion segmentation does not produce the oversegmentation which colour segmentation methods normally produce, because objects are more likely to have colour discontinuities than motion discontinuities. In this way, the colour segmentation only has to be done at the edges of segments, confining the colour segmentation to a smaller part of the image. In such a part, it is more likely that the colour of an object is homogeneous.
    + This approach restricts the computationally expensive pixel-resolution colour segmentation to a subset of the image. Together with the very efficient 3DRS motion estimation algorithm, this helps to reduce the computational complexity.
    + The motion cue alone is often enough to reliably distinguish objects from one another and from the background.
    To obtain the motion vector fields, a variant of the 3DRS block-based motion estimator which analyses three frames of input was used. The 3DRS motion estimator is known ...

  14. Automated protein structure modeling in CASP9 by I-TASSER pipeline combined with QUARK-based ab initio folding and FG-MD-based structure refinement

    PubMed Central

    Xu, Dong; Zhang, Jian; Roy, Ambrish; Zhang, Yang

    2011-01-01

    I-TASSER is an automated pipeline for protein tertiary structure prediction using multiple threading alignments and iterative structure assembly simulations. In the CASP9 experiments, two new algorithms, QUARK and FG-MD, were added to the I-TASSER pipeline to improve structural modeling accuracy. QUARK is a de novo structure prediction algorithm used for structure modeling of proteins that lack detectable template structures. For distantly homologous targets, QUARK models are found useful as reference structures for selecting good threading alignments and guiding the I-TASSER structure assembly simulations. FG-MD is an atomic-level structural refinement program that uses structural fragments collected from PDB structures to guide molecular dynamics simulation and improve the local structure of the predicted model, including hydrogen-bonding networks, torsion angles and steric clashes. Despite considerable progress in both template-based and template-free structure modeling, significant improvements in protein target classification, domain parsing, model selection, and ab initio folding of beta-proteins are still needed to further improve the I-TASSER pipeline. PMID:22069036

  15. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad Hadi

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO) and Artificial Bee Colony (ABC) algorithms can give near-optimal solutions to linear and non-linear problems in many applications; however, in some cases they can become trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells, and it successfully avoids the local-optima problem. In this paper, we have made small changes to the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The results demonstrate the superiority of the Modified Stem Cells Algorithm (MSCA).

  16. Re-refinement from deposited X-ray data can deliver improved models for most PDB entries.

    PubMed

    Joosten, Robbie P; Womack, Thomas; Vriend, Gert; Bricogne, Gérard

    2009-02-01

    The deposition of X-ray data along with the customary structural models defining PDB entries makes it possible to apply large-scale re-refinement protocols to these entries, thus giving users the benefit of improvements in X-ray methods that have occurred since the structure was deposited. Automated gradient refinement is an effective method to achieve this goal, but real-space intervention is most often required in order to adequately address problems detected by structure-validation software. In order to improve the existing protocol, automated re-refinement was combined with structure validation and difference-density peak analysis to produce a catalogue of problems in PDB entries that are amenable to automatic correction. It is shown that re-refinement can be effective in producing improvements, which are often associated with the systematic use of the TLS parameterization of B factors, even for relatively new and high-resolution PDB entries, while the accompanying manual or semi-manual map analysis and fitting steps show good prospects for eventual automation. It is proposed that the potential for simultaneous improvements in methods and in re-refinement results be further encouraged by broadening the scope of depositions to include refinement metadata and ultimately primary rather than reduced X-ray data.

  17. Fully automated prostate segmentation in 3D MR based on normalized gradient fields cross-correlation initialization and LOGISMOS refinement

    NASA Astrophysics Data System (ADS)

    Yin, Yin; Fotin, Sergei V.; Periaswamy, Senthil; Kunz, Justin; Haldankar, Hrishikesh; Muradyan, Naira; Cornud, François; Turkbey, Baris; Choyke, Peter

    2012-02-01

    Manual delineation of the prostate is a challenging task for a clinician due to its complex and irregular shape. Furthermore, the need for precisely targeting the prostate boundary continues to grow. Planning for radiation therapy, MR-ultrasound fusion for image-guided biopsy, multi-parametric MRI tissue characterization, and context-based organ retrieval are examples where accurate prostate delineation can play a critical role in a successful patient outcome. Therefore, a robust automated full prostate segmentation system is desired. In this paper, we present an automated prostate segmentation system for 3D MR images. In this system, the prostate is segmented in two steps: the prostate displacement and size are first detected, and then the boundary is refined by a shape model. The detection approach is based on normalized gradient fields cross-correlation. This approach is fast, robust to intensity variation, and provides good accuracy for initializing a prostate mean shape model. The refinement model is based on a graph-search framework which incorporates both shape and topology information during deformation. We generated the graph cost using trained classifiers and used coarse-to-fine search and region-specific classifier training. The proposed algorithm was developed using 261 training images and tested on another 290 cases. Segmentation performance, with mean DSC ranging from 0.89 to 0.91 depending on the evaluation subset, demonstrates state-of-the-art accuracy. Running time for the system is about 20 to 40 seconds depending on image size and resolution.
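
    The detection step named above, normalized gradient fields cross-correlation, can be sketched generically: normalize image gradients to (nearly) unit vectors so that matching scores reflect edge orientation rather than contrast, then correlate a template over the search image. The synthetic 2D data and brute-force search below are illustrative, not the paper's 3D implementation.

    ```python
    # Normalized-gradient-field (NGF) template matching on synthetic data.
    import numpy as np

    def ngf(img, eps=1e-3):
        gy, gx = np.gradient(img.astype(float))
        mag = np.sqrt(gx**2 + gy**2 + eps**2)   # eps acts as a noise floor
        return gx / mag, gy / mag

    rng = np.random.default_rng(1)
    search = rng.normal(size=(64, 64))
    search[20:30, 24:34] += 4.0                 # a bright blob to detect
    tmpl = search[18:32, 22:36].copy() * 3.0    # same shape, different contrast

    sx, sy = ngf(search)
    tx, ty = ngf(tmpl)
    h, w = tmpl.shape
    best, pos = -np.inf, None
    for i in range(search.shape[0] - h):        # brute-force correlation
        for j in range(search.shape[1] - w):
            score = (sx[i:i+h, j:j+w] * tx + sy[i:i+h, j:j+w] * ty).sum()
            if score > best:
                best, pos = score, (i, j)
    print(pos)   # expected near (18, 22) despite the contrast change
    ```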

  18. 40 CFR 80.1285 - How does a refiner apply for a benzene baseline?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false How does a refiner apply for a benzene... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Averaging, Banking and Trading (abt) Program § 80.1285 How does a refiner apply for a benzene baseline? (a) A benzene baseline...

  19. 40 CFR 80.1285 - How does a refiner apply for a benzene baseline?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 17 2012-07-01 2012-07-01 false How does a refiner apply for a benzene... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Averaging, Banking and Trading (abt) Program § 80.1285 How does a refiner apply for a benzene baseline? (a) A benzene baseline...

  20. 40 CFR 80.1285 - How does a refiner apply for a benzene baseline?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 17 2014-07-01 2014-07-01 false How does a refiner apply for a benzene... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Averaging, Banking and Trading (abt) Program § 80.1285 How does a refiner apply for a benzene baseline? (a) A benzene baseline...

  1. 40 CFR 80.1285 - How does a refiner apply for a benzene baseline?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 16 2011-07-01 2011-07-01 false How does a refiner apply for a benzene... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Averaging, Banking and Trading (abt) Program § 80.1285 How does a refiner apply for a benzene baseline? (a) A benzene baseline...

  2. 40 CFR 80.1285 - How does a refiner apply for a benzene baseline?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 17 2013-07-01 2013-07-01 false How does a refiner apply for a benzene... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Averaging, Banking and Trading (abt) Program § 80.1285 How does a refiner apply for a benzene baseline? (a) A benzene baseline...

  3. 40 CFR 80.290 - How does a refiner apply for a sulfur baseline?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 17 2014-07-01 2014-07-01 false How does a refiner apply for a sulfur... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Averaging, Banking and Trading (abt) Program-General Information § 80.290 How does a refiner apply for a sulfur baseline? (a) The...

  4. 40 CFR 80.290 - How does a refiner apply for a sulfur baseline?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false How does a refiner apply for a sulfur... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Averaging, Banking and Trading (abt) Program-General Information § 80.290 How does a refiner apply for a sulfur baseline? (a) The...

  5. 40 CFR 80.290 - How does a refiner apply for a sulfur baseline?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 16 2011-07-01 2011-07-01 false How does a refiner apply for a sulfur... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Averaging, Banking and Trading (abt) Program-General Information § 80.290 How does a refiner apply for a sulfur baseline? (a) The...

  6. 40 CFR 80.290 - How does a refiner apply for a sulfur baseline?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 17 2012-07-01 2012-07-01 false How does a refiner apply for a sulfur... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Averaging, Banking and Trading (abt) Program-General Information § 80.290 How does a refiner apply for a sulfur baseline? (a) The...

  7. 40 CFR 80.290 - How does a refiner apply for a sulfur baseline?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 17 2013-07-01 2013-07-01 false How does a refiner apply for a sulfur... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Averaging, Banking and Trading (abt) Program-General Information § 80.290 How does a refiner apply for a sulfur baseline? (a) The...

  8. Experiment on a three-beam adaptive array for EHF frequency-hopped signals using a fast algorithm, phase E

    NASA Astrophysics Data System (ADS)

    Yen, J. L.; Kremer, P.; Fung, J.

    1990-05-01

    The Department of National Defence (Canada) has been conducting studies into multi-beam adaptive arrays for extremely high frequency (EHF) frequency-hopped signals. A three-beam 43 GHz adaptive antenna and a beam control processor are under development. An interactive software package for the operation of the array, capable of applying different control algorithms, is being written. A maximum signal-to-jammer-plus-noise ratio (SJNR) criterion has been found to provide superior performance in preventing degradation of user signals in the presence of nearby jammers. A new fast algorithm using a modified conjugate gradient approach has been found to be a very efficient way to implement anti-jamming arrays based on the maximum-SJNR criterion. The present study was intended to refine and simplify this algorithm and to implement it on an experimental array for real-time evaluation of anti-jamming performance. A three-beam adaptive array was used. A simulation package was used in the evaluation of multi-beam systems using more than three beams and different user-jammer scenarios. An attempt to further reduce the computational burden through further analysis of the maximum-SJNR criterion met with limited success. The investigation of a new angle detector for spatial tracking in heterodyne laser space communications was completed.
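
    For readers unfamiliar with the maximum-SJNR criterion, the sketch below shows its standard narrowband form for a uniform linear array: with a rank-one desired signal of steering vector a and jammer-plus-noise covariance Rjn, the optimal weight vector is proportional to Rjn^{-1} a. The array geometry, angles, and powers are illustrative; the paper's modified conjugate gradient implementation is not reproduced.

    ```python
    # Max-SJNR beamforming for a uniform linear array (toy scenario).
    import numpy as np

    def steering(n, theta_deg, d=0.5):          # element spacing d in wavelengths
        k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
        return np.exp(1j * k * np.arange(n))

    n = 8
    a_sig = steering(n, 0.0)                    # user at broadside
    a_jam = steering(n, 25.0)                   # nearby jammer
    Rjn = 100.0 * np.outer(a_jam, a_jam.conj()) + np.eye(n)  # jammer + noise

    w = np.linalg.solve(Rjn, a_sig)             # max-SJNR weights (up to scale)
    gain = lambda a: abs(w.conj() @ a) ** 2
    # output SJNR: signal gain over jammer power through the array plus noise
    print(gain(a_sig) / (100.0 * gain(a_jam) + (abs(w) ** 2).sum()))
    ```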

  9. Comparison of local grid refinement methods for MODFLOW

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.; Leake, S.A.

    2006-01-01

    Many ground water modeling efforts use a finite-difference method to solve the ground water flow equation, and many of these models require a relatively fine-grid discretization to accurately represent the selected process in limited areas of interest. Use of a fine grid over the entire domain can be computationally prohibitive; using a variably spaced grid can lead to cells with a large aspect ratio and refinement in areas where detail is not needed. One solution is to use local-grid refinement (LGR) whereby the grid is only refined in the area of interest. This work reviews some LGR methods and identifies advantages and drawbacks in test cases using MODFLOW-2000. The first test case is two dimensional and heterogeneous; the second is three dimensional and includes interaction with a meandering river. Results include simulations using a uniform fine grid, a variably spaced grid, a traditional method of LGR without feedback, and a new shared node method with feedback. Discrepancies from the solution obtained with the uniform fine grid are investigated. For the models tested, the traditional one-way coupled approaches produced discrepancies in head up to 6.8% and discrepancies in cell-to-cell fluxes up to 7.1%, while the new method has head and cell-to-cell flux discrepancies of 0.089% and 0.14%, respectively. Additional results highlight the accuracy, flexibility, and CPU time trade-off of these methods and demonstrate how the new method can be successfully implemented to model surface water-ground water interactions. Copyright © 2006 The Author(s).

  10. Denni Algorithm An Enhanced Of SMS (Scan, Move and Sort) Algorithm

    NASA Astrophysics Data System (ADS)

    Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.

    2017-12-01

    Sorting has been a profound area for algorithm researchers, and many resources are invested in the search for more efficient sorting algorithms; to this end, many existing sorting algorithms have been examined in terms of their algorithmic complexity. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting is considered a fundamental problem in the study of algorithms for many reasons: the need to sort information is inherent in many applications; algorithms often use sorting as a key subroutine; many essential algorithm-design techniques are represented in the body of sorting algorithms; and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one well-known algorithm that makes the sorting process more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm, which is an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm was compared with the SMS algorithm, and the results were promising.

  11. Effective dimensional reduction algorithm for eigenvalue problems for thin elastic structures: A paradigm in three dimensions

    PubMed Central

    Ovtchinnikov, Evgueni E.; Xanthis, Leonidas S.

    2000-01-01

    We present a methodology for the efficient numerical solution of eigenvalue problems of full three-dimensional elasticity for thin elastic structures, such as shells, plates and rods of arbitrary geometry, discretized by the finite element method. Such problems are solved by iterative methods, which, however, are known to suffer from slow convergence or even convergence failure when the thickness is small. In this paper we show an effective way of resolving this difficulty by invoking a special preconditioning technique associated with the effective dimensional reduction algorithm (EDRA). As an example, we present an algorithm for computing the minimal eigenvalue of a thin elastic plate, and we show both theoretically and numerically that it is robust with respect to both the thickness and discretization parameters, i.e., the convergence does not deteriorate with diminishing thickness or mesh refinement. This robustness is a sine qua non for the efficient computation of large-scale eigenvalue problems for thin elastic structures. PMID:10655469
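
    EDRA itself is not spelled out in the abstract, so the sketch below shows only the generic computational pattern it addresses: an iterative solver for the minimal eigenvalue of an ill-conditioned, thin-structure-like stiffness matrix, whose convergence is rescued by a preconditioner. A simple Jacobi (diagonal) preconditioner stands in for the EDRA-based one.

    ```python
    # Preconditioned iterative solve for the smallest eigenvalue (LOBPCG).
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import lobpcg, LinearOperator

    n, t = 400, 1e-3                  # t mimics a small thickness parameter
    # SPD "stiffness" with a huge spread of scales, as for thin structures
    K = diags([np.linspace(t, 1.0, n)], [0]) \
        + diags([np.full(n - 1, t / 10)] * 2, [-1, 1])
    K = K.tocsc()

    M = LinearOperator((n, n), matvec=lambda x: x / K.diagonal())  # Jacobi

    X = np.random.default_rng(0).normal(size=(n, 1))
    vals, vecs = lobpcg(K, X, M=M, largest=False, tol=1e-8, maxiter=500)
    print(vals[0])                    # approximate minimal eigenvalue
    ```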

  12. AIR EMISSIONS FROM COMBUSTION OF SOLVENT REFINED COAL

    EPA Science Inventory

    The report gives details of a Solvent Refined Coal (SRC) combustion test at Georgia Power Company's Plant Mitchell, March, May, and June 1977. Flue gas samples were collected for modified EPA Level 1 analysis; analytical results are reported. Air emissions from the combustion of ...

  13. ELECTROMAGNETIC STIRRING IN ZONE REFINING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braun, I.; Frank, F.C.; Marshall, S.

    1958-02-01

    The efficiency of the zone-refining process can obviously be increased by stirring the molten zone to disperse the impurity-rich layer at the solid-liquid interface. Induction heating is sometimes preferred to radiant heat because it produces more convection, but no marked improvement has been reported. Pfann and Dorsi (1967) have described a method of stirring the melt by passing an electric current through the ingot and impressing a magnetic field across the molten zone. Preliminary results obtained by using a rotating magnetic field as the stirring agent during the purification of aluminum are described. (A.C.)

  14. Refining Students' Explanations of an Unfamiliar Physical Phenomenon-Microscopic Friction

    NASA Astrophysics Data System (ADS)

    Corpuz, Edgar De Guzman; Rebello, N. Sanjay

    2017-08-01

    The first phase of this multiphase study involves modeling college students' thinking about friction at the microscopic level. Diagnostic interviews were conducted with 11 students with different levels of physics background. A phenomenographic approach to data analysis was used to generate categories of responses, which subsequently were used to generate a model of explanation. Most of the students interviewed consistently used mechanical interactions in explaining microscopic friction. According to these students, friction is due to the interlocking or rubbing of atoms. Our data suggest that students' explanations of microscopic friction are predominantly influenced by their macroscopic experiences. In the second phase of the research, a teaching experiment was conducted with 18 college students to investigate how students' explanations of microscopic friction can be refined by a series of model-building activities. Data were analyzed using Redish's two-level transfer framework. Our results show that through sequences of hands-on and minds-on activities, including cognitive dissonance and resolution, it is possible to facilitate the refinement of students' explanations of microscopic friction. The activities seemed productive in helping students activate associations that refine their ideas about microscopic friction.

  15. Analysis and improvements of Adaptive Particle Refinement (APR) through CPU time, accuracy and robustness considerations

    NASA Astrophysics Data System (ADS)

    Chiron, L.; Oger, G.; de Leffe, M.; Le Touzé, D.

    2018-02-01

    While smoothed-particle hydrodynamics (SPH) simulations are usually performed using uniform particle distributions, local particle refinement techniques have been developed to concentrate fine spatial resolutions in identified areas of interest. Although the formalism of this method is relatively easy to implement, its robustness at coarse/fine interfaces can be problematic. Analysis performed in [16] shows that the radius of refined particles should be greater than half the radius of unrefined particles to ensure robustness. In this article, the basics of an Adaptive Particle Refinement (APR) technique, inspired by AMR in mesh-based methods, are presented. This approach ensures robustness with alleviated constraints. Simulations applying the new formalism proposed achieve accuracy comparable to fully refined spatial resolutions, together with robustness, low CPU times and maintained parallel efficiency.

  16. Genetic Algorithm-Guided, Adaptive Model Order Reduction of Flexible Aircrafts

    NASA Technical Reports Server (NTRS)

    Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter; Brenner, Martin J.

    2017-01-01

    This paper presents a methodology for automated model order reduction (MOR) of flexible aircraft to construct linear parameter-varying (LPV) reduced-order models (ROMs) for aeroservoelasticity (ASE) analysis and control synthesis across a broad flight parameter space. The novelty includes utilization of genetic algorithms (GAs) to automatically determine the states for reduction while minimizing the trial-and-error process and the heuristics required to perform MOR; balanced truncation for unstable systems to achieve locally optimal realization of the full model; congruence transformation for "weak" fulfillment of state consistency across the entire flight parameter space; and ROM interpolation based on adaptive grid refinement to generate a globally functional LPV ASE ROM. The methodology is applied to the X-56A MUTT model currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the X-56A ROM, with less than one-seventh the number of states of the original model, is able to accurately predict system response among all input-output channels for pitch, roll, and ASE control at various flight conditions. The GA-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The adaptive refinement allows selective addition of grid points in the parameter space where the flight dynamics vary dramatically, enhancing interpolation accuracy without over-burdening controller synthesis and onboard memory downstream. The present MOR framework can be used by control engineers for robust ASE controller synthesis and novel vehicle design.
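
    One ingredient named above, balanced truncation, can be sketched for the simple stable case: solve the two Lyapunov equations for the Gramians, balance via the square-root method, and keep the states with the largest Hankel singular values. The toy system is illustrative; the paper's extension to unstable systems and the GA-based state selection are not reproduced here.

    ```python
    # Square-root balanced truncation of a stable LTI system (toy example).
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

    A = np.array([[-1.0, 0.0], [0.0, -20.0]])   # the fast mode is weakly
    B = np.array([[1.0], [0.05]])               # coupled to input and output
    C = np.array([[1.0, 0.05]])

    Wc = solve_continuous_lyapunov(A, -B @ B.T)      # A Wc + Wc A' = -B B'
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)    # A' Wo + Wo A = -C' C

    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                        # s = Hankel singular values

    r = 1                                            # keep the dominant state
    Si = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ Si                           # balancing transforms
    Ti = Si @ U[:, :r].T @ Lo.T
    Ar, Br, Cr = Ti @ A @ T, Ti @ B, C @ T
    print("Hankel SVs:", s.round(5), "| reduced A:", Ar.round(3))
    ```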

  17. 40 CFR 80.1652 - Reporting requirements for gasoline refiners, gasoline importers, oxygenate producers, and...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Reporting requirements for gasoline refiners, gasoline importers, oxygenate producers, and oxygenate importers. 80.1652 Section 80.1652... FUELS AND FUEL ADDITIVES Gasoline Sulfur § 80.1652 Reporting requirements for gasoline refiners...

  18. Grain refinement of high strength steels to improve cryogenic toughness

    NASA Technical Reports Server (NTRS)

    Rush, H. F.

    1985-01-01

    Grain-refining techniques using multistep heat treatments to reduce the grain size of five commercial high-strength steels were investigated. The goal of this investigation was to improve the low-temperature toughness as measured by Charpy V-notch impact test without a significant loss in tensile strength. The grain size of four of five alloys investigated was successfully reduced up to 1/10 of original size or smaller with increases in Charpy impact energy of 50 to 180 percent at -320 F. Tensile properties were reduced from 0 to 25 percent for the various alloys tested. An unexpected but highly beneficial side effect from grain refining was improved machinability.

  19. COMET-AR User's Manual: COmputational MEchanics Testbed with Adaptive Refinement

    NASA Technical Reports Server (NTRS)

    Moas, E. (Editor)

    1997-01-01

    The COMET-AR User's Manual provides a reference manual for the Computational Structural Mechanics Testbed with Adaptive Refinement (COMET-AR), a software system developed jointly by Lockheed Palo Alto Research Laboratory and NASA Langley Research Center under contract NAS1-18444. The COMET-AR system is an extended version of an earlier finite element based structural analysis system called COMET, also developed by Lockheed and NASA. The primary extensions are the adaptive mesh refinement capabilities and a new "object-like" database interface that makes COMET-AR easier to extend further. This User's Manual provides a detailed description of the user interface to COMET-AR from the viewpoint of a structural analyst.

  20. Laser furnace and method for zone refining of semiconductor wafers

    NASA Technical Reports Server (NTRS)

    Griner, Donald B. (Inventor); zur Burg, Frederick W. (Inventor); Penn, Wayne M. (Inventor)

    1988-01-01

    A method of zone refining a crystal wafer (116, FIG. 1) comprising the steps of focusing a laser beam to a small spot (120) of selectable size on the surface of the crystal wafer (116) to melt a spot on the wafer, scanning the small laser beam spot back and forth across the surface of the crystal wafer (116) at a constant velocity, and moving the scanning laser beam across a predetermined zone of the surface of the crystal wafer (116) in a direction normal to the laser beam scanning direction and at a selectable velocity to melt and refine the entire crystal wafer (116).