Predictive Software Cost Model Study. Volume I. Final Technical Report.
1980-06-01
development phase to identify computer resources necessary to support computer programs after transfer of program management responsibility and system... classical model development with refinements specifically applicable to avionics systems. The refinements are the result of the Phase I literature search
Terwilliger, Thomas C; Grosse-Kunstleve, Ralf W; Afonine, Pavel V; Moriarty, Nigel W; Zwart, Peter H; Hung, Li Wei; Read, Randy J; Adams, Paul D
2008-01-01
The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution.
Computer simulation of refining process of a high consistency disc refiner based on CFD
NASA Astrophysics Data System (ADS)
Wang, Ping; Yang, Jianwei; Wang, Jiahui
2017-08-01
In order to reduce refining energy consumption, ANSYS CFX was used to simulate the refining process of a high-consistency disc refiner. The pulp in the disc refiner was first assumed to be a uniform Newtonian fluid in a turbulent state, described with the k-ε flow model; the 3-D model of the disc refiner was then meshed and boundary conditions were set; the flow was then simulated and analyzed; finally, the viscosity of the pulp was measured. The results show that the CFD method can be used to analyze the pressure and torque on the disc plate, and hence to calculate the refining power; streamlines and velocity vectors can also be observed. CFD simulation can optimize the parameters of the bar and groove, which is of great significance for reducing experimental cost and cycle time.
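As a quick illustration of the power calculation this abstract refers to, shaft power follows directly from the simulated torque on the disc plate and the rotational speed; the sketch below is a minimal stand-in with hypothetical values, not figures from the study.

```python
import math

def refining_power(torque_nm: float, speed_rpm: float) -> float:
    """Shaft power P = T * omega, with omega = 2*pi*n and n in rev/s."""
    omega = 2.0 * math.pi * speed_rpm / 60.0  # angular velocity, rad/s
    return torque_nm * omega                  # power, W

# Hypothetical example: 1.2 kN*m of torque on the plate at 1500 rpm.
print(f"{refining_power(1200.0, 1500.0) / 1e3:.1f} kW")  # ~188.5 kW
```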
Brunger, Axel T; Das, Debanu; Deacon, Ashley M; Grant, Joanna; Terwilliger, Thomas C; Read, Randy J; Adams, Paul D; Levitt, Michael; Schröder, Gunnar F
2012-04-01
Phasing by molecular replacement remains difficult for targets that are far from the search model or in situations where the crystal diffracts only weakly or to low resolution. Here, the process of determining and refining the structure of Cgl1109, a putative succinyl-diaminopimelate desuccinylase from Corynebacterium glutamicum, at ∼3 Å resolution is described using a combination of homology modeling with MODELLER, molecular-replacement phasing with Phaser, deformable elastic network (DEN) refinement and automated model building using AutoBuild in a semi-automated fashion, followed by final refinement cycles with phenix.refine and Coot. This difficult molecular-replacement case illustrates the power of including DEN restraints derived from a starting model to guide the movements of the model during refinement. The resulting improved model phases provide better starting points for automated model building and produce more significant difference peaks in anomalous difference Fourier maps to locate anomalous scatterers than does standard refinement. This example also illustrates a current limitation of automated procedures that require manual adjustment of local sequence misalignments between the homology model and the target sequence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Pooja Nitin; Shin, Yung C.; Sun, Tao
2017-10-03
Synchrotron X-rays are integrated with a modified Kolsky tension bar to conduct in situ tracking of the grain refinement mechanism operating during the dynamic deformation of metals. Copper with an initial average grain size of 36 μm is refined to 6.3 μm when loaded at a constant high strain rate of 1200 s⁻¹. The synchrotron measurements revealed the temporal evolution of the grain refinement mechanism in terms of the initiation and rate of refinement throughout the loading test. A multiscale coupled probabilistic cellular automata based recrystallization model has been developed to predict the microstructural evolution occurring during dynamic deformation processes. The model accurately predicts the initiation of the grain refinement mechanism with a predicted final average grain size of 2.4 μm. As a result, the model also accurately predicts the temporal evolution in terms of the initiation and extent of refinement when compared with the experimental results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Los Alamos National Laboratory, Mailstop M888, Los Alamos, NM 87545, USA; Lawrence Berkeley National Laboratory, One Cyclotron Road, Building 64R0121, Berkeley, CA 94720, USA; Department of Haematology, University of Cambridge, Cambridge CB2 0XY, England
The PHENIX AutoBuild Wizard is a highly automated tool for iterative model-building, structure refinement and density modification using RESOLVE or TEXTAL model-building, RESOLVE statistical density modification, and phenix.refine structure refinement. Recent advances in the AutoBuild Wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model completion algorithms, and automated solvent molecule picking. Model completion algorithms in the AutoBuild Wizard include loop-building, crossovers between chains in different models of a structure, and side-chain optimization. The AutoBuild Wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 Å to 3.2 Å, resulting in a mean R-factor of 0.24 and a mean free R-factor of 0.29. The R-factor of the final model is dependent on the quality of the starting electron density, and relatively independent of resolution.
Collection of X-ray diffraction data from macromolecular crystals
Dauter, Zbigniew
2017-01-01
Diffraction data acquisition is the final experimental stage of the crystal structure analysis. All subsequent steps involve mainly computer calculations. Optimally measured and accurate data make the structure solution and refinement easier and lead to more faithful interpretation of the final models. Here, the important factors in data collection from macromolecular crystals are discussed and strategies appropriate for various applications, such as molecular replacement, anomalous phasing, atomic-resolution refinement etc., are presented. Criteria useful for judging the diffraction data quality are also discussed. PMID:28573573
2016-01-01
Many excellent methods exist that incorporate cryo-electron microscopy (cryoEM) data to constrain computational protein structure prediction and refinement. Previously, it was shown that iteration of two such orthogonal sampling and scoring methods – Rosetta and molecular dynamics (MD) simulations – facilitated exploration of conformational space in principle. Here, we go beyond a proof-of-concept study and address significant remaining limitations of the iterative MD–Rosetta protein structure refinement protocol. Specifically, all parts of the iterative refinement protocol are now guided by medium-resolution cryoEM density maps, and previous knowledge about the native structure of the protein is no longer necessary. Models are identified solely based on score or simulation time. All four benchmark proteins showed substantial improvement through three rounds of the iterative refinement protocol. The best-scoring final models of two proteins had sub-Ångstrom RMSD to the native structure over residues in secondary structure elements. Molecular dynamics was most efficient in refining secondary structure elements and was thus highly complementary to the Rosetta refinement which is most powerful in refining side chains and loop regions. PMID:25883538
The Grain Structure of Castings: Some Aspects of Modelling
NASA Technical Reports Server (NTRS)
Hellawell, A.
1995-01-01
The efficacy of the modelling of the solidification of castings is typically tested against observed cooling curves and the final grain structures and sizes. Without thermosolutal convection, equiaxed grain formation is promoted by the introduction of heterogeneous substrates into the melt as grain refiners. With efficient thermosolutal convection, dendrite fragments from the mushy zone can act as an intrinsic source of equiaxed grains, and recourse to grain-refining additions is unnecessary. The mechanisms of dendrite fragmentation and transport of these fragments are briefly considered.
A methodology for quadrilateral finite element mesh coarsening
Staten, Matthew L.; Benzley, Steven; Scott, Michael
2008-03-27
High fidelity finite element modeling of continuum mechanics problems often requires using all quadrilateral or all hexahedral meshes. The efficiency of such models is often dependent upon the ability to adapt a mesh to the physics of the phenomena. Adapting a mesh requires the ability to both refine and/or coarsen the mesh. The algorithms available to refine and coarsen triangular and tetrahedral meshes are very robust and efficient. However, the ability to locally and conformally refine or coarsen all quadrilateral and all hexahedral meshes presents many difficulties. Some research has been done on localized conformal refinement of quadrilateral and hexahedral meshes. However, little work has been done on localized conformal coarsening of quadrilateral and hexahedral meshes. A general method which provides both localized conformal coarsening and refinement for quadrilateral meshes is presented in this paper. This method is based on restructuring the mesh with simplex manipulations to the dual of the mesh. Finally, this method appears to be extensible to hexahedral meshes in three dimensions.
PDB_REDO: constructive validation, more than just looking for errors.
Joosten, Robbie P; Joosten, Krista; Murshudov, Garib N; Perrakis, Anastassis
2012-04-01
Developments of the PDB_REDO procedure that combine re-refinement and rebuilding within a unique decision-making framework to improve structures in the PDB are presented. PDB_REDO uses a variety of existing and custom-built software modules to choose an optimal refinement protocol (e.g. anisotropic, isotropic or overall B-factor refinement, TLS model) and to optimize the geometry versus data-refinement weights. Next, it proceeds to rebuild side chains and peptide planes before a final optimization round. PDB_REDO works fully automatically without the need for intervention by a crystallographic expert. The pipeline was tested on 12 000 PDB entries and the great majority of the test cases improved both in terms of crystallographic criteria such as R(free) and in terms of widely accepted geometric validation criteria. It is concluded that PDB_REDO is useful to update the otherwise `static' structures in the PDB to modern crystallographic standards. The publicly available PDB_REDO database provides better model statistics and contributes to better refinement and validation targets.
Teaching Real Business Cycles to Undergraduates
ERIC Educational Resources Information Center
Brevik, Frode; Gartner, Manfred
2007-01-01
The authors review the graphical approach to teaching the real business cycle model introduced in Barro. They then look at where this approach cuts corners and suggest refinements. Finally, they compare graphical and exact models by means of impulse-response functions. The graphical models yield reliable qualitative results. Sizable quantitative…
DOT National Transportation Integrated Search
1995-09-01
One final overall observation from the workshop was that FHWA should consider case studies of industry practices. This approach differs from TS&W modeling, but could be used to build a micro-approach for national TS&W analysis. For example, differe...
NASA Technical Reports Server (NTRS)
Jewett, M. E.; Kronauer, R. E.; Brown, E. N. (Principal Investigator)
1998-01-01
In 1990, Kronauer proposed a mathematical model of the effects of light on the human circadian pacemaker. Although this model predicted many general features of the response of the human circadian pacemaker to light exposure, additional data now available enable us to refine the original model. We first refined the original model by incorporating the results of a dose response curve to light into the model's predicted relationship between light intensity and the strength of the drive onto the pacemaker. Data from three bright light phase resetting experiments were then used to refine the amplitude recovery characteristics of the model. Finally, the model was tested and further refined using data from an extensive phase resetting experiment in which a 3-cycle bright light stimulus was presented against a background of dim light. In order to describe the results of the four resetting experiments, the following major refinements to the original model were necessary: (i) the relationship between light intensity (I) and drive onto the pacemaker was reduced from I^(1/3) to I^0.23 for light levels between 150 and 10,000 lux; (ii) the van der Pol oscillator from the original model was replaced with a higher-order limit cycle oscillator so that amplitude recovery is slower near the singularity and faster near the limit cycle; (iii) a direct effect of light on circadian period (τ_x) was incorporated into the model such that as I increases, τ_x decreases, which is in accordance with "Aschoff's rule". This refined model generates the following testable predictions: it should be difficult to enhance normal circadian amplitude via bright light; near the critical point of a type 0 phase response curve (PRC) the slope should be steeper than it is in a type 1 PRC; and circadian period measured during forced desynchrony should be directly affected by ambient light intensity.
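The first refinement above replaces the original cube-root light drive with a shallower power law. A minimal sketch of that relationship (the normalization constant i0 below is a hypothetical placeholder, not a published model coefficient):

```python
def light_drive(lux: float, p: float = 0.23, i0: float = 9500.0) -> float:
    """Relative drive of light onto the pacemaker, assuming a simple
    power law (I / i0)**p over the validated 150-10,000 lux range."""
    lux = min(max(lux, 150.0), 10000.0)  # clamp to the validated range
    return (lux / i0) ** p

for lux in (150, 1000, 10000):
    print(f"{lux:>6} lux -> drive {light_drive(lux):.3f}")
```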
A BRDF statistical model applying to space target materials modeling
NASA Astrophysics Data System (ADS)
Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen
2017-10-01
In order to solve the problem of the poor performance of the five-parameter semi-empirical model in fitting high-density measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space-target materials is proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, it contains six simple parameters, which can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected brightness as the azimuth angle changes. The model achieves fast parameter inversion with no extra loss of accuracy. A genetic algorithm was used to invert the parameters for 11 samples of materials commonly used on space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The refined model is further verified by comparing the fitting results of three samples at different incident zenith angles at 0° azimuth. Finally, three-dimensional visualizations of these samples over the upper hemisphere are given, clearly showing the optical scattering strength of the different materials and demonstrating the refined model's ability to characterize them.
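The parameter-inversion step can be sketched generically. The snippet below uses differential evolution (an evolutionary optimizer standing in for the authors' genetic algorithm) to recover the parameters of a hypothetical six-parameter BRDF; the functional form here is an illustrative stand-in, not the refined model from the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

def brdf(theta_i, theta_r, phi, p):
    """Hypothetical 6-parameter BRDF: diffuse term plus a rough-surface
    specular lobe with a Fresnel factor and an azimuthal decay term."""
    kd, ks, m, f0, a, b = p
    half = 0.5 * (theta_i + theta_r)            # crude half-angle proxy
    spec = ks * np.exp(-(np.tan(half) / m) ** 2) / np.cos(half) ** 4
    fresnel = f0 + (1.0 - f0) * (1.0 - np.cos(theta_i)) ** 5
    return kd + spec * fresnel * (1.0 + a * np.cos(b * phi))

def fit(theta_i, theta_r, phi, measured):
    """Invert the BRDF parameters by evolutionary minimization of the MSE."""
    bounds = [(0, 1), (0, 10), (0.01, 1), (0, 1), (-1, 1), (0, 4)]
    loss = lambda p: np.mean((brdf(theta_i, theta_r, phi, p) - measured) ** 2)
    return differential_evolution(loss, bounds, seed=1).x

rng = np.random.default_rng(0)
ti, tr = rng.uniform(0.1, 1.2, (2, 200))
ph = rng.uniform(0.0, np.pi, 200)
truth = (0.1, 2.0, 0.3, 0.05, 0.2, 1.0)
print(fit(ti, tr, ph, brdf(ti, tr, ph, truth)).round(2))  # should recover truth
```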
High Temperature Test Facility Preliminary RELAP5-3D Input Model Description
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayless, Paul David
A RELAP5-3D input model is being developed for the High Temperature Test Facility at Oregon State University. The current model is described in detail. Further refinements will be made to the model as final as-built drawings are released and when system characterization data are available for benchmarking the input model.
ERIC Educational Resources Information Center
Mount Hood Community Coll., Gresham, OR.
This final performance report includes a third-party evaluation and a replication guide. The first section describes a project to develop and implement an articulated curriculum for grades 8-14 to prepare young people for entry into hospitality/tourism-related occupations. It discusses the refinement of existing models, pilot test, curriculum…
A Dissemination Model for New Technical Education Programs. Final Report.
ERIC Educational Resources Information Center
Hull, Daniel M.
The Technical Education Research Center-SW has conceived, tested, and refined a model for disseminating newly developed programs and materials throughout the nation. The model performed successfully in the dissemination of more than 50,000 educational units (modules) of Laser/Electro-Optics Technician (LEOT) materials during a four-year period…
Consequences of Symmetries on the Analysis and Construction of Turbulence Models
NASA Astrophysics Data System (ADS)
Razafindralandy, Dina; Hamdouni, Aziz
2006-05-01
Since they represent fundamental physical properties in turbulence (conservation laws, wall laws, Kolmogorov energy spectrum, ...), symmetries are used to analyse common turbulence models. A class of symmetry preserving turbulence models is proposed. This class is refined such that the models respect the second law of thermodynamics. Finally, an example of model belonging to the class is numerically tested.
Stepwise construction of a metabolic network in Event-B: The heat shock response.
Sanwal, Usman; Petre, Luigia; Petre, Ion
2017-12-01
There is high interest in constructing large, detailed computational models of biological processes. This is often done by putting together existing submodels and adding extra details/knowledge to them. The result of such approaches is usually a model that can only answer questions on a very specific level of detail and thus, ultimately, is of limited use. We focus instead on an approach that systematically adds details to a model, with formal verification of its consistency at each step. In this way, one obtains a set of reusable models, at different levels of abstraction, to be used for different purposes depending on the question to address. We demonstrate this approach using Event-B, a computational framework introduced to develop formal specifications of distributed software systems. We first describe how to model generic metabolic networks in Event-B. Then, we apply this method to modeling the biological heat shock response in eukaryotic cells, using Event-B refinement techniques. The advantage of using Event-B is that refinement is an intrinsic feature; as a final result it provides not only a correct model, but a chain of models automatically linked by refinement, each of which is provably correct and reusable. This is a proof of concept that refinement in Event-B is suitable for biomodeling and can help master biological complexity.
NASA Astrophysics Data System (ADS)
Shim, Moonsoo; Choi, Ho-Gil; Choi, Jeong-Hun; Yi, Kyung-Woo; Lee, Jong-Hyeon
2017-08-01
The purification of a LiCl-KCl salt mixture was carried out by a zone-refining process. To improve the throughput of zone refining, three heaters were installed in the zone refiner. The zone-refining method was used to grow pure LiCl-KCl salt ingots from a LiCl-KCl-CsCl-SrCl2 salt mixture. The main investigated parameters were the heater speed and the number of passes. From each zone-refined salt ingot, samples were collected axially along the salt ingot and the concentrations of Sr and Cs were determined. Experimental results show that the Sr and Cs concentrations at the initial region of the ingot were low and increased to a maximum at the final freezing region of the salt ingot. Concentration results of the zone-refined salt were compared with theoretical results furnished by the proposed model to validate its predictions. The keff values for Sr and Cs were 0.55 and 0.47, respectively. The correlation between the salt composition and separation behavior was also investigated. The keff values of the Sr in LiCl-KCl-SrCl2 and the Cs in LiCl-KCl-CsCl were found to be 0.53 and 0.44, respectively, by fitting the experimental data into the proposed model.
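The keff values quoted above can be obtained by fitting axial concentration profiles to a zone-refining transport model. A minimal sketch using the standard single-pass (Pfann) profile, stated here as an assumption since the authors' proposed model may differ; x is axial position and L the molten-zone length:

```python
import numpy as np
from scipy.optimize import curve_fit

def single_pass(x, k, c0=1.0, zone_len=1.0):
    """Pfann single-pass profile, valid before the final freezing region:
    C(x) = C0 * (1 - (1 - k) * exp(-k * x / L))."""
    return c0 * (1.0 - (1.0 - k) * np.exp(-k * x / zone_len))

# Hypothetical normalized impurity concentrations along the ingot axis.
x = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
c = np.array([0.28, 0.35, 0.46, 0.56, 0.64, 0.71])
(k_eff,), _ = curve_fit(single_pass, x, c, p0=[0.5], bounds=(0.0, 1.0))
print(f"k_eff = {k_eff:.2f}")  # ~0.2 for these hypothetical data
```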
Improved ligand geometries in crystallographic refinement using AFITT in PHENIX
Janowski, Pawel A.; Moriarty, Nigel W.; Kelley, Brian P.; ...
2016-08-31
Modern crystal structure refinement programs rely on geometry restraints to overcome the challenge of a low data-to-parameter ratio. While the classical Engh and Huber restraints work well for standard amino-acid residues, the chemical complexity of small-molecule ligands presents a particular challenge. Most current approaches either limit ligand restraints to those that can be readily described in the Crystallographic Information File (CIF) format, thus sacrificing chemical flexibility and energetic accuracy, or they employ protocols that substantially lengthen the refinement time, potentially hindering rapid automated refinement workflows. PHENIX–AFITT refinement uses a full molecular-mechanics force field for user-selected small-molecule ligands during refinement, eliminating the potentially difficult problem of finding or generating high-quality geometry restraints. It is fully integrated with a standard refinement protocol and requires practically no additional steps from the user, making it ideal for high-throughput workflows. PHENIX–AFITT refinements also handle multiple ligands in a single model, alternate conformations and covalently bound ligands. Here, the results of combining AFITT and the PHENIX software suite on a data set of 189 protein–ligand PDB structures are presented. Refinements using PHENIX–AFITT significantly reduce ligand conformational energy and lead to improved geometries without detriment to the fit to the experimental data. Finally, for the data presented, PHENIX–AFITT refinements result in more chemically accurate models for small-molecule ligands.
An introduction to the partial credit model for developing nursing assessments.
Fox, C
1999-11-01
The partial credit model, which is a special case of the Rasch measurement model, was presented as a useful way to develop and refine complex nursing assessments. The advantages of the Rasch model over the classical psychometric model were presented including the lack of bias in the measurement process, the ability to highlight those items in need of refinement, the provision of information on congruence between the data and the model, and feedback on the usefulness of the response categories. The partial credit model was introduced as a way to develop complex nursing assessments such as performance-based assessments, because of the model's ability to accommodate a variety of scoring procedures. Finally, an application of the partial credit model was illustrated using the Practical Knowledge Inventory for Nurses, a paper-and-pencil instrument that measures on-the-job decision-making for nurses.
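For reference, Masters' partial credit model referred to above gives the probability that a person with ability θ receives score x on item i with step difficulties δ_i1, ..., δ_im:

```latex
P(X_i = x \mid \theta)
  = \frac{\exp\sum_{j=0}^{x}(\theta - \delta_{ij})}
         {\sum_{r=0}^{m_i}\exp\sum_{j=0}^{r}(\theta - \delta_{ij})},
  \qquad x = 0, 1, \ldots, m_i,
```

with the convention that the sum for j = 0 is zero. Each item may carry its own number of steps m_i, which is what lets the model accommodate the variety of scoring procedures noted in the abstract.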
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-16
...-82385] Notice of Availability of the Final Environmental Impact Statement for the UNEV Refined Liquid...) has prepared a Proposed Resource Management Plan Amendment (RMPA)/Final Environmental Impact Statement..., Tooele, Juab, Millard, Beaver, Iron, and Washington Counties in Utah; and Lincoln and Clark Counties in...
xMDFF: molecular dynamics flexible fitting of low-resolution X-ray structures.
McGreevy, Ryan; Singharoy, Abhishek; Li, Qufei; Zhang, Jingfen; Xu, Dong; Perozo, Eduardo; Schulten, Klaus
2014-09-01
X-ray crystallography remains the most dominant method for solving atomic structures. However, for relatively large systems, the availability of only medium-to-low-resolution diffraction data often limits the determination of all-atom details. A new molecular dynamics flexible fitting (MDFF)-based approach, xMDFF, for determining structures from such low-resolution crystallographic data is reported. xMDFF employs a real-space refinement scheme that flexibly fits atomic models into an iteratively updating electron-density map. It addresses significant large-scale deformations of the initial model to fit the low-resolution density, as tested with synthetic low-resolution maps of D-ribose-binding protein. xMDFF has been successfully applied to re-refine six low-resolution protein structures of varying sizes that had already been submitted to the Protein Data Bank. Finally, via systematic refinement of a series of data from 3.6 to 7 Å resolution, xMDFF refinements together with electrophysiology experiments were used to validate the first all-atom structure of the voltage-sensing protein Ci-VSP.
Ab initio structure determination and refinement of a scorpion protein toxin.
Smith, G D; Blessing, R H; Ealick, S E; Fontecilla-Camps, J C; Hauptman, H A; Housset, D; Langs, D A; Miller, R
1997-09-01
The structure of toxin II from the scorpion Androctonus australis Hector has been determined ab initio by direct methods using SnB at 0.96 Å resolution. For the purpose of this structure redetermination, undertaken as a test of the minimal function and the SnB program, the identity and sequence of the protein was withheld from part of the research team. A single solution obtained from 1 619 random atom trials was clearly revealed by the bimodal distribution of the final value of the minimal function associated with each individual trial. Five peptide fragments were identified from a conservative analysis of the initial E-map, and following several refinement cycles with X-PLOR, a model was built of the complete structure. At the end of the X-PLOR refinement, the sequence was compared with the published sequence and 57 of the 64 residues had been correctly identified. Two errors in sequence resulted from side chains with similar size while the rest of the errors were a result of severe disorder or high thermal motion in the side chains. Given the amino-acid sequence, it is estimated that the initial E-map could have produced a model containing 99% of all main-chain and 81% of side-chain atoms. The structure refinement was completed with PROFFT, including the contributions of protein H atoms, and converged at a residual of 0.158 for 30 609 data with F ≥ 2σ(F) in the resolution range 8.0-0.964 Å. The final model consisted of 518 non-H protein atoms (36 disordered), 407 H atoms, and 129 water molecules (43 with occupancies less than unity). This total of 647 non-H atoms represents the largest light-atom structure solved to date.
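For context, the minimal function whose bimodal distribution separated the solution from the non-solutions is, in its usual Shake-and-Bake form (quoted here from the general direct-methods literature, as an assumption about the exact variant SnB used):

```latex
R(\phi) = \frac{\sum_{H,K} A_{HK}
          \Bigl[\cos\Phi_{HK} - \tfrac{I_1(A_{HK})}{I_0(A_{HK})}\Bigr]^{2}}
          {\sum_{H,K} A_{HK}},
\qquad \Phi_{HK} = \phi_H + \phi_K + \phi_{-H-K},
```

where A_HK are the triplet concentration parameters and I0, I1 are modified Bessel functions; trial phase sets that minimize R(φ) correspond to probable structures.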
2017-12-01
This final rule cancels the Episode Payment Models (EPMs) and Cardiac Rehabilitation (CR) Incentive Payment Model and rescinds the regulations governing these models. It also implements certain revisions to the Comprehensive Care for Joint Replacement (CJR) model, including: Giving certain hospitals selected for participation in the CJR model a one-time option to choose whether to continue their participation in the model; technical refinements and clarifications for certain payment, reconciliation and quality provisions; and a change to increase the pool of eligible clinicians that qualify as affiliated practitioners under the Advanced Alternative Payment Model (Advanced APM) track. An interim final rule with comment period is being issued in conjunction with this final rule in order to address the need for a policy to provide some flexibility in the determination of episode costs for providers located in areas impacted by extreme and uncontrollable circumstances.
Ducrot, Virginie; Ashauer, Roman; Bednarska, Agnieszka J; Hinarejos, Silvia; Thorbek, Pernille; Weyman, Gabriel
2016-01-01
Recent guidance identified toxicokinetic-toxicodynamic (TK-TD) modeling as a relevant approach for risk assessment refinement. Yet, its added value compared to other refinement options is not detailed, and how to conduct the modeling appropriately is not explained. This case study addresses these issues through 2 examples of individual-level risk assessment for 2 hypothetical plant protection products: 1) evaluating the risk for small granivorous birds and small omnivorous mammals of a single application, as a seed treatment in winter cereals, and 2) evaluating the risk for fish after a pulsed treatment in the edge-of-field zone. Using acute test data, we conducted the first tier risk assessment as defined in the European Food Safety Authority (EFSA) guidance. When first tier risk assessment highlighted a concern, refinement options were discussed. Cases where the use of models should be preferred over other existing refinement approaches were highlighted. We then practically conducted the risk assessment refinement by using 2 different models as examples. In example 1, a TK model accounting for toxicokinetics and relevant feeding patterns in the skylark and in the wood mouse was used to predict internal doses of the hypothetical active ingredient in individuals, based on relevant feeding patterns in an in-crop situation, and identify the residue levels leading to mortality. In example 2, a TK-TD model accounting for toxicokinetics, toxicodynamics, and relevant exposure patterns in the fathead minnow was used to predict the time-course of fish survival for relevant FOCUS SW exposure scenarios and identify which scenarios might lead to mortality. Models were calibrated using available standard data and implemented to simulate the time-course of internal dose of active ingredient or survival for different exposure scenarios. Simulation results were discussed and used to derive the risk assessment refinement endpoints used for decision. Finally, we compared the "classical" risk assessment approach with the model-based approach. These comparisons showed that TK and TK-TD models can bring more realism to the risk assessment through the possibility to study realistic exposure scenarios and to simulate relevant mechanisms of effects (including delayed toxicity and recovery). Noticeably, using TK-TD models is currently the most relevant way to directly connect realistic exposure patterns to effects. We conclude with recommendations on how to properly use TK and TK-TD model in acute risk assessment for vertebrates. © 2015 SETAC.
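A minimal sketch of the kind of TK-TD simulation used in example 2: a one-compartment toxicokinetic model coupled to a stochastic-death toxicodynamic hazard (a generic GUTS-style formulation; all rate constants, the threshold and the exposure pulse below are hypothetical, not the case-study values).

```python
import numpy as np

def simulate_survival(c_ext, dt=0.01, k_in=0.5, k_out=0.3, z=1.0, k_k=0.2):
    """One-compartment TK, dCi/dt = k_in*Cext - k_out*Ci, coupled to a
    stochastic-death TD hazard, dH/dt = k_k*max(0, Ci - z), S = exp(-H)."""
    ci, h, surv = 0.0, 0.0, []
    for ce in c_ext:
        ci += dt * (k_in * ce - k_out * ci)   # internal concentration
        h += dt * k_k * max(0.0, ci - z)      # cumulative hazard above threshold
        surv.append(np.exp(-h))
    return np.array(surv)

t = np.arange(0.0, 20.0, 0.01)                # days
pulse = np.where(t < 2.0, 5.0, 0.0)           # hypothetical 2-day exposure pulse
print(f"predicted survival after 20 d: {simulate_survival(pulse)[-1]:.2f}")
```

Because the hazard integrates internal (not external) concentration over time, delayed mortality after the pulse ends and recovery at sub-threshold exposure both fall out of the model naturally, which is the added realism the case study emphasizes.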
Side impact test and analyses of a DOT-111 tank car : final report.
DOT National Transportation Integrated Search
2015-10-01
Transportation Technology Center, Inc. conducted a side impact test on a DOT-111 tank car to evaluate the performance of the tank car under dynamic impact conditions and to provide data for the verification and refinement of a computational model. ...
Development of a Graduate Course in Computer-Aided Geometric Design.
ERIC Educational Resources Information Center
Ault, Holly K.
1991-01-01
Described is a course that focuses on theory and techniques for ideation and refinement of geometric models used in mechanical engineering design applications. The course objectives, course outline, a description of the facilities, sample exercises, and a discussion of final projects are included. (KR)
Refined structure of dimeric diphtheria toxin at 2.0 A resolution.
Bennett, M. J.; Choe, S.; Eisenberg, D.
1994-01-01
The refined structure of dimeric diphtheria toxin (DT) at 2.0 Å resolution, based on 37,727 unique reflections (F > 1σ(F)), yields a final R factor of 19.5% with a model obeying standard geometry. The refined model consists of 523 amino acid residues, 1 molecule of the bound dinucleotide inhibitor adenylyl 3'-5' uridine 3' monophosphate (ApUp), and 405 well-ordered water molecules. The 2.0-Å refined model reveals that the binding motif for ApUp includes residues in the catalytic and receptor-binding domains and is different from the Rossmann dinucleotide-binding fold. ApUp is bound in part by a long loop (residues 34-52) that crosses the active site. Several residues in the active site were previously identified as NAD-binding residues. Glu 148, previously identified as playing a catalytic role in ADP-ribosylation of elongation factor 2 by DT, is about 5 Å from uracil in ApUp. The trigger for insertion of the transmembrane domain of DT into the endosomal membrane at low pH may involve 3 intradomain and 4 interdomain salt bridges that will be weakened at low pH by protonation of their acidic residues. The refined model also reveals that each molecule in dimeric DT has an "open" structure unlike most globular proteins, which we call an open monomer. Two open monomers interact by "domain swapping" to form a compact, globular dimeric DT structure. The possibility that the open monomer resembles a membrane insertion intermediate is discussed. PMID:7833807
Ferguson, Jared O.; Jablonowski, Christiane; Johansen, Hans; ...
2016-11-09
Adaptive mesh refinement (AMR) is a technique that has been featured only sporadically in atmospheric science literature. This study aims to demonstrate the utility of AMR for simulating atmospheric flows. Several test cases are implemented in a 2D shallow-water model on the sphere using the Chombo-AMR dynamical core. This high-order finite-volume model implements adaptive refinement in both space and time on a cubed-sphere grid using a mapped-multiblock mesh technique. The tests consist of the passive advection of a tracer around moving vortices, a steady-state geostrophic flow, an unsteady solid-body rotation, a gravity wave impinging on a mountain, and the interaction of binary vortices. Both static and dynamic refinements are analyzed to determine the strengths and weaknesses of AMR in both complex flows with small-scale features and large-scale smooth flows. The different test cases required different AMR criteria, such as vorticity or height-gradient based thresholds, in order to achieve the best accuracy for cost. The simulations show that the model can accurately resolve key local features without requiring global high-resolution grids. The adaptive grids are able to track features of interest reliably without inducing noise or visible distortions at the coarse–fine interfaces. Finally, the AMR grids keep any degradations of the large-scale smooth flows to a minimum.
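A minimal sketch of a height-gradient refinement criterion of the kind described above: cells whose local gradient magnitude exceeds a threshold are tagged for refinement (a generic illustration, not the Chombo-AMR implementation).

```python
import numpy as np

def tag_cells(h, dx, threshold):
    """Return a boolean mask of cells to refine, flagged where the
    magnitude of the height gradient exceeds the given threshold."""
    gy, gx = np.gradient(h, dx)
    return np.hypot(gx, gy) > threshold

# Hypothetical patch: a Gaussian "mountain" on a flat background layer.
x = np.linspace(-1.0, 1.0, 64)
xx, yy = np.meshgrid(x, x)
h = 1.0 + 0.5 * np.exp(-(xx**2 + yy**2) / 0.05)
mask = tag_cells(h, x[1] - x[0], threshold=1.0)
print(mask.sum(), "of", mask.size, "cells tagged for refinement")
```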
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-09
... wind turbine generators; a substation; administration, operations and maintenance facilities... Action (the ``Refined Project''). Under the Refined Project configuration, only 112 wind turbines... Report for the Pattern Energy Group's Ocotillo Express Wind Energy Project and Proposed California Desert...
Refinement of NMR structures using implicit solvent and advanced sampling techniques.
Chen, Jianhan; Im, Wonpil; Brooks, Charles L
2004-12-15
NMR biomolecular structure calculations exploit simulated annealing methods for conformational sampling and require a relatively high level of redundancy in the experimental restraints to determine quality three-dimensional structures. Recent advances in generalized Born (GB) implicit solvent models should make it possible to combine information from both experimental measurements and accurate empirical force fields to improve the quality of NMR-derived structures. In this paper, we study the influence of implicit solvent on the refinement of protein NMR structures and identify an optimal protocol for utilizing these improved force fields. To do so, we carry out structure refinement experiments for model proteins with published NMR structures using full NMR restraints and subsets of them. We also investigate the application of advanced sampling techniques to NMR structure refinement. Similar to the observations of Xia et al. (J. Biomol. NMR 2002, 22, 317-331), we find that the impact of implicit solvent is rather small when there is a sufficient number of experimental restraints (such as in the final stage of NMR structure determination), whether implicit solvent is used throughout the calculation or only in the final refinement step. The application of advanced sampling techniques also seems to have minimal impact in this case. However, when the experimental data are limited, we demonstrate that refinement with implicit solvent can substantially improve the quality of the structures. In particular, when combined with an advanced sampling technique, the replica exchange (REX) method, near-native structures can be rapidly moved toward the native basin. The REX method provides both enhanced sampling and automatic selection of the most native-like (lowest energy) structures. An optimal protocol based on our studies first generates an ensemble of initial structures that maximally satisfy the available experimental data with conventional NMR software using a simplified force field and then refines these structures with implicit solvent using the REX method. We systematically examine the reliability and efficacy of this protocol using four proteins of various sizes ranging from the 56-residue B1 domain of Streptococcal protein G to the 370-residue Maltose-binding protein. Significant improvement in the structures was observed in all cases when refinement was based on low-redundancy restraint data. The proposed protocol is anticipated to be particularly useful in early stages of NMR structure determination where a reliable estimate of the native fold from limited data can significantly expedite the overall process. This refinement procedure is also expected to be useful when redundant experimental data are not readily available, such as for large multidomain biomolecules and in solid-state NMR structure determination.
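For reference, the replica exchange method referred to above runs parallel copies of the system at inverse temperatures β_1 > β_2 > ... and periodically attempts to swap neighboring replicas i and j, accepting with the Metropolis criterion

```latex
P_{\mathrm{acc}}(i \leftrightarrow j)
  = \min\bigl\{1,\; \exp\bigl[(\beta_i - \beta_j)(E_i - E_j)\bigr]\bigr\},
```

so that low-energy (more native-like, better restraint-satisfying) conformations migrate toward the lowest-temperature replica; this is why REX provides both enhanced sampling and the automatic lowest-energy selection described in the abstract.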
First Prismatic Building Model Reconstruction from Tomosar Point Clouds
NASA Astrophysics Data System (ADS)
Sun, Y.; Shahzad, M.; Zhu, X.
2016-06-01
This paper demonstrates for the first time the potential of explicitly modelling individual roof surfaces to reconstruct 3-D prismatic building models from spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and cutting off the ground terrain. The DSM is smoothed using the BM3D denoising method proposed in (Dabov et al., 2007) and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height- and polygon-complexity-constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. A coarse outline of each roof segment is then reconstructed and later refined using a quadtree-based regularization plus zig-zag line simplification scheme. Finally, a height is associated with each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated over a large building (a convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images with the Tomo-GENESIS software developed at DLR.
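The segmentation stage of this pipeline can be sketched with standard image-processing building blocks; the snippet below is a rough analogue only (Gaussian smoothing stands in for BM3D, and the merging and regularization steps are omitted).

```python
import numpy as np
from scipy import ndimage
from skimage.filters import sobel
from skimage.segmentation import watershed

def oversegment_roofs(dsm, smooth_sigma=2.0, min_height=3.0):
    """Oversegment a DSM (ground already removed, non-building cells = 0)
    into candidate roof regions via a gradient watershed."""
    smoothed = ndimage.gaussian_filter(dsm, smooth_sigma)  # BM3D stand-in
    gradient = sobel(smoothed)                             # height-jump map
    building = smoothed > min_height
    seeds, _ = ndimage.label(building)                     # one seed per blob
    return watershed(gradient, markers=seeds, mask=building)
```

A subsequent merging pass over the resulting label image (joining adjacent regions with similar mean heights, as the abstract describes) would then reduce the oversegmentation to the final roof segments.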
Millán, Claudia; Sammito, Massimo Domenico; McCoy, Airlie J; Nascimento, Andrey F Ziem; Petrillo, Giovanna; Oeffner, Robert D; Domínguez-Gil, Teresa; Hermoso, Juan A; Read, Randy J; Usón, Isabel
2018-04-01
Macromolecular structures can be solved by molecular replacement provided that suitable search models are available. Models from distant homologues may deviate too much from the target structure to succeed, notwithstanding an overall similar fold or even their featuring areas of very close geometry. Successful methods to make the most of such templates usually rely on the degree of conservation to select and improve search models. ARCIMBOLDO_SHREDDER uses fragments derived from distant homologues in a brute-force approach driven by the experimental data, instead of by sequence similarity. The new algorithms implemented in ARCIMBOLDO_SHREDDER are described in detail, illustrating its characteristic aspects in the solution of new and test structures. In an advance from the previously published algorithm, which was based on omitting or extracting contiguous polypeptide spans, model generation now uses three-dimensional volumes respecting structural units. The optimal fragment size is estimated from the expected log-likelihood gain (LLG) values computed assuming that a substructure can be found with a level of accuracy near that required for successful extension of the structure, typically below 0.6 Å root-mean-square deviation (r.m.s.d.) from the target. Better sampling is attempted through model trimming or decomposition into rigid groups and optimization through Phaser's gyre refinement. Also, after model translation, packing filtering and refinement, models are either disassembled into predetermined rigid groups and refined (gimble refinement) or Phaser's LLG-guided pruning is used to trim the model of residues that are not contributing signal to the LLG at the target r.m.s.d. value. Phase combination among consistent partial solutions is performed in reciprocal space with ALIXE. Finally, density modification and main-chain autotracing in SHELXE serve to expand to the full structure and identify successful solutions. The performance on test data and the solution of new structures are described.
2012-01-01
Background: Our companion paper discussed the yield benefits achieved by integrating deacetylation, mechanical refining, and washing with low-acid and low-temperature pretreatment. To evaluate the impact of the modified process on economic feasibility, a techno-economic analysis (TEA) was performed based on the experimental data presented in the companion paper. Results: The cost benefits of dilute acid pretreatment technology combined with the process alternatives of deacetylation, mechanical refining, and pretreated-solids washing were evaluated using cost-benefit analysis within a conceptual modeling framework. Control cases were pretreated at much lower acid loadings and temperatures than those used in the NREL 2011 design case, resulting in much lower annual ethanol production. Therefore, the minimum ethanol selling prices (MESP) of the control cases were $0.41-$0.77 higher than the $2.15/gallon MESP of the design case. This increment is highly dependent on the carbohydrate content of the corn stover. However, if pretreatment was employed with either deacetylation or mechanical refining, the MESPs were reduced by $0.23-$0.30/gallon. Combining both steps lowered the MESP further, by $0.44-$0.54. Washing of the pretreated solids could also greatly improve the final ethanol yields. However, the large capital cost of the solid-liquid separation unit negatively influences the process economics. Finally, sensitivity analysis was performed to study the effect of the cost of the pretreatment reactor and the energy input for mechanical refining. A 50% reduction in the pretreatment reactor cost reduced the MESP of the entire conversion process by $0.11-$0.14/gallon, while a 10-fold increase in energy input for mechanical refining increased the MESP by $0.07/gallon. Conclusion: Deacetylation and mechanical refining process options combined with low-acid, low-severity pretreatments show improvements in ethanol yields and calculated MESP for cellulosic ethanol production. PMID:22967479
Evaluation of MODFLOW-LGR in connection with a synthetic regional-scale model
Vilhelmsen, T.N.; Christensen, S.; Mehl, S.W.
2012-01-01
This work studies the costs and benefits of utilizing local-grid refinement (LGR) as implemented in MODFLOW-LGR to simulate groundwater flow in a buried tunnel valley interacting with a regional aquifer. Two alternative LGR methods were used: the shared-node (SN) method and the ghost-node (GN) method. To conserve flows the SN method requires correction of sources and sinks in cells at the refined/coarse-grid interface. We found that the optimal correction method is case dependent and difficult to identify in practice. However, the results showed little difference and suggest that identifying the optimal method was of minor importance in our case. The GN method does not require corrections at the models' interface, and it uses a simpler head interpolation scheme than the SN method. The simpler scheme is faster but less accurate, so that more iterations may be necessary. However, the GN method solved our flow problem more efficiently than the SN method. The MODFLOW-LGR results were compared with the results obtained using a globally coarse (GC) grid. The LGR simulations required one to two orders of magnitude longer run times than the GC model. However, the improvements of the numerical resolution around the buried valley substantially increased the accuracy of simulated heads and flows compared with the GC simulation. Accuracy further increased locally around the valley flanks when improving the geological resolution using the refined grid. Finally, comparing the MODFLOW-LGR simulation with a globally refined (GR) grid showed that the refinement proportion of the model should not exceed 10% to 15% in order to secure method efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Nan; Carmona, Ruben; Sirak, Igor
Purpose: To demonstrate an efficient method for training and validation of a knowledge-based planning (KBP) system as a radiation therapy clinical trial plan quality-control system. Methods and Materials: We analyzed 86 patients with stage IB through IVA cervical cancer treated with intensity modulated radiation therapy at 2 institutions according to the standards of the INTERTECC (International Evaluation of Radiotherapy Technology Effectiveness in Cervical Cancer, National Clinical Trials Network identifier: 01554397) protocol. The protocol used a planning target volume and 2 primary organs at risk: pelvic bone marrow (PBM) and bowel. Secondary organs at risk were rectum and bladder. Initial unfiltered dose-volume histogram (DVH) estimation models were trained using all 86 plans. Refined training sets were created by removing sub-optimal plans from the unfiltered sample, and DVH estimation models were constructed by identifying 30 of 86 plans emphasizing PBM sparing (comparing the protocol-specified dosimetric cutpoints V10 [percentage volume of PBM receiving at least 10 Gy] and V20 [percentage volume of PBM receiving at least 20 Gy] with unfiltered predictions) and another 30 of 86 plans emphasizing bowel sparing (comparing V40 [absolute volume of bowel receiving at least 40 Gy] and V45 [absolute volume of bowel receiving at least 45 Gy]; 9 in common with the PBM set). To obtain deliverable KBP plans, refined models must inform patient-specific optimization objectives and/or priorities (an auto-planning "routine"). Four candidate routines emphasizing different tradeoffs were composed, and a script was developed to automatically re-plan multiple patients with each routine. After selection of the routine that best met protocol objectives in the 51-patient training sample (KBP_FINAL), protocol-specific DVH metrics and normal tissue complication probability were compared for original versus KBP_FINAL plans across the 35-patient validation set. Paired t tests were used to test differences between planning sets. Results: KBP_FINAL plans outperformed manual planning across the validation set in all protocol-specific DVH cutpoints. The mean normal tissue complication probability for gastrointestinal toxicity was lower for KBP_FINAL versus validation-set plans (48.7% vs 53.8%, P<.001). Similarly, the estimated mean white blood cell count nadir was higher (2.77 vs 2.49 k/mL, P<.001) with KBP_FINAL plans, indicating lowered probability of hematologic toxicity. Conclusions: This work demonstrates that a KBP system can be efficiently trained and refined for use in radiation therapy clinical trials with minimal effort. This patient-specific plan quality control resulted in improvements on protocol-specific dosimetric endpoints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarocki, John Charles; Zage, David John; Fisher, Andrew N.
LinkShop is a software tool for applying the method of Linkography to the analysis of time-sequence data. LinkShop provides command line, web, and application programming interfaces (APIs) for the input and processing of time-sequence data, abstraction models, and ontologies. The software creates graph representations of the abstraction model, the ontology, and the derived linkograph. Finally, the tool allows the user to perform statistical measurements of the linkograph and refine the ontology through direct manipulation of the linkograph.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-12
... and Tube From the People's Republic of China: Final Results and Partial Revocation of 2010/11... and tube (``copper pipe and tube'') from the People's Republic of China (``PRC''). The period of... published Seamless Refined Copper Pipe and Tube From the People's Republic of China: Preliminary Results of...
NASA Astrophysics Data System (ADS)
Shim, Moonsoo; Choi, Ho Gil; Yi, Kyung Woo; Hwang, Il Soon; Lee, Jong Hyeon
2016-11-01
The purification of a LiCl salt mixture has traditionally been carried out by a melt crystallization process. To improve the throughput of zone refining, three heaters were installed in the zone refiner. The zone-refining method was used to grow pure LiCl salt ingots from a LiCl-CsCl-SrCl2 salt mixture. The main investigated parameters were the heater speed and the number of passes. A change in the LiCl crystal grain size was observed along the horizontal direction. From each zone-refined salt ingot, samples were collected horizontally. To analyze the concentrations of Sr and Cs, an inductively coupled plasma optical emission spectrometer and an inductively coupled plasma mass spectrometer were used, respectively. The experimental results show that the Sr and Cs concentrations at the initial region of the ingot were low and reached their peak at the final freezing region of the salt ingot. Concentration results of the zone-refined salt were compared with theoretical results yielded by the proposed model to validate its predictions. The keff values of Sr and Cs were 0.13 and 0.11, respectively. The decontamination factors of Sr and Cs were 450 and 1650, respectively.
Code of Federal Regulations, 2012 CFR
2012-01-01
... AGRICULTURE THE REFINED SUGAR RE-EXPORT PROGRAM, THE SUGAR CONTAINING PRODUCTS RE-EXPORT PROGRAM, AND THE... transaction; (2) The date of the entry, transfer (only a refiner shall report transfers to the Licensing... license number; (5) The country of origin (entry of raw sugar) or final destination (refined exports...
Code of Federal Regulations, 2013 CFR
2013-01-01
... AGRICULTURE THE REFINED SUGAR RE-EXPORT PROGRAM, THE SUGAR CONTAINING PRODUCTS RE-EXPORT PROGRAM, AND THE... transaction; (2) The date of the entry, transfer (only a refiner shall report transfers to the Licensing... license number; (5) The country of origin (entry of raw sugar) or final destination (refined exports...
Code of Federal Regulations, 2014 CFR
2014-01-01
... AGRICULTURE THE REFINED SUGAR RE-EXPORT PROGRAM, THE SUGAR CONTAINING PRODUCTS RE-EXPORT PROGRAM, AND THE... transaction; (2) The date of the entry, transfer (only a refiner shall report transfers to the Licensing... license number; (5) The country of origin (entry of raw sugar) or final destination (refined exports...
Code of Federal Regulations, 2011 CFR
2011-01-01
... AGRICULTURE THE REFINED SUGAR RE-EXPORT PROGRAM, THE SUGAR CONTAINING PRODUCTS RE-EXPORT PROGRAM, AND THE... transaction; (2) The date of the entry, transfer (only a refiner shall report transfers to the Licensing... license number; (5) The country of origin (entry of raw sugar) or final destination (refined exports...
Code of Federal Regulations, 2010 CFR
2010-01-01
... AGRICULTURE THE REFINED SUGAR RE-EXPORT PROGRAM, THE SUGAR CONTAINING PRODUCTS RE-EXPORT PROGRAM, AND THE... transaction; (2) The date of the entry, transfer (only a refiner shall report transfers to the Licensing... license number; (5) The country of origin (entry of raw sugar) or final destination (refined exports...
Mesh Convergence Requirements for Composite Damage Models
NASA Technical Reports Server (NTRS)
Davila, Carlos G.
2016-01-01
The ability of the finite element method to accurately represent the response of objects with intricate geometry and loading makes it an extremely versatile technique for structural analysis. Finite element analysis is routinely used in industry to calculate deflections, stress concentrations, natural frequencies, buckling loads, and much more. The method works by discretizing complex problems into smaller, simpler approximations that are valid over small uniform domains. For common analyses, the maximum size of the elements that can be used can often be determined by experience. However, to verify the quality of a solution, analyses with several levels of mesh refinement should be performed to ensure that the solution has converged. In recent years, the finite element method has been used to calculate the resistance of structures, and in particular that of composite structures. A number of techniques such as cohesive zone modeling, the virtual crack closure technique, and continuum damage modeling have emerged that can be used to predict cracking, delaminations, fiber failure, and other composite damage modes that lead to structural collapse. However, damage models present mesh refinement requirements that are not well understood. In this presentation, we examine different mesh refinement issues related to the representation of damage in composite materials. Damage process zone sizes and their corresponding mesh requirements will be discussed. The difficulties of modeling discontinuities and the associated need for regularization techniques will be illustrated, and some unexpected element size constraints will be presented. Finally, some of the difficulties in constructing models of composite structures capable of predicting transverse matrix cracking will be discussed. It will be shown that predicting the initiation and propagation of transverse matrix cracks, their density, and their saturation may require models that are significantly more refined than those that have been contemplated in the past.
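A convergence study of the kind recommended above can be automated as a loop over successively refined meshes; the sketch below stops when the quantity of interest changes by less than a tolerance between levels (the `solve` callable, refinement ratio, and tolerance are placeholders, not tied to any particular FE code):

```python
# Sketch of a mesh-convergence check: refine until the quantity of interest
# changes by less than a relative tolerance between successive meshes.
def converged_solution(solve, h0, ratio=2.0, tol=1e-3, max_levels=6):
    h, prev = h0, solve(h0)
    for _ in range(max_levels):
        h /= ratio
        curr = solve(h)
        if abs(curr - prev) <= tol * abs(curr):
            return curr, h
        prev = curr
    raise RuntimeError("mesh not converged within max_levels refinements")

# Demo with a manufactured "solver" whose error scales as O(h^2):
value, h = converged_solution(lambda h: 1.0 + 0.5 * h**2, h0=1.0)
print(value, h)
```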
Bakas, Spyridon; Zeng, Ke; Sotiras, Aristeidis; Rathore, Saima; Akbari, Hamed; Gaonkar, Bilwaj; Rozycki, Martin; Pati, Sarthak; Davatzikos, Christos
2016-01-01
We present an approach for segmenting low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative approach based on an Expectation-Maximization framework that incorporates a glioma growth model is used to segment the brain scans into tumor, as well as healthy tissue labels. Secondly, a gradient boosting multi-class classification scheme is used to refine tumor labels based on information from multiple patients. Lastly, a probabilistic Bayesian strategy is employed to further refine and finalize the tumor segmentation based on patient-specific intensity statistics from the multiple modalities. We evaluated our approach in 186 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2015 challenge and report promising results. During the testing phase, the algorithm was additionally evaluated in 53 unseen cases, achieving the best performance among the competing methods.
75 FR 66761 - Agency Information Collection Activities: Final Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-29
... strengthen sanctions against Iran, the Act contains language prohibiting Ex-Im Bank from: Authoriz[ing] any... refiner that continues to: (A) Provide Iran with significant refined petroleum resources; (B) materially contribute to Iran's capability to import refined petroleum resources; or (C) allow Iran to maintain or...
Tabberer, Maggie; Gonzalez-McQuire, Sebastian; Muellerova, Hana; Briggs, Andrew H; Rutten-van Mölken, Maureen P M H; Chambers, Mike; Lomas, David A
2017-05-01
To develop and validate a new conceptual model (CM) of chronic obstructive pulmonary disease (COPD) for use in disease progression and economic modeling. The CM identifies and describes qualitative associations between disease attributes, progression and outcomes. A literature review was performed to identify any published CMs or literature reporting the impact and association of COPD disease attributes with outcomes. After critical analysis of the literature, a Steering Group of experts from the disciplines of health economics, epidemiology and clinical medicine was convened to develop a draft CM, which was refined using a Delphi process. The refined CM was validated by testing for associations between attributes using data from the Evaluation of COPD Longitudinally to Identify Predictive Surrogate Endpoints (ECLIPSE). Disease progression attributes included in the final CM were history and occurrence of exacerbations, lung function, exercise capacity, signs and symptoms (cough, sputum, dyspnea), cardiovascular disease comorbidities, 'other' comorbidities (including depression), body composition (body mass index), fibrinogen as a biomarker, smoking and demographic characteristics (age, gender). Mortality and health-related quality of life were determined to be the most relevant final outcome measures for this model, intended to be the foundation of an economic model of COPD. The CM is being used as the foundation for developing a new COPD model of disease progression and to provide a framework for the analysis of patient-level data. The CM is available as a reference for the implementation of further disease progression and economic models.
ERIC Educational Resources Information Center
Washington Univ., Seattle. Child Development and Mental Retardation Center.
The report documents the progress and accomplishments of the SEFAM (Supporting Extended Family Members) Program, which developed model programs for fathers, siblings, and grandparents. The first section summarizes staff efforts for five project objectives: (1) to develop, expand, test, and refine the pilot "Fathers and Infants/Toddlers"…
Protein structure modeling and refinement by global optimization in CASP12.
Hong, Seung Hwan; Joung, InSuk; Flores-Canales, Jose C; Manavalan, Balachandran; Cheng, Qianyi; Heo, Seungryong; Kim, Jong Yun; Lee, Sun Young; Nam, Mikyung; Joo, Keehyoung; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung
2018-03-01
For protein structure modeling in the CASP12 experiment, we have developed a new protocol based on our previous CASP11 approach. The global optimization method of conformational space annealing (CSA) was applied to 3 stages of modeling: multiple sequence-structure alignment, three-dimensional (3D) chain building, and side-chain re-modeling. For better template selection and model selection, we updated our model quality assessment (QA) method with the newly developed SVMQA (support vector machine for quality assessment). For 3D chain building, we updated our energy function by including restraints generated from predicted residue-residue contacts. New energy terms for the predicted secondary structure and predicted solvent accessible surface area were also introduced. For difficult targets, we proposed a new method, LEEab, where the template term played a less significant role than it did in LEE, complemented by increased contributions from other terms such as the predicted contact term. For TBM (template-based modeling) targets, LEE performed better than LEEab, but for FM targets, LEEab was better. For model refinement, we modified our CASP11 molecular dynamics (MD) based protocol by using explicit solvents and tuning down restraint weights. Refinement results from MD simulations that used a new augmented statistical energy term in the force field were quite promising. Finally, when using inaccurate information (such as the predicted contacts), it was important to use the Lorentzian function for which the maximal penalty arising from wrong information is always bounded. © 2017 Wiley Periodicals, Inc.
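The point about the Lorentzian function is that its penalty saturates, so a wrong predicted contact can contribute at most a bounded energy. A hedged sketch comparing it with a harmonic penalty (the functional form here is assumed for illustration and may differ from the protocol's exact term):

```python
# Bounded (Lorentzian-type) restraint vs. a harmonic penalty: the Lorentzian
# cost approaches w as the deviation grows, so bad restraints cannot dominate.
def harmonic(d, d0, w):
    return w * (d - d0) ** 2

def lorentzian(d, d0, w, c):
    dev2 = (d - d0) ** 2
    return w * dev2 / (dev2 + c ** 2)   # -> w as |d - d0| -> infinity

for dev in (0.5, 2.0, 10.0):            # deviation from an 8 A target distance
    print(dev, harmonic(dev + 8.0, 8.0, 1.0),
          round(lorentzian(dev + 8.0, 8.0, 1.0, 1.5), 3))
```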
Formal analysis of imprecise system requirements with Event-B.
Le, Hong Anh; Nakajima, Shin; Truong, Ninh Thuan
2016-01-01
Formal analysis of functional properties of system requirements needs precise descriptions. However, the stakeholders sometimes describe the system with ambiguous, vague or fuzzy terms, hence formal frameworks for modeling and verifying such requirements are desirable. The Fuzzy If-Then rules have been used for imprecise requirements representation, but verifying their functional properties still needs new methods. In this paper, we propose a refinement-based modeling approach for specification and verification of such requirements. First, we introduce a representation of imprecise requirements in the set theory. Then we make use of Event-B refinement providing a set of translation rules from Fuzzy If-Then rules to Event-B notations. After that, we show how to verify both safety and eventuality properties with RODIN/Event-B. Finally, we illustrate the proposed method on the example of Crane Controller.
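As a toy illustration of the set-theoretic reading of a fuzzy If-Then rule on the crane example (the paper's actual translation targets Event-B guards and events, not Python; the rule, the HIGH set, and the crane behavior below are invented):

```python
# Toy reading of the fuzzy rule "IF level is HIGH THEN speed is SLOW" as a
# set-membership guard with a deterministic action.
HIGH = set(range(70, 101))      # hedge: "high" levels taken here as 70..100

def crane_controller(level):
    if level in HIGH:           # guard: level in HIGH
        return "SLOW"           # action: speed := SLOW
    return "NORMAL"

print(crane_controller(85), crane_controller(40))   # SLOW NORMAL
```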
A trace map comparison algorithm for the discrete fracture network models of rock masses
NASA Astrophysics Data System (ADS)
Han, Shuai; Wang, Gang; Li, Mingchao
2018-06-01
Discrete fracture networks (DFN) are widely used to build refined geological models. However, validating whether a refined model matches reality is a crucial problem that determines whether the model can be used for analysis. The current validation methods include numerical validation and graphical validation. However, graphical validation, which estimates the similarity between a simulated trace map and the real trace map by visual observation, is subjective. In this paper, an algorithm for the graphical validation of DFN is developed. Four main indicators, including total gray, gray grade curve, characteristic direction, and gray density distribution curve, are presented to assess the similarity between two trace maps. A modified Radon transform and a loop cosine similarity are presented, based on the Radon transform and cosine similarity, respectively. In addition, the use of Bézier curves to reduce the edge effect is described. Finally, a case study shows that the new algorithm can effectively distinguish which simulated trace map is more similar to the real trace map.
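One plausible reading of "loop cosine similarity" is ordinary cosine similarity maximized over circular shifts, so that two angular profiles differing only in the origin of orientation still match; the exact definition in the paper may differ:

```python
# Cosine similarity maximized over circular shifts of one profile, so a pure
# rotation of the angular origin does not reduce the score.
import numpy as np

def loop_cosine_similarity(a, b):
    a = np.asarray(a, float)
    best = -1.0
    for s in range(len(b)):
        bs = np.roll(b, s)
        cos = np.dot(a, bs) / (np.linalg.norm(a) * np.linalg.norm(bs))
        best = max(best, cos)
    return best

profile_real = [3, 1, 0, 2, 5, 4]
profile_sim  = [5, 4, 3, 1, 0, 2]   # same profile, rotated by two bins
print(loop_cosine_similarity(profile_real, profile_sim))  # ~1.0
```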
John, Shalini; Thangapandian, Sundarapandian; Lee, Keun Woo
2012-01-01
Human pancreatic cholesterol esterase (hCEase) is one of the lipases involved in the digestion of a broad spectrum of substrates including triglycerides, phospholipids, cholesteryl esters, etc. The presence of bile salts is very important for the activation of hCEase. Molecular dynamics simulations were performed for the apo form and bile salt complexed form of hCEase using the coordinates of two bile salts from bovine CEase. The stability of the systems throughout the simulation time was checked, and two representative structures from the highly populated regions were selected using cluster analysis. These two representative structures were used in pharmacophore model generation. The generated pharmacophore models were validated and used in database screening. The screened hits were refined for their drug-like properties based on Lipinski's rule of five and ADMET properties. The drug-like compounds were further refined by molecular docking simulation using the GOLD program based on the GOLD fitness score, mode of binding, and molecular interactions with the active site amino acids. Finally, three hits of novel scaffolds were selected as potential leads to be used in novel and potent hCEase inhibitor design. The stability of the binding modes and molecular interactions of these final hits was confirmed by molecular dynamics simulations.
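Lipinski's rule of five, used above for drug-likeness filtering, is simple to state in code; the sketch below assumes descriptor values have already been computed by a cheminformatics toolkit, and follows the common convention of tolerating one violation:

```python
# Rule-of-five filter over precomputed descriptors (values below are invented).
def passes_lipinski(mol):
    violations = sum([
        mol["mol_weight"] > 500.0,       # molecular weight <= 500 Da
        mol["logp"] > 5.0,               # octanol-water logP <= 5
        mol["h_bond_donors"] > 5,        # <= 5 H-bond donors
        mol["h_bond_acceptors"] > 10,    # <= 10 H-bond acceptors
    ])
    return violations <= 1               # one violation commonly tolerated

hit = {"mol_weight": 412.5, "logp": 3.1, "h_bond_donors": 2, "h_bond_acceptors": 6}
print(passes_lipinski(hit))  # True
```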
NASA Astrophysics Data System (ADS)
Chakraborty, Abhishek
Detection of particulate matter thinly dispersed in a fluid medium with the aid of the difference in electrical conductivity between the pure fluid and the particles has been practiced for the last 50 to 60 years. The first such instruments were employed to measure cell counts in samples of biological fluid. Following a detailed study of the physics and principles operating within the device, called the Electric Sensing Zone (ESZ) principle, a new device called the Liquid Metal Cleanliness Analyzer (LiMCA) was invented that could measure and count inclusion particles in molten metal. It provided a fast and fairly accurate tool for online measurement of the quality of steel during refining and casting operations. Along similar lines of development as the LiMCA, a water analogue of the device, the Aqueous Particle Sensor (APS), was developed for physical modeling experiments of metal refining operations involving water models. The APS can detect and measure simulated inclusion particles added to the working fluid (water). The present study involves the design, construction, and final application of a new and improved APS in water modeling experiments to study inclusion behavior in a tundish operation. The custom-built instrument shows superior performance and applicability in experiments involving physical modeling of metal refining operations compared to its commercial counterparts. In addition to higher accuracy and a wider range of operating parameters, its capability to take real-time experimental data for extended periods helps to reduce the total number of experiments required to reach a result, and makes it suitable for analyzing temporal changes occurring in unsteady systems. With the modern impetus on the quality of the final product of metallurgical operations, the new APS can prove to be an indispensable research tool to study and put forward innovative design and parametric changes in industrially practised metallurgical operations.
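For orientation, the classical first-order ESZ (Coulter) relation estimates the resistive pulse produced by a non-conducting sphere passing the sensing orifice; the numbers below are illustrative only and not taken from the study:

```python
# First-order ESZ estimate: a non-conducting sphere of diameter d passing an
# orifice of diameter D in a fluid of resistivity rho changes the orifice
# resistance by roughly dR = 4*rho*d**3 / (pi*D**4), valid for d << D.
import math

def esz_pulse(rho_ohm_m, d_particle_m, d_orifice_m):
    return 4.0 * rho_ohm_m * d_particle_m**3 / (math.pi * d_orifice_m**4)

# Illustrative water-model numbers: tap-water-like resistivity, 50 um particle.
dR = esz_pulse(rho_ohm_m=0.6, d_particle_m=50e-6, d_orifice_m=300e-6)
print(f"resistance pulse ~ {dR:.2f} ohm")
```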
Design and realization of high quality prime farmland planning and management information system
NASA Astrophysics Data System (ADS)
Li, Manchun; Liu, Guohong; Liu, Yongxue; Jiang, Zhixin
2007-06-01
The article discusses the design and realization of a high quality prime farmland planning and management information system based on SDSS. Models for concept integration and management planning are used in high quality prime farmland planning in order to refine the current model system, and the management information system is designed with a triangular structure. Finally, an example, the Tonglu County high quality prime farmland planning and management information system, is introduced.
ERIC Educational Resources Information Center
Holder, Loreta
This final report describes a federally-funded project that was designed to provide a model for service delivery to severely physically involved infants and their families living in the highly rural area of West Alabama. The project developed and refined an eclectic treatment approach known as Developmental Physical Management Techniques (DPMT).…
"2+2" Articulated Health Occupations Project. Nursing Program. Second Year Final Report.
ERIC Educational Resources Information Center
Paris Independent School District, TX.
A project was conducted to develop a 2 + 2 articulated training program in health careers to link the last 2 years of secondary and the first 2 years of postsecondary training. During the second year of the secondary project, the first year of training was implemented and the model program was further developed and refined. Project tasks included…
2011-01-01
Background Available measures of patient-reported outcomes for complementary and alternative medicine (CAM) inadequately capture the range of patient-reported treatment effects. The Self-Assessment of Change questionnaire was developed to measure multi-dimensional shifts in well-being for CAM users. With content derived from patient narratives, items were subsequently focused through interviews on a new cohort of participants. Here we present the development of the final version in which the content and format are refined through cognitive interviews. Methods We conducted cognitive interviews across five iterations of questionnaire refinement with a culturally diverse sample of 28 CAM users. In each iteration, participant critiques were used to revise the questionnaire, which was then re-tested in subsequent rounds of cognitive interviews. Following all five iterations, transcripts of cognitive interviews were systematically coded and analyzed to examine participants' understanding of the format and content of the final questionnaire. Based on these data, we established summary descriptions and selected exemplar quotations for each word pair on the final questionnaire. Results The final version of the Self-Assessment of Change questionnaire (SAC) includes 16 word pairs, nine of which remained unchanged from the original draft. Participants consistently said that these stable word pairs represented opposite ends of the same domain of experience, and the meanings of these terms were stable across the participant pool. Five pairs underwent revision and two word pairs were added. Four word pairs were eliminated for redundancy or because participants did not agree on the meaning of the terms. Cognitive interviews indicate that participants understood the format of the questionnaire and considered each word pair to represent opposite poles of a shared domain of experience. Conclusions We have placed lay language and direct experience at the center of questionnaire revision and refinement. In so doing, we provide an innovative model for the development of truly patient-centered outcome measures. Although this instrument was designed and tested in a CAM-specific population, it may be useful in assessing multi-dimensional shifts in well-being across a broader patient population. PMID:22206409
40 CFR Appendix B to Part 97 - Final Section 126 Rule: Non-EGU Allocations, 2004-2007
Code of Federal Regulations, 2010 CFR
2010-07-01
... Lawrence SOUTH POINT ETHANOL 0744000009 B007 107 OH Lucas SUN REFINING & MARKETING CO, TOLEDO REF 0448010246 B044 47 OH Lucas SUN REFINING & MARKETING CO, TOLEDO REF 0448010246 B046 34 OH Lucas SUN REFINING... SHENANGO IRON & COKE WORKS 0050 006 18 PA Allegheny SHENANGO IRON & COKE WORKS 0050 009 15 PA Delaware SUN...
40 CFR Appendix B to Part 97 - Final Section 126 Rule: Non-EGU Allocations, 2004-2007
Code of Federal Regulations, 2013 CFR
2013-07-01
... Lawrence SOUTH POINT ETHANOL 0744000009 B007 107 OH Lucas SUN REFINING & MARKETING CO, TOLEDO REF 0448010246 B044 47 OH Lucas SUN REFINING & MARKETING CO, TOLEDO REF 0448010246 B046 34 OH Lucas SUN REFINING... SHENANGO IRON & COKE WORKS 0050 006 18 PA Allegheny SHENANGO IRON & COKE WORKS 0050 009 15 PA Delaware SUN...
40 CFR Appendix B to Part 97 - Final Section 126 Rule: Non-EGU Allocations, 2004-2007
Code of Federal Regulations, 2012 CFR
2012-07-01
... Lawrence SOUTH POINT ETHANOL 0744000009 B007 107 OH Lucas SUN REFINING & MARKETING CO, TOLEDO REF 0448010246 B044 47 OH Lucas SUN REFINING & MARKETING CO, TOLEDO REF 0448010246 B046 34 OH Lucas SUN REFINING... SHENANGO IRON & COKE WORKS 0050 006 18 PA Allegheny SHENANGO IRON & COKE WORKS 0050 009 15 PA Delaware SUN...
40 CFR Appendix B to Part 97 - Final Section 126 Rule: Non-EGU Allocations, 2004-2007
Code of Federal Regulations, 2014 CFR
2014-07-01
... Lawrence SOUTH POINT ETHANOL 0744000009 B007 107 OH Lucas SUN REFINING & MARKETING CO, TOLEDO REF 0448010246 B044 47 OH Lucas SUN REFINING & MARKETING CO, TOLEDO REF 0448010246 B046 34 OH Lucas SUN REFINING... SHENANGO IRON & COKE WORKS 0050 006 18 PA Allegheny SHENANGO IRON & COKE WORKS 0050 009 15 PA Delaware SUN...
Feasibility of developing LSI microcircuit reliability prediction models
NASA Technical Reports Server (NTRS)
Ryerson, C. M.
1972-01-01
In the proposed modeling approach, when any of the essential key factors are not known initially, they can be approximated in various ways with a known impact on the accuracy of the final predictions. For example, on any program where reliability predictions are started at interim states of project completion, a priori approximate estimates of the key factors are established for making preliminary predictions. Later these are refined for greater accuracy as subsequent program information of a more definitive nature becomes available. Specific steps to develop, validate and verify these new models are described.
Ke, A; Barter, Z; Rowland‐Yeo, K
2016-01-01
In this study, we present efavirenz physiologically based pharmacokinetic (PBPK) model development as an example of our best practice approach that uses a stepwise approach to verify the different components of the model. First, a PBPK model for efavirenz incorporating in vitro and clinical pharmacokinetic (PK) data was developed to predict exposure following multiple dosing (600 mg q.d.). Alfentanil i.v. and p.o. drug‐drug interaction (DDI) studies were utilized to evaluate and refine the CYP3A4 induction component in the liver and gut. Next, independent DDI studies with substrates of CYP3A4 (maraviroc, atazanavir, and clarithromycin) and CYP2B6 (bupropion) verified the induction components of the model (area under the curve [AUC] ratios within 1.0–1.7‐fold of observed). Finally, the model was refined to incorporate the fractional contribution of enzymes, including CYP2B6, propagating autoinduction into the model (Racc 1.7 vs. 1.7 observed). This validated mechanistic model can now be applied in clinical pharmacology studies to prospectively assess both the victim and perpetrator DDI potential of efavirenz. PMID:27435752
The Rene 150 directionally solidified superalloy turbine blades, volume 1
NASA Technical Reports Server (NTRS)
Deboer, G. J.
1981-01-01
Turbine blade design and analysis, preliminary Rene 150 system refinement, coating adaptation and evaluation, final Rene 150 system refinement, component-test blade production and evaluation, engine-test blade production, and engine test are discussed.
ERIC Educational Resources Information Center
Schure, Alexander
A computer-based system model for the monitoring and management of the instructional process was conceived, developed and refined through the techniques of systems analysis. This report describes the various aspects and components of this project in a series of independent and self-contained units. The first unit provides an overview of the entire…
Computational cost of two alternative formulations of Cahn-Hilliard equations
NASA Astrophysics Data System (ADS)
Paszyński, Maciej; Gurgul, Grzegorz; Łoś, Marcin; Szeliga, Danuta
2018-05-01
In this paper we propose two formulations of the Cahn-Hilliard equations, which have several applications in cancer growth modeling and material science phase-field simulations. The first formulation uses one C4 partial differential equation (PDE); the second uses two C2 PDEs. Finally, we compare the computational costs of direct solvers for both formulations, using the refined isogeometric analysis (rIGA) approach.
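For reference, the two formulations presumably correspond to the standard primal and split (mixed) forms of the Cahn-Hilliard equation, written below with assumed notation (mobility M, interface parameter gamma, bulk potential f); the primal form involves fourth derivatives of u, while each equation of the split form involves only second derivatives:

```latex
% Primal form: a single fourth-order PDE in u (classically C^4 in space):
\frac{\partial u}{\partial t}
  = \nabla \cdot \Big( M \, \nabla \big( f'(u) - \gamma \, \Delta u \big) \Big)
% Split (mixed) form: two second-order PDEs in (u, \mu) (classically C^2):
\mu = f'(u) - \gamma \, \Delta u , \qquad
\frac{\partial u}{\partial t} = \nabla \cdot \big( M \, \nabla \mu \big)
```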
Petroleum Refining Industry Final Air Toxics Rule Fact Sheets
This page contains a July 1995 fact sheet for the final NESHAP for Petroleum Refineries. This page also contains a June 2013 fact sheet with information regarding the final amendments to the 2013 final rule for the NESHAP.
Embodied Agents, E-SQ and Stickiness: Improving Existing Cognitive and Affective Models
NASA Astrophysics Data System (ADS)
de Diesbach, Pablo Brice
This paper synthesizes results from two previous studies of embodied virtual agents on commercial websites. We analyze and criticize the proposed models and discuss the limits of the experimental findings. Results from other important research in the literature are integrated. We also integrate concepts from deeper, more business-related analysis of the mechanisms of rhetoric in marketing and communication, and of the possible role of E-SQ in man-agent interaction. Finally, we suggest a refined model for the impacts of these agents on web site users and comment on the limits of the improved model.
The Environmental Protection Agency (EPA) issued a final rule that will significantly improve air quality in neighborhoods near petroleum refineries through more comprehensive control of emissions...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savage, B; Peter, D; Covellone, B
2009-07-02
Efforts to update current wave speed models of the Middle East require a thoroughly tested database of sources and recordings. Recordings of seismic waves traversing the region from Tibet to the Red Sea will be the principal metric in guiding improvements to the current wave speed model. Precise characterizations of the earthquakes, specifically depths and faulting mechanisms, are essential to avoid mapping source errors into the refined wave speed model. Errors associated with the source are manifested in amplitude and phase changes. Source depths and paths near nodal planes are particularly error prone, as small changes may severely affect the resulting wavefield. Once sources are quantified, regions requiring refinement will be highlighted using adjoint tomography methods based on spectral element simulations [Komatitsch and Tromp (1999)]. An initial database of 250 regional Middle Eastern events from 1990-2007 was inverted for depth and focal mechanism using teleseismic arrivals [Kikuchi and Kanamori (1982)] and regional surface and body waves [Zhao and Helmberger (1994)]. From this initial database, we reinterpreted a large, well recorded subset of 201 events through a direct comparison between data and synthetics based upon a centroid moment tensor inversion [Liu et al. (2004)]. Evaluation was done using both a 1D reference model [Dziewonski and Anderson (1981)] at periods greater than 80 seconds and a 3D model [Kustowski et al. (2008)] at periods of 25 seconds and longer. The final source reinterpretations will be within the 3D model, as this is the initial starting point for the adjoint tomography. Transitioning from a 1D to a 3D wave speed model shows dramatic improvements when comparisons are done at shorter periods (25 s). Synthetics from the 1D model were created through mode summations, while those from the 3D simulations were created using the spectral element method. To further assess errors in source depth and focal mechanism, comparisons between the three methods were made. These comparisons help to identify problematic stations and sources which may bias the final solution. Estimates of standard errors were generated for each event's source depth and focal mechanism to identify poorly constrained events. A final, well characterized set of sources and stations will then be used to iteratively improve the wave speed model of the Middle East. After a few iterations during the adjoint inversion process, the sources will be reexamined and relocated to further reduce mapping of source errors into structural features. Finally, efforts continue in developing the infrastructure required to 'quickly' generate event kernels at the n-th iteration and invert for a new, (n+1)-th, wave speed model of the Middle East. While development of the infrastructure proceeds, initial tests using a limited number of events show that the 3D model, while showing vast improvement compared to the 1D model, still requires substantial modifications. Employing our new, full source set and iterating the adjoint inversions at successively shorter periods will lead to significant changes and refined wave speed structures of the Middle East.
Refining of metallurgical-grade silicon
NASA Technical Reports Server (NTRS)
Dietl, J.
1986-01-01
A basic requirement of large scale solar cell fabrication is to provide low cost base material. Unconventional refining of metallurgical grade silicon represents one of the most promising ways of silicon meltstock processing. The refining concept is based on an optimized combination of metallurgical treatments. Commercially available crude silicon, in this sequence, requires a first pyrometallurgical step by slagging, or, alternatively, solvent extraction by aluminum. After grinding and leaching, high purity quality is gained as an advanced stage of refinement. To reach solar grade quality a final pyrometallurgical step is needed: liquid-gas extraction.
High-Q Superconducting Coplanar Waveguide Resonators for Integration into Molecule Ion Traps
2010-05-01
W_m = |V_1|^2 C / 4 (3.13) and W_e = |V_1|^2 / (4 ω^2 L) (3.14), finally yielding Q = ω_0 · 2W_m / P_loss = R / (ω_0 L) = ω_0 R C (3.15), where ω_0 = 1/√(LC) is the resonant frequency of the... small. The primary challenge with simulating the microresonators was refining the mesh while remaining under memory limits of the modeling computer. It
ERIC Educational Resources Information Center
Goodman, Kenneth S.; Goodman, Yetta M.
Research conducted to refine and perfect a theory and model of the reading process is presented in this report. Specifically, studies of the reading miscues of 96 students who were either speakers of English as a second language or of stable, rural dialects are detailed. Chapters deal with the following topics: methodology, the reading process,…
On macromolecular refinement at subatomic resolution withinteratomic scatterers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.
2007-11-09
A study of the accurate electron density distribution in molecular crystals at subatomic resolution, better than ~1.0 Å, requires more detailed models than those based on independent spherical atoms. A tool conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 Å, the number of experimental data is insufficient for the full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark datasets gave results comparable in quality with results of multipolar refinement and superior to those for conventional models. Applications to several datasets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
On macromolecular refinement at subatomic resolution with interatomic scatterers
Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.; Lunin, Vladimir Y.; Urzhumtsev, Alexandre
2007-01-01
A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package. PMID:18007035
On macromolecular refinement at subatomic resolution with interatomic scatterers.
Afonine, Pavel V; Grosse-Kunstleve, Ralf W; Adams, Paul D; Lunin, Vladimir Y; Urzhumtsev, Alexandre
2007-11-01
A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than approximately 1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
The blind leading the blind: Mutual refinement of approximate theories
NASA Technical Reports Server (NTRS)
Kedar, Smadar T.; Bresina, John L.; Dent, C. Lisa
1991-01-01
The mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models which cause the failures. The approach focuses on using one approximate model to refine another.
K-Means Subject Matter Expert Refined Topic Model Methodology
2017-01-01
K-Means Subject Matter Expert Refined Topic Model Methodology: Topic Model Estimation via K-Means. U.S. Army TRADOC Analysis Center-Monterey, 700 Dyer Road... January 2017. Theodore T. Allen, Ph.D.; Zhenhuan... Contract number: W9124N-15-P-0022
NASA Astrophysics Data System (ADS)
Roecker, S.; Thurber, C.; Shuler, A.; Liu, Y.; Zhang, H.; Powell, L.
2005-12-01
Five years of effort collecting and analyzing earthquake and explosion data in the vicinity of the SAFOD drill site culminated in the determination of the final trajectory for summer 2005's Phase 2 drilling. The trajectory was defined to optimize the chance of reaching one of two adjacent M2 "target earthquake" fault patches, whose centroids are separated horizontally by about 50 meters, with one or more satellite coreholes planned for Phase 3 drilling in summer 2007. Some of the most critical data for the final targeting were explosion data recorded on a Paulsson Geophysical Services, Inc., 80-element 3-component borehole string and earthquake data recorded on a pair of 3-component Duke University geophones in the SAFOD borehole. We are now utilizing the full 5-year dataset to refine our knowledge of three-dimensional (3D) crustal structure, wave propagation characteristics, and earthquake locations around SAFOD. These efforts are proceeding in parallel in several directions. Improved picks from a careful reanalysis of shear waves observed on the PASO array will be used in deriving an improved tomographic 3D wavespeed model. We are using finite-difference waveform modeling to investigate waveform complexity for earthquakes in and near the target region, including fault-zone head waves and strong secondary S-wave arrivals. A variety of waveform imaging methods are being applied to image fine-scale 3D structure and subsurface scatterers, including fault zones. In the process, we aim to integrate geophysical logging and geologic observations with our models to try to associate the target region earthquake activity, which is occurring on two fault strands about 280 meters apart, with shear zones encountered in the SAFOD Phase-2 borehole. These observations will be augmented and the target earthquake locations further refined over the next 2 years through downhole and surface recording of natural earthquakes and surface shots conducted at PASO station locations.
GPCR-ModSim: A comprehensive web based solution for modeling G-protein coupled receptors
Esguerra, Mauricio; Siretskiy, Alexey; Bello, Xabier; Sallander, Jessica; Gutiérrez-de-Terán, Hugo
2016-01-01
GPCR-ModSim (http://open.gpcr-modsim.org) is a centralized and easy-to-use service dedicated to the structural modeling of G-protein Coupled Receptors (GPCRs). 3D molecular models can be generated from the amino acid sequence by homology-modeling techniques, considering different receptor conformations. GPCR-ModSim includes a membrane insertion and molecular dynamics (MD) equilibration protocol, which can be used to refine the generated model or any GPCR structure uploaded to the server, including if desired non-protein elements such as orthosteric or allosteric ligands, structural waters or ions. We herein review the main characteristics of GPCR-ModSim and present new functionalities. The templates used for homology modeling have been updated considering the latest structural data, with separate profile structural alignments built for inactive, partially-active and active groups of templates. We have also added the possibility to perform multiple-template homology modeling in a unique and flexible way. Finally, our new MD protocol considers a series of distance restraints derived from a recently identified conserved network of helical contacts, allowing for a smoother refinement of the generated models, which is particularly advised when there is low homology to the available templates. GPCR-ModSim has been tested on the GPCR Dock 2013 competition with satisfactory results. PMID:27166369
Real-space refinement in PHENIX for cryo-EM and crystallography
Afonine, Pavel V.; Poon, Billy K.; Read, Randy J.; ...
2018-06-01
This work describes the implementation of real-space refinement in the phenix.real_space_refine program from the PHENIX suite. The use of a simplified refinement target function enables very fast calculation, which in turn makes it possible to identify optimal data-restraint weights as part of routine refinements with little runtime cost. Refinement of atomic models against low-resolution data benefits from the inclusion of as much additional information as is available. In addition to standard restraints on covalent geometry, phenix.real_space_refine makes use of extra information such as secondary-structure and rotamer-specific restraints, as well as restraints or constraints on internal molecular symmetry. The re-refinement of 385 cryo-EM-derived models available in the Protein Data Bank at resolutions of 6 Å or better shows significant improvement of the models and of the fit of these models to the target maps.
Real-space refinement in PHENIX for cryo-EM and crystallography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afonine, Pavel V.; Poon, Billy K.; Read, Randy J.
This work describes the implementation of real-space refinement in the phenix.real_space_refine program from the PHENIX suite. The use of a simplified refinement target function enables very fast calculation, which in turn makes it possible to identify optimal data-restraint weights as part of routine refinements with little runtime cost. Refinement of atomic models against low-resolution data benefits from the inclusion of as much additional information as is available. In addition to standard restraints on covalent geometry, phenix.real_space_refine makes use of extra information such as secondary-structure and rotamer-specific restraints, as well as restraints or constraints on internal molecular symmetry. The re-refinement of 385 cryo-EM-derived models available in the Protein Data Bank at resolutions of 6 Å or better shows significant improvement of the models and of the fit of these models to the target maps.
EPA has published a Direct Final Rule that addresses requirements for parties that handle pipeline interface as well as addresses downstream quality assurance requirements for refiners (EPA publication # EPA-420-F-06-039).
ERIC Educational Resources Information Center
Garrett, Gary L.; Zinsmeister, Joanne T.
This document reports research focusing on physical therapists and physical therapist assistant role delineation refinement and verification; entry-level role determinations; and translation of these roles into an examination development protocol and examination blueprint specifications. Following an introduction, section 2 describes the survey…
Deformable complex network for refining low-resolution X-ray structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chong; Wang, Qinghua; Ma, Jianpeng, E-mail: jpma@bcm.edu
2015-10-27
A new refinement algorithm called the deformable complex network that combines a novel angular network-based restraint with a deformable elastic network model in the target function has been developed to aid in structural refinement in macromolecular X-ray crystallography. In macromolecular X-ray crystallography, building more accurate atomic models based on lower resolution experimental diffraction data remains a great challenge. Previous studies have used a deformable elastic network (DEN) model to aid in low-resolution structural refinement. In this study, the development of a new refinement algorithm called the deformable complex network (DCN) is reported that combines a novel angular network-based restraint with the DEN model in the target function. Testing of DCN on a wide range of low-resolution structures demonstrated that it consistently leads to significantly improved structural models as judged by multiple refinement criteria, thus representing a new effective refinement tool for low-resolution structural determination.
On macromolecular refinement at subatomic resolution with interatomic scatterers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afonine, Pavel V., E-mail: pafonine@lbl.gov; Grosse-Kunstleve, Ralf W.; Adams, Paul D.
2007-11-01
Modelling deformation electron density using interatomic scatterers is simpler than multipolar methods, produces comparable results at subatomic resolution and can easily be applied to macromolecules. A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
Reconstruction of genome-scale human metabolic models using omics data.
Ryu, Jae Yong; Kim, Hyun Uk; Lee, Sang Yup
2015-08-01
The impact of genome-scale human metabolic models on human systems biology and medical sciences is becoming greater, thanks to the growing number of model-building platforms and increasing volumes of publicly available omics data. The genome-scale human metabolic models started with Recon 1 in 2007, and have since been used to describe metabolic phenotypes of healthy and diseased human tissues and cells, and to predict therapeutic targets. Here we review recent trends in genome-scale human metabolic modeling, including various generic and tissue/cell type-specific human metabolic models developed to date, and the methods, databases and platforms used to construct them. For generic human metabolic models, we pay attention to Recon 2 and HMR 2.0 with emphasis on the data sources used to construct them. Draft and high-quality tissue/cell type-specific human metabolic models have been generated using these generic human metabolic models. Integration of tissue/cell type-specific omics data with the generic human metabolic models is the key step, and we discuss omics data and their integration methods to achieve this task. The initial version of a tissue/cell type-specific human metabolic model can further be computationally refined through gap filling, reaction directionality assignment and the subcellular localization of metabolic reactions. We review relevant tools for this model refinement procedure as well. Finally, we suggest the direction of further studies on reconstructing an improved human metabolic model.
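A toy sketch of how such a model is typically queried once reconstructed: flux balance analysis maximizes a biomass flux subject to steady-state mass balance S·v = 0 and flux bounds. The three-reaction network below is invented purely for illustration:

```python
# Toy flux-balance analysis: maximize "biomass" flux subject to S v = 0.
import numpy as np
from scipy.optimize import linprog

# Reactions: (1) uptake -> A, (2) A -> B, (3) B -> biomass
S = np.array([
    [1, -1,  0],   # mass balance for metabolite A
    [0,  1, -1],   # mass balance for metabolite B
])
bounds = [(0, 10), (0, 8), (0, None)]        # uptake capped at 10, A->B at 8
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
print("biomass flux:", -res.fun)             # -> 8.0, limited by the A->B step
```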
Testing MODFLOW-LGR for simulating flow around buried Quaternary valleys - synthetic test cases
NASA Astrophysics Data System (ADS)
Vilhelmsen, T. N.; Christensen, S.
2009-12-01
In this study the Local Grid Refinement (LGR) method developed for MODFLOW-2005 (Mehl and Hill, 2005) is utilized to describe groundwater flow in areas containing buried Quaternary valley structures. The tests are conducted as comparative analyses between simulations run with a globally refined model, a locally refined model, and a globally coarse model, respectively. The models vary from simple one-layer models to more complex ones with up to 25 model layers. The comparisons of accuracy are conducted within the locally refined area and focus on water budgets, simulated heads, and simulated particle traces. Simulations made with the globally refined model are used as reference (regarded as “true” values). As expected, for all test cases the application of local grid refinement resulted in more accurate results than when using the globally coarse model. A significant advantage of utilizing MODFLOW-LGR is that it allows an increased number of model layers to better resolve complex geology within local areas. This resulted in more accurate simulations than when using either a globally coarse model grid or a locally refined model with lower geological resolution. Improved accuracy in the latter case could not be expected beforehand, because the difference in geological resolution between the coarse parent model and the refined child model contradicts the assumptions of the Darcy-weighted interpolation used in MODFLOW-LGR. With respect to model runtimes, it was sometimes found that the runtime for the locally refined model was much longer than for the globally refined model. This was the case even when the closure criteria were relaxed compared to the globally refined model. These results are contradictory to those presented by Mehl and Hill (2005). Furthermore, in the complex cases it took some testing (model runs) to identify the closure criteria and the damping factor that secured convergence, accurate solutions, and reasonable runtimes. For our cases this is judged to be a serious disadvantage of applying MODFLOW-LGR. Another disadvantage in the studied cases was that the MODFLOW-LGR results proved to be somewhat dependent on the correction method used at the parent-child model interface. This indicates that when applying MODFLOW-LGR there is a need for thorough and case-specific consideration regarding the choice of correction method. References: Mehl, S. and M. C. Hill (2005). "MODFLOW-2005, the U.S. Geological Survey modular ground-water model - Documentation of shared node Local Grid Refinement (LGR) and the Boundary Flow and Head (BFH) Package." U.S. Geological Survey Techniques and Methods 6-A12
2017-11-07
This final rule updates the home health prospective payment system (HH PPS) payment rates, including the national, standardized 60-day episode payment rates, the national per-visit rates, and the non-routine medical supply (NRS) conversion factor, effective for home health episodes of care ending on or after January 1, 2018. This rule also: Updates the HH PPS case-mix weights using the most current, complete data available at the time of rulemaking; implements the third year of a 3-year phase-in of a reduction to the national, standardized 60-day episode payment to account for estimated case-mix growth unrelated to increases in patient acuity (that is, nominal case-mix growth) between calendar year (CY) 2012 and CY 2014; and discusses our efforts to monitor the potential impacts of the rebasing adjustments that were implemented in CY 2014 through CY 2017. In addition, this rule finalizes changes to the Home Health Value-Based Purchasing (HHVBP) Model and to the Home Health Quality Reporting Program (HH QRP). We are not finalizing the implementation of the Home Health Groupings Model (HHGM) in this final rule.
Jiang, Yang; Zhang, Haiyang; Feng, Wei; Tan, Tianwei
2015-12-28
Metal ions play an important role in the catalysis of metalloenzymes. To investigate metalloenzymes via molecular modeling, a set of accurate force field parameters for metal ions is highly imperative. To extend its application range and improve the performance, the dummy atom model of metal ions was refined through a simple parameter screening strategy using the Mg(2+) ion as an example. Using the AMBER ff03 force field with the TIP3P model, the refined model accurately reproduced the experimental geometric and thermodynamic properties of Mg(2+). Compared with point charge models and previous dummy atom models, the refined dummy atom model yields an enhanced performance for producing reliable ATP/GTP-Mg(2+)-protein conformations in three metalloenzyme systems with single or double metal centers. Similar to other unbounded models, the refined model failed to reproduce the Mg-Mg distance and favored a monodentate binding of carboxylate groups, and these drawbacks needed to be considered with care. The outperformance of the refined model is mainly attributed to the use of a revised (more accurate) experimental solvation free energy and a suitable free energy correction protocol. This work provides a parameter screening strategy that can be readily applied to refine the dummy atom models for metal ions.
Unstructured 3D Delaunay mesh generation applied to planes, trains and automobiles
NASA Technical Reports Server (NTRS)
Blake, Kenneth R.; Spragle, Gregory S.
1993-01-01
Technical issues associated with domain-tessellation production, including initial boundary node triangulation and volume mesh refinement, are presented for the 'TGrid' 3D Delaunay unstructured grid generation program. The approach employed is noted to be capable of preserving predefined triangular surface facets in the final tessellation. The capabilities of the approach are demonstrated by generating grids about an entire fighter aircraft configuration, a train, and a wind tunnel model of an automobile.
A Comparison of the Behaviour of AlTiB and AlTiC Grain Refiners
NASA Astrophysics Data System (ADS)
Schneider, W.; Kearns, M. A.; McGarry, M. J.; Whitehead, A. J.
AlTiC master alloys present a new alternative to AlTiB grain refiners, which have enjoyed pre-eminence in cast houses for several decades. Recent investigations have shown that, under defined casting conditions, AlTiC is a more efficient grain refiner than AlTiB, is less prone to agglomeration, and is more resistant to poisoning by Zr and Cr. Moreover, it is observed that there are differences in the mechanism of grain refinement for the different alloys. This paper describes the influence of melt temperature and addition rate on the performance of both types of grain refiner in DC casting tests on different wrought alloys. Furthermore, the effects of combined additions of the grain refiners and the recycling behaviour of the treated alloys are presented. Results are compared with laboratory test data. Finally, mechanisms of grain refinement are discussed which are consistent with the observed differences in behaviour with AlTiC and AlTiB.
Improving the accuracy of macromolecular structure refinement at 7 Å resolution.
Brunger, Axel T; Adams, Paul D; Fromme, Petra; Fromme, Raimund; Levitt, Michael; Schröder, Gunnar F
2012-06-06
In X-ray crystallography, molecular replacement and subsequent refinement is challenging at low resolution. We compared refinement methods using synchrotron diffraction data of photosystem I at 7.4 Å resolution, starting from different initial models with increasing deviations from the known high-resolution structure. Standard refinement spoiled the initial models, moving them further away from the true structure and leading to high R(free)-values. In contrast, DEN refinement improved even the most distant starting model as judged by R(free), atomic root-mean-square differences to the true structure, significance of features not included in the initial model, and connectivity of electron density. The best protocol was DEN refinement with initial segmented rigid-body refinement. For the most distant initial model, the fraction of atoms within 2 Å of the true structure improved from 24% to 60%. We also found a significant correlation between R(free) values and the accuracy of the model, suggesting that R(free) is useful even at low resolution. Copyright © 2012 Elsevier Ltd. All rights reserved.
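For readers unfamiliar with the statistic, the R factor measures the relative disagreement between observed and model structure-factor amplitudes; R_free is the same quantity computed over a held-out test set of reflections that the refinement never sees, which is why it tracks model accuracy. A minimal sketch with synthetic amplitudes:

```python
# R factor: R = sum| |Fobs| - k|Fcalc| | / sum|Fobs|, with a simple overall
# scale factor k. Computed on a held-out reflection set, this is R_free.
import numpy as np

def r_factor(f_obs, f_calc):
    f_obs, f_calc = np.abs(f_obs), np.abs(f_calc)
    k = f_obs.sum() / f_calc.sum()            # overall scale
    return np.abs(f_obs - k * f_calc).sum() / f_obs.sum()

rng = np.random.default_rng(1)
f_obs = rng.uniform(10, 100, 1000)            # synthetic observed amplitudes
f_calc = f_obs * rng.normal(1.0, 0.25, 1000)  # imperfect model amplitudes
print(round(r_factor(f_obs, f_calc), 3))
```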
Iorio, Francesco; Shrestha, Roshan L.; Levin, Nicolas; Boilot, Viviane; Garnett, Mathew J.; Saez-Rodriguez, Julio; Draviam, Viji M.
2015-01-01
We present a novel strategy to identify drug-repositioning opportunities. The starting point of our method is the generation of a signature summarising the consensual transcriptional response of multiple human cell lines to a compound of interest (namely the seed compound). This signature can be derived from data in existing databases, such as the connectivity-map, and it is used in the first instance to query a network interlinking all the connectivity-map compounds, based on the similarity of their transcriptional responses. This provides a drug neighbourhood, composed of compounds predicted to share some effects with the seed one. The original signature is then refined by systematically reducing its overlap with the transcriptional responses induced by drugs in this neighbourhood that are known to share a secondary effect with the seed compound. Finally, the drug network is queried again with the resulting refined signatures and the whole process is carried on for a number of iterations. Drugs in the final refined neighbourhood are then predicted to exert the principal mode of action of the seed compound. We illustrate our approach using paclitaxel (a microtubule stabilising agent) as the seed compound. Our method predicts that glipizide and splitomicin perturb microtubule function in human cells: a result that could not be obtained through standard signature matching methods. In agreement, we find that glipizide and splitomicin reduce interphase microtubule growth rates and transiently increase the percentage of mitotic cells, consistent with our prediction. Finally, we validated the refined signatures of paclitaxel response by mining a large drug screening dataset, showing that human cancer cell lines whose basal transcriptional profile is anti-correlated to them are significantly more sensitive to paclitaxel and docetaxel. PMID:26452147
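A hedged sketch of the query-and-refine loop described above, with an invented two-compound library: compounds are ranked by signature similarity, and the seed signature is then pruned of genes dominated by a confounding neighbour before re-querying; the real method operates on connectivity-map data with its own similarity and pruning rules:

```python
# Toy signature query and refinement (all data invented).
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

seed = np.array([2.0, -1.5, 0.1, 1.8, -0.2, 1.1])            # seed signature (6 genes)
library = {
    "drug_A": np.array([1.9, -1.4, 0.0, 1.7, -0.1, 1.0]),    # close neighbour
    "drug_B": np.array([-1.0, 0.2, 2.0, -0.5, 1.5, -1.2]),   # unrelated compound
}

# 1) Query: rank library compounds by signature similarity to the seed.
neighbours = sorted(library, key=lambda d: cosine(seed, library[d]), reverse=True)
print("nearest neighbour:", neighbours[0])

# 2) Refine: zero out seed genes dominated by a confounding neighbour; the
#    refined signature would then be used to re-query the network.
confounder = library[neighbours[0]]
refined = np.where(np.abs(confounder) > 1.0, 0.0, seed)
print("refined signature:", refined)
```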
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-29
... that revocation of the antidumping duty orders would be likely to lead to continuation or recurrence of..., refined (or purified) sulfanilic acid and sodium salt of sulfanilic acid. Sulfanilic acid is a synthetic... aniline, and 1.0 percent maximum alkali insoluble materials. Refined sulfanilic acid, also classifiable...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-26
... end, expanded end, crimped end, threaded), coating (e.g., plastic, paint), insulation, attachments (e... products, including ``line sets'' of seamless refined copper tubes (with or without fittings or insulation) suitable for connecting an outdoor air conditioner or heat pump to an indoor evaporator unit. The phrase...
A geometry-adaptive IB-LBM for FSI problems at moderate and high Reynolds numbers
NASA Astrophysics Data System (ADS)
Tian, Fangbao; Xu, Lincheng; Young, John; Lai, Joseph C. S.
2017-11-01
An FSI framework combining the LBM and an improved IBM is introduced for FSI problems at moderate and high Reynolds numbers. In this framework, the fluid dynamics is obtained by the LBM. The FSI boundary conditions are handled by an improved IBM based on a feedback scheme in which the feedback coefficient is mathematically derived and explicitly approximated. The Lagrangian force is divided into two parts: one caused by the mismatch between the flow velocity and the boundary velocity at the previous time step, and the other caused by the boundary acceleration. This treatment significantly enhances numerical stability. A geometry-adaptive refinement is applied to provide fine resolution around the immersed geometries. The overlapping grids between two adjacent refinement levels consist of two layers. The movement of fluid-structure interfaces only causes grids to be added or removed at the boundaries of the refinement regions. Finally, the classic Smagorinsky large eddy simulation model is incorporated into the framework to model turbulent flows at relatively high Reynolds numbers. Several validation cases are conducted to verify the accuracy and fidelity of the present solver over a range of Reynolds numbers. Mr L. Xu acknowledges the support of the University International Postgraduate Award by University of New South Wales. Dr. F.-B. Tian is the recipient of an Australian Research Council Discovery Early Career Researcher Award (Project Number DE160101098).
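The force splitting described above can be summarized schematically. The following is a hedged reconstruction in our own notation; the paper derives the feedback coefficient analytically, whereas here α is just a symbol.

```latex
% Hedged schematic of the two-part Lagrangian force (our notation):
% a feedback penalty on the velocity mismatch from the previous step plus
% a boundary-acceleration term; alpha is the feedback coefficient.
\[
\mathbf{F}(s,t) = \alpha\,\bigl[\mathbf{U}_b(s,t) - \mathbf{u}(\mathbf{X}(s,t),\,t-\Delta t)\bigr]
 + \frac{\partial \mathbf{U}_b}{\partial t}(s,t)
\]
```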
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Dongsheng; Lavender, Curt
2015-05-08
Improving yield strength and asymmetry is critical to expanding applications of magnesium alloys in industry for higher fuel efficiency and lower CO2 production. Grain refinement is an efficient method for strengthening low-symmetry magnesium alloys, achievable by precipitate refinement. This study provides guidance on how precipitate engineering can improve mechanical properties through grain refinement. Precipitate refinement for improving yield strengths and asymmetry is simulated quantitatively by coupling a stochastic second-phase grain refinement model with a modified polycrystalline crystal viscoplasticity φ-model. Using the stochastic second-phase grain refinement model, grain size is quantitatively determined from the precipitate size and volume fraction. Yield strengths, yield asymmetry, and deformation behavior are calculated from the modified φ-model. If the precipitate shape and size remain constant, grain size decreases with increasing precipitate volume fraction. If the precipitate volume fraction is kept constant, grain size decreases with decreasing precipitate size during precipitate refinement. Yield strengths increase and asymmetry approaches one with decreasing grain size, driven by increasing precipitate volume fraction or decreasing precipitate size.
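For context, the classical Zener pinning estimate, which is not the paper's stochastic model but reproduces the reported trends, relates the limiting grain size d to the precipitate radius r and volume fraction f:

```latex
% Classical Zener pinning estimate (standard result, not the paper's model):
% the limiting grain size d grows with precipitate radius r and shrinks with
% precipitate volume fraction f, matching the trends reported above.
\[
d \;\approx\; \frac{4r}{3f}
\]
```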
Simulation, guidance and navigation of the B-737 for rollout and turnoff using MLS measurements
NASA Technical Reports Server (NTRS)
Pines, S.; Schmidt, S. F.; Mann, F.
1975-01-01
A simulation program is described for the B-737 aircraft in landing approach, touchdown, rollout and turnoff for normal and CAT III weather conditions. Preliminary results indicate that microwave landing systems can be used in place of instrument landing system landing aids and that a single magnetic cable can be used for automated rollout and turnoff. Recommendations are made for further refinement of the model and additional testing to finalize a set of guidance laws for rollout and turnoff.
2012-06-04
central Tibetan Plateau. Automated hypocenter locations in south-central Tibet were finalized. Refinements included an update of the model used for... central Tibet. A subset of ~7,900 events with 25+ arrivals is considered well-located based on kilometer-scale differences relative to manually located...propagation in the Nepal Himalaya and the south-central Tibetan Plateau. The 2002–2005 experiment consisted of 233 stations along a dense 800 km linear
2016-07-24
60th Medical Group (AMC), Travis AFB, CA: Institutional Animal Care and Use Committee (IACUC) Final Report Summary (only form fragments are recoverable: record of animal usage, funding source HQ USAF SG, training and research categories).
Combined PDF and Rietveld studies of ADORable zeolites and the disordered intermediate IPC-1P.
Morris, Samuel A; Wheatley, Paul S; Položij, Miroslav; Nachtigall, Petr; Eliášová, Pavla; Čejka, Jiří; Lucas, Tim C; Hriljac, Joseph A; Pinar, Ana B; Morris, Russell E
2016-09-28
The disordered intermediate of the ADORable zeolite UTL has been structurally confirmed using the pair distribution function (PDF) technique. The intermediate, IPC-1P, is a disordered layered compound formed by the hydrolysis of UTL in 0.1 M hydrochloric acid solution. Its structure is unsolvable by traditional X-ray diffraction techniques. The PDF technique was first benchmarked against high-quality synchrotron Rietveld refinements of IPC-2 (OKO) and IPC-4 (PCR) - two end products of IPC-1P condensation that share very similar structural features. An IPC-1P starting model derived from density functional theory was used for the PDF refinement, which yielded a final fit of Rw = 18% and a geometrically reasonable structure. This confirms the layers do stay intact throughout the ADOR process and shows PDF is a viable technique for layered zeolite structure determination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knapik, Aleksandra Alicja; Petkowski, Janusz Jurand; Otwinowski, Zbyszek
2014-10-02
RutC is the third enzyme in the Escherichia coli rut pathway of uracil degradation. RutC belongs to the highly conserved YjgF family of proteins. The structure of the RutC protein was determined and refined to 1.95 Å resolution. This crystal belonged to space group P21212 and contained six molecules in the asymmetric unit. The structure was solved by SAD phasing and was refined to an Rwork of 19.3% (Rfree = 21.7%). Moreover, the final model revealed that this protein has a Bacillus chorismate mutase-like fold and forms a homotrimer with a hydrophobic cavity in the center of the structure and ligand-binding clefts between two subunits. A likely function for RutC is the reduction of peroxy-aminoacrylate to aminoacrylate as part of a detoxification process.
Van, Connie; Costa, Daniel; Mitchell, Bernadette; Abbott, Penny; Krass, Ines
2012-01-01
Existing validated measures of pharmacist-physician collaboration focus on measuring attitudes toward collaboration and do not measure the frequency of collaborative interactions. To develop and validate an instrument to measure the frequency of collaboration between pharmacists and general practitioners (GPs) from the pharmacist's perspective. An 11-item Pharmacist Frequency of Interprofessional Collaboration Instrument (FICI-P) was developed and administered to 586 pharmacists in 8 divisions of general practice in New South Wales, Australia. The initial items were informed by a review of the literature in addition to interviews of pharmacists and GPs. Items were subjected to principal component and Rasch analyses to determine each item's and the overall measure's psychometric properties and any needed refinements. Two hundred and twenty-four (38%) of the pharmacist surveys were completed and returned. Principal component analysis suggested removal of 1 item for a final 1-factor solution. The refined 10-item FICI-P demonstrated internal consistency reliability at Cronbach's alpha = 0.90. After collapsing the original 5-point response scale to a 4-point response scale, the refined FICI-P demonstrated fit to the Rasch model. Criterion validity of the FICI-P was supported by the correlation of FICI-P scores with scores on a previously validated Physician-Pharmacist Collaboration Instrument. Validity was also supported by predicted differences in FICI-P scores between subgroups of respondents stratified on age, colocation with GPs, and interactions during the intern-training period. The refined 10-item FICI-P was shown to have good internal consistency, criterion validity, and fit to the Rasch model. The creation of such a tool may allow for the measurement of impact in the evaluation of interventions designed to improve interprofessional collaboration between GPs and pharmacists.
Pirhadi, Somayeh; Ghasemi, Jahan B
2012-12-01
Non-nucleoside reverse transcriptase inhibitors (NNRTIs) have gained a definitive place due to their unique antiviral potency, high specificity and low toxicity in antiretroviral combination therapies used to treat HIV. In this study, chemical feature based pharmacophore models of different classes of NNRT inhibitors of HIV-1 have been developed. The best HypoRefine pharmacophore model, Hypo 1, which has the best correlation coefficient (0.95) and the lowest RMS (0.97), contains two hydrogen bond acceptors, one hydrophobic and one ring aromatic feature, as well as four excluded volumes. Hypo 1 was further validated by test set and Fischer validation method. The best pharmacophore model was then utilized as a 3D search query to perform a virtual screening to retrieve potential inhibitors. The hit compounds were subsequently subjected to filtering by Lipinski's rule of five and docking studies by Libdock and Gold methods to refine the retrieved hits. Finally, 7 top ranked compounds based on Gold score fitness function were subjected to in silico ADME studies to investigate for compliance with the standard ranges.
Mehl, S.; Hill, M.C.
2002-01-01
Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are: (a) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (b) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods, and the effects of the accuracy of the sensitivity calculations are evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.
Complex optimization for big computational and experimental neutron datasets
Bao, Feng; Oak Ridge National Lab.; Archibald, Richard; ...
2016-11-07
Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase the solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.
Developing empirically supported theories of change for housing investment and health
Thomson, Hilary; Thomas, Sian
2015-01-01
The assumption that improving housing conditions can lead to improved health may seem a self-evident hypothesis. Yet evidence from intervention studies suggests small or unclear health improvements, indicating that further thought is required to refine this hypothesis. Articulation of a theory can help avoid a black-box approach to research and practice and has been advocated as especially valuable for those evaluating complex social interventions like housing. This paper presents a preliminary theory of housing improvement and health based on a systematic review conducted by the authors. Following extraction of health outcomes, data on all socio-economic impacts were extracted by two independent reviewers from both qualitative and quantitative studies. Health and socio-economic outcome data from the better quality studies (n = 23/34) were mapped onto one-page logic models by two independent reviewers, and a final model reflecting reviewer agreement was prepared. Where there was supporting evidence of links between outcomes, these were indicated in the model. Two models of specific improvements (warmth & energy efficiency; and housing-led renewal) and a final overall model were prepared. The models provide a visual map of the best available evidence on the health and socio-economic impacts of housing improvement. The use of a logic model design helps to elucidate the possible pathways between housing improvement and health and as such might be described as an empirically based theory. Changes in housing factors were linked to changes in socio-economic determinants of health. This points to the potential for longer term health impacts which could not be detected within the lifespan of the evaluations. The developed theories are limited by the available data and need to be tested and refined. However, in addition to providing one-page summaries for evidence users, the theory may usefully inform future research on housing and health.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-24
... two producers/ exporters of the subject merchandise, GD Affiliates S. de R.L. de C.V. (Golden Dragon... Trading Co. Ltd.; 3) Golden Dragon Holding (Hong Kong) International, Ltd.; 4) GD Copper U.S.A. Inc.; 5... collectively referred to as Golden Dragon. See, e.g., Seamless Refined Copper Pipe and Tube From Mexico: Final...
NASA Astrophysics Data System (ADS)
Jiang, Bo; Wu, Meng; Sun, He; Wang, Zhilin; Zhao, Zhigang; Liu, Yazheng
2018-01-01
The austenite growth behavior of non-quenched and tempered steels (cast by continuous casting and mold casting processes) was studied. The austenite grain size of steel B, cast by the continuous casting process, is smaller than that of steel A, cast by the mold casting process, at the same heating parameters. The abnormal austenite growth temperatures of steels A and B are 950 °C and 1000 °C, respectively. Based on the results, models for the austenite grain growth below and above the abnormal austenite growth temperature of the investigated steels were established. The finely dispersed MnS particles in steel B are the key factor refining the austenite grain by pinning the migration of austenite grain boundaries. The elongated MnS inclusions are ineffective in preventing austenite grain growth at high heating temperature. For non-quenched and tempered steel, the continuous casting process should be adopted and the MnS inclusions should be elliptical, smaller in size and distributed uniformly in order to refine the final microstructure and improve the mechanical properties.
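Grain-growth models of the kind mentioned above are commonly fitted to an Arrhenius-type law; the generic form below is a hedged illustration, and the paper's fitted constants are not reproduced here.

```latex
% Generic isothermal grain-growth law (hedged; constants n, A, Q are fitted
% per steel and are not the paper's values):
\[
D^{\,n} - D_{0}^{\,n} = A\, t \,\exp\!\left(-\frac{Q}{RT}\right)
\]
% D: austenite grain size after holding time t at temperature T;
% D_0: initial grain size; Q: apparent activation energy; R: gas constant.
```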
Kinetic Modeling of a Heterogeneous Fenton Oxidative Treatment of Petroleum Refining Wastewater
Basheer Hasan, Diya'uddeen; Abdul Raman, Abdul Aziz; Wan Daud, Wan Mohd Ashri
2014-01-01
The mineralisation kinetics of petroleum refinery effluent (PRE) by Fenton oxidation were evaluated. Within the ambit of the experimental data generated, a first-order kinetic model (FKM), a generalised lumped kinetic model (GLKM), and a generalized kinetic model (GKM) were tested. The obtained apparent kinetic rate constants for the initial oxidation step (k2′), the final oxidation step (k1′), and the direct conversion to end-products step (k3′) were 10.12, 3.78, and 0.24 min⁻¹ for GKM; 0.98, 0.98, and nil min⁻¹ for GLKM; and nil, nil, and >0.005 min⁻¹ for FKM. The findings showed that GKM is superior in estimating the mineralization kinetics.
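As a hedged illustration of the lumped scheme being compared: a generalized kinetic model of this type treats the effluent's oxidizable content A as oxidizing to intermediates B and directly to end products C, with each step first order. The notation matches the rate constants quoted above, but the scheme itself is a generic reconstruction.

```latex
% Hedged generic reconstruction of a lumped first-order scheme consistent
% with the rate constants above: initial oxidation (k2'), final oxidation
% (k1'), and direct conversion to end products (k3').
\[
A \xrightarrow{\,k_2'\,} B \xrightarrow{\,k_1'\,} C,
\qquad
A \xrightarrow{\,k_3'\,} C,
\qquad
-\frac{d[A]}{dt} = \bigl(k_2' + k_3'\bigr)[A].
\]
```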
Validation and Refinement of a Pain Information Model from EHR Flowsheet Data.
Westra, Bonnie L; Johnson, Steven G; Ali, Samira; Bavuso, Karen M; Cruz, Christopher A; Collins, Sarah; Furukawa, Meg; Hook, Mary L; LaFlamme, Anne; Lytle, Kay; Pruinelli, Lisiane; Rajchel, Tari; Settergren, Theresa Tess; Westman, Kathryn F; Whittenburg, Luann
2018-01-01
Secondary use of electronic health record (EHR) data can reduce the costs of research and quality reporting. However, EHR data must be consistent within and across organizations. Flowsheet data provide a rich source of interprofessional data and represent a high volume of documentation; however, content is not standardized. Health care organizations design and implement customized content for different care areas, creating duplicative data that are noncomparable. In a prior study, 10 information models (IMs) were derived from an EHR that included 2.4 million patients. There was a need to evaluate the generalizability of the models across organizations. The pain IM was selected for evaluation and refinement because pain is a commonly occurring problem associated with high costs for pain management. The purpose of our study was to validate and further refine a pain IM from EHR flowsheet data that standardizes pain concepts, definitions, and associated value sets for assessments, goals, interventions, and outcomes. A retrospective observational study was conducted using an iterative consensus-based approach to map, analyze, and evaluate data from 10 organizations. The aggregated metadata from the EHRs of 8 large health care organizations and the design builds in 2 additional organizations represented flowsheet data from 6.6 million patients, 27 million encounters, and 683 million observations. The final pain IM has 30 concepts, 4 panels (classes), and 396 value set items. Results are built on Logical Observation Identifiers Names and Codes (LOINC) pain assessment terms and highlight the need for additional terms to support interoperability. The resulting pain IM is a consensus model based on actual EHR documentation in the participating health systems. The IM captures the most important concepts related to pain.
NASA Astrophysics Data System (ADS)
Boo, G.; Fabrikant, S. I.; Leyk, S.
2015-08-01
In spatial epidemiology, disease incidence and demographic data are commonly summarized within larger regions such as administrative units because of privacy concerns. As a consequence, analyses using these aggregated data are subject to the Modifiable Areal Unit Problem (MAUP) as the geographical manifestation of ecological fallacy. In this study, we create small-area disease estimates through dasymetric refinement and investigate the effects on predictive epidemiological models. We perform a binary dasymetric refinement of municipality-aggregated dog tumor incidence counts in Switzerland for the year 2008, using residential land as a limiting ancillary variable. This refinement is expected to improve the quality of spatial data originally aggregated within arbitrary administrative units by deconstructing them into discontinuous subregions that better reflect the underlying population distribution. To shed light on the effects of this refinement, we compare a predictive statistical model that uses unrefined administrative units with one that uses dasymetrically refined spatial units. Model diagnostics and spatial distributions of model residuals are assessed to evaluate the model performances in different regions. In particular, we explore changes in the spatial autocorrelation of the model residuals due to spatial refinement of the enumeration units in a selected mountainous region, where the rugged topography induces great shifts of the analytical units, i.e., residential land. Such spatial data quality refinement results in a more realistic estimation of the population distribution within administrative units and, thus, in a more accurate modeling of dog tumor incidence patterns. Our results emphasize the benefits of implementing a dasymetric modeling framework in veterinary spatial epidemiology.
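Binary dasymetric refinement of count data reduces to masking each administrative unit by the ancillary land-use layer and redistributing its count. The sketch below is a minimal Python illustration under stated assumptions; array and function names are ours, not from the study.

```python
import numpy as np

def binary_dasymetric_refine(incidence_by_unit, unit_id_grid, residential_mask):
    """Redistribute unit-level counts onto residential land only (a minimal
    sketch of binary dasymetric refinement; names are illustrative).

    incidence_by_unit: dict mapping administrative-unit id -> incidence count.
    unit_id_grid: 2-D int array of administrative-unit ids per raster cell.
    residential_mask: 2-D bool array, True where land use is residential.
    """
    out = np.zeros(unit_id_grid.shape, dtype=float)
    for unit, count in incidence_by_unit.items():
        cells = (unit_id_grid == unit) & residential_mask
        n = cells.sum()
        if n:  # spread the unit's count uniformly over its residential cells
            out[cells] = count / n
    return out
```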
NASA Astrophysics Data System (ADS)
Kirstetter, G.; Popinet, S.; Fullana, J. M.; Lagrée, P. Y.; Josserand, C.
2015-12-01
The full resolution of the shallow-water equations for modeling flash floods can have a high computational cost, so most flood-simulation software used for flood forecasting relies on a simplification of this model: 1D approximations, diffusive or kinematic wave approximations, or ad hoc models with non-physical free parameters. These approximations save substantial computational time by sacrificing, in an unquantified way, the precision of the simulations. To drastically reduce the cost of such 2D simulations while quantifying the loss of precision, we propose a 2D shallow-water flow solver built with the open source code Basilisk [1], which uses adaptive refinement on a quadtree grid. This solver uses a well-balanced central-upwind scheme, which is second order in time and space, and treats the friction and rain terms implicitly in a finite volume approach. We demonstrate the validity of our simulation on the flood of Tewkesbury (UK) that occurred in July 2007, as shown in Fig. 1. On this case, a systematic study of the impact of the chosen criterion for adaptive refinement is performed, and the criterion with the best computational time / precision ratio is proposed. Finally, we present the power law giving the computational time with respect to the maximum resolution, and we show that this law for our 2D simulation is close to that of a 1D simulation, thanks to the fractal dimension of the topography. [1] http://basilisk.fr/
NASA Astrophysics Data System (ADS)
Zaichik, Leonid I.; Alipchenkov, Vladimir M.
2007-11-01
The purposes of the paper are threefold: (i) to refine the statistical model of preferential particle concentration in isotropic turbulence that was previously proposed by Zaichik and Alipchenkov [Phys. Fluids 15, 1776 (2003)], (ii) to investigate the effect of clustering of low-inertia particles using the refined model, and (iii) to advance a simple model for predicting the collision rate of aerosol particles. The model developed is based on a kinetic equation for the two-point probability density function of the relative velocity distribution of particle pairs. Improvements in predicting the preferential concentration of low-inertia particles are attained by refining the description of the turbulent velocity field of the carrier fluid to include a difference between the time scales of the strain-rate and rotation-rate correlations. The refined model results in better agreement with direct numerical simulations for aerosol particles.
EPA is taking final action to revise the February 26, 2007 mobile source air toxics rule’s requirements that specify which benzene control technologies a refiner may utilize to qualify to generate early benzene credits.
Testing model for prediction system of 1-AU arrival times of CME-associated interplanetary shocks
NASA Astrophysics Data System (ADS)
Ogawa, Tomoya; den, Mitsue; Tanaka, Takashi; Sugihara, Kohta; Takei, Toshifumi; Amo, Hiroyoshi; Watari, Shinichi
We test a model to predict the arrival times of interplanetary shock waves associated with coronal mass ejections (CMEs) using a three-dimensional adaptive mesh refinement (AMR) code. The model is used in the prediction system we are developing, which has a Web-based user interface and is aimed at users who are not familiar with the operation of computers and numerical simulations or who are not researchers. We apply the model to interplanetary CME events. We first choose coronal parameters so that the properties of the background solar wind observed by the ACE spacecraft are reproduced. Then we input CME parameters observed by SOHO/LASCO. Finally, we compare the predicted arrival times with the observed ones. We describe the results of the test and discuss the tendencies of the model.
Technology and risk review: petroleum refineries - Fact sheet for communities
The Environmental Protection Agency (EPA) issued a final rule that will significantly improve air quality in neighborhoods near petroleum refineries through more comprehensive control of emissions...
Heo, Lim; Lee, Hasup; Seok, Chaok
2016-08-18
Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex.
Refinements in the Los Alamos model of the prompt fission neutron spectrum
Madland, D. G.; Kahler, A. C.
2017-01-01
This paper presents a number of refinements to the original Los Alamos model of the prompt fission neutron spectrum and average prompt neutron multiplicity as derived in 1982. The four refinements are motivated by new measurements of the spectrum and related fission observables, many of which were not available in 1982, and by a number of detailed studies and comparisons of the model with previous and present experimental results, including not only the differential spectrum but also integral cross sections measured in the field of the differential spectrum. The four refinements are (a) separate neutron contributions in binary fission, (b) departure from statistical equilibrium at scission, (c) fission-fragment nuclear level-density models, and (d) center-of-mass anisotropy. With these refinements, for the first time, good agreement has been obtained for both differential and integral measurements using the same Los Alamos model spectrum.
Structure refinement of membrane proteins via molecular dynamics simulations.
Dutagaci, Bercem; Heo, Lim; Feig, Michael
2018-07-01
A refinement protocol based on physics-based techniques established for water-soluble proteins is tested on membrane protein structures. Initial structures were generated by homology modeling and sampled via molecular dynamics simulations in explicit lipid bilayer and aqueous solvent systems. Snapshots from the simulations were selected based on scoring with either knowledge-based or implicit membrane-based scoring functions and averaged to obtain refined models. The protocol resulted in consistent and significant refinement of the membrane protein structures, similar to the performance of refinement methods for soluble proteins. Refinement success was similar between sampling in the presence of lipid bilayers and in aqueous solvent, but the presence of lipid bilayers may benefit the improvement of lipid-facing residues. Scoring with knowledge-based functions (DFIRE and RWplus) was found to be as good as scoring using implicit membrane-based scoring functions, suggesting that differences in internal packing are more important than orientations relative to the membrane during the refinement of membrane protein homology models.
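The select-and-average step of such a protocol is easy to sketch. The following minimal Python illustration is hedged: score_fn stands in for DFIRE, RWplus, or an implicit-membrane score, none of which are implemented here.

```python
import numpy as np

def refine_by_scored_averaging(snapshots, score_fn, top_fraction=0.1):
    """Select the best-scoring MD snapshots and average their coordinates
    (a minimal sketch of the select-and-average step described above).

    snapshots: array of shape (n_frames, n_atoms, 3).
    score_fn: callable mapping one (n_atoms, 3) frame to a float
        (lower = better).
    """
    scores = np.array([score_fn(s) for s in snapshots])
    n_keep = max(1, int(len(snapshots) * top_fraction))
    best = np.argsort(scores)[:n_keep]
    # Simple arithmetic average; a production protocol would superpose the
    # frames first and then re-minimize the averaged structure.
    return snapshots[best].mean(axis=0)
```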
Model of Silicon Refining During Tapping: Removal of Ca, Al, and Other Selected Element Groups
NASA Astrophysics Data System (ADS)
Olsen, Jan Erik; Kero, Ida T.; Engh, Thorvald A.; Tranell, Gabriella
2017-04-01
A mathematical model for the industrial refining of silicon alloys has been developed for the so-called oxidative ladle refining process. It is a lumped (zero-dimensional) model, based on the mass balances of metal, slag, and gas in the ladle, developed to operate with relatively short computational times for the sake of industrial relevance. The model accounts for a semi-continuous process which includes both the tapping and post-tapping refining stages. It predicts the concentrations of Ca, Al, and trace elements, most notably the alkali metals, alkaline earth metals, and rare earth metals. The predictive power of the model depends on the quality of the model coefficients: the kinetic coefficient, τ, and the equilibrium partition coefficient, L, for a given element. A sensitivity analysis indicates that the model results are most sensitive to L. The model has been compared to industrial measurement data and found to be able to qualitatively, and to some extent quantitatively, predict the data. It is very well suited for alkali and alkaline earth metals, which respond relatively quickly to the refining process. It is less well suited for elements such as the lanthanides and Al, which are refined more slowly. A major challenge for predicting the behavior of the rare earth metals is that reliable thermodynamic data for true equilibrium conditions relevant to the industrial process are not typically available in the literature.
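A lumped refining balance of the kind described can be written schematically as follows; this is our notation, and the published model tracks coupled metal, slag, and gas balances plus a semi-continuous tapping stage that this one-line sketch omits.

```latex
% Hedged one-element sketch of a lumped refining balance (our notation):
% the melt concentration C relaxes toward slag-metal equilibrium at a rate
% set by the kinetic coefficient tau; L is the equilibrium partition
% coefficient between slag and metal.
\[
\frac{dC}{dt} = -\frac{1}{\tau}\Bigl( C - \frac{C_{\mathrm{slag}}}{L} \Bigr),
\qquad
L = \frac{C_{\mathrm{slag}}^{\mathrm{eq}}}{C_{\mathrm{metal}}^{\mathrm{eq}}}.
\]
```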
Grid-size dependence of Cauchy boundary conditions used to simulate stream-aquifer interactions
Mehl, S.; Hill, M.C.
2010-01-01
This work examines the simulation of stream–aquifer interactions as grids are refined vertically and horizontally and suggests that traditional methods for calculating conductance can produce inappropriate values when the grid size is changed. Instead, different grid resolutions require different estimated values. Grid refinement strategies considered include global refinement of the entire model and local refinement of part of the stream. Three methods of calculating the conductance of the Cauchy boundary conditions are investigated. Single- and multi-layer models with narrow and wide streams produced stream leakages that differ by as much as 122% as the grid is refined. Similar results occur for globally and locally refined grids, but the latter required as little as one-quarter the computer execution time and memory and thus are useful for addressing some scale issues of stream–aquifer interactions. Results suggest that existing grid-size criteria for simulating stream–aquifer interactions are useful for one-layer models, but inadequate for three-dimensional models. The grid dependence of the conductance terms suggests that values for refined models using, for example, finite difference or finite-element methods, cannot be determined from previous coarse-grid models or field measurements. Our examples demonstrate the need for a method of obtaining conductances that can be translated to different grid resolutions and provide definitive test cases for investigating alternative conductance formulations.
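For reference, the Cauchy (head-dependent) streambed exchange used in codes such as MODFLOW takes the standard form below; the grid dependence discussed above enters because the reach length and width per cell change under refinement while the streambed properties are usually held fixed.

```latex
% Standard head-dependent (Cauchy) streambed exchange, as used in MODFLOW:
\[
Q = C\,\bigl(h_{\mathrm{stream}} - h_{\mathrm{aquifer}}\bigr),
\qquad
C = \frac{K\,L\,W}{b},
\]
% K: streambed hydraulic conductivity; L, W: reach length and width within
% a cell; b: streambed thickness. L and W change as cells are refined,
% which is one source of the grid dependence discussed above.
```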
Studies of Tenuous Planetary Atmospheres
NASA Technical Reports Server (NTRS)
Combi, Michael R.
1998-01-01
The final report includes an overall project overview as well as scientific background summaries of dust and sodium in comets, and tenuous atmospheres of Jupiter's natural satellites. Progress and continuing work related to dust coma and tenuous atmospheric studies are presented. Also included are published articles written during the course of the report period. These are entitled: (1) On Europa's Magnetospheric Interaction: An MHD Simulation; (2) Dust-Gas Interrelations in Comets: Observations and Theory; and (3) Io's Plasma Environment During the Galileo Flyby: Global Three Dimensional MHD Modeling with Adaptive Mesh Refinement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pražnikar, Jure; University of Primorska,; Turk, Dušan, E-mail: dusan.turk@ijs.si
2014-12-01
The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R(free) or may leave it out completely.
NASA Astrophysics Data System (ADS)
Belušić, Andreina; Prtenjak, Maja Telišman; Güttler, Ivan; Ban, Nikolina; Leutwyler, David; Schär, Christoph
2018-06-01
Over the past few decades the horizontal resolution of regional climate models (RCMs) has steadily increased, leading to a better representation of small-scale topographic features and more detail in the simulated dynamics, especially in coastal regions and over complex terrain. Due to its complex terrain, the broader Adriatic region represents a major challenge to state-of-the-art RCMs in simulating local wind systems realistically. The objective of this study is to identify the added value in near-surface wind due to the refined grid spacing of RCMs. For this purpose, we use a multi-model ensemble composed of CORDEX regional climate simulations at 0.11° and 0.44° grid spacing, forced by the ERA-Interim reanalysis, a COSMO convection-parameterizing simulation at 0.11° and a COSMO convection-resolving simulation at 0.02° grid spacing. Surface station observations from this region and satellite QuikSCAT data over the Adriatic Sea have been compared against daily output obtained from the available simulations. Both day-to-day wind and its frequency distribution are examined. The results indicate that the 0.44° RCMs rarely outperform the ERA-Interim reanalysis, while the performance of the high-resolution simulations surpasses that of ERA-Interim. We also show that refining the grid spacing to a few kilometres is needed to properly capture the small-scale wind systems. We further show that the simulations frequently yield the correct angle of local wind regimes, such as the Bora flow, but overestimate the associated wind magnitude. Finally, spectral analysis shows good agreement between measurements and simulations, indicating correct temporal variability of the wind speed.
Adaptive h-refinement for reduced-order models
Carlberg, Kevin T.
2014-11-05
Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by 'splitting' a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
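The online splitting step can be sketched compactly: each child vector keeps the parent's entries on one cluster of state variables and is zero elsewhere. Below is a minimal, hedged Python illustration; the names are ours, not from the paper.

```python
import numpy as np

def split_basis_vector(v, labels):
    """Split one reduced-basis vector into vectors with disjoint support
    (a minimal sketch of the h-refinement step described above; `labels`
    would come from the offline k-means clustering of state variables).

    v: 1-D array, one basis vector over the full-order state.
    labels: 1-D int array assigning each state index to a cluster.
    Returns one child vector per cluster, zero outside its cluster.
    """
    children = []
    for c in np.unique(labels):
        child = np.zeros_like(v)
        child[labels == c] = v[labels == c]  # disjoint support by construction
        children.append(child)
    return children
```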
Benson, Nicholas F; Kranzler, John H; Floyd, Randy G
2016-10-01
Prior research examining relations between cognitive abilities and academic achievement has been based on different theoretical models, has employed both latent variables and observed variables, and has used a variety of analytic methods. Not surprisingly, results have been inconsistent across studies. The aims of this study were to (a) examine how relations between psychometric g, Cattell-Horn-Carroll (CHC) broad abilities, and academic achievement differ across higher-order and bifactor models; (b) examine how well various types of observed scores correspond with latent variables; and (c) compare two types of observed scores (i.e., refined and non-refined factor scores) as predictors of academic achievement. Results suggest that cognitive-achievement relations vary across theoretical models and that both types of factor scores tend to correspond well with the models on which they are based. However, orthogonal refined factor scores (derived from a bifactor model) have the advantage of controlling for multicollinearity arising from the measurement of psychometric g across all measures of cognitive abilities. Results indicate that the refined factor scores provide more precise representations of their targeted constructs than non-refined factor scores and maintain close correspondence with the cognitive-achievement relations observed for latent variables. Thus, we argue that orthogonal refined factor scores provide more accurate representations of the relations between CHC broad abilities and achievement outcomes than non-refined scores do. Further, the use of refined factor scores addresses calls for the application of scores based on latent variable models.
Refining and validating a conceptual model of Clinical Nurse Leader integrated care delivery.
Bender, Miriam; Williams, Marjory; Su, Wei; Hites, Lisle
2017-02-01
To empirically validate a conceptual model of Clinical Nurse Leader integrated care delivery. There is limited evidence of frontline care delivery models that consistently achieve quality patient outcomes. Clinical Nurse Leader integrated care delivery is a promising nursing model with a growing record of success. However, theoretical clarity is necessary to generate causal evidence of effectiveness. Sequential mixed methods. A preliminary Clinical Nurse Leader practice model was refined and survey items developed to correspond with model domains, using focus groups and a Delphi process with a multi-professional expert panel. The survey was administered in 2015 to clinicians and administrators involved in Clinical Nurse Leader initiatives. Confirmatory factor analysis and structural equation modelling were used to validate the measurement and model structure. Final sample n = 518. The model incorporates 13 components organized into five conceptual domains: 'Readiness for Clinical Nurse Leader integrated care delivery'; 'Structuring Clinical Nurse Leader integrated care delivery'; 'Clinical Nurse Leader Practice: Continuous Clinical Leadership'; 'Outcomes of Clinical Nurse Leader integrated care delivery'; and 'Value'. Sample data had good fit with specified model and two-level measurement structure. All hypothesized pathways were significant, with strong coefficients suggesting good fit between theorized and observed path relationships. The validated model articulates an explanatory pathway of Clinical Nurse Leader integrated care delivery, including Clinical Nurse Leader practices that result in improved care dynamics and patient outcomes. The validated model provides a basis for testing in practice to generate evidence that can be deployed across the healthcare spectrum.
REFMAC5 for the refinement of macromolecular crystal structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murshudov, Garib N., E-mail: garib@ysbl.york.ac.uk; Skubák, Pavol; Lebedev, Andrey A.
The general principles behind the macromolecular crystal structure refinement program REFMAC5 are described. This paper describes various components of the macromolecular crystallographic refinement program REFMAC5, which is distributed as part of the CCP4 suite. REFMAC5 utilizes different likelihood functions depending on the diffraction data employed (amplitudes or intensities), the presence of twinning and the availability of SAD/SIRAS experimental diffraction data. To ensure chemical and structural integrity of the refined model, REFMAC5 offers several classes of restraints and choices of model parameterization. Reliable models at resolutions at least as low as 4 Å can be achieved thanks to low-resolution refinement tools such as secondary-structure restraints, restraints to known homologous structures, automatic global and local NCS restraints, 'jelly-body' restraints and the use of novel long-range restraints on atomic displacement parameters (ADPs) based on the Kullback-Leibler divergence. REFMAC5 additionally offers TLS parameterization and, when high-resolution data are available, fast refinement of anisotropic ADPs. Refinement in the presence of twinning is performed in a fully automated fashion. REFMAC5 is a flexible and highly optimized refinement package that is ideally suited for refinement across the entire resolution spectrum encountered in macromolecular crystallography.
An efficient algorithm for global periodic orbits generation near irregular-shaped asteroids
NASA Astrophysics Data System (ADS)
Shang, Haibin; Wu, Xiaoyu; Ren, Yuan; Shan, Jinjun
2017-07-01
Periodic orbits (POs) play an important role in understanding dynamical behaviors around natural celestial bodies. In this study, an efficient algorithm was presented to generate the global POs around irregular-shaped uniformly rotating asteroids. The algorithm was performed in three steps, namely global search, local refinement, and model continuation. First, a mascon model with a low number of particles and optimized mass distribution was constructed to remodel the exterior gravitational potential of the asteroid. Using this model, a multi-start differential evolution enhanced with a deflection strategy with strong global exploration and bypassing abilities was adopted. This algorithm can be regarded as a search engine to find multiple globally optimal regions in which potential POs were located. This was followed by applying a differential correction to locally refine global search solutions and generate the accurate POs in the mascon model in which an analytical Jacobian matrix was derived to improve convergence. Finally, the concept of numerical model continuation was introduced and used to convert the POs from the mascon model into a high-fidelity polyhedron model by sequentially correcting the initial states. The efficiency of the proposed algorithm was substantiated by computing the global POs around an elongated shoe-shaped asteroid 433 Eros. Various global POs with different topological structures in the configuration space were successfully located. Specifically, the proposed algorithm was generic and could be conveniently extended to explore periodic motions in other gravitational systems.
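The local-refinement step is a standard differential (Newton) correction on the periodicity defect. The sketch below is a hedged, generic fixed-period variant; the paper derives an analytical Jacobian for the mascon model, which is not reproduced here, and `propagate` is a hypothetical helper returning the final state and state-transition matrix.

```python
import numpy as np

def differential_correction(x0, period, propagate, max_iter=50, tol=1e-10):
    """Refine an initial state x0 toward a periodic orbit (hedged sketch of
    the local-refinement step described above).

    propagate(x, T) must return (x_final, STM), the state after time T and
    the state-transition matrix over that interval.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        xf, stm = propagate(x, period)
        defect = xf - x                      # periodicity defect
        if np.linalg.norm(defect) < tol:
            break
        # Newton update: (STM - I) dx = -defect. Least squares is used for
        # robustness; in practice a phase condition removes the singular
        # direction along the orbit.
        dx, *_ = np.linalg.lstsq(stm - np.eye(len(x)), -defect, rcond=None)
        x = x + dx
    return x
```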
An exploration in mineral supply chain mapping using tantalum as an example
Soto-Viruet, Yadira; Menzie, W. David; Papp, John F.; Yager, Thomas R.
2013-01-01
This report uses the supply chain of tantalum (Ta) to investigate the complexity of mineral and metal supply chains in general and to show how they can be mapped. A supply chain is made up of all the manufacturers, suppliers, information networks, and so forth, that provide the materials and parts that go into making a final product. The mineral portion of the supply chain begins with mineral material in the ground (the ore deposit); extends through a series of processes that include mining, beneficiation, processing (smelting and refining), semimanufacture, and manufacture; and continues through transformation of the mineral ore into concentrates, refined mineral commodities, intermediate forms (such as metals and alloys), component parts, and, finally, complex products. This study analyses the supply chain of tantalum from minerals in the ground to many of the final goods that contain tantalum.
The REFINEMENT Glossary of Terms: An International Terminology for Mental Health Systems Assessment.
Montagni, Ilaria; Salvador-Carulla, Luis; Mcdaid, David; Straßmayr, Christa; Endel, Florian; Näätänen, Petri; Kalseth, Jorid; Kalseth, Birgitte; Matosevic, Tihana; Donisi, Valeria; Chevreul, Karine; Prigent, Amélie; Sfectu, Raluca; Pauna, Carmen; Gutiérrez-Colosia, Mencia R; Amaddeo, Francesco; Katschnig, Heinz
2018-03-01
Comparing mental health systems across countries is difficult because of the lack of an agreed upon terminology covering services and related financing issues. Within the European Union project REFINEMENT, international mental health care experts applied an innovative mixed "top-down" and "bottom-up" approach following a multistep design thinking strategy to compile a glossary on mental health systems, using local services as pilots. The final REFINEMENT glossary consisted of 432 terms related to service provision, service utilisation, quality of care and financing. The aim of this study was to describe the iterative process and methodology of developing this glossary.
Localization, Localization, Localization
NASA Technical Reports Server (NTRS)
Parker, T.; Malin, M.; Golombek, M.; Duxbury, T.; Johnson, A.; Guinn, J.; McElrath, T.; Kirk, R.; Archinal, B.; Soderblom, L.
2004-01-01
Localization of the two Mars Exploration Rovers involved three independent approaches to place the landers with respect to the surface of Mars and to refine the location of those points on the surface with the Mars control net: 1) Track the spacecraft through entry, descent, and landing, then refine the final roll stop position by radio tracking and comparison to images taken during descent; 2) Locate features on the horizon imaged by the two rovers and compare them to the MOC and THEMIS VIS images, and the DIMES images on the two MER landers; and 3) 'Check' and refine locations by acquisition of MOC 1.5 meter and 50 cm/pixel images.
Refining mass formulas for astrophysical applications: A Bayesian neural network approach
NASA Astrophysics Data System (ADS)
Utama, R.; Piekarewicz, J.
2017-10-01
Background: Exotic nuclei, particularly those near the drip lines, are at the core of one of the fundamental questions driving nuclear structure and astrophysics today: What are the limits of nuclear binding? Exotic nuclei play a critical role in both informing theoretical models as well as in our understanding of the origin of the heavy elements. Purpose: Our aim is to refine existing mass models through the training of an artificial neural network that will mitigate the large model discrepancies far away from stability. Methods: The basic paradigm of our two-pronged approach is an existing mass model that captures as much as possible of the underlying physics followed by the implementation of a Bayesian neural network (BNN) refinement to account for the missing physics. Bayesian inference is employed to determine the parameters of the neural network so that model predictions may be accompanied by theoretical uncertainties. Results: Despite the undeniable quality of the mass models adopted in this work, we observe a significant improvement (of about 40%) after the BNN refinement is implemented. Indeed, in the specific case of the Duflo-Zuker mass formula, we find that the rms deviation relative to experiment is reduced from σrms=0.503 MeV to σrms=0.286 MeV. These newly refined mass tables are used to map the neutron drip lines (or rather "drip bands") and to study a few critical r -process nuclei. Conclusions: The BNN approach is highly successful in refining the predictions of existing mass models. In particular, the large discrepancy displayed by the original "bare" models in regions where experimental data are unavailable is considerably quenched after the BNN refinement. This lends credence to our approach and has motivated us to publish refined mass tables that we trust will be helpful for future astrophysical applications.
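The two-pronged approach reduces to residual learning: train a probabilistic regressor on the difference between experimental and model masses, then add the predicted residual back. Below is a minimal, hedged Python sketch; the authors use a Bayesian neural network, but any probabilistic regressor with a scikit-learn-like fit/predict API (e.g. GaussianProcessRegressor) can stand in for illustration.

```python
import numpy as np

def bnn_refine(masses_model, masses_exp, features, predictor):
    """Refine a physics-based mass model by learning its residual against
    experiment (a minimal sketch of the two-pronged approach above; this is
    not the authors' implementation).

    masses_model, masses_exp: 1-D arrays of model and experimental masses.
    features: 2-D array of inputs such as (Z, N) per nucleus.
    predictor: probabilistic regressor with fit() and
        predict(X, return_std=True), standing in for the BNN.
    """
    residuals = masses_exp - masses_model        # the "missing physics"
    predictor.fit(features, residuals)           # learn the residual trend
    d_mean, d_std = predictor.predict(features, return_std=True)
    refined = masses_model + d_mean              # refined mass table
    return refined, d_std                        # prediction + uncertainty
```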
NASA Astrophysics Data System (ADS)
Eaves, Nick A.; Zhang, Qingan; Liu, Fengshan; Guo, Hongsheng; Dworkin, Seth B.; Thomson, Murray J.
2016-10-01
Mitigation of soot emissions from combustion devices is a global concern. For example, recent EURO 6 regulations for vehicles have placed stringent limits on soot emissions. In order for design engineers to achieve the goal of reduced soot emissions, they must have the tools to do so. Due to the complex nature of soot formation, which includes growth and oxidation, detailed numerical models are required to gain fundamental insights into the mechanisms of soot formation. A detailed description of the CoFlame FORTRAN code, which models sooting laminar coflow diffusion flames, is given. The code solves axial and radial velocity, temperature, species conservation, and soot aggregate and primary particle number density equations. The sectional particle dynamics model includes nucleation, PAH condensation and HACA surface growth, surface oxidation, coagulation, fragmentation, particle diffusion, and thermophoresis. The code utilizes a distributed-memory parallelization scheme with strip-domain decomposition. The public release of the CoFlame code, which has been refined in terms of coding structure, to the research community accompanies this paper. CoFlame is validated against experimental data for reattachment length in an axi-symmetric pipe with a sudden expansion, and against ethylene-air and methane-air diffusion flames for multiple soot morphological parameters and gas-phase species. Finally, the parallel performance and computational costs of the code are investigated.
Quantum group spin nets: Refinement limit and relation to spin foams
NASA Astrophysics Data System (ADS)
Dittrich, Bianca; Martin-Benito, Mercedes; Steinhaus, Sebastian
2014-07-01
So far, spin foam models are hardly understood beyond a few of their basic building blocks. To make progress on this question, we define analogue spin foam models, so-called "spin nets," for quantum groups SU(2)_k and examine their effective continuum dynamics via tensor network renormalization. In the refinement limit of this coarse-graining procedure, we find a vast nontrivial fixed-point structure beyond the degenerate and the BF phase. In comparison to previous work, we use fixed-point intertwiners, inspired by Reisenberger's construction principle [M. P. Reisenberger, J. Math. Phys. (N.Y.) 40, 2046 (1999)] and the recent work [B. Dittrich and W. Kaminski, arXiv:1311.1798], as the initial parametrization. In this new parametrization, fine-tuning is not required in order to flow to these new fixed points. Encouragingly, each fixed point has an associated extended phase, which allows for the study of phase transitions in the future. Finally, we also present an interpretation of spin nets in terms of melonic spin foams. The coarse-graining flow of spin nets can thus be interpreted as describing the effective coupling between two spin foam vertices or spacetime atoms.
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl L.; Maddalon, Dal V.
1998-01-01
Flight-measured high Reynolds number turbulent-flow pressure distributions on a transport wing in transonic flow are compared to unstructured-grid calculations to assess the predictive ability of a three-dimensional Euler code (USM3D) coupled to an interacting boundary layer module. The two experimental pressure distributions selected for comparative analysis with the calculations are complex and turbulent but typical of an advanced technology laminar flow wing. An advancing front method (VGRID) was used to generate several tetrahedral grids for each test case. Initial calculations left considerable room for improvement in accuracy. Studies were then made of experimental errors, transition location, viscous effects, nacelle flow modeling, number and placement of spanwise boundary layer stations, and grid resolution. The most significant improvements in the accuracy of the calculations were gained by improvement of the nacelle flow model and by refinement of the computational grid. Final calculations yield results in close agreement with the experiment. Indications are that further grid refinement would produce additional improvement but would require more computer memory than is available. The appendix data compare the experimental attachment line location with calculations for different grid sizes. Good agreement is obtained between the experimental and calculated attachment line locations.
BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Data to decisions.
White, B J; Amrine, D E; Larson, R L
2018-04-14
Big data are frequently used in many facets of business and agronomy to enhance knowledge needed to improve operational decisions. Livestock operations collect data of sufficient quantity to perform predictive analytics. Predictive analytics can be defined as a methodology and suite of data evaluation techniques to generate a prediction for specific target outcomes. The objective of this manuscript is to describe the process of using big data and the predictive analytic framework to create tools to drive decisions in livestock production, health, and welfare. The predictive analytic process involves selecting a target variable, managing the data, partitioning the data, then creating algorithms, refining algorithms, and finally comparing accuracy of the created classifiers. The partitioning of the datasets allows model building and refining to occur prior to testing the predictive accuracy of the model with naive data to evaluate overall accuracy. Many different classification algorithms are available for predictive use, and testing multiple algorithms can lead to optimal results. Application of a systematic process for predictive analytics using data that are currently collected, or that could be collected, on livestock operations will facilitate precision animal management through enhanced livestock operational decisions.
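The partition-train-refine-test loop the authors describe can be made concrete in a few lines of scikit-learn; the file name, feature set and target column below are hypothetical stand-ins for on-farm data.

```python
# Sketch of the predictive-analytic workflow: partition the data, train
# competing classifiers, pick the best on a validation split, and only then
# score the held-out ("naive") test data for an honest accuracy estimate.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

df = pd.read_csv("herd_records.csv")            # hypothetical dataset
X, y = df.drop(columns=["treated_for_brd"]), df["treated_for_brd"]

# partition: 60% build, 20% refine/validate, 20% naive test
X_build, X_rest, y_build, y_rest = train_test_split(X, y, test_size=0.4,
                                                    random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5,
                                                random_state=0)

candidates = [RandomForestClassifier(random_state=0),
              LogisticRegression(max_iter=1000)]
best = max(candidates,
           key=lambda m: accuracy_score(y_val,
                                        m.fit(X_build, y_build).predict(X_val)))
print("naive-data accuracy:", accuracy_score(y_test, best.predict(X_test)))
```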
Mehl, S.; Hill, M.C.
2002-01-01
A new method of local grid refinement for two-dimensional block-centered finite-difference meshes is presented in the context of steady-state groundwater-flow modeling. The method uses an iteration-based feedback with shared nodes to couple two separate grids. The new method is evaluated by comparison with results using a uniform fine mesh, a variably spaced mesh, and a traditional method of local grid refinement without a feedback. Results indicate: (1) The new method exhibits quadratic convergence for homogeneous systems and convergence equivalent to uniform-grid refinement for heterogeneous systems. (2) Coupling the coarse grid with the refined grid in a numerically rigorous way allowed for improvement in the coarse-grid results. (3) For heterogeneous systems, commonly used linear interpolation of heads from the large model onto the boundary of the refined model produced heads that are inconsistent with the physics of the flow field. (4) The traditional method works well in situations where the better resolution of the locally refined grid has little influence on the overall flow-system dynamics, but if this is not true, lack of a feedback mechanism produced errors in head up to 3.6% and errors in cell-to-cell flows up to 25%. © 2002 Elsevier Science Ltd. All rights reserved.
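A drastically simplified 1-D toy of the iteration-based feedback (the paper's method is 2-D and block-centered; this sketch only shows the loop structure): the child grid takes Dirichlet heads from the parent at the shared nodes, the parent then re-solves its outer subdomains with the child's interface fluxes fed back, and the cycle repeats until heads and fluxes agree.

```python
# 1-D toy of shared-node local grid refinement with head/flux feedback.
# The child grid resolves a low-conductivity lens the parent grid misses.
import numpy as np

def solve_steady(K, h_left, h_right):
    """Steady 1-D flow d/dx(K dh/dx) = 0 with Dirichlet ends; K per face."""
    n = len(K) + 1
    A, b = np.zeros((n, n)), np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = h_left, h_right
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = K[i - 1], -(K[i - 1] + K[i]), K[i]
    return np.linalg.solve(A, b)

dxp, dxc = 0.1, 0.025
xp = np.arange(0.0, 1.0 + dxp / 2, dxp)            # parent nodes
Kp = np.ones(len(xp) - 1)                          # coarse K: lens not seen
i1, i2 = 3, 7                                      # shared nodes at x=0.3, 0.7
xc = np.arange(xp[i1], xp[i2] + dxc / 2, dxc)      # child nodes
Kc = np.where((xc[:-1] >= 0.45) & (xc[:-1] < 0.55), 0.1, 1.0)

h = solve_steady(Kp, 1.0, 0.0)                     # initial parent solution
for _ in range(100):
    hc = solve_steady(Kc, h[i1], h[i2])            # child: heads from parent
    q1 = -Kc[0] * (hc[1] - hc[0]) / dxc            # child interface fluxes
    q2 = -Kc[-1] * (hc[-1] - hc[-2]) / dxc
    h[i1] = h[i1 - 1] - q1 * dxp / Kp[i1 - 1]      # flux continuity (left)
    h[i2] = h[i2 + 1] + q2 * dxp / Kp[i2]          # flux continuity (right)
    h[:i1 + 1] = solve_steady(Kp[:i1], 1.0, h[i1]) # re-solve outer subdomains
    h[i2:] = solve_steady(Kp[i2:], h[i2], 0.0)
```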
Satellite SAR geocoding with refined RPC model
NASA Astrophysics Data System (ADS)
Zhang, Lu; Balz, Timo; Liao, Mingsheng
2012-04-01
Recent studies have proved that the Rational Polynomial Camera (RPC) model is able to act as a reliable replacement of the rigorous Range-Doppler (RD) model for the geometric processing of satellite SAR datasets. But its capability in absolute geolocation of SAR images has not been evaluated quantitatively. Therefore, in this article the problems of error analysis and refinement of the SAR RPC model are investigated to improve the absolute accuracy of SAR geolocation. Range propagation delay and azimuth timing error are identified as the two major error sources for SAR geolocation. An approach based on SAR image simulation and real-to-simulated image matching is developed to estimate and correct these two errors. Afterwards a refined RPC model can be built from the error-corrected RD model and then used in satellite SAR geocoding. Three experiments with different settings are designed and conducted to comprehensively evaluate the accuracies of SAR geolocation with both ordinary and refined RPC models. All the experimental results demonstrate that with RPC model refinement the absolute location accuracies of geocoded SAR images can be improved significantly, particularly in the easting direction. In another experiment the computation efficiencies of SAR geocoding with both RD and RPC models are compared quantitatively. The results show that by using the RPC model such efficiency can be improved by a factor of at least 16. In addition, the problem of DEM data selection for SAR image simulation in RPC model refinement is studied in a comparative experiment. The results reveal that the best choice is a DEM dataset with spatial resolution comparable to that of the SAR images.
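For readers unfamiliar with RPC models, the forward mapping is simply a ratio of two cubic polynomials per image coordinate. The sketch below shows that evaluation; the monomial ordering and the offset/scale names are one common convention and should be treated as illustrative assumptions, not a particular product specification.

```python
# Sketch of the Rational Polynomial Camera (RPC) forward mapping: each image
# coordinate is a ratio of two 20-term cubic polynomials in normalized ground
# coordinates. Refinement in the paper corrects timing/delay errors before
# rebuilding these coefficients; only the evaluation step is shown here.
import numpy as np

def cubic_terms(P, L, H):
    """Twenty monomials of a third-order RPC polynomial (one common ordering)."""
    return np.array([1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
                     P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3,
                     P*H*H, L*L*H, P*P*H, H**3])

def rpc_ground_to_image(lat, lon, h, num_r, den_r, num_c, den_c,
                        offsets, scales):
    # normalize ground coordinates to roughly [-1, 1]
    P = (lat - offsets["lat"]) / scales["lat"]
    L = (lon - offsets["lon"]) / scales["lon"]
    H = (h - offsets["h"]) / scales["h"]
    t = cubic_terms(P, L, H)
    row = (num_r @ t) / (den_r @ t)       # normalized row, then de-normalize
    col = (num_c @ t) / (den_c @ t)
    return (row * scales["row"] + offsets["row"],
            col * scales["col"] + offsets["col"])
```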
A metabolite-centric view on flux distributions in genome-scale metabolic models
2013-01-01
Background Genome-scale metabolic models are important tools in systems biology. They permit the in-silico prediction of cellular phenotypes via mathematical optimisation procedures, most importantly flux balance analysis. Current studies on metabolic models mostly consider reaction fluxes in isolation. Based on a recently proposed metabolite-centric approach, we here describe a set of methods that enable the analysis and interpretation of flux distributions in an integrated metabolite-centric view. We demonstrate how this framework can be used for the refinement of genome-scale metabolic models. Results We applied the metabolite-centric view developed here to the most recent metabolic reconstruction of Escherichia coli. By compiling the balance sheets of a small number of currency metabolites, we were able to fully characterise the energy metabolism as predicted by the model and to identify a possibility for model refinement in NADPH metabolism. Selected branch points were examined in detail in order to demonstrate how a metabolite-centric view allows identifying functional roles of metabolites. Fructose 6-phosphate aldolase and the sedoheptulose bisphosphate bypass were identified as enzymatic reactions that can carry high fluxes in the model but are unlikely to exhibit significant activity in vivo. Performing a metabolite essentiality analysis, unconstrained import and export of iron ions could be identified as potentially problematic for the quality of model predictions. Conclusions The system-wide analysis of split ratios and branch points allows a much deeper insight into the metabolic network than reaction-centric analyses. Extending an earlier metabolite-centric approach, the methods introduced here establish an integrated metabolite-centric framework for the interpretation of flux distributions in genome-scale metabolic networks that can complement the classical reaction-centric framework. Analysing fluxes and their metabolic context simultaneously opens the door to systems biological interpretations that are not apparent from isolated reaction fluxes. Particularly powerful demonstrations of this are the analyses of the complete metabolic contexts of energy metabolism and the folate-dependent one-carbon pool presented in this work. Finally, a metabolite-centric view on flux distributions can guide the refinement of metabolic reconstructions for specific growth scenarios. PMID:23587327
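A toy flux balance analysis makes the "metabolite-centric" idea concrete: maximize a growth flux subject to steady-state mass balance, then compile a balance sheet for one metabolite by splitting its producing and consuming fluxes. The 3-metabolite network below is invented purely for illustration.

```python
# Minimal FBA plus a metabolite-centric balance sheet.
import numpy as np
from scipy.optimize import linprog

# columns: v0 uptake of A, v1 A->B, v2 A->C, v3 B+C->biomass
S = np.array([[ 1, -1, -1,  0],    # metabolite A
              [ 0,  1,  0, -1],    # metabolite B
              [ 0,  0,  1, -1]])   # metabolite C
bounds = [(0, 10)] * 4
res = linprog(c=[0, 0, 0, -1],     # maximize biomass flux v3
              A_eq=S, b_eq=np.zeros(3), bounds=bounds)
v = res.x

# balance sheet for metabolite B: which fluxes produce it, which consume it
row = S[1]
producing = {j: row[j] * v[j] for j in range(len(v)) if row[j] * v[j] > 0}
consuming = {j: -row[j] * v[j] for j in range(len(v)) if row[j] * v[j] < 0}
print("B produced by:", producing, "consumed by:", consuming)
```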
NASA Astrophysics Data System (ADS)
Singh, Nidhi; Avery, Mitchell A.; McCurdy, Christopher R.
2007-09-01
Mycobacterium tuberculosis 1-deoxy-D-xylulose-5-phosphate reductoisomerase (MtDXR) is a potential target for antitubercular chemotherapy. In the absence of its crystallographic structure, our aim was to develop a structural model of MtDXR. This will allow us to gain early insight into the structure and function of the enzyme and its likely binding to ligands and cofactors and thus, facilitate structure-based inhibitor design. To achieve this goal, initial models of MtDXR were generated using MODELER. The best quality model was refined using a series of minimizations and molecular dynamics simulations. A protein-ligand complex was also developed from the initial homology model of the target protein by including information about the known ligand as spatial restraints and optimizing the mutual interactions between the ligand and the binding site. The final model was evaluated on the basis of its ability to explain several site-directed mutagenesis data. Furthermore, a comparison of the homology model with the X-ray structure published in the final stages of the project shows excellent agreement and validates the approach. The knowledge gained from the current study should prove useful in the design and development of inhibitors as potential novel therapeutic agents against tuberculosis by either de novo drug design or virtual screening of large chemical databases.
Structure Refinement of Protein Low Resolution Models Using the GNEIMO Constrained Dynamics Method
Park, In-Hee; Gangupomu, Vamshi; Wagner, Jeffrey; Jain, Abhinandan; Vaidehi, Nagara-jan
2012-01-01
The challenge in protein structure prediction using homology modeling is the lack of reliable methods to refine the low resolution homology models. Unconstrained all-atom molecular dynamics (MD) does not serve well for structure refinement due to its limited conformational search. We have developed and tested the constrained MD method, based on the Generalized Newton-Euler Inverse Mass Operator (GNEIMO) algorithm, for protein structure refinement. In this method, the high-frequency degrees of freedom are replaced with hard holonomic constraints and a protein is modeled as a collection of rigid body clusters connected by flexible torsional hinges. This allows larger integration time steps and enhances the conformational search space. In this work, we have demonstrated the use of a restraint-free GNEIMO method for protein structure refinement that starts from low-resolution decoy sets derived from homology methods. For the eight proteins tested, with three decoys each, we observed an improvement of ~2 Å in the RMSD to the known experimental structures of these proteins. The GNEIMO method also showed enrichment in the population density of native-like conformations. In addition, we demonstrated structural refinement using a “Freeze and Thaw” clustering scheme with the GNEIMO framework as a viable tool for enhancing localized conformational search. We have derived a robust protocol based on the GNEIMO replica exchange method for protein structure refinement that can be readily extended to other proteins and is possibly applicable to high-throughput protein structure refinement. PMID:22260550
Advanced Computational Methods for High-accuracy Refinement of Protein Low-quality Models
NASA Astrophysics Data System (ADS)
Zang, Tianwu
Predicting the three-dimensional structure of a protein has been a major interest in modern computational biology. While many successful methods can generate models with 3-5 Å root-mean-square deviation (RMSD) from the solution, progress in refining these models has been quite slow. It is therefore urgently necessary to develop effective methods to bring low-quality models into higher-accuracy ranges (e.g., less than 2 Å RMSD). In this thesis, I present several novel computational methods to address the high-accuracy refinement problem. First, an enhanced sampling method, named parallel continuous simulated tempering (PCST), is developed to accelerate molecular dynamics (MD) simulation. Second, two energy biasing methods, Structure-Based Model (SBM) and Ensemble-Based Model (EBM), are introduced to perform targeted sampling around important conformations. Third, a three-step method is developed to blindly select high-quality models along the MD simulation. These methods work together to achieve significant refinement of low-quality models without any knowledge of the solution. The effectiveness of these methods is examined in different applications. Using the PCST-SBM method, models with higher global distance test scores (GDT_TS) are generated and selected in the MD simulation of 18 targets from the refinement category of the 10th Critical Assessment of Structure Prediction (CASP10). In addition, in the refinement test of two CASP10 targets using the PCST-EBM method, it is indicated that EBM may bring the initial model to even higher-quality levels. Furthermore, a multi-round refinement protocol of PCST-SBM improves the model quality of a protein to a level sufficiently high for molecular replacement in X-ray crystallography. Our results justify the crucial role of enhanced sampling in protein structure prediction and demonstrate that a considerable improvement of low-accuracy structures is still achievable with current force fields.
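A serial toy of the core trick behind simulated tempering (not the thesis's PCST algorithm): the temperature is itself a sampled variable, so the chain can heat up to hop between basins and cool down to refine within one. The double-well energy, temperature ladder and untuned weights are illustrative assumptions.

```python
# Simulated tempering on a 1-D double well: Metropolis moves in x alternate
# with Metropolis moves in the inverse-temperature index k.
import numpy as np

rng = np.random.default_rng(0)
energy = lambda x: (x**2 - 1.0)**2           # two basins, barrier at x = 0
betas = np.array([0.2, 0.5, 1.0, 2.0, 5.0])  # temperature ladder
w = np.zeros(len(betas))                     # log-weights (normally tuned)

x, k = -1.0, 4                               # start cold in the left basin
cold_samples = []
for step in range(100_000):
    # Metropolis move in x at the current temperature
    x_new = x + rng.normal(scale=0.3)
    if rng.random() < np.exp(min(0.0, -betas[k] * (energy(x_new) - energy(x)))):
        x = x_new
    # Metropolis move in temperature index
    k_new = min(max(k + rng.choice([-1, 1]), 0), len(betas) - 1)
    log_acc = -(betas[k_new] - betas[k]) * energy(x) + w[k_new] - w[k]
    if np.log(rng.random() + 1e-300) < log_acc:
        k = k_new
    if k == len(betas) - 1:                  # collect only cold-chain samples
        cold_samples.append(x)
```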
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heberle, Frederick A; Pan, Jianjun; Standaert, Robert F
2012-01-01
Some of our recent work has resulted in the detailed structures of fully hydrated, fluid phase phosphatidylcholine (PC) and phosphatidylglycerol (PG) bilayers. These structures were obtained from the joint refinement of small-angle neutron and X-ray data using the scattering density profile (SDP) models developed by Kučerka et al. (Kučerka et al. 2012; Kučerka et al. 2008). In this review, we first discuss models for the standalone analysis of neutron or X-ray scattering data from bilayers, and assess the strengths and weaknesses inherent in these models. In particular, it is recognized that standalone data do not contain enough information to fully resolve the structure of inherently disordered fluid bilayers, and therefore may not provide a robust determination of bilayer structural parameters, including the much sought after area per lipid. We then discuss the development of matter density-based models (including the SDP model) that allow for the joint refinement of different contrast neutron and X-ray data sets, as well as the implementation of local volume conservation in the unit cell (i.e., ideal packing). Such models provide natural definitions of bilayer thicknesses (most importantly the hydrophobic and Luzzati thicknesses) in terms of Gibbs dividing surfaces, and thus allow for the robust determination of lipid areas through equivalent slab relationships between bilayer thickness and lipid volume. In the final section of this review, we discuss some of the significant findings/features pertaining to structures of PC and PG bilayers as determined from SDP model analyses.
Mehl, Steffen W.; Hill, Mary C.
2007-01-01
This report documents the addition of the multiple-refined-areas capability to the shared node Local Grid Refinement (LGR) and Boundary Flow and Head (BFH) Package of MODFLOW-2005, the U.S. Geological Survey modular, three-dimensional, finite-difference ground-water flow model. LGR now provides the capability to simulate ground-water flow by using one or more block-shaped, higher resolution local grids (child model) within a coarser grid (parent model). LGR accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundaries. The ability to have multiple, nonoverlapping areas of refinement is important in situations where there is more than one area of concern within a regional model. In this circumstance, LGR can be used to simulate these distinct areas with higher resolution grids. LGR can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined ground-water systems. The BFH Package can be used to simulate these situations by using either the parent or child models independently.
Improving consensus structure by eliminating averaging artifacts
KC, Dukka B
2009-01-01
Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in a lower RMSD to the native protein than the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which could also benefit from our approach. PMID:19267905
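A minimal sketch of the refinement idea just described: Metropolis moves drive a starting Cα trace toward the averaged coordinates via a harmonic pseudo energy, while a simple clash penalty keeps local geometry physical. The spring and clash constants are invented for illustration.

```python
# Monte Carlo drive toward an averaged structure with a clash penalty.
import numpy as np

def pseudo_energy(xyz, xyz_avg, k_spring=1.0, clash_dist=3.0, k_clash=10.0):
    e = k_spring * np.sum((xyz - xyz_avg) ** 2)      # harmonic pull to average
    d = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=-1)
    iu = np.triu_indices(len(xyz), k=2)              # non-bonded pairs only
    e += k_clash * np.sum(np.clip(clash_dist - d[iu], 0.0, None) ** 2)
    return e

def refine(xyz_start, xyz_avg, n_steps=20_000, step=0.05, beta=2.0, seed=0):
    rng = np.random.default_rng(seed)
    xyz, e = xyz_start.copy(), pseudo_energy(xyz_start, xyz_avg)
    for _ in range(n_steps):
        i = rng.integers(len(xyz))                   # move one atom at a time
        trial = xyz.copy()
        trial[i] += rng.normal(scale=step, size=3)
        e_trial = pseudo_energy(trial, xyz_avg)
        if rng.random() < np.exp(min(0.0, -beta * (e_trial - e))):
            xyz, e = trial, e_trial                  # Metropolis acceptance
    return xyz
```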
NASA Astrophysics Data System (ADS)
Tucker, Deborah L.
Purpose. The purpose of this grounded theory study was to refine, using a Delphi study process, the four categories of the theoretical model of the comprehensive knowledge base required by providers of professional development for K-12 teachers of science generated from a review of the literature. Methodology. This grounded theory study used data collected through a modified Delphi technique and interviews to refine and validate the literature-based knowledge base required by providers of professional development for K-12 teachers of science. Twenty-three participants, experts in the fields of science education, how people learn, instructional and assessment strategies, and learning contexts, responded to the study's questions. Findings. By "densifying" the four categories of the knowledge base, this study determined the causal conditions (the science subject matter knowledge), the intervening conditions (how people learn), the strategies (the effective instructional and assessment strategies), and the context (the context and culture of formal learning environments) surrounding the science professional development process. Eight sections were added to the literature-based knowledge base; the final model comprised forty-nine sections. The average length of the operational definitions increased nearly threefold, and the number of citations per operational definition increased more than twofold. Conclusions. A four-category comprehensive model that can serve as the foundation for the knowledge base required by science professional developers now exists. Subject matter knowledge includes science concepts, inquiry, the nature of science, and scientific habits of mind; how people learn includes the principles of learning, active learning, andragogy, variations in learners, neuroscience and cognitive science, and change theory; effective instructional and assessment strategies include constructivist learning and inquiry-based teaching, differentiation of instruction, making knowledge and thinking accessible to learners, automatic and fluent retrieval of nonscience-specific skills, science assessment and assessment strategies, science-specific instructional strategies, and safety within a learning environment; and contextual knowledge includes curriculum selection and implementation strategies and knowledge of building program coherence. Recommendations. Further research is recommended to determine which specific instructional strategies identified in the refined knowledge base have positive, significant effect sizes for adult learners.
Refinement of protein termini in template-based modeling using conformational space annealing.
Park, Hahnbeom; Ko, Junsu; Joo, Keehyoung; Lee, Julian; Seok, Chaok; Lee, Jooyoung
2011-09-01
The rapid increase in the number of experimentally determined protein structures in recent years enables us to obtain more reliable protein tertiary structure models than ever by template-based modeling. However, refinement of template-based models beyond the limit available from the best templates is still needed for understanding protein function in atomic detail. In this work, we develop a new method for protein terminus modeling that can be applied to refinement of models with unreliable terminus structures. The energy function for terminus modeling consists of both physics-based and knowledge-based potential terms with carefully optimized relative weights. Effective sampling of both the framework and terminus is performed using the conformational space annealing technique. This method has been tested on a set of termini derived from a nonredundant structure database and two sets of termini from the CASP8 targets. The performance of the terminus modeling method is significantly improved over our previous method that does not employ terminus refinement. It is also comparable or superior to the best server methods tested in CASP8. The success of the current approach suggests that similar strategy may be applied to other types of refinement problems such as loop modeling or secondary structure rearrangement. Copyright © 2011 Wiley-Liss, Inc.
Variability of Protein Structure Models from Electron Microscopy.
Monroe, Lyman; Terashi, Genki; Kihara, Daisuke
2017-04-04
An increasing number of biomolecular structures are solved by electron microscopy (EM). However, the quality of structure models determined from EM maps varies substantially. To understand to what extent structure models are supported by information embedded in EM maps, we used two computational structure refinement methods to examine how much structures can be refined, using a dataset of 49 maps with accompanying structure models. The extent of structure modification, as well as the disagreement between refinement models produced by the two computational methods, scaled inversely with the global and the local map resolutions. A general quantitative estimate of the deviations of structures for particular map resolutions is provided. Our results indicate that the observed discrepancy between the deposited map and the refined models is due to the lack of structural information present in EM maps, and thus these annotations must be used with caution for further applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
X-ray structure determination at low resolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunger, Axel T., E-mail: brunger@stanford.edu; Department of Molecular and Cellular Physiology, Stanford University; Department of Neurology and Neurological Sciences, Stanford University
2009-02-01
Refinement is meaningful even at 4 Å or lower, but with present methodologies it should start from high-resolution crystal structures whenever possible. As an example of structure determination in the 3.5–4.5 Å resolution range, crystal structures of the ATPase p97/VCP, consisting of an N-terminal domain followed by a tandem pair of ATPase domains (D1 and D2), are discussed. The structures were originally solved by molecular replacement with the high-resolution structure of the N-D1 fragment of p97/VCP, whereas the D2 domain was manually built using its homology to the D1 domain as a guide. The structure of the D2 domain alone was subsequently solved at 3 Å resolution. The refined model of D2 and the high-resolution structure of the N-D1 fragment were then used as starting models for re-refinement against the low-resolution diffraction data for full-length p97. The re-refined full-length models showed significant improvement in both secondary structure and R values. The free R values dropped by as much as 5% compared with the original structure refinements, indicating that refinement is meaningful at low resolution and that there is information in the diffraction data even at ∼4 Å resolution that objectively assesses the quality of the model. It is concluded that de novo model building is problematic at low resolution and refinement should start from high-resolution crystal structures whenever possible.
JT9D ceramic outer air seal system refinement program
NASA Technical Reports Server (NTRS)
Gaffin, W. O.
1982-01-01
The abradability and durability characteristics of the plasma sprayed system were improved by refinement and optimization of the plasma spray process and the metal substrate design. The acceptability of the final seal system for engine testing was demonstrated by an extensive rig test program which included thermal shock tolerance, thermal gradient, thermal cycle, erosion, and abradability tests. An interim seal system design was also subjected to 2500 endurance test cycles in a JT9D-7 engine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tiegs, T.N.
The Cooperative Research and Development Agreement (CRADA) was to develop composites of TiC-Ni3Al with refined grain microstructures for application in diesel engine fuel injection devices. Grain refinement is important for improved wear resistance and high strength for the applications of interest. Attrition milling effectively reduces the initial particle size and leads to a reduction of the final grain size. However, an increase in the oxygen content occurs concomitantly with the grinding operation and decreased densification of the compacts occurs during sintering.
Zhang, Qinghai; Lin, Changhu; Duan, Wenjuan; Wang, Xiao; Luo, Aiqin
2013-12-12
pH-Zone-refining counter-current chromatography was successfully applied to the preparative isolation and purification of six alkaloids from the ethanol extracts of Uncaria macrophylla Wall. Because of the low content of alkaloids (about 0.2%, w/w) in U. macrophylla Wall., the target compounds were first enriched by pH-zone-refining counter-current chromatography using a two-phase solvent system composed of petroleum ether-ethyl acetate-isopropanol-water (2:6:3:9, v/v), with 10 mM triethylamine added to the organic stationary phase and 5 mM hydrochloric acid to the aqueous mobile phase. pH-Zone-refining counter-current chromatography with a second two-phase solvent system was then used for final purification. The six target compounds were isolated and purified with a two-phase solvent system composed of methyl tert-butyl ether (MTBE)-acetonitrile-water (4:0.5:5, v/v), with triethylamine (TEA) (10 mM) added to the organic phase and HCl (5 mM) to the aqueous mobile phase. The separation of 2.8 g of enriched total alkaloids yielded 36 mg hirsutine, 48 mg hirsuteine, 82 mg uncarine C, 73 mg uncarine E, 163 mg rhynchophylline, and 149 mg corynoxeine, all with purities above 96% as verified by HPLC. Their structures were identified by electrospray ionization-mass spectrometry (ESI-MS) and 1H-NMR spectroscopy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, John J.; Greer, Christopher B.; Carr, Adrianne E.
2014-10-01
The purpose of this study is to update a one-dimensional analytical groundwater flow model to examine the influence of potential groundwater withdrawal in support of utility-scale solar energy development at the Afton Solar Energy Zone (SEZ) as a part of the Bureau of Land Management’s (BLM’s) Solar Energy Program. This report describes the modeling for assessing the drawdown associated with SEZ groundwater pumping rates for a 20-year duration considering three categories of water demand (high, medium, and low) based on technology-specific considerations. The 2012 modeling effort published in the Final Programmatic Environmental Impact Statement for Solar Energy Development in Six Southwestern States (Solar PEIS; BLM and DOE 2012) has been refined based on additional information described below in an expanded hydrogeologic discussion.
Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-05-29
Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting “good” values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and an enhanced signal-to-noise ratio.
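The ASA selection-then-average step reduces to a few lines of array code; the array shape and the quantile-based spread cutoff below are assumptions made for illustration, not the paper's exact criterion.

```python
# Sketch of the adaptive spatial average (ASA) step: at each grid point an
# ensemble filter has produced posterior parameter values; only points whose
# ensemble spread is small ("good" estimates) enter the global average.
import numpy as np

def adaptive_spatial_average(param_post, spread_quantile=0.25):
    """param_post: ensemble of spatially varying posteriors, shape (ens, ny, nx)."""
    mean = param_post.mean(axis=0)               # per-point posterior mean
    spread = param_post.std(axis=0)              # per-point ensemble spread
    cutoff = np.quantile(spread, spread_quantile)
    good = spread <= cutoff                      # keep only low-spread points
    return mean[good].mean()                     # single global posterior value
```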
Koparde, Vishal N.; Scarsdale, J. Neel; Kellogg, Glen E.
2011-01-01
Background The quality of X-ray crystallographic models for biomacromolecules refined from data obtained at high-resolution is assured by the data itself. However, at low-resolution, >3.0 Å, additional information is supplied by a forcefield coupled with an associated refinement protocol. These resulting structures are often of lower quality and thus unsuitable for downstream activities like structure-based drug discovery. Methodology An X-ray crystallography refinement protocol that enhances standard methodology by incorporating energy terms from the HINT (Hydropathic INTeractions) empirical forcefield is described. This protocol was tested by refining synthetic low-resolution structural data derived from 25 diverse high-resolution structures, and referencing the resulting models to these structures. The models were also evaluated with global structural quality metrics, e.g., Ramachandran score and MolProbity clashscore. Three additional structures, for which only low-resolution data are available, were also re-refined with this methodology. Results The enhanced refinement protocol is most beneficial for reflection data at resolutions of 3.0 Å or worse. At the low-resolution limit, ≥4.0 Å, the new protocol generated models whose Cα positions are on average 0.18 Å closer in RMSD to the reference high-resolution structure, whose Ramachandran scores improved by 13%, and whose clashscores improved by 51%, all in comparison to models generated with the standard refinement protocol. The hydropathic forcefield terms are at least as effective as Coulombic electrostatic terms in maintaining polar interaction networks, and significantly more effective in maintaining hydrophobic networks, as synthetic resolution is decremented. Even at resolutions ≥4.0 Å, these latter networks are generally native-like, as measured with a hydropathic interactions scoring tool. PMID:21246043
Model Assessment of the Impact on Ozone of Subsonic and Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Ko, Malcolm; Weisenstein, Debra; Danilin, Michael; Scott, Courtney; Shia, Run-Lie
2000-01-01
This is the final report for work performed between June 1999 through May 2000. The work represents continuation of the previous contract which encompasses five areas: (1) continued refinements and applications of the 2-D chemistry-transport model (CTM) to assess the ozone effects from aircraft operation in the stratosphere; (2) studying the mechanisms that determine the evolution of the sulfur species in the aircraft plume and how such mechanisms affect the way aircraft sulfur emissions should be introduced into global models; (3) the development of diagnostics in the AER 3-wave interactive model to assess the importance of the dynamics feedback and zonal asymmetry in model prediction of ozone response to aircraft operation; (4) the development of a chemistry parameterization scheme in support of the global modeling initiative (GMI); and (5) providing assessment results for preparation of national and international reports which include the "Aviation and the Global Atmosphere" prepared by the Intergovernmental Panel on Climate Change, "Assessment of the effects of high-speed aircraft in the stratosphere: 1998" by NASA, and the "Model and Measurements Intercomparison II" by NASA. Part of the work was reported in the final report. We participated in the SAGE III Ozone Loss and Validation Experiment (SOLVE) campaign and we continue with our analyses of the data.
Overview of refinement procedures within REFMAC5: utilizing data from different sources.
Kovalevskiy, Oleg; Nicholls, Robert A; Long, Fei; Carlon, Azzurra; Murshudov, Garib N
2018-03-01
Refinement is a process that involves bringing into agreement the structural model, available prior knowledge and experimental data. To achieve this, the refinement procedure optimizes a posterior conditional probability distribution of model parameters, including atomic coordinates, atomic displacement parameters (B factors), scale factors, parameters of the solvent model and twin fractions in the case of twinned crystals, given observed data such as observed amplitudes or intensities of structure factors. A library of chemical restraints is typically used to ensure consistency between the model and the prior knowledge of stereochemistry. If the observation-to-parameter ratio is small, for example when diffraction data only extend to low resolution, the Bayesian framework implemented in REFMAC5 uses external restraints to inject additional information extracted from structures of homologous proteins, prior knowledge about secondary-structure formation and even data obtained using different experimental methods, for example NMR. The refinement procedure also generates the 'best' weighted electron-density maps, which are useful for further model (re)building. Here, the refinement of macromolecular structures using REFMAC5 and related tools distributed as part of the CCP4 suite is discussed.
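Schematically, the posterior being optimized can be written as follows; this is a generic Bayesian formulation, not REFMAC5's exact target function:

```latex
P(\mathbf{x}\mid\mathbf{o}) \propto P(\mathbf{o}\mid\mathbf{x})\,P(\mathbf{x}),
\qquad
-\log P(\mathbf{x}\mid\mathbf{o}) =
\underbrace{-\log P(\mathbf{o}\mid\mathbf{x})}_{\text{fit to observed amplitudes/intensities}}
\;\underbrace{-\log P(\mathbf{x})}_{\text{chemical and external restraints}}
\;+\;\text{const}
```

where x collects the model parameters (coordinates, B factors, scale, solvent and twin parameters) and o denotes the observed data; the restraint term is where the chemical library and any external (homology, secondary-structure, NMR-derived) information enter.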
Diversity and specificity: auxin perception and signaling through the TIR1/AFB pathway
Wang, Renhou; Estelle, Mark
2014-07-15
Auxin is a versatile plant hormone that plays an essential role in most aspects of plant growth and development. Auxin regulates various growth processes by modulating gene transcription through a SCF TIR1/AFB-Aux/IAA-ARF nuclear signaling module. Recent work has generated clues as to how multiple layers of regulation of the auxin signaling components may result in diverse and specific response outputs. In particular, interaction and structural studies of key auxin signaling proteins have produced novel insights into the molecular basis of auxin-regulated transcription and may lead to a refined auxin signaling model.
Vortex breakdown simulation - A circumspect study of the steady, laminar, axisymmetric model
NASA Technical Reports Server (NTRS)
Salas, M. D.; Kuruvila, G.
1989-01-01
The incompressible axisymmetric steady Navier-Stokes equations are written using the streamfunction-vorticity formulation. The resulting equations are discretized using a second-order central-difference scheme. The discretized equations are linearized and then solved using an exact LU decomposition, Gaussian elimination, and Newton iteration. Solutions are presented for Reynolds numbers (based on vortex core radius) 100-1800 and swirl parameter 0.9-1.1. The effects of inflow boundary conditions, the location of farfield and outflow boundaries, and mesh refinement are examined. Finally, the stability of the steady solutions is investigated by solving the time-dependent equations.
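For reference, the planar analogue of the streamfunction-vorticity formulation reads as follows (the axisymmetric version used in the paper carries additional 1/r metric terms and a swirl-velocity equation):

```latex
u=\frac{\partial\psi}{\partial y},\qquad
v=-\frac{\partial\psi}{\partial x},\qquad
\nabla^{2}\psi=-\omega,\qquad
u\,\frac{\partial\omega}{\partial x}+v\,\frac{\partial\omega}{\partial y}
 =\frac{1}{Re}\,\nabla^{2}\omega
```

Discretizing these with second-order central differences and linearizing, as the abstract describes, yields the sparse system solved by LU decomposition and Newton iteration.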
Gyre and gimble: a maximum-likelihood replacement for Patterson correlation refinement.
McCoy, Airlie J; Oeffner, Robert D; Millán, Claudia; Sammito, Massimo; Usón, Isabel; Read, Randy J
2018-04-01
Descriptions are given of the maximum-likelihood gyre method implemented in Phaser for optimizing the orientation and relative position of rigid-body fragments of a model after the orientation of the model has been identified, but before the model has been positioned in the unit cell, and also the related gimble method for the refinement of rigid-body fragments of the model after positioning. Gyre refinement helps to lower the root-mean-square atomic displacements between model and target molecular-replacement solutions for the test case of antibody Fab(26-10) and improves structure solution with ARCIMBOLDO_SHREDDER.
Wan, Minghui; Liao, Dongjiang; Peng, Guilin; Xu, Xin; Yin, Weiqiang; Guo, Guixin; Jiang, Funeng; Zhong, Weide
2017-01-01
Chloride intracellular channel 1 (CLIC1) is involved in the development of most aggressive human tumors, including gastric, colon, lung, liver, and glioblastoma cancers. It has become an attractive new therapeutic target for several types of cancer. In this work, we aim to identify natural products as potent CLIC1 inhibitors from the Traditional Chinese Medicine (TCM) database using structure-based virtual screening and molecular dynamics (MD) simulation. First, structure-based docking was employed to screen the refined TCM database, and the top 500 TCM compounds were obtained and reranked by X-Score. Then, 30 potent hits were selected from the top 500 TCM compounds using cluster and ligand-protein interaction analysis. Finally, MD simulation was employed to validate the stability of interactions between each hit and the CLIC1 protein from the docking simulation, and Molecular Mechanics/Generalized Born Surface Area (MM-GBSA) analysis was used to refine the virtual hits. Six TCM compounds with top MM-GBSA scores and ideal binding models were confirmed as the final hits. Our study provides information about the interaction between TCM compounds and the CLIC1 protein, which may be helpful for further experimental investigations. In addition, the top 6 natural products' structural scaffolds could serve as building blocks in designing drug-like molecules for CLIC1 inhibition. PMID:29147652
Mehl, S.; Hill, M.C.
2004-01-01
This paper describes work that extends to three dimensions the two-dimensional local-grid refinement method for block-centered finite-difference groundwater models of Mehl and Hill [Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes. Adv Water Resour 2002;25(5):497-511]. In this approach, the (parent) finite-difference grid is discretized more finely within a (child) sub-region. The grid refinement method sequentially solves each grid and uses specified flux (parent) and specified head (child) boundary conditions to couple the grids. Iteration achieves convergence between heads and fluxes of both grids. Of most concern is how to interpolate heads onto the boundary of the child grid such that the physics of the parent-grid flow is retained in three dimensions. We develop a new two-step, "cage-shell" interpolation method based on the solution of the flow equation on the boundary of the child between nodes shared with the parent grid. Error analysis using a test case indicates that the shared-node local grid refinement method with cage-shell boundary head interpolation is accurate and robust, and the resulting code is used to investigate three-dimensional local grid refinement of stream-aquifer interactions. Results reveal that (1) the parent and child grids interact to shift the true head and flux solution to a different solution where the heads and fluxes of both grids are in equilibrium, (2) the locally refined model provided a solution for both heads and fluxes in the region of the refinement that was more accurate than a model without refinement only if iterations are performed so that both heads and fluxes are in equilibrium, and (3) the accuracy of the coupling is limited by the parent-grid size: a coarse parent grid limits correct representation of the hydraulics in the feedback from the child grid.
Chapman, Michael S; Trzynka, Andrew; Chapman, Brynmor K
2013-04-01
When refining the fit of component atomic structures into electron microscopic reconstructions, use of a resolution-dependent atomic density function makes it possible to jointly optimize the atomic model and imaging parameters of the microscope. Atomic density is calculated by one-dimensional Fourier transform of atomic form factors convoluted with a microscope envelope correction and a low-pass filter, allowing refinement of imaging parameters such as resolution, by optimizing the agreement of calculated and experimental maps. A similar approach allows refinement of atomic displacement parameters, providing indications of molecular flexibility even at low resolution. A modest improvement in atomic coordinates is possible following optimization of these additional parameters. Methods have been implemented in a Python program that can be used in stand-alone mode for rigid-group refinement, or embedded in other optimizers for flexible refinement with stereochemical restraints. The approach is demonstrated with refinements of virus and chaperonin structures at resolutions of 9 through 4.5 Å, representing regimes where rigid-group and fully flexible parameterizations are appropriate. Through comparisons to known crystal structures, flexible fitting by RSRef is shown to be an improvement relative to other methods and to generate models with all-atom rms accuracies of 1.5-2.5 Å at resolutions of 4.5-6 Å. Copyright © 2013 Elsevier Inc. All rights reserved.
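The resolution-dependent atomic density just described can be sketched numerically: a form factor is damped by a microscope envelope and a low-pass filter, then carried to real space by a spherical one-dimensional Fourier transform. The single-Gaussian form factor and the envelope/filter constants below are placeholders, not the program's tabulated values.

```python
# Numerical sketch: real-space atomic density from a damped form factor.
import numpy as np

def atomic_density(r, B=30.0, d_cut=6.0, ds=1e-3, s_max=1.0):
    s = np.arange(ds, s_max, ds)               # spatial frequency (1/Angstrom)
    f = np.exp(-10.0 * s**2)                   # toy Gaussian form factor
    envelope = np.exp(-B * s**2 / 4.0)         # B-factor-style envelope damping
    lowpass = 1.0 / (1.0 + (s * d_cut)**8)     # soft cutoff near 1/d_cut
    integrand = 4.0 * np.pi * s**2 * f * envelope * lowpass
    # spherical FT: rho(r) = integral of 4 pi s^2 F(s) sinc(2 pi s r) ds
    arg = 2.0 * np.pi * np.outer(np.atleast_1d(r), s)
    return np.trapz(integrand * np.sinc(arg / np.pi), s, axis=-1)

r = np.linspace(0.0, 5.0, 101)
rho = atomic_density(r)   # profile broadens as B grows or resolution worsens
```

Optimizing the agreement between maps calculated from such profiles and the experimental map is what lets the resolution and displacement parameters be refined alongside the coordinates.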
Martinez, N E; Johnson, T E; Pinder, J E
2016-01-01
This study compares three anatomical phantoms for rainbow trout (Oncorhynchus mykiss) for the purpose of estimating organ radiation dose and dose rates from molybdenum-99 ((99)Mo) uptake in the liver and GI tract. Model comparison and refinement are important to the process of determining accurate doses and dose rates to the whole body and the various organs. Accurate and consistent dosimetry is crucial to the determination of appropriate dose-effect relationships for use in environmental risk assessment. The computational phantoms considered are (1) a geometrically defined model employing anatomically relevant organ size and location, (2) a voxel reconstruction of internal anatomy obtained from CT imaging, and (3) a new model utilizing NURBS surfaces to refine the model in (2). Dose Conversion Factors (DCFs) for the whole body as well as selected organs of O. mykiss were computed using Monte Carlo modeling and combined with empirical models for predicting activity concentration to estimate dose rates and ultimately determine cumulative radiation dose (μGy) to selected organs after several half-lives of (99)Mo. The computational models provided similar results, especially for organs that were both the source and target of radiation (less than 30% difference among all models). Values in the empirical model as well as the 14-day cumulative organ doses determined from (99)Mo uptake are compared to similar models developed previously for (131)I. Finally, consideration is given to treating the GI tract as a solid organ compared to partitioning it into gut contents and GI wall, which resulted in an order of magnitude difference in estimated dose for most organs. Copyright © 2015 Elsevier Ltd. All rights reserved.
Snitkin, Evan S; Dudley, Aimée M; Janse, Daniel M; Wong, Kaisheen; Church, George M; Segrè, Daniel
2008-01-01
Background Understanding the response of complex biochemical networks to genetic perturbations and environmental variability is a fundamental challenge in biology. Integration of high-throughput experimental assays and genome-scale computational methods is likely to produce insight otherwise unreachable, but specific examples of such integration have only begun to be explored. Results In this study, we measured growth phenotypes of 465 Saccharomyces cerevisiae gene deletion mutants under 16 metabolically relevant conditions and integrated them with the corresponding flux balance model predictions. We first used discordance between experimental results and model predictions to guide a stage of experimental refinement, which resulted in a significant improvement in the quality of the experimental data. Next, we used discordance still present in the refined experimental data to assess the reliability of yeast metabolism models under different conditions. In addition to estimating predictive capacity based on growth phenotypes, we sought to explain these discordances by examining predicted flux distributions visualized through a new, freely available platform. This analysis led to insight into the glycerol utilization pathway and the potential effects of metabolic shortcuts on model results. Finally, we used model predictions and experimental data to discriminate between alternative raffinose catabolism routes. Conclusions Our study demonstrates how a new level of integration between high throughput measurements and flux balance model predictions can improve understanding of both experimental and computational results. The added value of a joint analysis is a more reliable platform for specific testing of biological hypotheses, such as the catabolic routes of different carbon sources. PMID:18808699
Predicting perceptual quality of images in realistic scenario using deep filter banks
NASA Astrophysics Data System (ADS)
Zhang, Weixia; Yan, Jia; Hu, Shiyong; Ma, Yang; Deng, Dexiang
2018-03-01
Classical image perceptual quality assessment models usually resort to natural scene statistics methods, which are based on the assumption that certain reliable statistical regularities hold for undistorted images and are corrupted by introduced distortions. However, these models usually fail to accurately predict the degradation severity of images in realistic scenarios, since complex, multiple, and interactive authentic distortions usually appear in them. We propose a quality prediction model based on a convolutional neural network. Quality-aware features extracted from the filter banks of multiple convolutional layers are aggregated into the image representation. Furthermore, an easy-to-implement and effective feature selection strategy is used to further refine the image representation, and finally a linear support vector regression model is trained to map image representations to subjective perceptual quality scores. The experimental results on benchmark databases demonstrate the effectiveness and generalizability of the proposed model.
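A rough sketch of that pipeline: convolutional feature maps tapped from several layers are pooled into one image representation, and a linear support vector regressor maps representations to subjective scores. A pretrained VGG16 stands in for the paper's filter banks; the layer choice, pooling and the training arrays (`train_images`, `train_mos`, `test_image`) are illustrative assumptions.

```python
# Deep filter-bank features + linear SVR for perceptual quality prediction.
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import LinearSVR

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
taps = {4, 9, 16, 23, 30}                 # after VGG16's five pooling stages

def describe(img):                        # img: (3, H, W) float tensor
    x, feats = img.unsqueeze(0), []
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in taps:                 # global-average-pool each tapped map
                feats.append(x.mean(dim=(2, 3)).squeeze(0))
    return torch.cat(feats).numpy()

# train on images with mean opinion scores (hypothetical arrays)
X = np.stack([describe(img) for img in train_images])
reg = LinearSVR(C=1.0, max_iter=10_000).fit(X, train_mos)
pred = reg.predict(describe(test_image)[None, :])
```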
NASA Astrophysics Data System (ADS)
Majta, J.; Zurek, A. K.; Trujillo, C. P.; Bator, A.
2003-09-01
This work presents validation of an integrated computer model to predict the impact of microstructure evolution on the mechanical behavior of niobium-microalloyed steels under dynamic loading conditions. Microstructurally based constitutive equations describing the mechanical behavior of the mixed α and γ phases are proposed. It is shown that for a given finishing temperature and strain, the Nb steel exhibits a strong influence of strain rate on the flow stress and final structure. This tendency is also observed in calculated results obtained using the proposed modeling procedures. High strain rates influence the deformation mechanism and reduce the extent of recovery occurring during and after deformation, which in turn increases the driving force for transformation. On the other hand, the ratio of nucleation rate to growth rate increases for lower strain rates (due to the higher number of nuclei that can be produced during an extended loading time), leading to a refined ferrite structure. However, as expected, such behavior produces higher inhomogeneity in the final product. Multistage quasistatic compression tests and tests using the Hopkinson pressure bar under different temperature, strain, and strain-rate conditions are used for verification of the proposed models.
2009-08-11
This final rule updates the payment rates used under the prospective payment system (PPS) for skilled nursing facilities (SNFs), for fiscal year (FY) 2010. In addition, it recalibrates the case-mix indexes so that they more accurately reflect parity in expenditures related to the implementation of case-mix refinements in January 2006. It also discusses the results of our ongoing analysis of nursing home staff time measurement data collected in the Staff Time and Resource Intensity Verification project, as well as a new Resource Utilization Groups, version 4 case-mix classification model for FY 2011 that will use the updated Minimum Data Set 3.0 resident assessment for case-mix classification. In addition, this final rule discusses the public comments that we have received on these and other issues, including a possible requirement for the quarterly reporting of nursing home staffing data, as well as on applying the quality monitoring mechanism in place for all other SNF PPS facilities to rural swing-bed hospitals. Finally, this final rule revises the regulations to incorporate certain technical corrections.
The PDB_REDO server for macromolecular structure model optimization.
Joosten, Robbie P; Long, Fei; Murshudov, Garib N; Perrakis, Anastassis
2014-07-01
The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395-1412]. The PDB_REDO procedure aims for 'constructive validation', aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB.
Keegan, Ronan M; McNicholas, Stuart J; Thomas, Jens M H; Simpkin, Adam J; Simkovic, Felix; Uski, Ville; Ballard, Charles C; Winn, Martyn D; Wilson, Keith S; Rigden, Daniel J
2018-03-01
Increasing sophistication in molecular-replacement (MR) software and the rapid expansion of the PDB in recent years have allowed the technique to become the dominant method for determining the phases of a target structure in macromolecular X-ray crystallography. In addition, improvements in bioinformatic techniques for finding suitable homologous structures for use as MR search models, combined with developments in refinement and model-building techniques, have pushed the applicability of MR to lower sequence identities and made weak MR solutions more amenable to refinement and improvement. MrBUMP is a CCP4 pipeline which automates all stages of the MR procedure. Its scope covers everything from the sourcing and preparation of suitable search models right through to rebuilding of the positioned search model. Recent improvements to the pipeline include the adoption of more sensitive bioinformatic tools for sourcing search models, enhanced model-preparation techniques including better ensembling of homologues, and the use of phase improvement and model building on the resulting solution. The pipeline has also been deployed as an online service through CCP4 online, which allows its users to exploit large bioinformatic databases and coarse-grained parallelism to speed up the determination of a possible solution. Finally, the molecular-graphics application CCP4mg has been combined with MrBUMP to provide an interactive visual aid to the user during the process of selecting and manipulating search models for use in MR. Here, these developments in MrBUMP are described with a case study to explore how some of the enhancements to the pipeline and to CCP4mg can help to solve a difficult case.
MODFLOW-LGR: Practical application to a large regional dataset
NASA Astrophysics Data System (ADS)
Barnes, D.; Coulibaly, K. M.
2011-12-01
In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.
[Research on non-rigid registration of multi-modal medical image based on Demons algorithm].
Hao, Peibo; Chen, Zhen; Jiang, Shaofeng; Wang, Yang
2014-02-01
Non-rigid medical image registration is an active research topic in medical imaging with important clinical value. In this paper we put forward an improved Demons algorithm that combines a gray-level conservation model with a local structure tensor conservation model to construct a new energy function for the multi-modal registration problem. We then applied the L-BFGS algorithm to optimize the energy function and solve the complex three-dimensional optimization problem. Finally, we used multi-scale hierarchical refinement to handle large-deformation registration. The experimental results showed that the proposed algorithm performs well for large-deformation, multi-modal, three-dimensional medical image registration.
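To make the registration machinery concrete, here is a minimal single-resolution Demons update in the classic mono-modal form, using NumPy/SciPy; the paper's multi-modal energy (gray-level conservation plus local structure tensor) and its L-BFGS optimization are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def demons_step(fixed, moving_warped, disp, alpha=1.0, sigma=1.0):
    """One Thirion-style Demons update on 2-D images (toy sketch).
    fixed, moving_warped: 2-D arrays; disp: (H, W, 2) displacement field.
    The paper's multi-modal energy would replace the plain intensity
    difference used here."""
    gy, gx = np.gradient(fixed)
    diff = moving_warped - fixed
    denom = gx**2 + gy**2 + (alpha * diff)**2 + 1e-12
    disp[..., 0] += diff * gy / denom          # row-direction update
    disp[..., 1] += diff * gx / denom          # column-direction update
    # Gaussian smoothing regularizes the deformation field
    disp[..., 0] = gaussian_filter(disp[..., 0], sigma)
    disp[..., 1] = gaussian_filter(disp[..., 1], sigma)
    return disp
```

In practice this step would be iterated inside a coarse-to-fine image pyramid, which is the multi-scale hierarchical refinement the abstract refers to.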
NASA Astrophysics Data System (ADS)
Dennis, L.; Roesler, E. L.; Guba, O.; Hillman, B. R.; McChesney, M.
2016-12-01
The Atmospheric Radiation Measurement (ARM) climate research facility has three sites located on the North Slope of Alaska (NSA): Barrow, Oliktok, and Atqasuk. These sites, in combination with one other at Toolik Lake, have the potential to become a "megasite" which would combine observational data and high-resolution modeling to produce high-resolution data products for the climate community. Such a data product requires high-resolution modeling over the area of the megasite. We present three variable-resolution atmospheric general circulation model (AGCM) configurations as potential alternatives to stand-alone high-resolution regional models. Each configuration is based on a global cubed-sphere grid with an effective resolution of 1 degree, with a refinement in resolution down to 1/8 degree over an area surrounding the ARM megasite. The three grids vary in the size of the refined area, with 13k, 9k, and 7k elements. SquadGen, NCL, and GIMP are used to create the grids. Grids vary based upon the selection of areas of refinement which capture climate and weather processes that may affect a proposed NSA megasite. A smaller area of high resolution may not fully resolve climate and weather processes before they reach the NSA; however, grids with smaller areas of refinement have a significantly reduced computational cost compared with grids with larger areas of refinement. The optimal size and shape of the area of refinement for a variable-resolution model at the NSA is investigated.
Segmental Refinement: A Multigrid Technique for Data Locality
Adams, Mark F.; Brown, Jed; Knepley, Matt; ...
2016-08-04
In this paper, we investigate a domain-decomposed multigrid technique, termed segmental refinement, for solving general nonlinear elliptic boundary value problems. We extend the method first proposed in 1994 by analytically and experimentally investigating its complexity. We confirm that communication of traditional parallel multigrid is eliminated on fine grids, with modest amounts of extra work and storage, while maintaining the asymptotic exactness of full multigrid. We observe an accuracy dependence on the segmental refinement subdomain size, which was not considered in the original analysis. Finally, we present a communication complexity analysis that quantifies the communication costs ameliorated by segmental refinement and report performance results with up to 64K cores on a Cray XC30.
Hoard, C.J.
2010-01-01
The U.S. Geological Survey is evaluating water availability and use within the Great Lakes Basin. This is a pilot effort to develop new techniques and methods to aid in the assessment of water availability. As part of the pilot program, a regional groundwater-flow model for the Lake Michigan Basin was developed using SEAWAT-2000. The regional model was used as a framework for assessing local-scale water availability through grid-refinement techniques. Two grid-refinement techniques, telescopic mesh refinement and local grid refinement, were used to illustrate the capability of the regional model to evaluate local-scale problems. An intermediate model was developed in central Michigan spanning an area of 454 square miles (mi2) using telescopic mesh refinement. Within the intermediate model, a smaller local model covering an area of 21.7 mi2 was developed and simulated using local grid refinement. Recharge was distributed in space and time using a daily output from a modified Thornthwaite-Mather soil-water-balance method. The soil-water-balance method derived recharge estimates from temperature and precipitation data output from an atmosphere-ocean coupled general-circulation model. The particular atmosphere-ocean coupled general-circulation model used, simulated climate change caused by high global greenhouse-gas emissions to the atmosphere. The surface-water network simulated in the regional model was refined and simulated using a streamflow-routing package for MODFLOW. The refined models were used to demonstrate streamflow depletion and potential climate change using five scenarios. The streamflow-depletion scenarios include (1) natural conditions (no pumping), (2) a pumping well near a stream; the well is screened in surficial glacial deposits, (3) a pumping well near a stream; the well is screened in deeper glacial deposits, and (4) a pumping well near a stream; the well is open to a deep bedrock aquifer. Results indicated that a range of 59 to 50 percent of the water pumped originated from the stream for the shallow glacial and deep bedrock pumping scenarios, respectively. The difference in streamflow reduction between the shallow and deep pumping scenarios was compensated for in the deep well by deriving more water from regional sources. The climate-change scenario only simulated natural conditions from 1991-2044, so there was no pumping stress simulated. Streamflows were calculated for the simulated period and indicated that recharge over the period generally increased from the start of the simulation until approximately 2017, and decreased from then to the end of the simulation. Streamflow was highly correlated with recharge so that the lowest streamflows occurred in the later stress periods of the model when recharge was lowest.
Sumner, David M.; Pathak, Chandra S.; Mecikalski, John R.; Paech, Simon J.; Wu, Qinglong; Sangoyomi, Taiye; Babcock, Roger W.; Walton, Raymond
2008-01-01
Solar radiation data are critically important for the estimation of evapotranspiration. Analysis of visible-channel data derived from Geostationary Operational Environmental Satellites (GOES) using radiative transfer modeling has been used to produce spatially and temporally distributed datasets of solar radiation. An extensive network of pyranometer surface measurements of solar radiation in the State of Florida has allowed refined calibration of a GOES-derived daily integrated radiation data product. This refinement of the radiation data allowed for corrections of satellite sensor drift, satellite generational change, and consideration of the highly variable cloudy conditions that are typical of Florida. To aid in calibration of the GOES-derived radiation product, solar radiation data for the period 1995–2004 from 58 field stations located throughout the State were compiled. The GOES radiation product was calibrated by way of a three-step process: 1) comparison with ground-based pyranometer measurements on clear reference days, 2) correction for a bias related to cloud cover, and 3) derivation of month-by-month bias correction factors. Pre-calibration results indicated good model performance, with a station-averaged model error of 2.2 MJ m⁻² day⁻¹ (13 percent). Calibration reduced errors to 1.7 MJ m⁻² day⁻¹ (10 percent) and also removed time- and cloudiness-related biases. The final dataset has been used to produce Statewide evapotranspiration estimates.
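Step 3 of the calibration lends itself to a compact sketch: one multiplicative correction factor per calendar month, estimated from paired satellite and pyranometer daily totals. Function and variable names below are illustrative, not the authors' code.

```python
import numpy as np

def monthly_bias_factors(month, sat, ground):
    """One multiplicative correction per calendar month, from paired
    satellite / pyranometer daily integrated radiation values."""
    month, sat, ground = map(np.asarray, (month, sat, ground))
    return {m: ground[month == m].mean() / sat[month == m].mean()
            for m in np.unique(month)}

def apply_correction(month, sat, factors):
    """Scale each satellite estimate by its month's factor."""
    return np.asarray([s * factors[m] for m, s in zip(month, sat)])
```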
A systematic approach to embedded biomedical decision making.
Song, Zhe; Ji, Zhongkai; Ma, Jian-Guo; Sputh, Bernhard; Acharya, U Rajendra; Faust, Oliver
2012-11-01
Embedded decision making is a key feature of many biomedical systems. In most cases human life directly depends on correct decisions made by these systems, and therefore they have to work reliably. This paper describes how we applied systems engineering principles to design a high-performance embedded classification system in a systematic and well-structured way. We introduce the structured design approach by discussing requirements capture, specification refinement, implementation and testing. Thereby, we follow systems engineering principles and execute each of these processes as formally as possible. The requirements, which motivate the system design, describe an automated decision-making system for diagnostic support. These requirements are refined into the implementation of a support vector machine (SVM) algorithm, which enables us to integrate automated decision making in embedded systems. With a formal model we establish the functionality, stability and reliability of the system. Furthermore, we investigated different parallel-processing configurations of this computationally complex algorithm. We found that, by adding SVM processes, an almost linear speedup is possible. Once we had established these system properties, we translated the formal model into an implementation. The resulting implementation was tested on XMOS processors with both normal and failure cases to build up trust in the implementation. Finally, we demonstrated that our parallel implementation achieves the speedup predicted by the formal model. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
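The near-linear speedup from adding SVM processes can be illustrated with a data-parallel evaluation of a trained linear SVM; this is a generic Python sketch, not the XMOS implementation or the formal model described above.

```python
import numpy as np
from multiprocessing import Pool

def _classify_chunk(args):
    w, b, X = args
    return np.sign(X @ w + b)          # linear SVM decision rule

def parallel_classify(w, b, X, workers=4):
    """Split the samples into chunks and classify them in separate
    processes; throughput grows roughly linearly with 'workers'
    while the chunks stay large, mirroring the near-linear speedup
    reported above."""
    chunks = np.array_split(X, workers)
    with Pool(workers) as pool:
        parts = pool.map(_classify_chunk, [(w, b, c) for c in chunks])
    return np.concatenate(parts)
```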
NASA Astrophysics Data System (ADS)
Becker, Roland; Vexler, Boris
2005-06-01
We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least-squares type using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economical meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying similar techniques as before, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach on a parameter calibration problem for a model flow problem.
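In symbols (our notation, not the authors'), the setting is a least-squares calibration constrained by the PDE, with the goal-oriented error split into cell indicators weighted by an auxiliary adjoint solution:

```latex
\min_{u,\,q}\; J(u,q) \;=\; \tfrac{1}{2}\sum_{i=1}^{n}\bigl(C_i(u)-m_i\bigr)^2
\quad \text{subject to} \quad a(u,q)(\varphi)=0 \;\; \forall\,\varphi\in V,
\qquad
I(u,q)-I(u_h,q_h)\;\approx\;\sum_{K\in\mathcal{T}_h}\eta_K ,
```

where the C_i are observation operators, the m_i are measurements, and each indicator eta_K weights local residuals with the solution of the auxiliary (adjoint) problem mentioned in the abstract; cells with the largest eta_K are refined.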
Disruptive technologies and transportation : final report.
DOT National Transportation Integrated Search
2016-06-01
Disruptive technologies refer to innovations that, at first, may be considered unproven, lacking refinement, relatively unknown, or even impractical, but ultimately they supplant existing technologies and/or applications. In general, disruptive techn...
Adaptive rood pattern search for fast block-matching motion estimation.
Nie, Yao; Ma, Kai-Kuang
2002-01-01
In this paper, we propose a novel and simple fast block-matching algorithm (BMA), called adaptive rood pattern search (ARPS), which consists of two sequential search stages: 1) initial search and 2) refined local search. For each macroblock (MB), the initial search is performed only once at the beginning in order to find a good starting point for the follow-up refined local search. By doing so, unnecessary intermediate searching and the risk of being trapped at local minima of the matching error can be greatly reduced in the case of long searches. For the initial search stage, an adaptive rood pattern (ARP) is proposed, and the ARP's size is dynamically determined for each MB based on the available motion vectors (MVs) of the neighboring MBs. In the refined local search stage, a unit-size rood pattern (URP) is exploited repeatedly, and unrestrictedly, until the final MV is found. To further speed up the search, zero-motion prejudgment (ZMP) is incorporated in our method, which is particularly beneficial for video sequences containing small motion content. Extensive experiments conducted on the MPEG-4 Verification Model (VM) encoding platform show that the search speed of our proposed ARPS-ZMP is about two to three times that of the diamond search (DS), and our method even achieves higher peak signal-to-noise ratio (PSNR), particularly for video sequences containing large and/or complex motion content.
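A toy version of the two-stage search helps fix the idea: one adaptively sized rood for the initial search, then repeated unit-rood refinement until the centre wins. This sketch omits many practical details (MV cost terms, sub-pel refinement) and the exact ARP point set of the paper.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two blocks."""
    return np.abs(a.astype(int) - b.astype(int)).sum()

def arps(cur, ref, x, y, bs, pred_mv):
    """Toy ARPS for the block at (x, y) of size bs, with predicted
    motion vector pred_mv from a neighbouring macroblock."""
    block = cur[y:y+bs, x:x+bs]

    def cost(mv):
        dx, dy = mv
        xx, yy = x + dx, y + dy
        if 0 <= xx <= ref.shape[1]-bs and 0 <= yy <= ref.shape[0]-bs:
            return sad(block, ref[yy:yy+bs, xx:xx+bs])
        return np.inf

    best, best_cost = (0, 0), cost((0, 0))
    if best_cost == 0:                       # zero-motion prejudgment
        return best
    s = max(abs(pred_mv[0]), abs(pred_mv[1])) or 1   # adaptive arm length
    for mv in [(s, 0), (-s, 0), (0, s), (0, -s), tuple(pred_mv)]:
        c = cost(mv)
        if c < best_cost:
            best, best_cost = mv, c
    while True:                              # unit-size rood refinement
        improved = False
        for d in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            mv = (best[0] + d[0], best[1] + d[1])
            c = cost(mv)
            if c < best_cost:
                best, best_cost, improved = mv, c, True
        if not improved:
            return best
```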
REFINEMENT OF A MODEL TO PREDICT THE PERMEATION OF PROTECTIVE CLOTHING MATERIALS
A prototype of a predictive model for estimating chemical permeation through protective clothing materials was refined and tested. The model applies Fickian diffusion theory and predicts permeation rates and cumulative permeation as a function of time for five materials: butyl rub...
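For a membrane of thickness L, diffusion coefficient D and constant challenge-side concentration C0 (downstream kept near zero), Fickian theory gives Crank's classic series for cumulative permeation per unit area; the sketch below is written under those assumptions, with illustrative parameter names rather than the report's implementation. The familiar lag time L²/6D is the time-axis intercept of the late-time asymptote.

```python
import numpy as np

def cumulative_permeation(t, D, L, C0, terms=50):
    """Crank's series solution Q(t) for a membrane initially free of
    permeant with constant surface concentration C0 (per unit area)."""
    t = np.atleast_1d(np.asarray(t, float))
    n = np.arange(1, terms + 1)
    series = (((-1.0) ** n / n**2)[None, :]
              * np.exp(-D * (n**2)[None, :] * np.pi**2 * t[:, None] / L**2))
    return (D * C0 * t / L            # steady-state line
            - L * C0 / 6.0            # lag-time offset
            - 2.0 * L * C0 / np.pi**2 * series.sum(axis=1))
```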
Macromolecular refinement by model morphing using non-atomic parameterizations.
Cowtan, Kevin; Agirre, Jon
2018-02-01
Refinement is a critical step in the determination of a model which explains the crystallographic observations and thus best accounts for the missing phase components. The scattering density is usually described in terms of atomic parameters; however, in macromolecular crystallography the resolution of the data is generally insufficient to determine the values of these parameters for individual atoms. Stereochemical and geometric restraints are used to provide additional information, but produce interrelationships between parameters which slow convergence, resulting in longer refinement times. An alternative approach is proposed in which parameters are not attached to atoms, but to regions of the electron-density map. These parameters can move the density or change the local temperature factor to better explain the structure factors. Varying the size of the region which determines the parameters at a particular position in the map allows the method to be applied at different resolutions without the use of restraints. Potential applications include initial refinement of molecular-replacement models with domain motions, and potentially the use of electron density from other sources such as electron cryo-microscopy (cryo-EM) as the refinement model.
A semi-implicit finite element method for viscous lipid membranes
NASA Astrophysics Data System (ADS)
Rodrigues, Diego S.; Ausas, Roberto F.; Mut, Fernando; Buscaglia, Gustavo C.
2015-10-01
A finite element formulation to approximate the behavior of lipid membranes is proposed. The mathematical model incorporates tangential viscous stresses and bending elastic forces, together with the inextensibility constraint and the enclosed volume constraint. The membrane is discretized by a surface mesh made up of planar triangles, over which a mixed formulation (velocity-curvature) is built based on the viscous bilinear form (Boussinesq-Scriven operator) and the Laplace-Beltrami identity relating position and curvature. A semi-implicit approach is then used to discretize in time, with piecewise linear interpolants for all variables. Two stabilization terms are needed: The first one stabilizes the inextensibility constraint by a pressure-gradient-projection scheme (Codina and Blasco (1997) [33]), the second couples curvature and velocity to improve temporal stability, as proposed by Bänsch (2001) [36]. The volume constraint is handled by a Lagrange multiplier (which turns out to be the internal pressure), and an analogous strategy is used to filter out rigid-body motions. The nodal positions are updated in a Lagrangian manner according to the velocity solution at each time step. An automatic remeshing strategy maintains suitable refinement and mesh quality throughout the simulation. Numerical experiments show the convergent and robust behavior of the proposed method. Stability limits are obtained from numerous relaxation tests, and convergence with mesh refinement is confirmed both in the relaxation transient and in the final equilibrium shape. Virtual tweezing experiments are also reported, computing the dependence of the deformed membrane shape with the tweezing velocity (a purely dynamical effect). For sufficiently high velocities, a tether develops which shows good agreement, both in its final radius and in its transient behavior, with available analytical solutions. Finally, simulation results of a membrane subject to the simultaneous action of six tweezers illustrate the robustness of the method.
3D Reconstruction and Approximation of Vegetation Geometry for Modeling of Within-canopy Flows
NASA Astrophysics Data System (ADS)
Henderson, S. M.; Lynn, K.; Lienard, J.; Strigul, N.; Mullarney, J. C.; Norris, B. K.; Bryan, K. R.
2016-02-01
Aquatic vegetation can shelter coastlines from waves and currents, sometimes resulting in accretion of fine sediments. We developed a photogrammetric technique for estimating the key geometric vegetation parameters that are required for modeling of within-canopy flows. Accurate estimates of vegetation geometry and density are essential to refine hydrodynamic models, but accurate, convenient, and time-efficient methodologies for measuring complex canopy geometries have been lacking. The novel approach presented here builds on recent progress in photogrammetry and computer vision. We analyzed the geometry of aerial mangrove roots, called pneumatophores, in Vietnam's Mekong River Delta. Although comparatively thin, pneumatophores are more numerous than mangrove trunks, and thus influence near-bed flow and sediment transport. Quadrats (1 m2) were placed at low tide among pneumatophores. Roots were counted and measured for height and diameter. Photos were taken from multiple angles around each quadrat. Relative camera locations and orientations were estimated from key features identified in multiple images using open-source software (VisualSfM). Next, a dense 3D point cloud was produced. Finally, algorithms were developed for automated estimation of pneumatophore geometry from the 3D point cloud. We found good agreement between hand-measured and photogrammetric estimates of key geometric parameters, including mean stem diameter, total number of stems, and frontal area density. These methods can reduce time spent measuring in the field, thereby enabling future studies to refine models of water flows and sediment transport within heterogeneous vegetation canopies.
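One concrete piece of such geometry extraction: a stem's diameter can be estimated by fitting a circle to the XY-projection of the points in a horizontal slice of the cloud. Below is a minimal algebraic (Kasa) least-squares fit; the slicing and per-stem segmentation around it are assumed to happen elsewhere, and this is our sketch rather than the authors' algorithm.

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) circle fit to an (N, 2) array of points:
    returns centre (cx, cy) and radius r. Applied to one horizontal
    slice of a stem, 2*r estimates that pneumatophore's diameter."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return (cx, cy), np.sqrt(c + cx**2 + cy**2)
```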
Coloured Petri Net Refinement Specification and Correctness Proof with Coq
NASA Technical Reports Server (NTRS)
Choppy, Christine; Mayero, Micaela; Petrucci, Laure
2009-01-01
In this work, we address the formalisation of symmetric nets, a subclass of coloured Petri nets, and their refinement in COQ. We first provide a formalisation of the net models and of their type refinement in COQ. Then the COQ proof assistant is used to prove the refinement correctness lemma. An example adapted from a protocol case study illustrates our work.
Aydoğdu, A; Frasca, P; D'Apice, C; Manzo, R; Thornton, J M; Gachomo, B; Wilson, T; Cheung, B; Tariq, U; Saidel, W; Piccoli, B
2017-02-21
In this paper we introduce a mathematical model to study the group dynamics of birds resting on wires. The model is agent-based and postulates attraction-repulsion forces between the interacting birds: the interactions are "topological", in the sense that they involve a given number of neighbors irrespective of their distance. The model is first mathematically analyzed and then simulated to study its main properties: we observe that the model predicts birds to be more widely spaced near the borders of each group. We compare the results from the model with experimental data, derived from the analysis of pictures of pigeons and starlings taken in New Jersey: two different image elaboration protocols allow us to establish a good agreement with the model and to quantify its main parameters. We also discuss the potential handedness of the birds, by analyzing the group organization features and the group dynamics at the arrival of new birds. Finally, we propose a more refined mathematical model that describes landing and departing birds by suitable stochastic processes. Copyright © 2016 Elsevier Ltd. All rights reserved.
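The border effect the model predicts already shows up in a one-dimensional toy version of the dynamics, sketched below: each bird interacts only with its immediate wire neighbours (the topological rule), so border birds feel a force from one side only and end up more widely spaced. Parameter names are ours, and the stochastic landing/departure processes of the refined model are omitted.

```python
import numpy as np

def step(x, dt=0.01, d0=0.3):
    """One explicit step of 1-D attraction-repulsion dynamics on a wire.
    The force is attractive when the gap to a neighbour exceeds the
    preferred spacing d0 and repulsive when it is smaller."""
    x = np.sort(np.asarray(x, float))
    f = np.zeros_like(x)
    gaps = np.diff(x)            # spacing to the right neighbour
    pull = gaps - d0             # > 0 attract, < 0 repel
    f[:-1] += pull               # pulled toward the right neighbour
    f[1:] -= pull                # equal and opposite reaction
    return x + dt * f
```

Iterating `step` relaxes interior gaps toward d0 while the outermost birds, confined from one side only, settle at slightly wider spacings, consistent with the observation reported above.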
Correcting pervasive errors in RNA crystallography through enumerative structure prediction.
Chou, Fang-Chieh; Sripakdeevong, Parin; Dibrov, Sergey M; Hermann, Thomas; Das, Rhiju
2013-01-01
Three-dimensional RNA models fitted into crystallographic density maps exhibit pervasive conformational ambiguities, geometric errors and steric clashes. To address these problems, we present enumerative real-space refinement assisted by electron density under Rosetta (ERRASER), coupled to Python-based hierarchical environment for integrated 'xtallography' (PHENIX) diffraction-based refinement. On 24 data sets, ERRASER automatically corrects the majority of MolProbity-assessed errors, improves the average R(free) factor, resolves functionally important discrepancies in noncanonical structure and refines low-resolution models to better match higher-resolution models.
Hirshfeld atom refinement for modelling strong hydrogen bonds.
Woińska, Magdalena; Jayatilaka, Dylan; Spackman, Mark A; Edwards, Alison J; Dominiak, Paulina M; Woźniak, Krzysztof; Nishibori, Eiji; Sugimoto, Kunihisa; Grabowsky, Simon
2014-09-01
High-resolution low-temperature synchrotron X-ray diffraction data of the salt L-phenylalaninium hydrogen maleate are used to test the new automated iterative Hirshfeld atom refinement (HAR) procedure for the modelling of strong hydrogen bonds. The HAR models used present the first examples of Z' > 1 treatments in the framework of wavefunction-based refinement methods. L-Phenylalaninium hydrogen maleate exhibits several hydrogen bonds in its crystal structure, of which the shortest and the most challenging to model is the O-H...O intramolecular hydrogen bond present in the hydrogen maleate anion (O...O distance is about 2.41 Å). In particular, the reconstruction of the electron density in the hydrogen maleate moiety and the determination of hydrogen-atom properties [positions, bond distances and anisotropic displacement parameters (ADPs)] are the focus of the study. For comparison to the HAR results, different spherical (independent atom model, IAM) and aspherical (free multipole model, MM; transferable aspherical atom model, TAAM) X-ray refinement techniques as well as results from a low-temperature neutron-diffraction experiment are employed. Hydrogen-atom ADPs are furthermore compared to those derived from a TLS/rigid-body (SHADE) treatment of the X-ray structures. The reference neutron-diffraction experiment reveals a truly symmetric hydrogen bond in the hydrogen maleate anion. Only with HAR is it possible to freely refine hydrogen-atom positions and ADPs from the X-ray data, which leads to the best electron-density model and the closest agreement with the structural parameters derived from the neutron-diffraction experiment, e.g. the symmetric hydrogen position can be reproduced. The multipole-based refinement techniques (MM and TAAM) yield slightly asymmetric positions, whereas the IAM yields a significantly asymmetric position.
The solvent component of macromolecular crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weichenberger, Christian X.; Afonine, Pavel V.; Kantardjieff, Katherine
2015-04-30
On average, the mother liquor or solvent and its constituents occupy about 50% of a macromolecular crystal. Ordered as well as disordered solvent components need to be accurately accounted for in modelling and refinement, often with considerable complexity. The mother liquor from which a biomolecular crystal is grown will contain water, buffer molecules, native ligands and cofactors, crystallization precipitants and additives, various metal ions, and often small-molecule ligands or inhibitors. On average, about half the volume of a biomolecular crystal consists of this mother liquor, whose components form the disordered bulk solvent. Its scattering contributions can be exploited in initial phasing and must be included in crystal structure refinement as a bulk-solvent model. Concomitantly, distinct electron density originating from ordered solvent components must be correctly identified and represented as part of the atomic crystal structure model. Herein are reviewed (i) probabilistic bulk-solvent content estimates, (ii) the use of bulk-solvent density modification in phase improvement, (iii) bulk-solvent models and refinement of bulk-solvent contributions and (iv) modelling and validation of ordered solvent constituents. A brief summary is provided of current tools for bulk-solvent analysis and refinement, as well as of modelling, refinement and analysis of ordered solvent components, including small-molecule ligands.
Subbotina, Julia; Yarov-Yarovoy, Vladimir; Lees-Miller, James; Durdagi, Serdar; Guo, Jiqing; Duff, Henry J; Noskov, Sergei Yu
2010-11-01
The hERG1 gene (Kv11.1) encodes a voltage-gated potassium channel. Mutations in this gene lead to one form of the Long QT Syndrome (LQTS) in humans. Promiscuous binding of drugs to hERG1 is known to alter the structure/function of the channel, leading to an acquired form of the LQTS. Accordingly, the creation and validation of a reliable 3D model of the channel have been a key target in molecular cardiology and pharmacology for the last decade. Although many models have been built, they were all limited to the pore domain. In this work, a full model of the hERG1 channel is developed which includes all transmembrane segments. We tested a template-driven de-novo design with ROSETTA-membrane modeling using side-chain placements optimized by subsequent molecular dynamics (MD) simulations. Although backbone templates for the homology-modeled parts of the pore and voltage sensors were based on the available structures of KvAP, Kv1.2 and Kv1.2-Kv2.1 chimera channels, the missing parts were modeled de novo. The impact of several alignments on the structure of the S4 helix in the voltage-sensing domain was also tested. Herein, the final models are evaluated for consistency with the reported structural elements discovered mainly on the basis of mutagenesis and electrophysiology. These structural elements include salt bridges and close contacts in the voltage-sensor domain, and the topology of the extracellular S5-pore linker compared with that established by toxin foot-printing and nuclear magnetic resonance studies. Implications of the refined hERG1 model for the binding of blockers and channel activators (potent new ligands for channel activation) are discussed. © 2010 Wiley-Liss, Inc.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-01
... Tube From the People's Republic of China and Mexico: Initiation of Antidumping Duty Investigations, 74... purchases by Golden Dragon from a certain supplier in the People's Republic of China as non-market economy... and Tube From the People's Republic of China: Final Determination of Sales at Less Than Fair Value...
Cho, Hyun; Kwon, Min; Choi, Ji-Hye; Lee, Sang-Kyu; Choi, Jung Seok; Choi, Sam-Wook; Kim, Dai-Jin
2014-09-01
This study was conducted to develop and validate a standardized self-diagnostic Internet addiction (IA) scale based on the diagnosis criteria for Internet Gaming Disorder (IGD) in the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5). Items based on the IGD diagnosis criteria were developed using items from previous Internet addiction scales. Data were collected from a community sample. The data were divided into two sets, and confirmatory factor analysis (CFA) was performed repeatedly. The model was modified after discussion with professionals based on the first CFA results, after which the second CFA was performed. The internal consistency reliability was generally good. Items that showed significantly low values in the item-total correlation of each factor were excluded. After the first CFA was performed, some factors and items were excluded. Seven factors and 26 items were prepared for the final model. The second CFA results showed good general factor loading, Squared Multiple Correlation (SMC) and model fit. The model fit of the final model was good, but some factors were very highly correlated. It is recommended that some of the factors be refined through further studies. Copyright © 2014. Published by Elsevier Ltd.
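The internal-consistency step can be illustrated with the standard Cronbach's alpha computation; this is a generic sketch, not the authors' analysis code.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores.
    Standard internal-consistency estimate used before the CFA stage."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)
```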
Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and a more refined FEM model.
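In symbols (our notation, inferred from the description above): with crude- and refined-model responses f_c and f_r, GLA linearizes the scaling factor about the current design point x_0 rather than holding it constant:

```latex
\tilde f_r(x) \;=\; \beta(x)\, f_c(x), \qquad
\beta(x) \;=\; \beta(x_0) + \nabla\beta(x_0)^{\mathsf{T}}(x - x_0),
\qquad
\beta(x_0) \;=\; \frac{f_r(x_0)}{f_c(x_0)}, \qquad
\nabla\beta(x_0) \;=\; \frac{\nabla f_r(x_0) - \beta(x_0)\,\nabla f_c(x_0)}{f_c(x_0)} .
```

A single refined-model evaluation (with derivatives) at x_0 thus corrects the cheap model over a neighbourhood of the design point, rather than only at the point itself.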
The calibration and flight test performance of the space shuttle orbiter air data system
NASA Technical Reports Server (NTRS)
Dean, A. S.; Mena, A. L.
1983-01-01
The Space Shuttle air data system (ADS) is used by the guidance, navigation and control system (GN&C) to guide the vehicle to a safe landing. In addition, postflight aerodynamic analysis requires a precise knowledge of flight conditions. Since the orbiter is essentially an unpowered vehicle, the conventional methods of obtaining the ADS calibration were not available; therefore, the calibration was derived using a unique and extensive wind tunnel test program. This test program included subsonic tests with a 0.36-scale orbiter model, transonic and supersonic tests with a smaller 0.2-scale model, and numerous ADS probe-alone tests. The wind tunnel calibration was further refined with subsonic results from the approach and landing test (ALT) program, thus producing the ADS calibration for the orbital flight test (OFT) program. The calibration of the Space Shuttle ADS and its performance during flight are discussed in this paper. A brief description of the system is followed by a discussion of the calibration methodology, and then by a review of the wind tunnel and flight test programs. Finally, the flight results are presented, including an evaluation of the system performance for on-board systems use and a description of the calibration refinements developed to provide the best possible air data for postflight analysis work.
Gasoline and Diesel Fuel Test Methods Additional Resources
Supporting documents on the Direct Final Rule that allows refiners and laboratories to use more current and improved fuel testing procedures for twelve American Society for Testing and Materials analytical test methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madland, D. G.; Kahler, A. C.
This paper presents a number of refinements to the original Los Alamos model of the prompt fission neutron spectrum and average prompt neutron multiplicity as derived in 1982. The four refinements are due to new measurements of the spectrum and related fission observables, many of which were not available in 1982. They are also due to a number of detailed studies and comparisons of the model with previous and present experimental results, including not only the differential spectrum but also integral cross sections measured in the field of the differential spectrum. The four refinements are (a) separate neutron contributions in binary fission, (b) departure from statistical equilibrium at scission, (c) fission-fragment nuclear level-density models, and (d) center-of-mass anisotropy. With these refinements, for the first time, good agreement has been obtained for both differential and integral measurements using the same Los Alamos model spectrum.
Refined crystal structure of DsRed, a red fluorescent protein from coral, at 2.0-A resolution.
Yarbrough, D; Wachter, R M; Kallio, K; Matz, M V; Remington, S J
2001-01-16
The crystal structure of DsRed, a red fluorescent protein from a corallimorpharian, has been determined at 2.0-A resolution by multiple-wavelength anomalous dispersion and crystallographic refinement. Crystals of the selenomethionine-substituted protein have space group P2(1) and contain a tetramer with 222 noncrystallographic symmetry in the asymmetric unit. The refined model has satisfactory stereochemistry and a final crystallographic R factor of 0.162. The protein, which forms an obligatory tetramer in solution and in the crystal, is a squat rectangular prism comprising four protomers whose fold is extremely similar to that of the Aequorea victoria green fluorescent protein despite low (approximately 23%) amino acid sequence homology. The monomer consists of an 11-stranded beta barrel with a coaxial helix. The chromophores, formed from the primary sequence -Gln-Tyr-Gly- (residues 66-68), are arranged in an approximately 27 x 34-A rectangular array in two approximately antiparallel pairs. The geometry at the alpha carbon of Gln-66 (refined without stereochemical restraints) is consistent with an sp(2)-hybridized center, in accord with the proposal that red fluorescence is due to an additional oxidation step that forms an acylimine extension to the chromophore [Gross, L. A., Baird, G. S., Hoffman, R. C., Baldridge, K. K. & Tsien, R. Y. (2000) Proc. Natl. Acad. Sci. USA 97, 11990-11995]. The carbonyl oxygen of Phe-65 is almost 90 degrees out of the plane of the chromophore, consistent with theoretical calculations suggesting that this is the minimum-energy conformation of this moiety despite the conjugation of this group with the rest of the chromophore.
Damani, Zaheed; MacKean, Gail; Bohm, Eric; DeMone, Brie; Wright, Brock; Noseworthy, Tom; Holroyd-Leduc, Jayna; Marshall, Deborah A
2016-10-18
Policy dialogues are critical for developing responsive, effective, sustainable, evidence-informed policy. Our multidisciplinary team, including researchers, physicians and senior decision-makers, comprehensively evaluated The Winnipeg Central Intake Service, a single-entry model in Winnipeg, Manitoba, to improve patient access to hip/knee replacement surgery. We used the evaluation findings to develop five evidence-informed policy directions to help improve access to scheduled clinical services across Manitoba. Using guiding principles of public participation processes, we hosted a policy roundtable meeting to engage stakeholders and use their input to refine the policy directions. Here, we report on the use and input of a policy roundtable meeting and its role in contributing to the development of evidence-informed policy. Our evidence-informed policy directions focused on formal measurement/monitoring of quality, central intake as a preferred model for service delivery, provincial scope, transparent processes/performance indicators, and patient choice of provider. We held a policy roundtable meeting and used outcomes of facilitated discussions to refine these directions. Individuals from our team and six stakeholder groups across Manitoba participated (n = 44), including patients, family physicians, orthopaedic surgeons, surgical office assistants, Winnipeg Central Intake team, and administrators/managers. We developed evaluation forms to assess the meeting process, and collected decision-maker partners' perspectives on the value of the policy roundtable meeting and use of policy directions to improve access to scheduled clinical services after the meeting, and again 15 months later. We analyzed roundtable and evaluation data using thematic analysis to identify key themes. Four key findings emerged. First, participants supported all policy directions, with revisions and key implementation considerations identified. Second, participants felt the policy roundtable meeting achieved its purpose (to engage stakeholders, elicit feedback, refine policy directions). Third, our decision-maker partners' expectations of the policy roundtable meeting were exceeded; they re-affirmed its value and described the refined policy directions as foundational to establishing the vocabulary, vision and framework for improving access to scheduled clinical services in Manitoba. Finally, our adaptation of key design elements was conducive to discussion of issues surrounding access to care. Our policy roundtable process was an effective tool for acquiring broad input from stakeholders, refining policy directions and forming the necessary consensus starting points to move towards evidence-informed policy.
NASA Astrophysics Data System (ADS)
Zhang, Chuanyou; Wang, Qian; Sun, Yu; Wang, Huibin; Zhang, Wei; Wang, Qingfeng; Guo, Aimin; Sun, Kaiming
Extensive investigations of the metallurgical roles played by Nb microalloying in advanced seamless steel tube products have been carried out. The results show that with Nb microalloying, the recrystallized austenite grain (RAG) and final ferrite grain of tubular steel are evidently refined even after piercing and continuous rolling at very high temperature, and a certain quantity of (Nb,V)(C,N) and (Ti,Nb,V)(C,N) particles form on air cooling. Moreover, for quenched (Q) and tempered (T) tubular steels, nanoscale (Nb,V)(C,N) particles further precipitate during the heating stage of quenching at 900-1000°C, leading to a significant refinement of the prior austenite grain (PAG) and of the final martensitic or bainitic packet/block structures, and during subsequent tempering at 600-700°C they produce an improved resistance to softening.
Baldwin, Austin K.; Robertson, Dale M.; Saad, David A.; Magruder, Christopher
2013-01-01
In 2008, the U.S. Geological Survey and the Milwaukee Metropolitan Sewerage District initiated a study to develop regression models to estimate real-time concentrations and loads of chloride, suspended solids, phosphorus, and bacteria in streams near Milwaukee, Wisconsin. To collect monitoring data for calibration of models, water-quality sensors and automated samplers were installed at six sites in the Menomonee River drainage basin. The sensors continuously measured four potential explanatory variables: water temperature, specific conductance, dissolved oxygen, and turbidity. Discrete water-quality samples were collected and analyzed for five response variables: chloride, total suspended solids, total phosphorus, Escherichia coli bacteria, and fecal coliform bacteria. Using the first year of data, regression models were developed to continuously estimate the response variables on the basis of the continuously measured explanatory variables. Those models were published in a previous report. In this report, those models are refined using 2 years of additional data, and the relative improvement in model predictability is discussed. In addition, a set of regression models is presented for a new site in the Menomonee River Basin, Underwood Creek at Wauwatosa. The refined models use the same explanatory variables as the original models. The chloride models all used specific conductance as the explanatory variable, except for the model for the Little Menomonee River near Freistadt, which used both specific conductance and turbidity. Total suspended solids and total phosphorus models used turbidity as the only explanatory variable, and bacteria models used water temperature and turbidity as explanatory variables. An analysis of covariance (ANCOVA), used to compare the coefficients in the original models to those in the refined models calibrated using all of the data, showed that only 3 of the 25 original models changed significantly. Root-mean-squared errors (RMSEs) calculated for both the original and refined models using the entire dataset showed a median improvement in RMSE of 2.1 percent, with a range of 0.0–13.9 percent. Therefore most of the original models did almost as well at estimating concentrations during the validation period (October 2009–September 2011) as the refined models, which were calibrated using those data. Application of these refined models can produce continuously estimated concentrations of chloride, total suspended solids, total phosphorus, E. coli bacteria, and fecal coliform bacteria that may assist managers in quantifying the effects of land-use changes and improvement projects, establishing total maximum daily loads, and making better-informed decisions in the future.
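A typical surrogate-regression fit of this kind, sketched below with illustrative names (the published models use site-specific calibrated coefficients and formal bias corrections): total suspended solids estimated from turbidity on log-transformed axes.

```python
import numpy as np

def fit_loglinear(turbidity, tss):
    """Fit log10(TSS) = b0 + b1 * log10(turbidity) by least squares."""
    turbidity = np.asarray(turbidity, float)
    tss = np.asarray(tss, float)
    X = np.column_stack([np.ones_like(turbidity), np.log10(turbidity)])
    b, *_ = np.linalg.lstsq(X, np.log10(tss), rcond=None)
    return b

def predict_tss(b, turbidity):
    # A re-transformation bias correction (e.g. Duan's smearing
    # estimator) would normally be applied here; omitted for brevity.
    return 10.0 ** (b[0] + b[1] * np.log10(np.asarray(turbidity, float)))
```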
Spatio-temporal modeling of chronic PM10 exposure for the Nurses' Health Study
NASA Astrophysics Data System (ADS)
Yanosky, Jeff D.; Paciorek, Christopher J.; Schwartz, Joel; Laden, Francine; Puett, Robin; Suh, Helen H.
2008-06-01
Chronic epidemiological studies of airborne particulate matter (PM) have typically characterized the chronic PM exposures of their study populations using city- or county-wide ambient concentrations, which limit the studies to areas where nearby monitoring data are available and which ignore within-city spatial gradients in ambient PM concentrations. To provide more spatially refined and precise chronic exposure measures, we used a Geographic Information System (GIS)-based spatial smoothing model to predict monthly outdoor PM10 concentrations in the northeastern and midwestern United States. This model included monthly smooth spatial terms and smooth regression terms of GIS-derived and meteorological predictors. Using cross-validation and other pre-specified selection criteria, terms for distance to road by road class, urban land use, block group and county population density, point- and area-source PM10 emissions, elevation, wind speed, and precipitation were found to be important determinants of PM10 concentrations and were included in the final model. Final model performance was strong (cross-validation R² = 0.62), with little bias (−0.4 μg m⁻³) and high precision (6.4 μg m⁻³). The final model (with monthly spatial terms) performed better than a model with seasonal spatial terms (cross-validation R² = 0.54). The addition of GIS-derived and meteorological predictors improved predictive performance over spatial smoothing (cross-validation R² = 0.51) or inverse distance weighted interpolation (cross-validation R² = 0.29) methods alone and increased the spatial resolution of predictions. The model performed well in both rural and urban areas, across seasons, and across the entire time period. The strong model performance demonstrates its suitability as a means to estimate individual-specific chronic PM10 exposures for large populations.
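The skill numbers quoted here (cross-validation R², bias, precision) can be computed from held-out predictions as in the sketch below; we take precision as the RMSE of the errors, which is an assumption about the paper's exact definition.

```python
import numpy as np

def cv_metrics(obs, pred):
    """Held-out skill summaries: R^2, bias (mean error) and
    precision (RMSE of the errors)."""
    obs = np.asarray(obs, float)
    pred = np.asarray(pred, float)
    err = pred - obs
    r2 = 1.0 - np.sum(err**2) / np.sum((obs - obs.mean())**2)
    return r2, err.mean(), np.sqrt(np.mean(err**2))
```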
Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lomov, I; Pember, R; Greenough, J
2005-10-18
We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses a hierarchical, structured grid approach first developed by Berger and Oliger (1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors during the separate advances. The methodology presented here is based on a single-grid algorithm developed for multimaterial gas dynamics by Colella et al. (1993), refined by Greenough et al. (1995), and extended to the solution of solid mechanics problems with significant strength by Lomov and Rubin (2003). The single-grid algorithm uses a second-order Godunov scheme with an approximate single-fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified because highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion is solely concerned with hydrodynamics modeling. Results from a number of simulations for flows with and without strength will be presented.
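The recursive advance-then-synchronize structure can be written down compactly. The skeleton below is schematic: the patch integrator and the flux-correction step are supplied by the caller rather than implemented, and the attribute names are our assumptions.

```python
def advance(level, t, dt, integrate, synchronize):
    """Berger-Oliger style recursive time stepping over an AMR
    hierarchy: advance this level, subcycle the finer level to the
    same time, then synchronize to remove conservation errors.
    'integrate(level, t, dt)' and 'synchronize(coarse, fine)' are
    caller-supplied stand-ins for the real Godunov update and the
    refluxing/averaging step."""
    integrate(level, t, dt)
    if level.finer is not None:
        r = level.refine_ratio            # time refinement ratio
        for k in range(r):                # subcycle the finer level
            advance(level.finer, t + k * dt / r, dt / r,
                    integrate, synchronize)
        synchronize(level, level.finer)   # fix conservation at interfaces
```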
Fancher, Chris M.; Han, Zhen; Levin, Igor; Page, Katharine; Reich, Brian J.; Smith, Ralph C.; Wilson, Alyson G.; Jones, Jacob L.
2016-01-01
A Bayesian inference method for refining crystallographic structures is presented. The distribution of model parameters is stochastically sampled using Markov chain Monte Carlo. Posterior probability distributions are constructed for all model parameters to properly quantify uncertainty by appropriately modeling the heteroskedasticity and correlation of the error structure. The proposed method is demonstrated by analyzing a National Institute of Standards and Technology silicon standard reference material. The results obtained by Bayesian inference are compared with those determined by Rietveld refinement. Posterior probability distributions of model parameters provide both estimates and uncertainties. The new method better estimates the true uncertainties in the model as compared to the Rietveld method. PMID:27550221
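The MCMC machinery itself is generic; a minimal random-walk Metropolis sampler is sketched below (our toy, not the paper's error model, which additionally handles heteroskedastic and correlated errors).

```python
import numpy as np

def metropolis(log_post, theta0, steps=10000, scale=0.05, seed=0):
    """Random-walk Metropolis sampling of a posterior given its
    unnormalized log density 'log_post'. Returns the chain; posterior
    means and credible intervals follow from the (burned-in) samples."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, float)
    lp = log_post(theta)
    chain = np.empty((steps, theta.size))
    for i in range(steps):
        prop = theta + scale * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain
```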
A Conceptual Model of Career Development to Enhance Academic Motivation
ERIC Educational Resources Information Center
Collins, Nancy Creighton
2010-01-01
The purpose of this study was to develop, refine, and validate a conceptual model of career development to enhance the academic motivation of community college students. To achieve this end, a straw model was built from the theoretical and empirical research literature. The model was then refined and validated through three rounds of a Delphi…
NASA Astrophysics Data System (ADS)
Sugawara, Kento; Sugimoto, Kunihisa; Fujii, Tatsuya; Higuchi, Takafumi; Katayama, Naoyuki; Okamoto, Yoshihiko; Sawa, Hiroshi
2018-02-01
The distribution of d-orbital valence electrons in volborthite [Cu3V2O7(OH)2·2H2O] was investigated by charge density analysis using multipole model refinement. Diffraction data were obtained by synchrotron-radiation single-crystal X-ray diffraction experiments. Data reduction by detwinning of the multiple structural domains was performed using software we developed. In this study, using high-quality data, we demonstrated that the water molecules in volborthite are located by hydrogen bonding in cavities that consist of Kagome lattice layers of CuO4(OH)2 and pillars of V2O7. Final multipole refinements before and after the structural phase transition directly visualized the deformation electron density of the valence electrons. We thereby directly visualized the flipping of the dx²−y² orbital, the highest-lying 3d orbital occupied by the d⁹ electrons in volborthite. The techniques and software developed can be employed for investigations of the structural properties of systems with multiple structural domains.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broderick, Robert Joseph; Quiroz, Jimmy Edward; Reno, Matthew J.
2015-11-01
The third solicitation of the California Solar Initiative (CSI) Research, Development, Demonstration and Deployment (RD&D) Program established by the California Public Utility Commission (CPUC) is supporting the Electric Power Research Institute (EPRI), National Renewable Energy Laboratory (NREL), and Sandia National Laboratories (SNL), with collaboration from Pacific Gas and Electric (PG&E), Southern California Edison (SCE), and San Diego Gas and Electric (SDG&E), in research to improve the Utility Application Review and Approval process for interconnecting distributed energy resources to the distribution system. Currently this process is the most time-consuming of any step on the path to generating power on the distribution system. This CSI RD&D solicitation-three project has completed the tasks of collecting data from the three utilities, clustering feeder characteristic data to attain representative feeders, detailed modeling of 16 representative feeders, analysis of PV impacts on those feeders, refinement of current screening processes, and validation of those suggested refinements. In this report each task is summarized to produce a final summary of all components of the overall project.
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.
2015-07-01
This paper presents a distributed magnetotelluric inversion scheme based on an adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for the forward and inverse problems were decoupled. For calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gain, EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems based on the linearized model resolution matrix was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to fill the gap between initial and true model complexities and to resolve emerging 3-D structures better, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of the points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit a dependency on the initial model guess. Additionally, it is demonstrated that the adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemispherical object with sufficient resolution, starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.
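The inverse-mesh criterion described above, refine where the imaged parameter varies most strongly between neighbouring cells, can be sketched as follows; the names and the fixed top-fraction rule are our assumptions, not the paper's implementation.

```python
import numpy as np

def cells_to_refine(cell_values, neighbours, frac=0.1):
    """Flag cells whose imaged parameter differs most from any
    neighbouring cell. 'neighbours[i]' lists the cell ids adjacent
    to cell i; the top 'frac' fraction is returned for refinement."""
    var = np.array([max((abs(cell_values[i] - cell_values[j])
                         for j in nbrs), default=0.0)
                    for i, nbrs in enumerate(neighbours)])
    k = max(1, int(frac * len(var)))
    return np.argsort(var)[-k:]
```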
Object-based change detection method using refined Markov random field
NASA Astrophysics Data System (ADS)
Peng, Daifeng; Zhang, Yongjun
2017-01-01
In order to fully consider the local spatial constraints between neighboring objects in object-based change detection (OBCD), an OBCD approach is presented by introducing a refined Markov random field (MRF). First, two periods of images are stacked and segmented to produce image objects. Second, object spectral and textural histogram features are extracted, and the G-statistic is implemented to measure the distance between different histogram distributions. Meanwhile, object heterogeneity is calculated by combining spectral and textural histogram distances using an adaptive weight. Third, an expectation-maximization algorithm is applied to determine the change category of each object, and the initial change map is then generated. Finally, a refined change map is produced by employing the proposed refined object-based MRF method. Three experiments were conducted and compared with some state-of-the-art unsupervised OBCD methods to evaluate the effectiveness of the proposed method. Experimental results demonstrate that the proposed method obtains the highest accuracy among the methods used in this paper, which confirms its validity and effectiveness in OBCD.
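A common way to realize such an MRF refinement over the object adjacency graph is iterated conditional modes with a Potts-style pairwise term, as sketched below; the paper's exact potentials and weights are not reproduced.

```python
import numpy as np

def icm_refine(labels, unary, adjacency, beta=1.0, sweeps=5):
    """Iterated conditional modes over an object adjacency graph.
    labels: initial per-object classes (e.g. from the EM step);
    unary[i, c]: data cost of class c for object i;
    adjacency[i]: ids of objects neighbouring object i;
    the pairwise term penalizes disagreement with neighbours."""
    labels = labels.copy()
    n_classes = unary.shape[1]
    for _ in range(sweeps):
        for i in range(len(labels)):
            costs = unary[i].copy()
            for j in adjacency[i]:
                for c in range(n_classes):
                    costs[c] += beta * (labels[j] != c)
            labels[i] = int(np.argmin(costs))
    return labels
```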
3D magnetospheric parallel hybrid multi-grid method applied to planet–plasma interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leclercq, L., E-mail: ludivine.leclercq@latmos.ipsl.fr; Modolo, R., E-mail: ronan.modolo@latmos.ipsl.fr; Leblanc, F.
2016-03-15
We present a new method to exploit multiple refinement levels within a 3D parallel hybrid model, developed to study planet–plasma interactions. This model is based on the hybrid formalism: ions are treated kinetically whereas electrons are considered as an inertia-less fluid. Generally, ions are represented by numerical particles whose size equals the volume of the cells. Particles that leave a coarse grid and subsequently enter a refined region are split into particles whose volume corresponds to the volume of the refined cells. The number of refined particles created from a coarse particle depends on the grid refinement rate. In order to conserve velocity distribution functions and to avoid calculations of average velocities, particles are not coalesced. Moreover, to ensure the constancy of particles' shape function sizes, the hybrid method is adapted to allow refined particles to move within a coarse region. Another innovation of this approach is the method developed to compute grid moments at interfaces between two refinement levels. Indeed, the hybrid method is adapted to accurately account for the special grid structure at the interfaces, avoiding any overlapping grid considerations. Some fundamental test runs were performed to validate our approach (e.g. quiet plasma flow, Alfven wave propagation). Lastly, we also show a planetary application of the model, simulating the interaction between Jupiter's moon Ganymede and the Jovian plasma.
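A minimal sketch of the particle-splitting step is given below. The abstract specifies only that children keep the parent velocity (so the velocity distribution is conserved without averaging) and that their number follows the refinement rate; the equal weight sharing and random placement inside the parent volume shown here are illustrative assumptions.

```python
import numpy as np

def split_particle(pos, vel, weight, refinement_rate, cell_size_fine, rng):
    """Split a coarse macro-particle entering a refined region (3-D).

    Replaces one coarse particle by refinement_rate**3 refined particles,
    each carrying an equal share of the statistical weight and the parent
    velocity, so no velocity averaging is performed.
    """
    n_children = refinement_rate ** 3
    # Jitter child positions inside the parent's volume (illustrative choice).
    offsets = (rng.random((n_children, 3)) - 0.5) * cell_size_fine * refinement_rate
    children_pos = pos + offsets
    children_vel = np.tile(vel, (n_children, 1))      # keep parent velocity
    children_w = np.full(n_children, weight / n_children)
    return children_pos, children_vel, children_w
```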
NASA Astrophysics Data System (ADS)
Bowles, C.
2013-12-01
Ecological engineering, or eco engineering, is an emerging field in the study of integrating ecology and engineering, concerned with the design, monitoring, and construction of ecosystems. According to Mitsch (1996), 'the design of sustainable ecosystems intends to integrate human society with its natural environment for the benefit of both'. Eco engineering emerged as a new idea in the early 1960s, and the concept has seen refinement since then. As a commonly practiced field of engineering it is relatively novel. Howard Odum (1963) and others first introduced it as 'utilizing natural energy sources as the predominant input to manipulate and control environmental systems'. Mitsch and Jorgensen (1989) were the first to define eco engineering, to provide eco engineering principles and to offer conceptual eco engineering models. Later they refined the definition and increased the number of principles. They suggested that the goals of eco engineering are: a) the restoration of ecosystems that have been substantially disturbed by human activities such as environmental pollution or land disturbance, and b) the development of new sustainable ecosystems that have both human and ecological values. Here a more detailed overview of eco engineering is provided, particularly with regard to how engineers and ecologists are utilizing multi-dimensional computational models to link ecology and engineering, resulting in increasingly successful project implementation. Descriptions are provided pertaining to 1-, 2- and 3-dimensional hydrodynamic models and their use in small- and large-scale applications. A range of conceptual models that have been developed to aid in the creation of linkages between ecology and engineering are discussed. Finally, several case studies that link ecology and engineering via computational modeling are provided. These studies include localized stream rehabilitation, spawning gravel enhancement on a large river system, and watershed-wide floodplain modeling of the Sacramento River Valley.
Lee, Hasup; Baek, Minkyung; Lee, Gyu Rie; Park, Sangwoo; Seok, Chaok
2017-03-01
Many proteins function as homo- or hetero-oligomers; therefore, attempts to understand and regulate protein functions require knowledge of protein oligomer structures. The number of available experimental protein structures is increasing, and oligomer structures can be predicted using the experimental structures of related proteins as templates. However, template-based models may have errors due to sequence differences between the target and template proteins, which can lead to functional differences. Such structural differences may be predicted by loop modeling of local regions or refinement of the overall structure. In CAPRI (Critical Assessment of PRotein Interactions) round 30, we used recently developed features of the GALAXY protein modeling package, including template-based structure prediction, loop modeling, model refinement, and protein-protein docking to predict protein complex structures from amino acid sequences. Out of the 25 CAPRI targets, medium and acceptable quality models were obtained for 14 and 1 target(s), respectively, for which proper oligomer or monomer templates could be detected. Symmetric interface loop modeling on oligomer model structures successfully improved model quality, while loop modeling on monomer model structures failed. Overall refinement of the predicted oligomer structures consistently improved the model quality, in particular in interface contacts. Proteins 2017; 85:399-407. © 2016 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Juher, David; Saldaña, Joan
2018-03-01
We study the properties of the potential overlap between two networks A, B sharing the same set of N nodes (a two-layer network) whose respective degree distributions pA(k), pB(k) are given. Defining the overlap coefficient α as the Jaccard index, we prove that α is very close to 0 when A and B are random and independently generated. We derive an upper bound αM for the maximum overlap coefficient permitted in terms of pA(k), pB(k), and N. Then we present an algorithm based on cross rewiring of links to obtain a two-layer network with any prescribed α inside the range (0, αM). A refined version of the algorithm allows us to minimize the cross-layer correlations that unavoidably appear for values of α beyond a critical overlap αc < αM. Finally, we present a very simple example of a susceptible-infectious-recovered epidemic model with information dissemination and use the algorithms to determine the impact of the overlap on the final outbreak size predicted by the model.
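For concreteness, the overlap coefficient α defined above is simply the Jaccard index of the two layers' edge sets; a minimal Python sketch:

```python
def jaccard_overlap(edges_a, edges_b):
    """Overlap coefficient alpha of two layers on the same node set,
    defined as the Jaccard index of their (undirected) edge sets."""
    ea = {frozenset(e) for e in edges_a}
    eb = {frozenset(e) for e in edges_b}
    union = ea | eb
    return len(ea & eb) / len(union) if union else 0.0

# Example: two small layers sharing one of three distinct edges.
A = [(1, 2), (2, 3)]
B = [(2, 3), (3, 4)]
print(jaccard_overlap(A, B))  # 0.333... = 1/3
```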
Development and initial validation of a content taxonomy for patient records in general dentistry
Acharya, Amit; Hernandez, Pedro; Thyvalikakath, Thankam; Ye, Harold; Song, Mei; Schleyer, Titus
2013-01-01
Objective Develop and validate an initial content taxonomy for patient records in general dentistry. Methods Phase 1–Obtain 95 de-identified patient records from 11 general dentists in the United States. Phase 2–Extract individual data fields (information items), both explicit (labeled) and implicit (unlabeled), from the records and organize them into categories mirroring the original field context. Phase 3–Refine the raw list of information items by eliminating duplicates/redundancies and focusing on general dentistry. Phase 4–Validate all items regarding inclusion and importance using a two-round Delphi study with a panel of 22 general dentists active in clinical practice, education, and research. Results Analysis of 76 patient records from 9 dentists, combined with previous work, yielded a raw list of 1,509 information items. Refinement reduced this list to 1,107 items, subsequently rated by the Delphi panel. The final model contained 870 items, with 761 (88%) rated as mandatory. In Round 1, 95% (825) of the final items were accepted; in Round 2, the remaining 5% (45) were accepted. Only 45 items on the initial list were rejected, and 192 (or 17%) remained equivocal. Conclusion Grounded in the reality of clinical practice, our proposed content taxonomy represents a significant advance over existing guidelines and standards by providing a granular and comprehensive information representation for general dental patient records. It offers a significant foundational asset for implementing an interoperable health information technology infrastructure for general dentistry. PMID:23838618
NASA Astrophysics Data System (ADS)
Burov, V. A.; Zotov, D. I.; Rumyantseva, O. D.
2014-07-01
A two-step algorithm is used to reconstruct the spatial distributions of the acoustic characteristics of soft biological tissues: the sound velocity and the absorption coefficient. Knowledge of these distributions is critical for the early detection of benign and malignant neoplasms in biological tissues, primarily in the breast. At the first step, large-scale distributions are estimated; at the second step, they are refined with a high resolution. Results of reconstruction based on modeled initial data are presented. The fundamental necessity of a preliminary reconstruction of the large-scale distributions, which are then taken into account at the second step, is illustrated. The use of CUDA technology for processing makes it possible to obtain final images of 1024 × 1024 samples in only a few minutes.
ERIC Educational Resources Information Center
Gardner, David C.; Beatty, Grace Joely
Within the context of the major objectives of developing, field testing, and refining the curriculum materials described in volume 1 of this final report (CE 024 117), Volume 2 describes and critiques the management system used by Project HIRE in that development process. (See Note for availability of curriculum materials.) Chapter 1 introduces…
ERIC Educational Resources Information Center
Gardner, David C.; And Others
Volume 1 of the final report on Project HIRE reports the design, development, field-testing, and refining of self-instructional packages to teach entry level technical vocabulary to learning handicapped students mainstreamed in vocational programs. Volume 2, a management handbook, reports the methods and findings concerning development of…
Joint Planning and Development Office Work Plan FY10
2010-01-01
IPSA) Division will make refinements to the NextGen Portfolio Analysis. In addition, IPSA will work with the Department of Defense (DoD) to define and... Submitted Interagency Portfolio and Systems Analysis (IPSA) DRAFT DoD Portfolio Analysis Criteria; BASELINE DoD Portfolio Analysis Criteria; DRAFT... WG Work Plan Review; Prototype Capability Selected and Defined; CHAs Complete; Safety Metrics for IPSA Complete; FINAL Prototype Report FINAL
Hybrid reduced order modeling for assembly calculations
Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; ...
2015-08-14
While the accuracy of assembly calculations has greatly improved due to the increase in computer power, enabling a more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in the small computing environments often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
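As a rough sketch of the general idea above, capturing the dominant input-output relationships in a low-dimensional subspace, the following Python snippet builds a reduced basis from response snapshots via truncated SVD. This is a generic POD-style construction under our own naming, not necessarily the authors' algorithm.

```python
import numpy as np

def build_reduced_basis(snapshots, tol=1e-6):
    """Construct a reduced basis from output snapshots via truncated SVD.

    snapshots: (n_dof, n_samples) matrix of model responses to perturbed inputs.
    Returns U_r whose columns span the dominant input-output directions.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)      # fraction of variance captured
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r]

# A surrogate then operates on r coordinates instead of n_dof unknowns:
# y ≈ U_r @ f_reduced(U_r.T @ x), with f_reduced trained or projected.
```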
An Export-Marketing Model for Pharmaceutical Firms (The Case of Iran)
Mohammadzadeh, Mehdi; Aryanpour, Narges
2013-01-01
Internationalization is a matter of committed decision-making that starts with export marketing, in which an organization tries to diagnose and use opportunities in target markets based on a realistic evaluation of internal strengths and weaknesses, with analysis of macro- and microenvironments, in order to gain presence in other countries. A developed model for export and international marketing of pharmaceutical companies is introduced. The paper reviews common theories of the internationalization process, followed by examining different methods and models for assessing preparation for export activities, and examines the conceptual model based on a single case study method on a basket of seven leading domestic firms, using mainly questionnaires as the data-gathering tool along with interviews for bias reduction. Finally, in keeping with the study objectives, the special aspects of the pharmaceutical marketing environment have been covered, revealing special dimensions of pharmaceutical marketing that have been embedded within the appropriate base model. The new model for international activities of pharmaceutical companies was refined by expert opinions extracted from the results of the questionnaires. PMID:24250597
Clewell, H J
1993-05-01
The use of in vitro data to support the development of physiologically based pharmacokinetic (PBPK) models and to reduce the requirement for in vivo testing is demonstrated by three examples. In the first example (polychlorotrifluoroethylene), in vitro studies comparing metabolism and tissue response in rodents and primates made it possible to obtain definitive data for a human risk assessment without resorting to additional in vivo studies with primates. In the second example, a PBPK model for organophosphate esters was developed in which the parameters defining metabolism, tissue partitioning, and enzyme inhibition were all characterized by in vitro studies, and the rest of the model parameters were established from the literature. The resulting model was able to provide a coherent description of enzyme inhibition following both acute and chronic exposures in mice, rats, and humans. In the final example, the carcinogenic risk assessment for methylene chloride was refined by the incorporation of in vitro data on human metabolism into a PBPK model.
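For readers unfamiliar with PBPK structure, a minimal sketch of a single flow-limited tissue compartment, the standard building block such models are assembled from, is given below; the parameter values are hypothetical and do not come from the study.

```python
from scipy.integrate import solve_ivp

# One flow-limited tissue compartment of a PBPK model (hypothetical values):
# V * dC_t/dt = Q * (C_arterial - C_t / P)
Q, V, P = 1.1, 0.5, 3.0            # blood flow (L/h), volume (L), partition coeff.

def tissue(t, y, c_arterial):
    c_t = y[0]
    return [Q * (c_arterial - c_t / P) / V]

sol = solve_ivp(tissue, (0.0, 8.0), [0.0], args=(1.0,), max_step=0.1)
print(sol.y[0, -1])   # approaches the equilibrium P * c_arterial = 3.0
```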
Kinetic Modeling of a Silicon Refining Process in a Moist Hydrogen Atmosphere
NASA Astrophysics Data System (ADS)
Chen, Zhiyuan; Morita, Kazuki
2018-03-01
We developed a kinetic model that considers both silicon loss and boron removal in a metallurgical-grade silicon refining process. This model was based on the hypothesis of reversible reactions. The reaction rate coefficient keeps the same form, but an error in the terminal boron concentration can be introduced when assuming irreversible reactions. Experimental data from published studies were used to develop a model that fits the existing data. At 1500 °C, our kinetic analysis suggested that refining silicon in a moist hydrogen atmosphere generates several primary volatile species, including SiO, SiH, HBO, and HBO2. Using the experimental data and the kinetic analysis of volatile species, we developed a model that predicts a linear relationship between the reaction rate coefficient k and both the quadratic function of p(H2O) and the square root of p(H2). Moreover, the model predicted the partial pressure values of the predominant volatile species, and the prediction was confirmed by thermodynamic calculations, indicating the reliability of the model. We believe this model provides a foundation for designing a silicon refining process with a fast boron removal rate and low silicon loss.
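The reported dependence of k on the two partial pressures can be written compactly. A hedged LaTeX sketch follows, assuming the quadratic and square-root factors combine multiplicatively; the abstract states only that k is linear in each factor, so the combined form is our assumption.

```latex
% Assumed multiplicative combination; the paper may fit the two factors
% separately rather than as a single product law.
k \;\propto\; p(\mathrm{H_2O})^{2}\,\sqrt{p(\mathrm{H_2})}
```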
Automated Assume-Guarantee Reasoning by Abstraction Refinement
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Giannakopoulou, Dimitra
2008-01-01
Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.
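The alternating check-and-refine structure of the approach can be summarized in a short schematic loop. Every callable below is a placeholder for the paper's actual machinery (model checker, counterexample replay, abstraction refinement), so this is a sketch of the control flow only.

```python
def assume_guarantee_refinement(check_premise1, is_real_cex, refine, assumption):
    """Schematic abstraction-refinement loop for the assume-guarantee rule:
      premise 1: <A> M2 <P>    premise 2: M1 |= A    conclusion: M1 || M2 |= P
    `assumption` starts as a conservative (possibly nondeterministic)
    abstraction of component M1; all arguments are placeholders."""
    while True:
        ok, cex = check_premise1(assumption)      # model check <A> M2 <P>
        if ok:
            return True                           # property holds for M1 || M2
        if is_real_cex(cex):                      # cex replays on concrete M1
            return False                          # genuine violation found
        assumption = refine(assumption, cex)      # spurious: split abstract states
```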
Symmetry breaking in tensor models
NASA Astrophysics Data System (ADS)
Benedetti, Dario; Gurau, Razvan
2015-11-01
In this paper we analyze a quartic tensor model with one interaction for a tensor of arbitrary rank. This model has a critical point where a continuous limit of infinitely refined random geometries is reached. We show that the critical point corresponds to a phase transition in the tensor model associated to a breaking of the unitary symmetry. We analyze the model in the two phases and prove that, in a double scaling limit, the symmetric phase corresponds to a theory of infinitely refined random surfaces, while the broken phase corresponds to a theory of infinitely refined random nodal surfaces. At leading order in the double scaling limit planar surfaces dominate in the symmetric phase, and planar nodal surfaces dominate in the broken phase.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrnstein, Aaron R.
An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model, with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine-scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested, despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No dramatic or persistent signs of error growth in the passive tracer outgassing or the ocean circulation are observed to result from AMR.
AN OPTIMAL ADAPTIVE LOCAL GRID REFINEMENT APPROACH TO MODELING CONTAMINANT TRANSPORT
A Lagrangian-Eulerian method with an optimal adaptive local grid refinement is used to model contaminant transport equations. Application of this approach to two bench-mark problems indicates that it completely resolves difficulties of peak clipping, numerical diffusion, and spuri...
Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B
NASA Technical Reports Server (NTRS)
Yeganefard, Sanaz; Butler, Michael; Rezazadeh, Abdolbaghi
2010-01-01
Recently a set of guidelines, or cookbook, has been developed for the modelling and refinement of control problems in Event-B. The Event-B formal method is used for system-level modelling by defining the states of a system and the events which act on these states. It also supports refinement of models. This cookbook is intended to systematize the process of modelling and refining a control problem by distinguishing environment, controller and command phenomena. Our main objective in this paper is to investigate and evaluate the usefulness and effectiveness of this cookbook by following it throughout the formal modelling of the cruise control system found in cars. The outcomes are identifying the benefits of the cookbook and also giving guidance to its future users.
Application of the Refined Zigzag Theory to the Modeling of Delaminations in Laminated Composites
NASA Technical Reports Server (NTRS)
Groh, Rainer M. J.; Weaver, Paul M.; Tessler, Alexander
2015-01-01
The Refined Zigzag Theory is applied to the modeling of delaminations in laminated composites. The commonly used cohesive zone approach is adapted for use within a continuum mechanics model, and then used to predict the onset and propagation of delamination in five cross-ply composite beams. The resin-rich area between individual composite plies is modeled explicitly using thin, discrete layers with isotropic material properties. A damage model is applied to these resin-rich layers to enable tracking of delamination propagation. The displacement jump across the damaged interfacial resin layer is captured using the zigzag function of the Refined Zigzag Theory. The overall model predicts the initiation of delamination to within 8% compared to experimental results and the load drop after propagation is represented accurately.
The Collaborative Seismic Earth Model: Generation 1
NASA Astrophysics Data System (ADS)
Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner
2018-05-01
We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.
Pulse Jet Mixing Tests With Noncohesive Solids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Perry A.; Bamberger, Judith A.; Enderlin, Carl W.
2012-02-17
This report summarizes results from pulse jet mixing (PJM) tests with noncohesive solids in Newtonian liquid. The tests were conducted during FY 2007 and 2008 to support the design of mixing systems for the Hanford Waste Treatment and Immobilization Plant (WTP). Tests were conducted at three geometric scales using noncohesive simulants, and the test data were used to develop models predicting two measures of mixing performance for full-scale WTP vessels. The models predict the cloud height (the height to which solids will be lifted by the PJM action) and the critical suspension velocity (the minimum velocity needed to ensure all solids are suspended off the floor, though not fully mixed). From the cloud height, the concentration of solids at the pump inlet can be estimated. The predicted critical suspension velocity for lifting all solids is not precisely the same as the mixing requirement for 'disturbing' a sufficient volume of solids, but the values will be similar and closely related. These predictive models were successfully benchmarked against larger scale tests and compared well with results from computational fluid dynamics simulations. The application of the models to assess mixing in WTP vessels is illustrated in examples for 13 distinct designs and selected operational conditions. The values selected for these examples are not final; thus, the estimates of performance should not be interpreted as final conclusions of design adequacy or inadequacy. However, this work does reveal that several vessels may require adjustments to design, operating features, or waste feed properties to ensure confidence in operation. The models described in this report will prove to be valuable engineering tools to evaluate options as designs are finalized for the WTP. Revision 1 refines data sets used for model development and summarizes models developed since the completion of Revision 0.
Validating neural-network refinements of nuclear mass models
NASA Astrophysics Data System (ADS)
Utama, R.; Piekarewicz, J.
2018-01-01
Background: Nuclear astrophysics centers on the role of nuclear physics in the cosmos. In particular, nuclear masses at the limits of stability are critical in the development of stellar structure and the origin of the elements. Purpose: We aim to test and validate the predictions of recently refined nuclear mass models against the newly published AME2016 compilation. Methods: The basic paradigm underlying the recently refined nuclear mass models is based on existing state-of-the-art models that are subsequently refined through the training of an artificial neural network. Bayesian inference is used to determine the parameters of the neural network so that statistical uncertainties are provided for all model predictions. Results: We observe a significant improvement in the Bayesian neural network (BNN) predictions relative to the corresponding "bare" models when compared to the nearly 50 new masses reported in the AME2016 compilation. Further, AME2016 estimates for the handful of impactful isotopes in the determination of r-process abundances are found to be in fairly good agreement with our theoretical predictions. Indeed, the BNN-improved Duflo-Zuker model predicts a root-mean-square deviation relative to experiment of σrms ≃ 400 keV. Conclusions: Given the excellent performance of the BNN refinement in confronting the recently published AME2016 compilation, we are confident of its critical role in our quest for mass models of the highest quality. Moreover, as uncertainty quantification is at the core of the BNN approach, the improved mass models are in a unique position to identify those nuclei that will have the strongest impact in resolving some of the outstanding questions in nuclear astrophysics.
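The refinement paradigm described in Methods, a base mass model plus a neural-network-learned residual with Bayesian uncertainty, can be sketched in a few lines of Python; the names and the sampling-based uncertainty estimate are illustrative assumptions.

```python
import numpy as np

def bnn_refined_mass(base_mass, bnn_samples, Z, N):
    """Refined mass prediction: base model plus a learned residual.

    base_mass(Z, N): prediction of the 'bare' model (e.g. Duflo-Zuker).
    bnn_samples: callables drawn from the Bayesian posterior over networks;
    each maps (Z, N) to a residual correction. Names here are illustrative.
    """
    residuals = np.array([net(Z, N) for net in bnn_samples])
    mean = base_mass(Z, N) + residuals.mean()
    sigma = residuals.std()    # statistical uncertainty of the refinement
    return mean, sigma
```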
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-24
...Mobil Refining and Supply Company-- Beaumont Refinery, published on October 1, 2010. DATES: Effective...: Because EPA received adverse comments, we are withdrawing the direct final exclusion for ExxonMobil...
The Multidimensional Assessment of Interoceptive Awareness (MAIA)
Mehling, Wolf E.; Price, Cynthia; Daubenmier, Jennifer J.; Acree, Mike; Bartmess, Elizabeth; Stewart, Anita
2012-01-01
This paper describes the development of a multidimensional self-report measure of interoceptive body awareness. The systematic mixed-methods process involved reviewing the current literature, specifying a multidimensional conceptual framework, evaluating prior instruments, developing items, and analyzing focus group responses to scale items by instructors and patients of body awareness-enhancing therapies. Following refinement by cognitive testing, items were field-tested in students and instructors of mind-body approaches. Final item selection was achieved by submitting the field test data to an iterative process using multiple validation methods, including exploratory cluster and confirmatory factor analyses, comparison between known groups, and correlations with established measures of related constructs. The resulting 32-item multidimensional instrument assesses eight concepts. The psychometric properties of these final scales suggest that the Multidimensional Assessment of Interoceptive Awareness (MAIA) may serve as a starting point for research and further collaborative refinement. PMID:23133619
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, L.E.; McGuire, D.R.
1984-05-01
This final report summarizes the technical reports for Phase III of this project. The third phase included the operation, maintenance, upgrade and performance reporting of a 10,080 square foot Solar Industrial Process Heat System installed at the Famariss Energy Refinery of Southern Union Refining Company near Hobbs, New Mexico. This report contains a description of the upgraded system and a summary of the overall operation, maintenance and performance of the installed system. The results of the upgrade activities can be seen in the last two months of operational data. Steam production was significantly greater in peak flow and monthly total than at any previous time. Monthly total cost savings also improved greatly, even though natural gas costs remain much lower than originally anticipated.
Microstructures and Grain Refinement of Additive-Manufactured Ti- xW Alloys
NASA Astrophysics Data System (ADS)
Mendoza, Michael Y.; Samimi, Peyman; Brice, David A.; Martin, Brian W.; Rolchigo, Matt R.; LeSar, Richard; Collins, Peter C.
2017-07-01
It is necessary to better understand the composition-processing-microstructure relationships that exist for materials produced by additive manufacturing. To this end, Laser Engineered Net Shaping (LENS™), a type of additive manufacturing, was used to produce a compositionally graded titanium binary model alloy specimen (Ti-xW, 0 ≤ x ≤ 30 wt pct), so that relationships could be made between composition, processing, and the prior beta grain size. Importantly, the thermophysical properties of Ti-xW, specifically its supercooling parameter (P) and growth restriction factor (Q), are such that grain refinement is expected and was observed. The systematic, combinatorial study of this binary system provides an opportunity to assess the mechanisms by which grain refinement occurs in Ti-based alloys in general, and in additive manufacturing in particular. The operating mechanisms that govern the relationship between composition and grain size are interpreted using a model originally developed for aluminum and magnesium alloys and subsequently applied to titanium alloys. The observed prior beta grain sizes and the interpretation of their correlations indicate that tungsten is a good grain refiner and that such models are valid for explaining the grain-refinement process. By extension, other binary elements or higher-order alloy systems with similar thermophysical properties should exhibit similar grain refinement.
3Drefine: an interactive web server for efficient protein structure refinement
Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin
2016-01-01
3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of the hydrogen-bonding network combined with atomic-level energy minimization of the optimized model, using composite physics- and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated in blind CASP experiments as well as on large-scale and diverse benchmark datasets, and it exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, and a provided example submission, and it is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet, which is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. PMID:27131371
Mansour, M M; Spink, A E F
2013-01-01
Grid refinement is introduced in a numerical groundwater model to increase the accuracy of the solution over local areas without compromising the run time of the model. Numerical methods developed for grid refinement have suffered certain drawbacks, for example, deficiencies in the implemented interpolation technique; non-reciprocity in head or flow calculations; lack of accuracy resulting from high truncation errors; and numerical problems resulting from the construction of elongated meshes. A refinement scheme based on the divergence theorem and Taylor expansions is presented in this article. This scheme is based on the work of De Marsily (1986) but includes more terms of the Taylor series to improve the numerical solution. In this scheme, flow reciprocity is maintained and a high order of refinement is achievable. The new numerical method is applied to simulate groundwater flow in homogeneous and heterogeneous confined aquifers. It produced results with acceptable degrees of accuracy. This method shows the potential for its application to solving groundwater heads over nested meshes with irregular shapes. © 2012, British Geological Survey © NERC 2012. Ground Water © 2012, National GroundWater Association.
SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows
NASA Astrophysics Data System (ADS)
Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu
2017-12-01
A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives a LES (Large Eddy Simulation) on SOMAR's finest grids, forced with large-scale forcing from the coarser grids. Three-dimensional simulations of internal tide generation, propagation and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine grid resolution. This relative cost is significantly reduced in situations with intermittent turbulence or where the location of the turbulence is not known a priori, because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at an isolated two-dimensional or three-dimensional topography, and we compare the results with the numerical experiments of Legg (2014). We find good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational costs is expected, relative to traditional existing solvers.
Modeling pH-zone refining countercurrent chromatography: a dynamic approach.
Kotland, Alexis; Chollet, Sébastien; Autret, Jean-Marie; Diard, Catherine; Marchal, Luc; Renault, Jean-Hugues
2015-04-24
A model based on mass transfer resistances and acid-base equilibria at the liquid-liquid interface was developed for the pH-zone refining mode as used in countercurrent chromatography (CCC). The binary separation of catharanthine and vindoline, two alkaloids used as starting material for the semi-synthesis of chemotherapy drugs, was chosen for the model validation. Toluene/CH3CN/water (4/1/5, v/v/v) was selected as the biphasic solvent system. First, hydrodynamics and mass transfer were studied by using chemical tracers. Trypan blue, present only in the aqueous phase, allowed the determination of the parameters τextra and Pe for hydrodynamic characterization, whereas acetone, which partitioned between the two phases, allowed the determination of the transfer parameter k0a. It was shown that mass transfer was improved by increasing both flow rate and rotational speed, which is consistent with the observed mobile phase dispersion. Then, the different transfer parameters of the model (i.e., the local transfer coefficients for the different species involved in the process) were determined by fitting experimental concentration profiles. The model accurately predicted the variation of both equilibrium and dynamic factors (i.e., local mass transfer coefficients and the acid-base equilibrium constant) with the CCC operating conditions (cell number, flow rate, rotational speed and thus stationary phase retention). The initial hypotheses (the acid-base reactions occur instantaneously at the interface and the process is mainly governed by mass transfer) are thus validated. Finally, the model was used as a tool to predict the separation of catharanthine and vindoline in the whole experimental domain, which corresponded to flow rates between 20 and 60 mL/min and rotational speeds between 900 and 2100 rotations per minute. Copyright © 2015 Elsevier B.V. All rights reserved.
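A minimal numerical sketch of the kind of interfacial transfer law such a model builds on follows: first-order relaxation toward partition equilibrium driven by k0a. The values and the single-species simplification are ours; the full model additionally couples the acid-base equilibria at the interface.

```python
# Illustrative two-phase transfer step: dC/dt = k0a * (C_eq - C), where the
# equilibrium target is set by partitioning between the phases. Equal phase
# volumes are assumed so the simple conservation below holds.
k0a, K_part = 2.0, 4.0               # 1/min, partition coefficient (made up)
dt, C_mobile, C_stat = 0.01, 1.0, 0.0

for _ in range(1000):                # explicit Euler over 10 minutes
    flux = k0a * (K_part * C_mobile - C_stat)
    C_stat += flux * dt
    C_mobile -= flux * dt

print(C_mobile, C_stat)              # relaxes toward C_stat = K_part * C_mobile
```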
Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.; Van Meter, James R.
2005-01-01
A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low- and high-resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
MAIN software for density averaging, model building, structure refinement and validation
Turk, Dušan
2013-01-01
MAIN is software that has been designed to interactively perform the complex tasks of macromolecular crystal structure determination and validation. Using MAIN, it is possible to perform density modification, manual and semi-automated or automated model building and rebuilding, real- and reciprocal-space structure optimization and refinement, map calculations and various types of molecular structure validation. The prompt availability of various analytical tools and the immediate visualization of molecular and map objects allow a user to efficiently progress towards the completed refined structure. The extraordinary depth perception of molecular objects in three dimensions that is provided by MAIN is achieved by the clarity and contrast of colours and the smooth rotation of the displayed objects. MAIN allows simultaneous work on several molecular models and various crystal forms. The strength of MAIN lies in its manipulation of averaged density maps and molecular models when noncrystallographic symmetry (NCS) is present. Using MAIN, it is possible to optimize NCS parameters and envelopes and to refine the structure in single or multiple crystal forms. PMID:23897458
Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery
NASA Astrophysics Data System (ADS)
Zhang, Ming
Because of the low-expense, high-efficiency image collection process and the rich 3D and texture information present in the images, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes has a promising market for future commercial usage, such as urban planning or first response. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. Then the point clouds are refined by noise removal and surface smoothing processes. Since the point clouds extracted from different image groups use independent coordinate systems, translation, rotation and scale differences exist between them. To determine these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix presents the parameters describing the translation, rotation and scale requirements. The methodology presented in the thesis has been shown to behave well for test data. The robustness of this method is discussed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough alignment result, which contains a larger offset compared to that of the test data because of the low quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points. Using the method introduced, point clouds extracted from different image groups can be combined with each other to build a more complete point cloud, or be used as a complement to existing point clouds extracted from other sources. This research will both improve the state of the art of 3D city modeling and inspire new ideas in related fields.
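The pairwise registration step estimates the translation, rotation and scale between two matched keypoint sets. A generic closed-form (Umeyama-style) stand-in, not necessarily the exact algorithm of the thesis, is sketched below in Python.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares rotation R, scale s, translation t mapping matched
    3D keypoints src -> dst (Umeyama-style closed form)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    S, D = src - mu_s, dst - mu_d                # centered point sets
    U, sig, Vt = np.linalg.svd(D.T @ S)          # 3x3 cross-covariance SVD
    corr = np.array([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = (U * corr) @ Vt                          # proper rotation (det = +1)
    s = (sig * corr).sum() / (S ** 2).sum()      # least-squares scale
    t = mu_d - s * R @ mu_s
    return R, s, t                               # dst ≈ s * R @ src + t
```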
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borbulevych, Oleg Y.; Plumley, Joshua A.; Martin, Roger I.
2014-05-01
Semiempirical quantum-chemical X-ray macromolecular refinement using the program DivCon integrated with PHENIX is described. Macromolecular crystallographic refinement relies on sometimes dubious stereochemical restraints and rudimentary energy functionals to ensure the correct geometry of the model of the macromolecule and any covalently bound ligand(s). The ligand stereochemical restraint file (CIF) requires a priori understanding of the ligand geometry within the active site, and creation of the CIF is often an error-prone process owing to the great variety of potential ligand chemistry and structure. Stereochemical restraints have been replaced with more robust functionals through the integration of the linear-scaling, semiempirical quantum-mechanics (SE-QM) program DivCon with the PHENIX X-ray refinement engine. The PHENIX/DivCon package has been thoroughly validated on a population of 50 protein–ligand Protein Data Bank (PDB) structures with a range of resolutions and chemistry. The PDB structures used for the validation were originally refined utilizing various refinement packages and were published within the past five years. PHENIX/DivCon does not utilize CIF(s), link restraints and other parameters for refinement and hence it does not make as many a priori assumptions about the model. Across the entire population, the method results in reasonable ligand geometries and low ligand strains, even when the original refinement exhibited difficulties, indicating that PHENIX/DivCon is applicable to both single-structure and high-throughput crystallography.
Simulation of the shallow groundwater-flow system near the Hayward Airport, Sawyer County, Wisconsin
Hunt, Randall J.; Juckem, Paul F.; Dunning, Charles P.
2010-01-01
There are concerns that removal and trimming of vegetation during expansion of the Hayward Airport in Sawyer County, Wisconsin, could appreciably change the character of a nearby cold-water stream and its adjacent environs. In cooperation with the Wisconsin Department of Transportation, a two-dimensional, steady-state groundwater-flow model of the shallow groundwater-flow system near the Hayward Airport was refined from a regional model of the area. The parameter-estimation code PEST was used to obtain a best fit of the model to additional field data collected in February 2007 as part of this study. The additional data were collected during an extended period of low runoff and consisted of water levels and streamflows near the Hayward Airport. Refinements to the regional model included one additional hydraulic-conductivity zone for the airport area and three additional parameters for streambed resistance in a northern tributary to the Namekagon River and in the main stem of the Namekagon River. In the refined Hayward Airport area model, the calibrated hydraulic conductivity was 11.2 feet per day, which is within the 7.9 to 58.2 feet per day range reported for the regional glacial and sandstone aquifer and is consistent with a silty soil texture for the area. The calibrated refined model had a best-fit streambed resistance of 8.6 days for the Namekagon River and between 0.6 and 1.6 days for the northern tributary stream. The previously reported regional groundwater-recharge rate of 10.1 inches per year was adjusted during calibration of the refined model in order to match streamflows measured during the period of extended low runoff; this resulted in an optimal groundwater-recharge rate of 7.1 inches per year for this period. The refined model was then used to simulate the capture zone of the northern tributary to the Namekagon River.
Landmark Image Retrieval by Jointing Feature Refinement and Multimodal Classifier Learning.
Zhang, Xiaoming; Wang, Senzhang; Li, Zhoujun; Ma, Shuai
2018-06-01
Landmark retrieval is to return a set of images whose landmarks are similar to those of the query images. Existing studies on landmark retrieval focus on exploiting the geometries of landmarks for visual similarity matching. However, the visual content of social images is of large diversity in many landmarks, and some images share common patterns across different landmarks. On the other hand, it has been observed that social images usually contain multimodal contents, i.e., visual content and text tags, and each landmark has unique characteristics in both visual content and text content. Therefore, approaches based on similarity matching may not be effective in this environment. In this paper, we investigate whether the geographical correlation between the visual content and the text content can be exploited for landmark retrieval. In particular, we propose an effective multimodal landmark classification paradigm that leverages the multimodal contents of social images for landmark retrieval by integrating feature refinement and a landmark classifier with multimodal contents in a joint model. The geo-tagged images are automatically labeled for classifier learning. Visual features are refined based on low-rank matrix recovery, and a multimodal classifier combined with group sparsity is learned from the automatically labeled images. Finally, candidate images are ranked by combining the classification result with a semantic consistency measure between the visual content and text content. Experiments on real-world datasets demonstrate the superiority of the proposed approach as compared to existing methods.
Region-Based Building Rooftop Extraction and Change Detection
NASA Astrophysics Data System (ADS)
Tian, J.; Metzlaff, L.; d'Angelo, P.; Reinartz, P.
2017-09-01
Automatic extraction of building changes is important for many applications like disaster monitoring and city planning. Although a lot of research work is available based on 2D as well as 3D data, an improvement in accuracy and efficiency is still needed. The introduction of digital surface models (DSMs) into building change detection has strongly improved the resulting accuracy. In this paper, a post-classification approach is proposed for building change detection using satellite stereo imagery. Firstly, DSMs are generated from satellite stereo imagery and further refined by using a segmentation result obtained from the Sobel gradients of the panchromatic image. Besides the refined DSMs, the panchromatic image and the pansharpened multispectral image are used as input features for mean-shift segmentation. The DSM is used to calculate the nDSM, out of which the initial building candidate regions are extracted. The candidate mask is further refined by morphological filtering and by excluding shadow regions. Following this, all segments that overlap with a building candidate region are determined. A building-oriented segment merging procedure is introduced to generate a final building rooftop mask. As the last step, object-based change detection is performed by directly comparing the building rooftops extracted from the pre- and post-event imagery and by fusing the change indicators with the rooftop region map. A quantitative and qualitative assessment of the proposed approach is provided using WorldView-2 satellite data from Istanbul, Turkey.
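A minimal sketch of the candidate-extraction step above (nDSM thresholding plus morphological filtering) follows; the DTM input, threshold values and area filter are illustrative assumptions, and the paper additionally excludes shadow regions before producing the final mask.

```python
import numpy as np
from scipy import ndimage

def building_candidates(dsm, dtm, min_height=2.5, min_area=50):
    """Initial building-candidate mask from a normalized DSM (nDSM = DSM - DTM)."""
    ndsm = dsm - dtm
    mask = ndsm > min_height                           # elevated-object candidates
    mask = ndimage.binary_opening(mask, iterations=2)  # morphological filtering
    labels, n = ndimage.label(mask)                    # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep_ids = 1 + np.flatnonzero(sizes >= min_area)   # drop tiny components
    return np.isin(labels, keep_ids)
```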
Refined open intersection numbers and the Kontsevich-Penner matrix model
NASA Astrophysics Data System (ADS)
Alexandrov, Alexander; Buryak, Alexandr; Tessler, Ran J.
2017-03-01
A study of the intersection theory on the moduli space of Riemann surfaces with boundary was recently initiated in a work of R. Pandharipande, J.P. Solomon and the third author, where they introduced open intersection numbers in genus 0. Their construction was later generalized to all genera by J.P. Solomon and the third author. In this paper we consider a refinement of the open intersection numbers by distinguishing contributions from surfaces with different numbers of boundary components, and we calculate all these numbers. We then construct a matrix model for the generating series of the refined open intersection numbers and conjecture that it is equivalent to the Kontsevich-Penner matrix model. An evidence for the conjecture is presented. Another refinement of the open intersection numbers, which describes the distribution of the boundary marked points on the boundary components, is also discussed.
A lithospheric magnetic field model derived from the Swarm satellite magnetic field measurements
NASA Astrophysics Data System (ADS)
Hulot, G.; Thebault, E.; Vigneron, P.
2015-12-01
The Swarm constellation of satellites was launched in November 2013 and has since then delivered high-quality scalar and vector magnetic field measurements. A consortium of several research institutions was selected by the European Space Agency (ESA) to provide a number of scientific products which will be made available to the scientific community. Within this framework, specific tools were tailor-made to better extract the magnetic signal emanating from the Earth's lithosphere. These tools exploit the scalar gradient measured by the lower pair of Swarm satellites and rely on a regional modeling scheme that is more sensitive to small spatial scales and weak signals than standard spherical harmonic modeling. In this presentation, we report on various activities related to data analysis and processing. We assess the efficiency of this dedicated chain for modeling the lithospheric magnetic field using more than one year of measurements, and finally discuss refinements that are continuously implemented in order to further improve the robustness and the spatial resolution of the lithospheric field model.
Refining atmosphere light to improve the dark channel prior algorithm
NASA Astrophysics Data System (ADS)
Gan, Ling; Li, Dagang; Zhou, Can
2017-05-01
The defogged image obtained through the dark channel prior algorithm has some shortcomings, such as color distortion, dim light, and loss of detail near the observer. The main reasons are that the atmospheric light is estimated as a single value and its change with scene depth is not considered. We therefore model the atmospheric light, one parameter of the defogging model. Firstly, we discretize the atmospheric light into equivalent points and build a discrete model of the light. Secondly, we build several rough candidate models by analyzing the relationship between the atmospheric light and the medium transmission. Finally, by analyzing the results of many experiments qualitatively and quantitatively, we obtain the selected and optimized model. Although using this method causes the runtime to increase slightly, the evaluation metrics, histogram correlation coefficient and peak signal-to-noise ratio, are improved significantly, and the defogging result conforms better to human visual perception. The color and the details near the observer in the defogged image are also better than those achieved by the original method.
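For context, a minimal Python sketch of the classic dark channel computation and the constant atmospheric-light estimate that this paper refines into a depth-dependent model; function names and parameter values are illustrative.

```python
import numpy as np
from scipy import ndimage

def dark_channel(img, patch=15):
    """Dark channel of an RGB image in [0, 1]: per-pixel minimum over the
    three channels, then a minimum filter over a local patch."""
    return ndimage.minimum_filter(img.min(axis=2), size=patch)

def estimate_atmospheric_light(img, dc, top_frac=0.001):
    """Classic single-value estimate: mean color of the brightest pixels in
    the dark channel. The paper replaces this constant with a model that
    varies with scene depth."""
    n = max(1, int(dc.size * top_frac))
    idx = np.argpartition(dc.ravel(), -n)[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)
```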
Liao, Sheng-hui; Zhu, Xing-hao; Xie, Jing; Sohodeb, Vikesh Kumar; Ding, Xi
2016-01-01
The objective of this investigation is to analyze the influence of trabecular microstructure modeling on the biomechanical distribution of the implant-bone interface. Two three-dimensional finite element mandible models, one with trabecular microstructure (a refined model) and one with macrostructure (a simplified model), were built. The values of equivalent stress at the implant-bone interface in the refined model increased compared with those of the simplified model, whereas the values of strain decreased. The distributions of stress and strain were more uniform in the refined model of trabecular microstructure, in which stress and strain were mainly concentrated in trabecular bone. It was concluded that simulation of trabecular bone microstructure had a significant effect on the distribution of stress and strain at the implant-bone interface. These results suggest that trabecular structures could disperse stress and strain and serve as load buffers. PMID:27403424
Hydrogen ADPs with Cu Kα data? Invariom and Hirshfeld atom modelling of fluconazole.
Orben, Claudia M; Dittrich, Birger
2014-06-01
For the structure of fluconazole [systematic name: 2-(2,4-difluorophenyl)-1,3-bis(1H-1,2,4-triazol-1-yl)propan-2-ol] monohydrate, C13H12F2N6O·H2O, a case study on different model refinements is reported, based on single-crystal X-ray diffraction data measured at 100 K with Cu Kα radiation to a resolution of sin θ/λ of 0.6 Å⁻¹. The structure, anisotropic displacement parameters (ADPs) and figures of merit from the independent atom model are compared to `invariom' and `Hirshfeld atom' refinements. Changing from a spherical to an aspherical atom model lowers the figures of merit and improves both the accuracy and the precision of the geometrical parameters. Differences between results from the two aspherical-atom refinements are small. However, a refinement of ADPs for H atoms is only possible with the Hirshfeld atom density model. It gives meaningful results even at a resolution of 0.6 Å⁻¹, but requires good low-order data.
ERIC Educational Resources Information Center
Lee, Connie W.; Hinson, Tony M.
This publication is the final report of a 21-month project designed to (1) expand and refine the computer capabilities of the Vocational-Technical Education Consortium of States (V-TECS) to ensure rapid data access for generating routine and special occupational data-based reports; (2) develop and implement a computer storage and retrieval system…
Variable strategy model of the human operator
NASA Astrophysics Data System (ADS)
Phillips, John Michael
Human operators often employ discontinuous or "bang-bang" control strategies when performing large-amplitude acquisition tasks. The current study applies Variable Structure Control (VSC) techniques to model human operator behavior during acquisition tasks. The result is a coupled, multi-input model replicating the discontinuous control strategy. In the VSC formulation, a switching surface is the mathematical representation of the operator's control strategy. The performance of the Variable Strategy Model (VSM) is evaluated by considering several examples, including the longitudinal control of an aircraft during the visual landing task. The aircraft landing task becomes an acquisition maneuver whenever large initial offsets occur. Several different strategies are explored in the VSM formulation for the aircraft landing task. First, a switching surface is constructed from literal interpretations of pilot training literature. This approach yields a mathematical representation of how a pilot is trained to fly a generic aircraft. This switching surface is shown to bound the trajectory response of a group of pilots performing an offset landing task in an aircraft simulator study. Next, front-side and back-side landing strategies are compared. A back-side landing strategy is found to be capable of landing an aircraft flying on either the front side or back side of the power curve. However, the front-side landing strategy is found to be insufficient for landing an aircraft flying on the back side. Finally, a more refined landing strategy is developed that takes into account the specific aircraft's dynamic characteristics. The refined strategy is translated back into terminology similar to the existing pilot training literature.
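As a minimal sketch of the variable structure control idea underlying the VSM, the example below applies a discontinuous control law defined by a switching surface to a double integrator; the gains, surface and plant are illustrative assumptions, not the aircraft model used in the study.

```python
# Bang-bang control via a switching surface s(x, v) = v + lam*x: the control
# is always at full deflection, with sign chosen to drive the state onto s = 0.
import numpy as np

def vsc_control(x, v, u_max=1.0, lam=0.8):
    """Discontinuous control law: sign of the switching surface picks the
    direction of full deflection (the 'bang-bang' strategy)."""
    s = v + lam * x
    return -u_max * np.sign(s)

# simulate a large-amplitude acquisition maneuver on a double integrator
x, v, dt = 10.0, 0.0, 0.01
for _ in range(5000):
    u = vsc_control(x, v)
    v += u * dt
    x += v * dt
print(f"final state: x={x:.3f}, v={v:.3f}")   # near the origin, with chatter
```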
Harte, P.T.; Mack, Thomas J.
1992-01-01
Hydrogeologic data collected since 1990 were assessed and a ground-water-flow model was refined in this study of the Milford-Souhegan glacial-drift aquifer in Milford, New Hampshire. The hydrogeologic data collected were used to refine estimates of hydraulic conductivity and saturated thickness of the aquifer, which were previously calculated during 1988-90. In October 1990, water levels were measured at 124 wells and piezometers, and at 45 stream-seepage sites on the main stem of the Souhegan River and on small tributary streams overlying the aquifer, to improve understanding of ground-water-flow patterns and stream-seepage gains and losses. Refinement of the ground-water-flow model included a reduction in the number of active cells in layer 2 in the central part of the aquifer, a revision of simulated hydraulic conductivity in the model layers representing the aquifer, incorporation of a new block-centered finite-difference ground-water-flow model, and incorporation of a new solution algorithm and solver (a preconditioned conjugate-gradient algorithm). Refinements to the model resulted in decreases in the difference between calculated and measured heads at 22 wells. The distribution of gains and losses of stream seepage calculated with the refined model is similar to that calculated in the previous model simulation. The contributing area to the Savage well, under average pumping conditions, decreased by 0.021 square miles from the area calculated in the previous model simulation. The small difference in the contributing recharge area indicates that the additional data did not substantially change the model simulation and that the conceptual framework for the previous model is accurate.
NASA Astrophysics Data System (ADS)
Rout, Bapin Kumar; Brooks, Geoff; Rhamdhani, M. Akbar; Li, Zushu; Schrama, Frank N. H.; Sun, Jianjun
2018-04-01
A multi-zone kinetic model coupled with a dynamic slag generation model was developed for the simulation of hot metal and slag composition during the basic oxygen furnace (BOF) operation. Three reaction zones, (i) the jet impact zone, (ii) the slag-bulk metal zone and (iii) the slag-metal-gas emulsion zone, were considered for the calculation of the overall refining kinetics. In the rate equations, the transient rate parameters were mathematically described as a function of process variables. A micro- and macroscopic rate calculation methodology (micro-kinetics and macro-kinetics) was developed to estimate the total refining contributed by the recirculating metal droplets through the slag-metal emulsion zone. The micro-kinetics involves developing the rate equation for individual droplets in the emulsion. The mathematical models for the size distribution of initial droplets, the kinetics of simultaneous refining of elements, the residence time in the emulsion, and the dynamic interfacial area change were established in the micro-kinetic model. In the macro-kinetics calculation, a droplet generation model was employed and the total amount of refining by the emulsion was calculated by summing the refining from the entire population of returning droplets. A dynamic FetO generation model based on an oxygen mass balance was developed and coupled with the multi-zone kinetic model. The effect of post-combustion on the evolution of slag and metal composition was investigated. The model was applied to a 200-ton top blowing converter, and the simulated metal and slag compositions were found to be in good agreement with the measured data. The post-combustion ratio was found to be an important factor in controlling the FetO content in the slag and the kinetics of Mn and P in a BOF process.
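A minimal sketch of the micro/macro split described above, assuming first-order refining of a solute in each droplet and a synthetic droplet population; the rate constant and distributions are illustrative, not the paper's fitted parameters.

```python
# Micro-kinetics: first-order refining inside one droplet over its residence
# time. Macro-kinetics: sum the removal over a generated droplet population.
import numpy as np

def droplet_removed_mass(c0, mass, k, residence_time):
    """Solute mass removed from one droplet, assuming dC/dt = -k*C during
    its flight through the slag-metal emulsion."""
    return mass * c0 * (1.0 - np.exp(-k * residence_time))

rng = np.random.default_rng(0)
n = 10_000
masses = rng.lognormal(mean=-2.0, sigma=0.8, size=n)   # droplet masses, kg
times = rng.uniform(0.5, 30.0, size=n)                 # residence times, s
total = sum(droplet_removed_mass(0.04, m, k=0.15, residence_time=t)
            for m, t in zip(masses, times))
print(f"solute removed by the droplet population: {total:.2f} kg")
```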
Borbulevych, Oleg Y; Plumley, Joshua A; Martin, Roger I; Merz, Kenneth M; Westerhoff, Lance M
2014-05-01
Macromolecular crystallographic refinement relies on sometimes dubious stereochemical restraints and rudimentary energy functionals to ensure the correct geometry of the model of the macromolecule and any covalently bound ligand(s). The ligand stereochemical restraint file (CIF) requires a priori understanding of the ligand geometry within the active site, and creation of the CIF is often an error-prone process owing to the great variety of potential ligand chemistry and structure. Stereochemical restraints have been replaced with more robust functionals through the integration of the linear-scaling, semiempirical quantum-mechanics (SE-QM) program DivCon with the PHENIX X-ray refinement engine. The PHENIX/DivCon package has been thoroughly validated on a population of 50 protein-ligand Protein Data Bank (PDB) structures with a range of resolutions and chemistry. The PDB structures used for the validation were originally refined utilizing various refinement packages and were published within the past five years. PHENIX/DivCon does not utilize CIF(s), link restraints and other parameters for refinement and hence it does not make as many a priori assumptions about the model. Across the entire population, the method results in reasonable ligand geometries and low ligand strains, even when the original refinement exhibited difficulties, indicating that PHENIX/DivCon is applicable to both single-structure and high-throughput crystallography.
Ahlfeld, David P.; Baker, Kristine M.; Barlow, Paul M.
2009-01-01
This report describes the Groundwater-Management (GWM) Process for MODFLOW-2005, the 2005 version of the U.S. Geological Survey modular three-dimensional groundwater model. GWM can solve a broad range of groundwater-management problems by combined use of simulation- and optimization-modeling techniques. These problems include limiting groundwater-level declines or streamflow depletions, managing groundwater withdrawals, and conjunctively using groundwater and surface-water resources. GWM was initially released for the 2000 version of MODFLOW. Several modifications and enhancements have been made to GWM since its initial release to increase the scope of the program's capabilities and to improve its operation and reporting of results. The new code, which is called GWM-2005, also was designed to support the local grid refinement capability of MODFLOW-2005. Local grid refinement allows for the simulation of one or more higher resolution local grids (referred to as child models) within a coarser grid parent model. Local grid refinement is often needed to improve simulation accuracy in regions where hydraulic gradients change substantially over short distances or in areas requiring detailed representation of aquifer heterogeneity. GWM-2005 can be used to formulate and solve groundwater-management problems that include components in both parent and child models. Although local grid refinement increases simulation accuracy, it can also substantially increase simulation run times.
Yes, one can obtain better quality structures from routine X-ray data collection.
Sanjuan-Szklarz, W Fabiola; Hoser, Anna A; Gutmann, Matthias; Madsen, Anders Østergaard; Woźniak, Krzysztof
2016-01-01
Single-crystal X-ray diffraction structural results for benzidine dihydrochloride, hydrated and protonated N,N,N,N-peri(dimethylamino)naphthalene chloride, triptycene, dichlorodimethyltriptycene and decamethylferrocene have been analysed. A critical discussion of the dependence of structural and thermal parameters on resolution for these compounds is presented. Results of refinements against X-ray data, cut off to different resolutions from the high-resolution data files, are compared to structural models derived from neutron diffraction experiments. The Independent Atom Model (IAM) and the Transferable Aspherical Atom Model (TAAM) are tested. The average differences between the X-ray and neutron structural parameters (with the exception of valence angles defined by H atoms) decrease with the increasing 2θmax angle. The scale of differences between X-ray and neutron geometrical parameters can be significantly reduced when data are collected to higher, than commonly used, 2θmax diffraction angles (for Mo Kα 2θmax > 65°). The final structural and thermal parameters obtained for the studied compounds using TAAM refinement are in better agreement with the neutron values than the IAM results for all resolutions and all compounds. By using TAAM, it is still possible to obtain accurate results even from low-resolution X-ray data. This is particularly important as TAAM is easy to apply and can routinely be used to improve the quality of structural investigations [Dominiak (2015). LSDB from UBDB. University of Buffalo, USA]. We can recommend that, in order to obtain more adequate (more accurate and precise) structural and displacement parameters during IAM model refinement, data should be collected up to larger diffraction angles, at least, for Mo Kα radiation, to 2θmax = 65° (sin θmax/λ < 0.75 Å⁻¹). The TAAM approach is a very good option to obtain more adequate results even using data collected to lower 2θmax angles. Also the results of translation-libration-screw (TLS) analysis and vibrational entropy values are more reliable for 2θmax > 65°.
Adaptive Mesh Refinement for Microelectronic Device Design
NASA Technical Reports Server (NTRS)
Cwik, Tom; Lou, John; Norton, Charles
1999-01-01
Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchal elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smoothen and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of the spectrum as well as active device simulations that model charge transport and Maxwell's equations will be presented.
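A minimal sketch of the error-driven h-refinement loop described above, with a placeholder one-dimensional "solution" containing a sharp layer; the error indicator and marking rule are illustrative stand-ins for a posteriori estimates in a real finite element code.

```python
# h-refinement driven by an a posteriori error indicator: estimate a per-
# element error, mark the worst elements, split them, and repeat.
import numpy as np

def solve(nodes):
    """Placeholder field with a sharp layer near x = 0.2, standing in for a
    rapid variation near a thin doped layer; a real code would run FEM here."""
    return np.tanh((nodes - 0.2) / 0.01)

nodes = np.linspace(0.0, 1.0, 11)
for sweep in range(8):
    u = solve(nodes)
    err = np.abs(np.diff(u))              # per-element error indicator
    if err.max() < 1e-2:
        break
    refine = err > 0.5 * err.max()        # mark the worst elements
    mids = 0.5 * (nodes[:-1] + nodes[1:])[refine]
    nodes = np.sort(np.concatenate([nodes, mids]))  # split marked elements
print(f"{nodes.size - 1} elements after {sweep + 1} sweeps")
```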
The EGM2008 Global Gravitational Model
NASA Astrophysics Data System (ADS)
Pavlis, N. K.; Holmes, S. A.; Kenyon, S. C.; Factor, J. K.
2008-12-01
The development of a new Earth Gravitational Model (EGM) to degree 2160 has been completed. This model, designated EGM2008, is the product of the final re-iteration of our modelling and estimation approach. Our multi-year effort has produced several Preliminary Gravitational Models (PGM) of increasingly improved performance. One of these models (PGM2007A) was provided for evaluation to an independent Evaluation Working Group, sponsored by the International Association of Geodesy (IAG). In an effort to address certain shortcomings of PGM2007A, we have considered the feedback that we received from this Working Group. As part of this effort, EGM2008 incorporates an improved version of our 5'x5' global gravity anomaly database and has benefited from the latest GRACE-based satellite-only solutions (e.g., ITG-GRACE03S). EGM2008 incorporates an improved ocean-wide set of altimetry-derived gravity anomalies that were estimated using PGM2007B (a variant of PGM2007A) and its associated Dynamic Ocean Topography (DOT) model as reference models in a "Remove-Compute-Restore" fashion. For the Least Squares Collocation estimation of our final global 5'x5' area-mean gravity anomaly database, we have consistently used PGM2007B as our reference model to degree 2160. We have developed and used a formulation that predicts area-mean gravity anomalies that are effectively band-limited to degree 2160, thereby minimizing aliasing effects during the harmonic analysis process. We have also placed special emphasis on the refinement and "calibration" of the error estimates that accompany our final combination solution EGM2008. We present the main aspects of the model's development and evaluation. This evaluation was accomplished primarily through the comparison of various model-derived quantities with independent data and models (e.g., geoid undulations derived from GPS positioning and spirit levelling, astronomical deflections of the vertical, etc.). We will also present comparisons of our model-implied Dynamic Ocean Topography with other contemporary estimates (e.g., from ECCO).
NASA Astrophysics Data System (ADS)
Rabin, Sam S.; Ward, Daniel S.; Malyshev, Sergey L.; Magi, Brian I.; Shevliakova, Elena; Pacala, Stephen W.
2018-03-01
This study describes and evaluates the Fire Including Natural & Agricultural Lands model (FINAL) which, for the first time, explicitly simulates cropland and pasture management fires separately from non-agricultural fires. The non-agricultural fire module uses empirical relationships to simulate burned area in a quasi-mechanistic framework, similar to past fire modeling efforts, but with a novel optimization method that improves the fidelity of simulated fire patterns to new observational estimates of non-agricultural burning. The agricultural fire components are forced with estimates of cropland and pasture fire seasonality and frequency derived from observational land cover and satellite fire datasets. FINAL accurately simulates the amount, distribution, and seasonal timing of burned cropland and pasture over 2001-2009 (global totals: 0.434×106 and 2.02×106 km2 yr-1 modeled, 0.454×106 and 2.04×106 km2 yr-1 observed), but carbon emissions for cropland and pasture fire are overestimated (global totals: 0.295 and 0.706 PgC yr-1 modeled, 0.194 and 0.538 PgC yr-1 observed). The non-agricultural fire module underestimates global burned area (1.91×106 km2 yr-1 modeled, 2.44×106 km2 yr-1 observed) and carbon emissions (1.14 PgC yr-1 modeled, 1.84 PgC yr-1 observed). The spatial pattern of total burned area and carbon emissions is generally well reproduced across much of sub-Saharan Africa, Brazil, Central Asia, and Australia, whereas the boreal zone sees underestimates. FINAL represents an important step in the development of global fire models, and offers a strategy for fire models to consider human-driven fire regimes on cultivated lands. At the regional scale, simulations would benefit from refinements in the parameterizations and improved optimization datasets. We include an in-depth discussion of the lessons learned from using the Levenberg-Marquardt algorithm in an interactive optimization for a dynamic global vegetation model.
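As a minimal sketch of the Levenberg-Marquardt parameter optimization discussed above, the example below fits a two-parameter burned-area model to synthetic observations with SciPy's implementation; the model form and data are illustrative, not FINAL's actual parameterization.

```python
# Levenberg-Marquardt fit of model parameters to an observational target,
# as used to optimize the non-agricultural fire module against burned area.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
dryness = rng.uniform(0.0, 1.0, 200)                       # proxy climate driver
observed = 1.8 * dryness**2 + rng.normal(0.0, 0.05, 200)   # target burned area

def residuals(theta):
    scale, power = theta
    return scale * dryness**power - observed   # model minus observations

fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")
print("fitted (scale, power):", fit.x)         # close to (1.8, 2.0)
```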
TLS from fundamentals to practice
Urzhumtsev, Alexandre; Afonine, Pavel V.; Adams, Paul D.
2014-01-01
The Translation-Libration-Screw-rotation (TLS) model of rigid-body harmonic displacements, introduced into crystallography by Schomaker & Trueblood (1968), is now a routine tool in macromolecular studies and is a feature of most modern crystallographic structure refinement packages. In this review we consider a number of simple examples that illustrate important features of the TLS model. Based on these examples, simplified formulae are given for several special cases that may occur in structure modeling and refinement. The derivation of the general TLS formulae from basic principles is also provided. This manuscript describes the principles of TLS modeling, as well as selected algorithmic details for practical application. An extensive list of references to applications of TLS in macromolecular crystallographic refinement is provided. PMID:25249713
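A minimal sketch of evaluating the anisotropic displacement tensor implied by a TLS model at a given atom position, under one common sign convention; the T, L and S matrices below are illustrative values, not taken from the review.

```python
# ADPs from a TLS model: U = T + B L B^T + B S + (B S)^T, where B is the
# skew-symmetric matrix built from the atom position relative to the origin.
import numpy as np

def adp_from_tls(T, L, S, r):
    """Anisotropic displacement tensor at position r (one sign convention)."""
    x, y, z = r
    B = np.array([[0.0,   z,  -y],
                  [ -z, 0.0,   x],
                  [  y,  -x, 0.0]])
    return T + B @ L @ B.T + B @ S + (B @ S).T

T = 0.02 * np.eye(3)                   # translation tensor, A^2
L = np.deg2rad(2.0) ** 2 * np.eye(3)   # libration tensor, rad^2
S = np.zeros((3, 3))                   # screw tensor (no correlation here)
print(adp_from_tls(T, L, S, r=np.array([5.0, 0.0, 0.0])))
```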
Building Excellence in Project Execution: Integrated Project Management
2015-04-30
Systems Center Pacific (SSC Pacific) is addressing this challenge by adopting and refining the CMMI Model and building the tenets of integrated project management (IPM) into project planning and execution… successfully managing stakeholder expectations and meeting requirements. Under the Capability Maturity Model Integration (CMMI), IPM is defined as…
ERIC Educational Resources Information Center
Lee, Chia-Jung; Kim, ChanMin
2014-01-01
This study presents a refined technological pedagogical content knowledge (also known as TPACK) based instructional design model, which was revised using findings from the implementation study of a prior model. The refined model was applied in a technology integration course with 38 preservice teachers. A case study approach was used in this…
NASA Astrophysics Data System (ADS)
Reyes López, Yaidel; Roose, Dirk; Recarey Morfa, Carlos
2013-05-01
In this paper, we present a dynamic refinement algorithm for the smoothed particle hydrodynamics (SPH) method. An SPH particle is refined by replacing it with smaller daughter particles, whose positions are calculated using a square pattern centered at the position of the refined particle. We determine both the optimal separation and the smoothing distance of the new particles such that the error produced by the refinement in the gradient of the kernel is small and possible numerical instabilities are reduced. We implemented the dynamic refinement procedure in two different models: one for free-surface flows, and one for post-failure flow of non-cohesive soil. The results obtained for the test problems indicate that the dynamic refinement procedure provides a good trade-off between the accuracy and the cost of the simulations.
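A minimal sketch of the square-pattern splitting described above: one particle is replaced by four daughters with reduced mass and smoothing length. The separation and smoothing-length ratios (eps, alpha) are illustrative; the paper determines them by bounding the kernel-gradient error.

```python
# Dynamic SPH refinement: replace a parent particle by 4 daughters on a
# square centered at the parent, conserving the total mass exactly.
import numpy as np

def refine_particle(pos, mass, h, eps=0.4, alpha=0.9):
    """Return daughter positions, masses and smoothing lengths. eps sets the
    daughter separation relative to h, alpha the new smoothing length."""
    offsets = 0.5 * eps * h * np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
    return pos + offsets, np.full(4, mass / 4.0), np.full(4, alpha * h)

d_pos, d_mass, d_h = refine_particle(np.array([0.0, 0.0]), mass=1.0, h=0.1)
print(d_pos)
print("total daughter mass:", d_mass.sum())   # mass is conserved exactly
```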
Re-refinement from deposited X-ray data can deliver improved models for most PDB entries.
Joosten, Robbie P; Womack, Thomas; Vriend, Gert; Bricogne, Gérard
2009-02-01
The deposition of X-ray data along with the customary structural models defining PDB entries makes it possible to apply large-scale re-refinement protocols to these entries, thus giving users the benefit of improvements in X-ray methods that have occurred since the structure was deposited. Automated gradient refinement is an effective method to achieve this goal, but real-space intervention is most often required in order to adequately address problems detected by structure-validation software. In order to improve the existing protocol, automated re-refinement was combined with structure validation and difference-density peak analysis to produce a catalogue of problems in PDB entries that are amenable to automatic correction. It is shown that re-refinement can be effective in producing improvements, which are often associated with the systematic use of the TLS parameterization of B factors, even for relatively new and high-resolution PDB entries, while the accompanying manual or semi-manual map analysis and fitting steps show good prospects for eventual automation. It is proposed that the potential for simultaneous improvements in methods and in re-refinement results be further encouraged by broadening the scope of depositions to include refinement metadata and ultimately primary rather than reduced X-ray data.
A refined methodology for modeling volume quantification performance in CT
NASA Astrophysics Data System (ADS)
Chen, Baiyu; Wilson, Joshua; Samei, Ehsan
2014-03-01
The utility of the CT lung nodule volume quantification technique depends on the precision of the quantification. To enable the evaluation of quantification precision, we previously developed a mathematical model that related precision to image resolution and noise properties in uniform backgrounds in terms of an estimability index (e'). The e' was shown to predict empirical precision across 54 imaging and reconstruction protocols, but with different correlation qualities for FBP and iterative reconstruction (IR) due to the non-linearity of IR as impacted by anatomical structure. To better account for the non-linearity of IR, this study aimed to refine the noise characterization of the model in the presence of textured backgrounds. Repeated scans of an anthropomorphic lung phantom were acquired. Subtracted images were used to measure the image quantum noise, which was then used to adjust the noise component of the e' calculation measured from a uniform region. In addition to the model refinement, the validation of the model was further extended to 2 nodule sizes (5 and 10 mm) and 2 segmentation algorithms. Results showed that the magnitude of IR's quantum noise was significantly higher in structured backgrounds than in uniform backgrounds (ASiR, 30-50%; MBIR, 100-200%). With the refined model, the correlation between e' values and empirical precision no longer depended on the reconstruction algorithm. In conclusion, the model with refined noise characterization reflected the non-linearity of iterative reconstruction in structured backgrounds, and further showed successful prediction of quantification precision across a variety of nodule sizes, dose levels, slice thicknesses, reconstruction algorithms, and segmentation software.
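A minimal sketch of the noise measurement used in the refinement above: subtracting two repeated scans cancels the deterministic background, and the per-image quantum noise follows from the difference image divided by sqrt(2); the arrays are synthetic stand-ins for CT slices.

```python
# Quantum noise from repeated scans: the fixed anatomy cancels in the
# difference image while the independent noise terms add in quadrature.
import numpy as np

rng = np.random.default_rng(2)
anatomy = rng.uniform(-100.0, 100.0, (256, 256))   # fixed textured background
scan1 = anatomy + rng.normal(0.0, 12.0, anatomy.shape)
scan2 = anatomy + rng.normal(0.0, 12.0, anatomy.shape)

diff = scan1 - scan2               # anatomy cancels; noise adds in quadrature
sigma = diff.std() / np.sqrt(2.0)  # per-image quantum noise estimate
print(f"estimated quantum noise: {sigma:.1f} HU (true value 12.0)")
```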
Increasing the Cryogenic Toughness of Steels
NASA Technical Reports Server (NTRS)
Rush, H. F.
1986-01-01
Grain-refining heat treatments increase toughness without substantial strength loss. Five alloys were selected for study, all at or near the technological limit. Results clearly showed that the grain sizes of these alloys were refined by such heat treatments and that grain refinement results in a large improvement in toughness without substantial loss in strength. The best improvements were seen in HP-9-4-20 steel, at the low-strength end of the technological limit, and in Maraging 200, at the high-strength end. These alloys, in the grain-refined condition, are considered for model applications in high-Reynolds-number cryogenic wind tunnels.
Customizing G Protein-coupled receptor models for structure-based virtual screening.
de Graaf, Chris; Rognan, Didier
2009-01-01
This review will focus on the construction, refinement, and validation of G Protein-coupled receptor models for the purpose of structure-based virtual screening. Practical tips and tricks derived from concrete modeling and virtual screening exercises to overcome the problems and pitfalls associated with the different steps of the receptor modeling workflow will be presented. These examples will include not only rhodopsin-like (class A), but also secretin-like (class B), and glutamate-like (class C) receptors. In addition, the review will present a careful comparative analysis of current crystal structures and their implications for homology modeling. The following themes will be discussed: i) the use of experimental anchors in guiding the modeling procedure; ii) amino acid sequence alignments; iii) ligand binding mode accommodation and binding cavity expansion; iv) proline-induced kinks in transmembrane helices; v) binding mode prediction and virtual screening by receptor-ligand interaction fingerprint scoring; vi) extracellular loop modeling; vii) virtual filtering schemes. Finally, an overview of several successful structure-based screening studies shows that receptor models, despite structural inaccuracies, can be efficiently used to find novel ligands.
Nonrelativistic Yang-Mills theory for a naturally light Higgs boson
NASA Astrophysics Data System (ADS)
Berthier, Laure; Grosvenor, Kevin T.; Yan, Ziqi
2017-11-01
We continue the study of the nonrelativistic short-distance completions of a naturally light Higgs, focusing on the interplay between the gauge symmetries and the polynomial shift symmetries. We investigate the naturalness of nonrelativistic scalar quantum electrodynamics with a dynamical critical exponent z = 3 by computing leading power law divergences to the scalar propagator in this theory. We find that power law divergences exhibit a more refined structure in theories that lack boost symmetries. Finally, in this toy model, we show that it is possible to preserve a fairly large hierarchy between the scalar mass and the high-energy naturalness scale across 7 orders of magnitude, while accommodating a gauge coupling of order 0.1.
EXPOSURES AND INTERNAL DOSES OF ...
The National Center for Environmental Assessment (NCEA) has released a final report that presents and applies a method to estimate distributions of internal concentrations of trihalomethanes (THMs) in humans resulting from a residential drinking water exposure. The report presents simulations of oral, dermal and inhalation exposures and demonstrates the feasibility of linking the US EPA's Information Collection Rule database with other databases on external exposure factors and physiologically based pharmacokinetic modeling to refine population-based estimates of exposure. Review Draft - by 2010, develop scientifically sound data and approaches to assess and manage risks to human health posed by exposure to specific regulated waterborne pathogens and chemicals, including those addressed by the Arsenic, M/DBP and Six-Year Review Rules.
A new instrument to measure pre-service primary teachers' attitudes to teaching mathematics
NASA Astrophysics Data System (ADS)
Nisbet, Steven
1991-06-01
This article outlines the development of an instrument to measure pre-service primary teachers' attitudes to teaching mathematics. A trial questionnaire was devised using the set of Fennema-Sherman scales on students' attitudes to the subject mathematics as a model. Analysis of the responses to the questionnaire by 155 student teachers was carried out to develop meaningful attitude scales and to refine the instrument. The end-product is a new instrument which can be used to monitor the attitudes of student teachers. The attitude scales identified in the analysis and built into the final form of the questionnaire are (i) anxiety, (ii) confidence and enjoyment, (iii) desire for recognition and (iv) pressure to conform.
Diagnosis and Management of Fetal Growth Restriction
Bamfo, Jacqueline E. A. K.; Odibo, Anthony O.
2011-01-01
Fetal growth restriction (FGR) remains a leading contributor to perinatal mortality and morbidity and metabolic syndrome in later life. Recent advances in ultrasound and Doppler have elucidated several mechanisms in the evolution of the disease. However, consistent classification and characterization regarding the severity of FGR is lacking. There is no cure, and management is reliant on a structured antenatal surveillance program with timely intervention. Hitherto, the time to deliver is an enigma. In this paper, the challenges in the diagnosis and management of FGR are discussed. The biophysical profile, Doppler, biochemical and molecular technologies that may refine management are reviewed. Finally, a model pathway for the clinical management of pregnancies complicated by FGR is presented. PMID:21547092
Experimental and analytical research on the aerodynamics of wind driven turbines. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohrbach, C.; Wainauski, H.; Worobel, R.
1977-12-01
This aerodynamic research program was aimed at providing a reliable, comprehensive data base on a series of wind turbine models covering a broad range of the prime aerodynamic and geometric variables. Such data obtained under controlled laboratory conditions on turbines designed by the same method, of the same size, and tested in the same wind tunnel had not been available in the literature. Moreover, this research program was further aimed at providing a basis for evaluating the adequacy of existing wind turbine aerodynamic design and performance methodology, for assessing the potential of recent advanced theories and for providing a basis for further method development and refinement.
3Drefine: an interactive web server for efficient protein structure refinement.
Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin
2016-07-08
3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of the hydrogen-bonding network combined with atomic-level energy minimization on the optimized model, using composite physics- and knowledge-based force fields, for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets, and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through text or file input submission, e-mail notification and a provided example submission, and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet, which is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/.
Acute, subchronic, and developmental toxicological properties of lubricating oil base stocks.
Dalbey, Walden E; McKee, Richard H; Goyak, Katy Olsavsky; Biles, Robert W; Murray, Jay; White, Russell
2014-01-01
Lubricating oil base stocks (LOBs) are substances used in the manufacture of finished lubricants and greases. They are produced from residue remaining after atmospheric distillation of crude oil that is subsequently fractionated by vacuum distillation and additional refining steps. Initial LOB streams that have been produced by vacuum distillation but not further refined may contain polycyclic aromatic compounds (PACs) and may present carcinogenic hazards. In modern refineries, LOBs are further refined by multistep processes including solvent extraction and/or hydrogen treatment to reduce the levels of PACs and other undesirable constituents. Thus, mildly (insufficiently) refined LOBs are potentially more hazardous than more severely (sufficiently) refined LOBs. This article discusses the evaluation of LOBs using statistical models based on content of PACs; these models indicate that insufficiently refined LOBs (potentially carcinogenic LOBs) can also produce systemic and developmental effects with repeated dermal exposure. Experimental data were also obtained in ten 13-week dermal studies in rats, eight 4-week dermal studies in rabbits, and seven dermal developmental toxicity studies with sufficiently refined LOBs (noncarcinogenic and commonly marketed) in which no observed adverse effect levels for systemic toxicity and developmental toxicity were 1000 to 2000 mg/kg/d with dermal exposures, typically the highest dose tested. Results in both oral and inhalation developmental toxicity studies were similar. This absence of toxicologically relevant findings was consistent with lower PAC content of sufficiently refined LOBs. Based on data on reproductive organs with repeated dosing and parameters in developmental toxicity studies, sufficiently refined LOBs are likely to have little, if any, effect on reproductive parameters.
First-order system least squares and the energetic variational approach for two-phase flow
NASA Astrophysics Data System (ADS)
Adler, J. H.; Brannick, J.; Liu, C.; Manteuffel, T.; Zikatanov, L.
2011-07-01
This paper develops a first-order system least-squares (FOSLS) formulation for equations of two-phase flow. The main goal is to show that this discretization, along with numerical techniques such as nested iteration, algebraic multigrid, and adaptive local refinement, can be used to solve these types of complex fluid flow problems. In addition, from an energetic variational approach, it can be shown that an important quantity to preserve in a given simulation is the energy law. We discuss the energy law and inherent structure for two-phase flow using the Allen-Cahn interface model and indicate how it is related to other complex fluid models, such as magnetohydrodynamics. Finally, we show that, using the FOSLS framework, one can still satisfy the appropriate energy law globally while using well-known numerical techniques.
Improving virtual screening of G protein-coupled receptors via ligand-directed modeling
Simms, John; Christopoulos, Arthur; Wootten, Denise
2017-01-01
G protein-coupled receptors (GPCRs) play crucial roles in cell physiology and pathophysiology. There is increasing interest in using structural information for virtual screening (VS) of libraries and for structure-based drug design to identify novel agonist or antagonist leads. However, the sparse availability of experimentally determined GPCR/ligand complex structures with diverse ligands impedes the application of structure-based drug design (SBDD) programs directed to identifying new molecules with a select pharmacology. In this study, we apply ligand-directed modeling (LDM) to available GPCR X-ray structures to improve VS performance and selectivity towards molecules of a specific pharmacological profile. The described method refines a GPCR binding pocket conformation using a single known ligand for that GPCR. The LDM method is a computationally efficient, iterative workflow consisting of protein sampling and ligand docking. We developed an extensive benchmark comparing LDM-refined binding pockets to GPCR X-ray crystal structures across seven different GPCRs bound to a range of ligands of different chemotypes and pharmacological profiles. LDM-refined models showed improvement in VS performance over the original X-ray crystal structures in 21 out of 24 cases. In all cases, the LDM-refined models had superior performance in enriching for the chemotype of the refinement ligand. This likely contributes to the LDM success in all cases of inhibitor-bound to agonist-bound binding pocket refinement, a key task for GPCR SBDD programs. Indeed, agonist ligands are required for a plethora of GPCRs for therapeutic intervention, whereas GPCR X-ray structures are mostly restricted to their inactive, inhibitor-bound state. PMID:29131821
PDB_REDO: automated re-refinement of X-ray structure models in the PDB.
Joosten, Robbie P; Salzemann, Jean; Bloch, Vincent; Stockinger, Heinz; Berglund, Ann-Charlott; Blanchet, Christophe; Bongcam-Rudloff, Erik; Combet, Christophe; Da Costa, Ana L; Deleage, Gilbert; Diarena, Matteo; Fabbretti, Roberto; Fettahi, Géraldine; Flegel, Volker; Gisel, Andreas; Kasam, Vinod; Kervinen, Timo; Korpelainen, Eija; Mattila, Kimmo; Pagni, Marco; Reichstadt, Matthieu; Breton, Vincent; Tickle, Ian J; Vriend, Gert
2009-06-01
Structural biology, homology modelling and rational drug design require accurate three-dimensional macromolecular coordinates. However, the coordinates in the Protein Data Bank (PDB) have not all been obtained using the latest experimental and computational methods. In this study a method is presented for automated re-refinement of existing structure models in the PDB. A large-scale benchmark with 16 807 PDB entries showed that they can be improved in terms of fit to the deposited experimental X-ray data as well as in terms of geometric quality. The re-refinement protocol uses TLS models to describe concerted atom movement. The resulting structure models are made available through the PDB_REDO databank (http://www.cmbi.ru.nl/pdb_redo/). Grid computing techniques were used to overcome the computational requirements of this endeavour.
Structural Health Monitoring of Large Structures
NASA Technical Reports Server (NTRS)
Kim, Hyoung M.; Bartkowicz, Theodore J.; Smith, Suzanne Weaver; Zimmerman, David C.
1994-01-01
This paper describes a damage detection and health monitoring method that was developed for large space structures using on-orbit modal identification. After evaluating several existing model refinement and model reduction/expansion techniques, a new approach was developed to identify the location and extent of structural damage with a limited number of measurements. A general area of structural damage is first identified and, subsequently, a specific damaged structural component is located. This approach takes advantage of two different model refinement methods (optimal-update and design sensitivity) and two different model size matching methods (model reduction and eigenvector expansion). Performance of the proposed damage detection approach was demonstrated with test data from two different laboratory truss structures. This space technology can also be applied to structural inspection of aircraft, offshore platforms, oil tankers, bridges, and buildings. In addition, its applications to model refinement will improve the design of structural systems such as automobiles and electronic packaging.
Mozumdar, Mohammad; Song, Zhen Yu; Lavagno, Luciano; Sangiovanni-Vincentelli, Alberto L.
2014-01-01
The Model Based Design (MBD) approach is a popular trend for speeding up application development of embedded systems; it uses high-level abstractions to capture functional requirements in an executable manner and automates implementation code generation. Wireless Sensor Networks (WSNs) are an emerging and very promising application area for embedded systems. However, there is a lack of tools in this area that would allow an application developer to model a WSN application using high-level abstractions, simulate it mapped to a multi-node scenario for functional analysis, and finally use the refined model to automatically generate code for different WSN platforms. Motivated by this idea, in this paper we present a hybrid simulation framework that not only follows the MBD approach for WSN application development but also interconnects a simulated sub-network with a physical sub-network and then allows one to co-simulate them, which is also known as Hardware-In-the-Loop (HIL) simulation. PMID:24960083
Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.
NASA Astrophysics Data System (ADS)
Liu, Y.; Li, Y.
2016-12-01
We present a 2D inversion algorithm for frequency-domain marine controlled-source electromagnetic (CSEM) data, based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting that biases refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver (MUMPS), we can efficiently compute the electromagnetic fields for multiple sources as well as the parametric sensitivities. We also implement the parallel data-domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out using the Gauss-Newton algorithm, and model perturbations at each iteration step are obtained using the inexact conjugate gradient method. Synthetic test inversions are presented.
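A minimal sketch of one regularized Gauss-Newton update solved with a truncated (inexact) conjugate gradient step, mirroring the inversion loop described above; the Jacobian, data and regularization are synthetic stand-ins for the finite element responses and sensitivities.

```python
# One Gauss-Newton model update: solve (J^T J + mu*I) dm = J^T (d_obs - J m)
# approximately with a few CG iterations, applying the normal matrix matrix-free.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(3)
J = rng.normal(size=(60, 40))          # sensitivities from the forward solver
m = np.zeros(40)                       # current model (e.g. log conductivity)
d_obs = J @ rng.normal(size=40)        # synthetic observed CSEM data
mu = 1.0                               # regularization trade-off parameter

def normal_op(p):
    """Apply the Gauss-Newton normal matrix (J^T J + mu*I) without forming it."""
    return J.T @ (J @ p) + mu * p

rhs = J.T @ (d_obs - J @ m)
A = LinearOperator((40, 40), matvec=normal_op)
dm, info = cg(A, rhs, maxiter=25)      # truncated (inexact) CG step
m = m + dm
print("data misfit after update:", np.linalg.norm(d_obs - J @ m))
```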
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poulin, Patrick, E-mail: patrick-poulin@videotron.ca; Ekins, Sean; Department of Pharmaceutical Sciences, School of Pharmacy, University of Maryland, 20 Penn Street, Baltimore, MD 21201
A general toxicity of basic drugs is related to phospholipidosis in tissues. Therefore, it is essential to predict the tissue distribution of basic drugs to facilitate an initial estimate of that toxicity. The objective of the present study was to further assess the original prediction method that consisted of using the binding to red blood cells measured in vitro for the unbound drug (RBCu) as a surrogate for tissue distribution, by correlating it to unbound tissue:plasma partition coefficients (Kpu) of several tissues, and finally to predict the volume of distribution at steady state (Vss) in humans under in vivo conditions. This correlation method demonstrated inaccurate predictions of Vss for particular basic drugs that did not follow the original correlation principle. Therefore, the novelty of this study is to provide clarity on the actual hypotheses to identify i) the impact of pharmacological mode of action on the generic correlation of RBCu-Kpu, ii) additional mechanisms of tissue distribution for the outlier drugs, iii) molecular features and properties that differentiate compounds as outliers in the original correlation analysis in order to facilitate its applicability domain alongside the properties already used so far, and finally iv) to present a novel and refined correlation method that is superior to what has been previously published for the prediction of human Vss of basic drugs. Applying a refined correlation method after identifying outliers would facilitate the prediction of more accurate distribution parameters as key inputs used in physiologically based pharmacokinetic (PBPK) and phospholipidosis models.
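As a minimal sketch of the tissue-composition arithmetic behind Kpu-based distribution predictions, the example below converts unbound partition coefficients to Kp via the standard relation Kp = Kpu × fu and sums over tissue volumes to estimate Vss; all numerical values are illustrative, not the study's data.

```python
# Vss from unbound tissue:plasma partition coefficients (Kpu):
# Kp = Kpu * fu by definition, and Vss = Vp + sum(Kp_t * V_t) over tissues.
fu = 0.1                                  # unbound fraction in plasma
v_plasma = 3.0                            # plasma volume, L
tissues = {                               # tissue: (volume L, Kpu) -- illustrative
    "muscle": (29.0, 40.0),
    "adipose": (18.0, 15.0),
    "liver": (1.8, 60.0),
    "lung": (0.5, 35.0),
}

vss = v_plasma + sum(vol * kpu * fu for vol, kpu in tissues.values())
print(f"predicted Vss: {vss:.1f} L")
```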
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-11
EPA is issuing a direct final rule to amend the diesel sulfur regulations to allow refiners, importers, distributors, and retailers of highway diesel fuel the option to use an alternative affirmative defense if the Agency finds highway diesel fuel samples above the specified sulfur standard at retail facilities. This alternative defense consists of a comprehensive program of quality assurance sampling and testing that would cover all participating companies that produce and/or distribute highway diesel fuel if certain other conditions are met. The sampling and testing program would be carried out by an independent surveyor. The program would be conducted pursuant to a survey plan approved by EPA that is designed to achieve the same objectives as the current regulatory quality assurance requirement. This rule also amends the gasoline benzene regulations to allow disqualified small refiners the same opportunity to generate gasoline benzene credits as that afforded to non-small refiners.
Refined Simulation of Satellite Laser Altimeter Full Echo Waveform
NASA Astrophysics Data System (ADS)
Men, H.; Xing, Y.; Li, G.; Gao, X.; Zhao, Y.; Gao, X.
2018-04-01
The return waveform of a satellite laser altimeter plays a vital role in the design of satellite parameters, data processing and applications. In this paper, a method of refined full-waveform simulation is proposed based on the reflectivity of the ground target, the true emission waveform and the Laser Profile Array (LPA). ICESat/GLAS data are used as the validation data. We evaluated the simulation accuracy with the correlation coefficient. It was found that the accuracy of echo simulation could be significantly improved by considering the reflectivity of the ground target and the emission waveform. However, the laser intensity distribution recorded by the LPA has little effect on the echo simulation accuracy when compared with the distribution of the simulated laser energy. Finally, we propose a refinement idea based on an analysis of the experimental results, in the hope of providing a reference for the waveform data simulation and processing of the GF-7 satellite in the future.
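A minimal sketch of the simulation idea above: the received echo is modeled as the convolution of the emission pulse with a surface impulse response built from reflectivity-weighted round-trip delays; the pulse shape, target depths and reflectivities are illustrative assumptions.

```python
# Full-waveform simulation: convolve the emitted pulse with a surface
# impulse response whose spikes encode per-target delay and reflectivity.
import numpy as np

c = 3.0e8                                      # speed of light, m/s
dt = 0.2e-9                                    # waveform sampling step, s
t = np.arange(0.0, 40e-9, dt)
emitted = np.exp(-0.5 * ((t - 5e-9) / 1.5e-9) ** 2)   # emission waveform

depths = np.array([0.0, 0.3, 0.9])             # target depths below a reference, m
reflect = np.array([0.5, 0.3, 0.2])            # corresponding reflectivities

response = np.zeros_like(t)                    # surface impulse response
for d, r in zip(depths, reflect):
    response[int(round(2.0 * d / c / dt))] += r

echo = np.convolve(emitted, response)[: t.size]  # simulated full echo waveform
print("echo peak at", t[echo.argmax()] * 1e9, "ns")
```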
Aerodynamic analysis of natural flapping flight using a lift model based on spanwise flow
NASA Astrophysics Data System (ADS)
Alford, Lionel D., Jr.
This study successfully described the mechanics of flapping hovering flight within the framework of conventional aerodynamics. Additionally, the theory proposed and supported by this research provides an entirely new way of looking at animal flapping flight. The mechanisms of biological flight are not well understood, and researchers have not been able to describe them using conventional aerodynamic forces. This study proposed that natural flapping flight can be broken down into a simplest model, that this model can then be used to develop a mathematical representation of flapping hovering flight, and finally, that the model can be successfully refined and compared to biological flapping data. This paper proposed a unique theory that the lift of a flapping animal is primarily the result of velocity across the cambered span of the wing. A force analysis was developed using centripetal acceleration to define an acceleration profile that would lead to a spanwise velocity profile. The force produced by the spanwise velocity profile was determined using a computational fluid dynamics analysis of flow on the simplified wing model. The overall forces on the model were found to produce more than twice the lift required for hovering flight. In addition, spanwise lift was shown to generate induced drag on the wing. Induced drag increased both the model wing's lift and drag. The model allowed the development of a mathematical representation that could be refined to account for insect hovering characteristics and that could predict expected physical attributes of the fluid flow. This computational representation resulted in a profile of lift and drag production that corresponds to known force profiles for insect flight. The model of flapping flight was shown to produce results similar to biological observation and experiment, and these results can potentially be applied to the study of other flapping animals. This work provides a foundation on which to base further exploration and hypotheses regarding flapping flight.
Generation and Validation of the iKp1289 Metabolic Model for Klebsiella pneumoniae KPPR1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henry, Christopher S.; Rotman, Ella; Lathem, Wyndham W.
Klebsiella pneumoniae has a reputation for causing a wide range of infectious conditions, with numerous highly virulent and antibiotic-resistant strains. Metabolic models have the potential to provide insights into the growth behavior, nutrient requirements, essential genes, and candidate drug targets in these strains. Here we develop a metabolic model for KPPR1, a highly virulent strain of K. pneumoniae. We apply a combination of Biolog phenotype data and fitness data to validate and refine our KPPR1 model. The final model displays a predictive accuracy of 75% in identifying potential carbon and nitrogen sources for K. pneumoniae and of 99% in predicting nonessential genes in rich media. We demonstrate how this model is useful in studying the differences in the metabolic capabilities of the low-virulence MGH 78578 strain and the highly virulent KPPR1 strain. For example, we demonstrate that these strains differ in carbohydrate metabolism, including the ability to metabolize dulcitol as a primary carbon source. Our model makes numerous other predictions for follow-up verification and analysis.
Buckling Load Calculations of the Isotropic Shell A-8 Using a High-Fidelity Hierarchical Approach
NASA Technical Reports Server (NTRS)
Arbocz, Johann; Starnes, James H.
2002-01-01
As a step towards developing a new design philosophy, one that moves away from the traditional empirical approach used in design today towards a science-based design technology approach, a test series of seven isotropic shells carried out by Arbocz and Babcock at Caltech is used. It is shown how the hierarchical approach to buckling load calculations proposed by Arbocz et al. can be used to perform an approach often called 'high-fidelity analysis', where the uncertainties involved in a design are simulated by refined and accurate numerical methods. The Delft Interactive Shell DEsign COde (DISDECO, for short) is employed for this hierarchical analysis to provide an accurate prediction of the critical buckling load of the given shell structure. This value is used later as a reference to establish the accuracy of the Level-3 buckling load predictions. As a final step in the hierarchical analysis approach, the critical buckling load and the estimated imperfection sensitivity of the shell are verified by conducting an analysis using a sufficiently refined finite element model with one of the current generation two-dimensional shell analysis codes with the advanced capabilities needed to represent both geometric and material nonlinearities.
Random-Forest Classification of High-Resolution Remote Sensing Images and Ndsm Over Urban Areas
NASA Astrophysics Data System (ADS)
Sun, X. F.; Lin, X. G.
2017-09-01
As an intermediate step between raw remote sensing data and digital urban maps, remote sensing data classification has been a challenging and long-standing research problem in the remote sensing community. In this work, an effective classification method is proposed for classifying high-resolution remote sensing data over urban areas. Starting from high-resolution multi-spectral images and 3D geometry data, our method proceeds in three main stages: feature extraction, classification, and refinement of the classified results. First, we extract color, vegetation index and texture features from the multi-spectral image and compute the height, elevation texture and differential morphological profile (DMP) features from the 3D geometry data. Then, in the classification stage, multiple random forest (RF) classifiers are trained separately and combined to form an RF ensemble that estimates each sample's category probabilities. Finally, the probabilities, along with the feature importance indicator output by the RF ensemble, are used to construct a fully connected conditional random field (FCCRF) graph model, by which the classification results are refined through mean-field-based statistical inference. Experiments on the ISPRS Semantic Labeling Contest dataset show that our proposed 3-stage method achieves 86.9% overall accuracy on the test data.
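A minimal sketch of the classification stage described above: one random forest per feature group, combined into an ensemble whose averaged class probabilities would then feed the FCCRF refinement; features and labels are synthetic stand-ins for the image and nDSM features.

```python
# Random forest ensemble: train one forest per feature group and average the
# per-class probability estimates across the ensemble members.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
n = 1000
spectral = rng.normal(size=(n, 5))       # stand-ins for color/vegetation features
geometric = rng.normal(size=(n, 4))      # stand-ins for height/DMP features
labels = (spectral[:, 0] + geometric[:, 0] > 0).astype(int)

forests = [RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
           for X in (spectral, geometric)]

# averaged per-class probabilities; these would feed the FCCRF refinement
proba = np.mean([f.predict_proba(X) for f, X in
                 zip(forests, (spectral, geometric))], axis=0)
print("ensemble training accuracy:", (proba.argmax(axis=1) == labels).mean())
```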
Defining the role of a forensic hospital registered nurse using the Delphi method.
Newman, Claire; Patterson, Karen; Eason, Michelle; Short, Ben
2016-11-01
A Delphi survey was undertaken to refine the position description of a registered nurse working in a forensic hospital, in New South Wales, Australia. Prior to commencing operation in 2008, position descriptions were developed from a review of legislation, as well as policies and procedures used by existing forensic mental health services in Australia. With an established workforce and an evolving model of care, a review of the initial registered nurse position description was required. An online Delphi survey was undertaken. Eight executive (88.9%) and 12 (58.3%) senior nursing staff participated in the first survey round. A total of four survey rounds were completed. At the final round, there was consensus (70%) that the revised position description was either very or somewhat suitable. There were a total of nine statements, from 31 originally produced in round 1, that did not reach consensus. The Delphi survey enabled a process for refining the Forensic Hospital registered nurse position description. Methods that facilitate executive and senior nursing staff consensus in the development and review of position descriptions should be considered in nursing management. © 2016 John Wiley & Sons Ltd.
Statistical characterization of planar two-dimensional Rayleigh-Taylor mixing layers
NASA Astrophysics Data System (ADS)
Sendersky, Dmitry
2000-10-01
The statistical evolution of a planar, randomly perturbed fluid interface subject to Rayleigh-Taylor instability is explored through numerical simulation in two space dimensions. The data set, generated by the front-tracking code FronTier, is highly resolved and covers a large ensemble of initial perturbations, allowing a more refined analysis of closure issues pertinent to the stochastic modeling of chaotic fluid mixing. We closely approach a two-fold convergence of the mean two-phase flow: convergence of the numerical solution under computational mesh refinement, and statistical convergence under increasing ensemble size. Quantities that appear in the two-phase averaged Euler equations are computed directly and analyzed for numerical and statistical convergence. Bulk averages show a high degree of convergence, while interfacial averages are convergent only in the outer portions of the mixing zone, where there is a coherent array of bubble and spike tips. Comparison with the familiar bubble/spike penetration law h = αAgt² is complicated by the lack of scale invariance, inability to carry the simulations to late time, the increasing Mach numbers of the bubble/spike tips, and sensitivity to the method of data analysis. Finally, we use the simulation data to analyze some constitutive properties of the mixing process.
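For readers unfamiliar with the penetration law, the toy sketch below shows how a growth coefficient α can be fitted to mixing-zone heights by least squares; all values are invented and unrelated to the FronTier data.

```python
# Toy illustration of the quoted penetration law h = αAgt²: fitting α by
# least squares to synthetic mixing-zone heights. All numbers are invented.
import numpy as np

A, g = 0.5, 9.81                      # Atwood number and gravity (illustrative)
t = np.linspace(0.1, 1.0, 10)         # sample times
x = A * g * t**2                      # the law's growth variable
h = 0.06 * x + np.random.default_rng(1).normal(0.0, 1e-3, t.size)

alpha = np.sum(h * x) / np.sum(x**2)  # closed-form least-squares slope
print(f"fitted alpha = {alpha:.3f}")  # close to the 0.06 used to generate h
```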
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neal, C.R.; Davidson, J.P.
The Malaitan alnöite contains a rich and varied megacryst suite of unprecedented compositional range. The authors have undertaken trace element and isotope modeling in order to formulate a petrogenetic scheme which links the host alnöite to its entrained megacrysts. This requires that a proto-alnöite magma is the product of zone refining initiated by diapiric upwelling (where the initial melt passes through 200 times its volume of mantle). Isotopic evidence indicates the source of the proto-alnöite contains a time-integrated LREE-depleted signature. Impingement upon the rigid lithosphere halts or dramatically slows the upward progress of the mantle diapir. At this point, the magma cools and megacryst fractionation begins with augites crystallizing first, followed by subcalcic diopsides and finally phlogopites. Garnet probably crystallizes over the entire range of clinopyroxene fractionation. Estimated proportions of fractionating phases are 30% augite, 24.5% subcalcic diopside, 27% garnet, 12.9% phlogopite, 5% bronzite, 0.5% ilmenite, and 0.1% zircon. As this proto-alnöite magma crystallizes, it assimilates a subducted component of seawater-altered basalt which underplates the Ontong Java Plateau. This is witnessed in the isotopic composition of the megacrysts and alnöite.
On a High-Fidelity Hierarchical Approach to Buckling Load Calculations
NASA Technical Reports Server (NTRS)
Arbocz, Johann; Starnes, James H.; Nemeth, Michael P.
2001-01-01
As a step towards developing a new design philosophy, one that moves away from the traditional empirical approach used today in design towards a science-based design technology approach, a recent test series of 5 composite shells carried out by Waters at NASA Langley Research Center is used. It is shown how the hierarchical approach to buckling load calculations proposed by Arbocz et al can be used to perform an approach often called "high fidelity analysis", where the uncertainties involved in a design are simulated by refined and accurate numerical methods. The Delft Interactive Shell DEsign COde (short, DISDECO) is employed for this hierarchical analysis to provide an accurate prediction of the critical buckling load of the given shell structure. This value is used later as a reference to establish the accuracy of the Level-3 buckling load predictions. As a final step in the hierarchical analysis approach, the critical buckling load and the estimated imperfection sensitivity of the shell are verified by conducting an analysis using a sufficiently refined finite element model with one of the current generation two-dimensional shell analysis codes with the advanced capabilities needed to represent both geometric and material nonlinearities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moriarty, Nigel W.; Draizen, Eli J.; Adams, Paul D.
Chemical restraints for use in macromolecular structure refinement are produced by a variety of methods, including a number of programs that use chemical information to generate the required bond, angle, dihedral, chiral and planar restraints. These programs help to automate the process and therefore minimize the errors that could otherwise occur if it were performed manually. Furthermore, restraint-dictionary generation programs can incorporate chemical and other prior knowledge to provide reasonable choices of types and values. However, the use of restraints to define the geometry of a molecule is an approximation introduced with efficiency in mind. The representation of a bond as a parabolic function is a convenience and does not reflect the true variability in even the simplest of molecules. Another complicating factor is the interplay of the molecule with other parts of the macromolecular model. Finally, difficult situations arise from molecules with rare or unusual moieties that may not have their conformational space fully explored. These factors give rise to the need for an interactive editor for WYSIWYG interactions with the restraints and molecule. Restraints Editor, Especially Ligands (REEL) is a graphical user interface for simple and error-free editing along with additional features to provide greater control of the restraint dictionaries in macromolecular refinement.
FDD Massive MIMO Channel Estimation With Arbitrary 2D-Array Geometry
NASA Astrophysics Data System (ADS)
Dai, Jisheng; Liu, An; Lau, Vincent K. N.
2018-05-01
This paper addresses the problem of downlink channel estimation in frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems. The existing methods usually exploit hidden sparsity under a discrete Fourier transform (DFT) basis to estimate the downlink channel. However, there are at least two shortcomings of these DFT-based methods: 1) they are applicable to uniform linear arrays (ULAs) only, since the DFT basis requires a special structure of ULAs, and 2) they always suffer from a performance loss due to the leakage of energy over some DFT bins. To deal with the above shortcomings, we introduce an off-grid model for downlink channel sparse representation with arbitrary 2D-array antenna geometry, and propose an efficient sparse Bayesian learning (SBL) approach for the sparse channel recovery and off-grid refinement. The main idea of the proposed off-grid method is to consider the sampled grid points as adjustable parameters. Utilizing an inexact block majorization-minimization (MM) algorithm, the grid points are refined iteratively to minimize the off-grid gap. Finally, we further extend the solution to uplink-aided channel estimation by exploiting the angular reciprocity between downlink and uplink channels, which brings enhanced recovery performance.
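The schematic sketch below illustrates only the core off-grid idea, treating a sampled grid point as an adjustable parameter, and is not the paper's full SBL/MM algorithm: a steering vector for an arbitrary 2D array is defined and one grid point is nudged uphill to better match an observed snapshot. The array geometry, the gradient-ascent refinement, and all step sizes are illustrative assumptions.

```python
# Schematic off-grid refinement of a single angular grid point for an
# arbitrary 2D array; geometry and step sizes are arbitrary choices.
import numpy as np

rng = np.random.default_rng(2)
positions = rng.uniform(0.0, 2.0, size=(16, 2))  # 2D array layout (wavelengths)

def steering(theta):
    """Narrowband steering vector for a plane wave from azimuth theta (rad)."""
    k = 2.0 * np.pi * np.array([np.cos(theta), np.sin(theta)])
    return np.exp(1j * positions @ k)

def corr(theta, y):
    """Match between the grid point's steering vector and snapshot y."""
    return np.abs(steering(theta).conj() @ y) ** 2

def refine_grid_point(y, theta0, lr=1e-6, eps=1e-4, iters=200):
    """Nudge grid point theta0 uphill on the correlation with snapshot y."""
    theta = theta0
    for _ in range(iters):
        grad = (corr(theta + eps, y) - corr(theta - eps, y)) / (2 * eps)
        theta += lr * grad  # crude gradient ascent shrinks the off-grid gap
    return theta

y = steering(0.52)                        # noiseless single-path snapshot
print(refine_grid_point(y, theta0=0.50))  # grid point moves toward 0.52
```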
Defining Gas Turbine Engine Performance Requirements for the Large Civil TiltRotor (LCTR2)
NASA Technical Reports Server (NTRS)
Snyder, Christopher A.
2013-01-01
Defining specific engine requirements is a critical part of identifying technologies and operational models for potential future rotary wing vehicles. NASA's Fundamental Aeronautics Program, Subsonic Rotary Wing Project has identified the Large Civil TiltRotor (LCTR) as the configuration to best meet technology goals. This notional vehicle concept has evolved with more clearly defined mission and operational requirements to the LCTR-iteration 2 (LCTR2). This paper reports on efforts to further review and refine the LCTR2 analyses to ascertain specific engine requirements and propulsion sizing criteria. The baseline mission and other design or operational requirements are reviewed. Analysis tools are described to help understand their interactions and underlying assumptions. Various design and operational conditions are presented and explained for their contribution to defining operational and engine requirements. These identified engine requirements are discussed to suggest which are most critical to the engine sizing and operation. The most-critical engine requirements are compared to in-house NASA engine simulations to try to ascertain which operational requirements define engine requirements versus points within the available engine operational capability. Finally, results are summarized with suggestions for future efforts to improve analysis capabilities, and better define and refine mission and operational requirements.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-01
... without fittings or insulation) suitable for connecting an outdoor air conditioner or heat pump to an..., swaged end, flared end, expanded end, crimped end, threaded), coating (e.g., plastic, paint), insulation...
CONCEPTS AND APPROACHES FOR THE BIOASSESSMENT OF NON-WADEABLE STREAMS AND RIVERS
This document is intended to assist users in establishing or refining protocols, including the specific methods related to field sampling, laboratory sample processing, taxonomy, data entry, management and analysis, and final assessment and reporting. It also reviews and provide...
Towards an international taxonomy of integrated primary care: a Delphi consensus approach.
Valentijn, Pim P; Vrijhoef, Hubertus J M; Ruwaard, Dirk; Boesveld, Inge; Arends, Rosa Y; Bruijnzeels, Marc A
2015-05-22
Developing integrated service models in a primary care setting is considered an essential strategy for establishing a sustainable and affordable health care system. The Rainbow Model of Integrated Care (RMIC) describes the theoretical foundations of integrated primary care. The aim of this study is to refine the RMIC by developing a consensus-based taxonomy of key features. First, the appropriateness of previously identified key features was retested by conducting an international Delphi study that was built on the results of a previous national Delphi study. Second, categorisation of the features among the RMIC integrated care domains was assessed in a second international Delphi study. Finally, a taxonomy was constructed by the researchers based on the results of the three Delphi studies. The final taxonomy consists of 21 key features distributed over eight integration domains which are organised into three main categories: scope (person-focused vs. population-based), type (clinical, professional, organisational and system) and enablers (functional vs. normative) of an integrated primary care service model. The taxonomy provides a crucial differentiation that clarifies and supports implementation, policy formulation and research regarding the organisation of integrated primary care. Further research is needed to develop instruments based on the taxonomy that can reveal the realm of integrated primary care in practice.
Kumar, Avishek; Campitelli, Paul; Thorpe, M F; Ozkan, S Banu
2015-12-01
The most successful protein structure prediction methods to date have been template-based modeling (TBM) or homology modeling, which predicts protein structure based on experimental structures. These high-accuracy predictions sometimes retain structural errors due to incorrect templates or a lack of accurate templates in the case of low sequence similarity, making these structures inadequate in drug-design studies or molecular dynamics simulations. We have developed a new physics-based approach to the protein refinement problem by mimicking the mechanism of chaperones that rehabilitate misfolded proteins. The template structure is unfolded by selectively (targeted) pulling on different portions of the protein using the geometric-based technique FRODA, and then refolded using hierarchically restrained replica exchange molecular dynamics simulations (hr-REMD). FRODA unfolding is used to create a diverse set of topologies for surveying near-native-like structures from a template and to provide a set of persistent contacts to be employed during re-folding. We have tested our approach on 13 previous CASP targets and observed that this method of folding an ensemble of partially unfolded structures, through the hierarchical addition of contact restraints (that is, first local and then nonlocal interactions), leads to a refolding of the structure along with refinement in most cases (12/13). Although this approach yields refined models through advancement in sampling, the task of blind selection of the best refined models still needs to be solved. Overall, the method can be useful for improved sampling for low-resolution models where certain portions of the structure are incorrectly modeled. © 2015 Wiley Periodicals, Inc.
Afonine, Pavel V.; Adams, Paul D.; Urzhumtsev, Alexandre
2018-06-08
TLS modelling was developed by Schomaker and Trueblood to describe atomic displacement parameters through concerted (rigid-body) harmonic motions of an atomic group [Schomaker & Trueblood (1968), Acta Cryst. B 24, 63–76]. The results of a TLS refinement are T, L and S matrices that provide individual anisotropic atomic displacement parameters (ADPs) for all atoms belonging to the group. These ADPs can be calculated analytically using a formula that relates the elements of the TLS matrices to atomic parameters. Alternatively, ADPs can be obtained numerically from the parameters of concerted atomic motions corresponding to the TLS matrices. Both procedures are expected to produce the same ADP values and therefore can be used to assess the results of TLS refinement. Here, the implementation of this approach in PHENIX is described and several illustrations, including the use of all models from the PDB that have been subjected to TLS refinement, are provided.
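A hedged sketch of the analytical route: given T, L and S, per-atom ADPs follow from the Schomaker & Trueblood relation, here assumed in the form U = T + A L Aᵀ + A S + Sᵀ Aᵀ with A the antisymmetric matrix built from the atom position relative to the TLS origin. This illustrates the formula itself, not the PHENIX implementation, and all matrix values are placeholders.

```python
# Illustration of the assumed Schomaker & Trueblood relation for per-atom
# ADPs from TLS matrices; all numeric values are placeholders.
import numpy as np

def adp_from_tls(T, L, S, r):
    """Anisotropic U for an atom at r (relative to the TLS origin)."""
    x, y, z = r
    A = np.array([[0.0,   z,  -y],
                  [ -z, 0.0,   x],
                  [  y,  -x, 0.0]])
    return T + A @ L @ A.T + A @ S + S.T @ A.T

T = 0.02 * np.eye(3)             # translation term (Å²), placeholder
L = np.diag([1e-4, 2e-4, 1e-4])  # libration term (rad²), placeholder
S = np.zeros((3, 3))             # screw coupling switched off in this toy case
U = adp_from_tls(T, L, S, r=np.array([5.0, -2.0, 1.0]))
print(np.allclose(U, U.T))       # resulting ADP matrix is symmetric
```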
Molecular dynamics-based refinement and validation for sub-5 Å cryo-electron microscopy maps.
Singharoy, Abhishek; Teo, Ivan; McGreevy, Ryan; Stone, John E; Zhao, Jianhua; Schulten, Klaus
2016-07-07
Two structure determination methods, based on the molecular dynamics flexible fitting (MDFF) paradigm, are presented that resolve sub-5 Å cryo-electron microscopy (EM) maps with either single structures or ensembles of such structures. The methods, denoted cascade MDFF and resolution exchange MDFF, sequentially re-refine a search model against a series of maps of progressively higher resolutions, which ends with the original experimental resolution. Application of sequential re-refinement enables MDFF to achieve a radius of convergence of ~25 Å demonstrated with the accurate modeling of β-galactosidase and TRPV1 proteins at 3.2 Å and 3.4 Å resolution, respectively. The MDFF refinements uniquely offer map-model validation and B-factor determination criteria based on the inherent dynamics of the macromolecules studied, captured by means of local root mean square fluctuations. The MDFF tools described are available to researchers through an easy-to-use and cost-effective cloud computing resource on Amazon Web Services.
An Ontology-Based Archive Information Model for the Planetary Science Community
NASA Technical Reports Server (NTRS)
Hughes, J. Steven; Crichton, Daniel J.; Mattmann, Chris
2008-01-01
The Planetary Data System (PDS) information model is a mature but complex model that has been used to capture over 30 years of planetary science data for the PDS archive. As the de-facto information model for the planetary science data archive, it is being adopted by the International Planetary Data Alliance (IPDA) as their archive data standard. However, after seventeen years of evolutionary change the model needs refinement. First a formal specification is needed to explicitly capture the model in a commonly accepted data engineering notation. Second, the core and essential elements of the model need to be identified to help simplify the overall archive process. A team of PDS technical staff members have captured the PDS information model in an ontology modeling tool. Using the resulting knowledge-base, work continues to identify the core elements, identify problems and issues, and then test proposed modifications to the model. The final deliverables of this work will include specifications for the next generation PDS information model and the initial set of IPDA archive data standards. Having the information model captured in an ontology modeling tool also makes the model suitable for use by Semantic Web applications.
Model-based high-throughput design of ion exchange protein chromatography.
Khalaf, Rushd; Heymann, Julia; LeSaout, Xavier; Monard, Florence; Costioli, Matteo; Morbidelli, Massimo
2016-08-12
This work describes the development of a model-based high-throughput design (MHD) tool for the operating space determination of a chromatographic cation-exchange protein purification process. Based on a previously developed thermodynamic mechanistic model, the MHD tool generates a large amount of system knowledge and thereby permits minimizing the required experimental workload. In particular, each new experiment is designed to generate information needed to help refine and improve the model. Unnecessary experiments that do not increase system knowledge are avoided. Instead of aspiring to a perfectly parameterized model, the goal of this design tool is to use early model parameter estimates to find interesting experimental spaces, and to refine the model parameter estimates with each new experiment until a satisfactory set of process parameters is found. The MHD tool is split into four sections: (1) prediction, high-throughput experimentation using experiments in (2) diluted conditions and (3) robotic automated liquid-handling workstations (robotic workstation), and (4) operating space determination and validation. (1) Protein and resin information, in conjunction with the thermodynamic model, is used to predict protein resin capacity. (2) The predicted model parameters are refined based on gradient experiments in diluted conditions. (3) Experiments on the robotic workstation are used to further refine the model parameters. (4) The refined model is used to determine the operating parameter space that allows for satisfactory purification of the protein of interest at the HPLC scale. Each section of the MHD tool is used to define the adequate experimental procedures for the next section, thus avoiding any unnecessary experimental work. We used the MHD tool to design a polishing step for two proteins, a monoclonal antibody and a fusion protein, on two chromatographic resins, in order to demonstrate its ability to strongly accelerate the early phases of process development. Copyright © 2016 Elsevier B.V. All rights reserved.
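The refine-until-satisfactory loop can be pictured schematically as below: refit model parameters after each new batch of experiments and stop once the fit is good enough. The Langmuir-type binding model, noise levels, and stopping tolerance are placeholders, not the paper's thermodynamic model.

```python
# Schematic iterate-and-refine loop with a stand-in binding model; all
# numbers and the isotherm form are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def isotherm(c, qmax, kd):
    """Stand-in binding model: adsorbed amount vs. concentration."""
    return qmax * c / (kd + c)

rng = np.random.default_rng(3)
params = (1.0, 1.0)                       # initial guess from the prediction stage
c_all, q_all = np.empty(0), np.empty(0)
for _round in range(4):                   # each round = one batch of experiments
    c_new = rng.uniform(0.1, 5.0, size=5)
    q_new = isotherm(c_new, 2.0, 0.5) + rng.normal(0.0, 0.02, 5)  # "measured"
    c_all, q_all = np.append(c_all, c_new), np.append(q_all, q_new)
    params, _ = curve_fit(isotherm, c_all, q_all, p0=params)      # refine
    if np.std(isotherm(c_all, *params) - q_all) < 0.03:           # good enough?
        break
print(params)  # refined (qmax, kd) estimates
```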
Refining As-cast β-Ti Grains Through ZrN Inoculation
NASA Astrophysics Data System (ADS)
Qiu, Dong; Zhang, Duyao; Easton, Mark A.; St John, David H.; Gibson, Mark A.
2018-03-01
The columnar-to-equiaxed transition and remarkable refinement of β-Ti grains occurred in an as-cast Ti-13Mo alloy when a new grain refiner, ZrN, was inoculated at a nitrogen level as low as 0.4 wt pct. The grain-refining effect is attributed to in situ-formed TiN particles that provide active nucleation sites and to solute Zr that promotes constitutional supercooling. Reproducible orientation relationships were identified between the TiN nucleants and the β-Ti matrix, and are well explained by the edge-to-edge matching model.
On the temperature dependence of H-U_iso in the riding hydrogen model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lübben, Jens; Volkmann, Christian; Grabowsky, Simon
The temperature dependence of hydrogen U_iso and parent U_eq in the riding hydrogen model is investigated by neutron diffraction, aspherical-atom refinements and QM/MM and MO/MO cluster calculations. Fixed values of 1.2 or 1.5 appear to be underestimated, especially at temperatures below 100 K. The temperature dependence of H-U_iso in N-acetyl-l-4-hydroxyproline monohydrate is investigated. Imposing a constant temperature-independent multiplier of 1.2 or 1.5 for the riding hydrogen model is found to be inaccurate, and severely underestimates H-U_iso below 100 K. Neutron diffraction data at temperatures of 9, 150, 200 and 250 K provide benchmark results for this study. X-ray diffraction data to high resolution, collected at temperatures of 9, 30, 50, 75, 100, 150, 200 and 250 K (synchrotron and home source), reproduce neutron results only when evaluated by aspherical-atom refinement models, since these take into account bonding and lone-pair electron density; both invariom and Hirshfeld-atom refinement models enable a more precise determination of the magnitude of H-atom displacements than independent-atom model refinements. Experimental efforts are complemented by computing displacement parameters following the TLS+ONIOM approach. A satisfactory agreement between all approaches is found.
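For context, the riding-hydrogen convention under discussion amounts to a one-line rule, sketched below; the multipliers 1.2 and 1.5 are the conventional choices that the study finds too small below about 100 K.

```python
# The riding-hydrogen convention reduced to its one-line rule.
def riding_h_uiso(parent_ueq, k=1.2):
    """U_iso(H) = k * U_eq(parent); k = 1.5 is conventional for methyl H."""
    return k * parent_ueq

print(riding_h_uiso(0.015))          # 0.018 Å² with the default multiplier
print(riding_h_uiso(0.015, k=1.5))   # 0.0225 Å² for a methyl-type hydrogen
```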
NASA Technical Reports Server (NTRS)
Whorton, M. S.
1998-01-01
Many spacecraft systems have ambitious objectives that place stringent requirements on control systems. Achievable performance is often limited because of difficulty of obtaining accurate models for flexible space structures. To achieve sufficiently high performance to accomplish mission objectives may require the ability to refine the control design model based on closed-loop test data and tune the controller based on the refined model. A control system design procedure is developed based on mixed H2/H(infinity) optimization to synthesize a set of controllers explicitly trading between nominal performance and robust stability. A homotopy algorithm is presented which generates a trajectory of gains that may be implemented to determine maximum achievable performance for a given model error bound. Examples show that a better balance between robustness and performance is obtained using the mixed H2/H(infinity) design method than either H2 or mu-synthesis control design. A second contribution is a new procedure for closed-loop system identification which refines parameters of a control design model in a canonical realization. Examples demonstrate convergence of the parameter estimation and improved performance realized by using the refined model for controller redesign. These developments result in an effective mechanism for achieving high-performance control of flexible space structures.
Initiating technical refinements in high-level golfers: Evidence for contradictory procedures.
Carson, Howie J; Collins, Dave; Richards, Jim
2016-01-01
When developing motor skills there are several outcomes available to an athlete depending on their skill status and needs. Whereas the skill acquisition and performance literature is abundant, an under-researched outcome relates to the refinement of already acquired and well-established skills. Contrary to current recommendations for athletes to employ an external focus of attention and a representative practice design, Carson and Collins' (2011) [Refining and regaining skills in fixation/diversification stage performers: The Five-A Model. International Review of Sport and Exercise Psychology, 4, 146-167. doi: 10.1080/1750984x.2011.613682 ] Five-A Model requires an initial narrowed internal focus on the technical aspect needing refinement: the implication being that environments which limit external sources of information would be beneficial to achieving this task. Therefore, the purpose of this paper was to (1) provide a literature-based explanation for why techniques counter to current recommendations may be (temporarily) appropriate within the skill refinement process and (2) provide empirical evidence for such efficacy. Kinematic data and self-perception reports are provided from high-level golfers attempting to consciously initiate technical refinements while executing shots onto a driving range and into a close proximity net (i.e. with limited knowledge of results). It was hypothesised that greater control over intended refinements would occur when environmental stimuli were reduced in the most unrepresentative practice condition (i.e. hitting into a net). Results confirmed this, as evidenced by reduced intra-individual movement variability for all participants' individual refinements, despite little or no difference in mental effort reported. This research offers coaches guidance when working with performers who may find conscious recall difficult during the skill refinement process.
NASA Astrophysics Data System (ADS)
Bediaga, I.; Miranda, J.; dos Reis, A. C.; Bigi, I. I.; Gomes, A.; Otalora Goicochea, J. M.; Veiga, A.
2012-08-01
The “Miranda procedure” proposed for analyzing Dalitz plots for CP asymmetries in charged B and D decays in a model-independent manner is extended and refined in this paper. The complexity of Cabibbo-Kobayashi-Maskawa CP phenomenology through order λ^6 is needed in searches for new dynamics (ND). Detailed analyses of three-body final states offer great advantages: (i) They give us more powerful tools for deciding whether an observed CP asymmetry represents the manifestation of ND and its features. (ii) Many advantages can already be obtained by the Miranda procedure without construction of a detailed Dalitz plot description. (iii) One studies CP asymmetries independent of production asymmetries. We illustrate the power of a second-generation Miranda procedure with examples with time-integrated rates for B_d/B̄_d decays to final states K_S π^+ π^- as trial runs, with comments on B^± → K^± π^+ π^- / K^± K^+ K^-.
KoBaMIN: a knowledge-based minimization web server for protein structure refinement.
Rodrigues, João P G L M; Levitt, Michael; Chopra, Gaurav
2012-07-01
The KoBaMIN web server provides an online interface to a simple, consistent and computationally efficient protein structure refinement protocol based on minimization of a knowledge-based potential of mean force. The server can be used to refine either a single protein structure or an ensemble of proteins starting from their unrefined coordinates in PDB format. The refinement method is particularly fast and accurate due to the underlying knowledge-based potential derived from structures deposited in the PDB; as such, the energy function implicitly includes the effects of solvent and the crystal environment. Our server allows for an optional but recommended step that optimizes stereochemistry using the MESHI software. The KoBaMIN server also allows comparison of the refined structures with a provided reference structure to assess the changes brought about by the refinement protocol. The performance of KoBaMIN has been benchmarked widely on a large set of decoys, all models generated at the seventh worldwide experiments on critical assessment of techniques for protein structure prediction (CASP7) and it was also shown to produce top-ranking predictions in the refinement category at both CASP8 and CASP9, yielding consistently good results across a broad range of model quality values. The web server is fully functional and freely available at http://csb.stanford.edu/kobamin.
Yamazaki, Shinji; Johnson, Theodore R; Smith, Bill J
2015-10-01
An orally available multiple tyrosine kinase inhibitor, crizotinib (Xalkori), is a CYP3A substrate, moderate time-dependent inhibitor, and weak inducer. The main objectives of the present study were to: 1) develop and refine a physiologically based pharmacokinetic (PBPK) model of crizotinib on the basis of clinical single- and multiple-dose results, 2) verify the crizotinib PBPK model from crizotinib single-dose drug-drug interaction (DDI) results with multiple-dose coadministration of ketoconazole or rifampin, and 3) apply the crizotinib PBPK model to predict crizotinib multiple-dose DDI outcomes. We also focused on gaining insights into the underlying mechanisms mediating crizotinib DDIs using a dynamic PBPK model, the Simcyp population-based simulator. First, PBPK model-predicted crizotinib exposures adequately matched clinically observed results in the single- and multiple-dose studies. Second, the model-predicted crizotinib exposures sufficiently matched clinically observed results in the crizotinib single-dose DDI studies with ketoconazole or rifampin, resulting in the reasonably predicted fold-increases in crizotinib exposures. Finally, the predicted fold-increases in crizotinib exposures in the multiple-dose DDI studies were roughly comparable to those in the single-dose DDI studies, suggesting that the effects of crizotinib CYP3A time-dependent inhibition (net inhibition) on the multiple-dose DDI outcomes would be negligible. Therefore, crizotinib dose-adjustment in the multiple-dose DDI studies could be made on the basis of currently available single-dose results. Overall, we believe that the crizotinib PBPK model developed, refined, and verified in the present study would adequately predict crizotinib oral exposures in other clinical studies, such as DDIs with weak/moderate CYP3A inhibitors/inducers and drug-disease interactions in patients with hepatic or renal impairment. Copyright © 2015 by The American Society for Pharmacology and Experimental Therapeutics.
Kwok, Ezra; Gopaluni, Bhushan; Kizhakkedathu, Jayachandran N.
2013-01-01
Molecular dynamics (MD) simulation results are herein incorporated into an electrostatic model used to determine the structure of an effective polymer-based antidote to the anticoagulant fondaparinux. In silico data for the polymer or its cationic binding groups have not, up to now, been available, and experimental data on the structure of the polymer-fondaparinux complex are extremely limited. Consequently, the task of optimizing the polymer structure is a daunting challenge. MD simulations provided a means to gain microscopic information on the interactions of the binding groups and fondaparinux that would have otherwise been inaccessible. This was used to refine the electrostatic model and improve the quantitative model predictions of binding affinity. Once refined, the model provided guidelines to improve electrostatic forces between candidate polymers and fondaparinux in order to increase association rate constants. PMID:27006916
NASA Astrophysics Data System (ADS)
Mandal, Arka; Patra, Sudipta; Chakrabarti, Debalay; Singh, Shiv Brat
2017-12-01
A lean duplex stainless steel (LDSS) has been prepared with low N content and processed by different thermo-mechanical schedules, similar to industrial processing, comprising hot-rolling, cold-rolling, and annealing treatments. The microstructure developed in the present study on low-N LDSS has been compared to that of high-N LDSS as reported in the literature. As N is an austenite stabilizer, the lower N content reduced the stability of austenite and the austenite content in low-N LDSS with respect to conventional LDSS. Due to the low stability of austenite in low-N LDSS, cold rolling resulted in strain-induced martensitic transformation, and the reversion of martensite to austenite during subsequent annealing contributed to significant grain refinement within the austenite regions. δ-ferrite grains in low-N LDSS, on the other hand, are refined by an extended recovery mechanism. The initial solidification texture (mainly cube texture) within the δ-ferrite region finally converted into a gamma-fiber texture after cold rolling and annealing. Although the MS-brass component dominated the austenite texture in low-N LDSS after hot rolling and cold rolling, it transformed into an alpha-fiber texture after the final annealing. Due to the significant grain refinement and the formation of beneficial textures within both austenite and ferrite, a good combination of strength and ductility has been achieved in the cold-rolled and annealed low-N LDSS.
Orbit Refinement of Asteroids and Comets Using a Robotic Telescope Network
NASA Astrophysics Data System (ADS)
Lantz Caughey, Austin; Brown, Johnny; Puckett, Andrew W.; Hoette, Vivian L.; Johnson, Michael; McCarty, Cameron B.; Whitmore, Kevin; UNC-Chapel Hill SKYNET Team
2016-01-01
We report on a multi-semester project to refine the orbits of asteroids and comets in our Solar System. One of the newest fields of research for undergraduate Astrophysics students at Columbus State University is that of asteroid astrometry. By measuring the positions of an asteroid in a set of images, we can reduce the overall uncertainty in the accepted orbital parameters of that object. These measurements, using our WestRock Observatory (WRO) and several other telescopes around the world, are being published through the Minor Planet Center (MPC) and benefit the global community. Three different methods are used to obtain these observations. First, we use our own 24-inch telescope at WRO, located at CSU's Coca-Cola Space Science Center in downtown Columbus, Georgia. Second, we have access to data from the 20-inch telescope at Stone Edge Observatory in El Verano, California. Finally, we may request images remotely using Skynet, an online worldwide network of robotic telescopes. Our primary and long-time collaborator on Skynet has been the "41-inch" reflecting telescope at Yerkes Observatory in Williams Bay, Wisconsin. Thus far, we have used these various telescopes to refine the orbits of more than 15 asteroids and comets. We have also confirmed the resulting reduction in orbit-model uncertainties using Monte Carlo simulations and orbit visualizations, using Find_Orb and OrbitMaster software, respectively. Before any observatory site can be used for official orbit refinement projects, it must first become a trusted source of astrometry data for the MPC. We have therefore obtained Observatory Codes not only for our own WestRock Observatory (W22), but also for three Skynet telescopes that we may use in the future: Dark Sky Observatory in Boone, North Carolina (W38); Hume Observatory in Santa Rosa, California (U54); and Athabasca University Geophysical Observatory in Athabasca, Alberta, Canada (U96).
Massive Joint Multinational Exercise Planning to Solve Army Warfighting Challenges
2016-06-10
and military sustainment occurs for various reasons, such as physical distance between offices, or a lack of institutional knowledge about Army...this thesis. Thank you to the entire library staff. A final thank you to LTC Toni Sabo for her expert review of the final paper. Your knowledge of the... English language reminded me how much I need to continue to refine and hone my skills. Thank you for your support and leadership in our staff group
Bellows flow-induced vibrations
NASA Technical Reports Server (NTRS)
Tygielski, P. J.; Smyly, H. M.; Gerlach, C. R.
1983-01-01
The bellows flow-excitation mechanism and the results of a comprehensive test program are summarized. The analytical model for predicting bellows flow-induced stress is refined. The model includes the effects of an upstream elbow, arbitrary geometry, and multiple plies. A refined computer code for predicting flow-induced stress is described which allows life prediction if a material S-N diagram is available.
MODELING AND ANALYSIS OF FISSION PRODUCT TRANSPORT IN THE AGR-3/4 EXPERIMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humrickhouse, Paul W.; Collin, Blaise P.; Hawkes, Grant L.
In this work we describe the ongoing modeling and analysis efforts in support of the AGR-3/4 experiment. AGR-3/4 is intended to provide data to assess fission product retention and transport (e.g., diffusion coefficients) in fuel matrix and graphite materials. We describe a set of pre-test predictions that incorporate the results of detailed thermal and fission product release models into a coupled 1D radial diffusion model of the experiment, using diffusion coefficients reported in the literature for Ag, Cs, and Sr. We make some comparisons of the predicted Cs profiles to preliminary measured data for Cs and find these to be reasonable, in most cases within an order of magnitude. Our ultimate objective is to refine the diffusion coefficients using AGR-3/4 data, so we identify an analytical method for doing so and demonstrate its efficacy via a series of numerical experiments using the model predictions. Finally, we discuss development of a post-irradiation examination plan informed by the modeling effort and simulate some of the heating tests that are tentatively planned.
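A minimal sketch of the kind of 1D radial diffusion step such a model takes, assuming simple Fickian transport in spherical symmetry; the grid, diffusion coefficient, boundary conditions, and initial profile are all invented for illustration and are not the AGR-3/4 model.

```python
# Explicit finite-difference step of spherically symmetric diffusion,
# dc/dt = D/r² d/dr(r² dc/dr); all parameters are illustrative.
import numpy as np

def diffuse_radial(c, r, D, dt):
    """Advance the concentration profile c(r) by one explicit time step."""
    dr = r[1] - r[0]
    c_new = c.copy()
    for i in range(1, len(r) - 1):
        flux = (r[i + 1] ** 2 * (c[i + 1] - c[i])
                - r[i] ** 2 * (c[i] - c[i - 1])) / dr**2
        c_new[i] = c[i] + dt * D * flux / r[i] ** 2
    c_new[0] = c_new[1]   # symmetry at the inner boundary
    c_new[-1] = 0.0       # perfect sink at the outer surface
    return c_new

r = np.linspace(0.01, 1.0, 100)        # radial grid (arbitrary units)
c = np.exp(-((r - 0.2) / 0.05) ** 2)   # initial fission-product profile
for _ in range(1000):
    c = diffuse_radial(c, r, D=1e-4, dt=0.01)
```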
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanratty, M.P.; Liber, K.
1994-12-31
The Littoral Ecosystem Risk Assessment Model (LERAM) is a bioenergetic ecosystem effects model. It links single-species toxicity data to a bioenergetic model of the trophic structure of an ecosystem in order to simulate community- and ecosystem-level effects of chemical stressors. LERAM was used in 1992 to simulate the ecological effects of diflubenzuron. When compared to the results from a littoral enclosure study, the model exaggerated the cascading of effects through the trophic levels of the littoral ecosystem. It was hypothesized that this could be corrected by making minor changes in the representation of the littoral food web. Two refinements of the model were therefore performed: (1) the plankton and macroinvertebrate model populations [e.g., predatory Copepoda, herbivorous Insecta, green phytoplankton, etc.] were changed to better represent the habitat and feeding preferences of the endemic taxa; and (2) the method for modeling the microbial degradation of detritus (and the resulting nutrient remineralization) was changed from simulating bacterial populations to simulating bacterial function. Model predictions of the ecological effects of 4-nonylphenol were made before and after these refinements. Both sets of predictions were then compared to the results from a littoral enclosure study of the ecological effects of 4-nonylphenol. The changes in the LERAM predictions were then used to determine the success of the refinements, to guide future research, and to further define LERAM's domain of application.
DOT National Transportation Integrated Search
2008-10-01
The FHWA has strongly encouraged transportation departments to display travel times on their Dynamic Message Signs (DMS). The Oregon : Department of Transportation (ODOT) currently displays travel time estimates on three DMSs in the Portland metropol...
DOT National Transportation Integrated Search
2011-03-01
This project addressed several aspects of the LOSPLAN software, primarily with respect to incorporating : new FDOT and NCHRP research project results. In addition, some existing computational methodology : aspects were refined to provide more accurat...
Simplified and refined structural modeling for economical flutter analysis and design
NASA Technical Reports Server (NTRS)
Ricketts, R. H.; Sobieszczanski, J.
1977-01-01
A coordinated use of two finite-element models of different levels of refinement is presented to reduce the computer cost of the repetitive flutter analysis commonly encountered in structural resizing to meet flutter requirements. One model, termed a refined model (RM), represents a high degree of detail needed for strength-sizing and flutter analysis of an airframe. The other model, called a simplified model (SM), has a relatively much smaller number of elements and degrees-of-freedom. A systematic method of deriving an SM from a given RM is described. The method consists of judgmental and numerical operations to make the stiffness and mass of the SM elements equivalent to the corresponding substructures of RM. The structural data are automatically transferred between the two models. The bulk of analysis is performed on the SM with periodical verifications carried out by analysis of the RM. In a numerical example of a supersonic cruise aircraft with an arrow wing, this approach permitted substantial savings in computer costs and acceleration of the job turn-around.
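One standard numerical device for making a simplified model's stiffness equivalent to a refined substructure is static (Guyan) condensation onto the retained degrees of freedom, shown below purely as a generic illustration; the paper describes its own judgmental and numerical operations, and the spring-chain matrix here is invented.

```python
# Generic static (Guyan) condensation: K_red = K_mm - K_ms K_ss⁻¹ K_sm,
# reducing a refined stiffness matrix onto a few retained ("master") DOFs.
import numpy as np

def guyan_reduce(K, master):
    """Condense stiffness matrix K onto the 'master' DOF indices."""
    n = K.shape[0]
    slave = [i for i in range(n) if i not in master]
    Kmm = K[np.ix_(master, master)]
    Kms = K[np.ix_(master, slave)]
    Ksm = K[np.ix_(slave, master)]
    Kss = K[np.ix_(slave, slave)]
    return Kmm - Kms @ np.linalg.solve(Kss, Ksm)

# Toy 4-DOF spring chain condensed onto its two end DOFs:
k = 100.0
K = k * np.array([[ 2.0, -1.0,  0.0,  0.0],
                  [-1.0,  2.0, -1.0,  0.0],
                  [ 0.0, -1.0,  2.0, -1.0],
                  [ 0.0,  0.0, -1.0,  2.0]])
print(guyan_reduce(K, master=[0, 3]))  # 2x2 equivalent stiffness
```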
One technique for refining the global Earth gravity models
NASA Astrophysics Data System (ADS)
Koneshov, V. N.; Nepoklonov, V. B.; Polovnev, O. V.
2017-01-01
The results of theoretical and experimental research on a technique for refining global Earth geopotential models such as EGM2008 in the continental regions are presented. The discussed technique is based on high-resolution satellite data for the Earth's surface topography, which enables allowance for the fine structure of the Earth's gravitational field without additional gravimetry data. The experimental studies are conducted using the example of the new GGMplus global gravity model of the Earth with a resolution of about 0.5 km, which is obtained by expanding the EGM2008 model to degree 2190 with corrections for the topography calculated from the SRTM data. The GGMplus and EGM2008 models are compared with regional geoid models in 21 regions of North America, Australia, Africa, and Europe. The obtained estimates largely support the possibility of refining global geopotential models such as EGM2008 by the procedure implemented in GGMplus, particularly in regions with relatively high elevation differences.
Volta phase plate data collection facilitates image processing and cryo-EM structure determination.
von Loeffelholz, Ottilie; Papai, Gabor; Danev, Radostin; Myasnikov, Alexander G; Natchiar, S Kundhavai; Hazemann, Isabelle; Ménétret, Jean-François; Klaholz, Bruno P
2018-06-01
A current bottleneck in structure determination of macromolecular complexes by cryo electron microscopy (cryo-EM) is the large amount of data needed to obtain high-resolution 3D reconstructions, including through sorting into different conformations and compositions with advanced image processing. Additionally, it may be difficult to visualize small ligands that bind in sub-stoichiometric levels. Volta phase plates (VPP) introduce a phase shift in the contrast transfer and drastically increase the contrast of the recorded low-dose cryo-EM images while preserving high-frequency information. Here we present a comparative study to address the behavior of different data sets during image processing and quantify important parameters during structure refinement. The automated data collection was done from the same human ribosome sample either as a conventional defocus-range dataset or with a Volta phase plate close to focus (cfVPP) or with a small defocus (dfVPP). The analysis of image processing parameters shows that dfVPP data behave more robustly during cryo-EM structure refinement because particle alignments, Euler angle assignments and 2D & 3D classifications behave more stably and converge faster. In particular, fewer particle images are required to reach the same resolution in the 3D reconstructions. Finally, we find that defocus-range data collection is also applicable to VPP. This study shows that data processing and cryo-EM map interpretation, including atomic model refinement, are facilitated significantly by performing VPP cryo-EM, which will have an important impact on structural biology. Copyright © 2018 Elsevier Inc. All rights reserved.
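The contrast gain can be seen directly in the standard contrast transfer function, sketched below with an added phase-plate term; the textbook CTF form and the 300 kV parameter values are assumptions for illustration, not taken from this study, and sign conventions vary between packages.

```python
# Textbook-style CTF with an optional phase-plate shift: a VPP phase of
# ~pi/2 turns sine-like contrast into cosine-like, boosting low frequencies
# even near focus. Parameter values are illustrative (300 kV, Cs = 2.7 mm).
import numpy as np

def ctf(k, defocus_A, phase_shift=0.0, wavelength_A=0.0197, cs_mm=2.7):
    """CTF(k) = -sin(pi*lambda*dz*k^2 - (pi/2)*Cs*lambda^3*k^4 + phi)."""
    cs_A = cs_mm * 1e7
    chi = (np.pi * wavelength_A * defocus_A * k**2
           - 0.5 * np.pi * cs_A * wavelength_A**3 * k**4)
    return -np.sin(chi + phase_shift)

k = np.linspace(0.001, 0.05, 5)                     # spatial frequency (1/Å)
print(ctf(k, defocus_A=5000))                       # conventional defocus data
print(ctf(k, defocus_A=0, phase_shift=np.pi / 2))   # close-to-focus VPP data
```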
Gaudet, Andrew D; Ramer, Leanne M; Nakonechny, Joanne; Cragg, Jacquelyn J; Ramer, Matt S
2010-12-29
To improve science learning, science educators' teaching tools need to address two major criteria: teaching practice should mirror our current understanding of the learning process; and science teaching should reflect scientific practice. We designed a small-group learning (SGL) model for a fourth year university neurobiology course using these criteria and studied student achievement and attitude in five course sections encompassing the transition from individual work-based to SGL course design. All students completed daily quizzes/assignments involving analysis of scientific data and the development of scientific models. Students in individual work-based (Individualistic) sections usually worked independently on these assignments, whereas SGL students completed assignments in permanent groups of six. SGL students had significantly higher final exam grades than Individualistic students. The transition to the SGL model was marked by a notable increase in 10th percentile exam grade (Individualistic: 47.5%; Initial SGL: 60%; Refined SGL: 65%), suggesting SGL enhanced achievement among the least prepared students. We also studied student achievement on paired quizzes: quizzes were first completed individually and submitted, and then completed as a group and submitted. The group quiz grade was higher than the individual quiz grade of the highest achiever in each group over the term. All students--even term high achievers--could benefit from the SGL environment. Additionally, entrance and exit surveys demonstrated student attitudes toward SGL were more positive at the end of the Refined SGL course. We assert that SGL is uniquely-positioned to promote effective learning in the science classroom.
Mehl, Steffen W.; Hill, Mary C.
2011-01-01
This report documents modifications to the Streamflow-Routing Package (SFR2) to route streamflow through grids constructed using the multiple-refined-areas capability of shared node Local Grid Refinement (LGR) of MODFLOW-2005. MODFLOW-2005 is the U.S. Geological Survey modular, three-dimensional, finite-difference groundwater-flow model. LGR provides the capability to simulate groundwater flow by using one or more block-shaped, higher resolution local grids (child model) within a coarser grid (parent model). LGR accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundaries. Compatibility with SFR2 allows for streamflow routing across grids. LGR can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined groundwater systems.
Brunton, Ginny; Thomas, James; O'Mara-Eves, Alison; Jamal, Farah; Oliver, Sandy; Kavanagh, Josephine
2017-12-11
Government policy increasingly supports engaging communities to promote health. It is critical to consider whether such strategies are effective, for whom, and under what circumstances. However, 'community engagement' is defined in diverse ways and employed for different reasons. Considering the theory and context we developed a conceptual framework which informs understanding about what makes an effective (or ineffective) community engagement intervention. We conducted a systematic review of community engagement in public health interventions using: stakeholder involvement; searching, screening, appraisal and coding of research literature; and iterative thematic syntheses and meta-analysis. A conceptual framework of community engagement was refined, following interactions between the framework and each review stage. From 335 included reports, three products emerged: (1) two strong theoretical 'meta-narratives': one, concerning the theory and practice of empowerment/engagement as an independent objective; and a more utilitarian perspective optimally configuring health services to achieve defined outcomes. These informed (2) models that were operationalized in subsequent meta-analysis. Both refined (3) the final conceptual framework. This identified multiple dimensions by which community engagement interventions may differ. Diverse combinations of intervention purpose, theory and implementation were noted, including: ways of defining communities and health needs; initial motivations for community engagement; types of participation; conditions and actions necessary for engagement; and potential issues influencing impact. Some dimensions consistently co-occurred, leading to three overarching models of effective engagement which either: utilised peer-led delivery; employed varying degrees of collaboration between communities and health services; or built on empowerment philosophies. Our conceptual framework and models are useful tools for considering appropriate and effective approaches to community engagement. These should be tested and adapted to facilitate intervention design and evaluation. Using this framework may disentangle the relative effectiveness of different models of community engagement, promoting effective, sustainable and appropriate initiatives.
Kirk, R.L.; Howington-Kraus, E.; Hare, T.; Dorrer, E.; Cook, D.; Becker, K.; Thompson, K.; Redding, B.; Blue, J.; Galuszka, D.; Lee, E.M.; Gaddis, L.R.; Johnson, J. R.; Soderblom, L.A.; Ward, A.W.; Smith, P.H.; Britt, D.T.
1999-01-01
This paper describes our photogrammetric analysis of the Imager for Mars Pathfinder data, part of a broader program of mapping the Mars Pathfinder landing site in support of geoscience investigations. This analysis, carried out primarily with a commercial digital photogrammetric system, supported by our in-house Integrated Software for Imagers and Spectrometers (ISIS), consists of three steps: (1) geometric control: simultaneous solution for refined estimates of camera positions and pointing plus three-dimensional (3-D) coordinates of ∼10^3 features sitewide, based on the measured image coordinates of those features; (2) topographic modeling: identification of ∼3 × 10^5 closely spaced points in the images and calculation (based on camera parameters from step 1) of their 3-D coordinates, yielding digital terrain models (DTMs); and (3) geometric manipulation of the data: combination of the DTMs from different stereo pairs into a sitewide model, and reprojection of image data to remove parallax between the different spectral filters in the two cameras and to provide an undistorted planimetric view of the site. These processes are described in detail and example products are shown. Plans for combining the photogrammetrically derived topographic data with spectrophotometry are also described. These include photometric modeling using surface orientations from the DTM to study surface microtextures and improve the accuracy of spectral measurements, and photoclinometry to refine the DTM to single-pixel resolution where photometric properties are sufficiently uniform. Finally, the inclusion of rover images in a joint photogrammetric analysis with IMP images is described. This challenging task will provide coverage of areas hidden to the IMP, but accurate ranging of distant features can be achieved only if the lander is also visible in the rover image used. Copyright 1999 by the American Geophysical Union.
Orth, Ulrich; Robins, Richard W; Meier, Laurenz L; Conger, Rand D
2016-01-01
A growing body of research supports the vulnerability model of low self-esteem and depression, which states that low self-esteem is a risk factor for depression. The goal of the present research was to refine the vulnerability model, by testing whether the self-esteem effect is truly due to a lack of genuine self-esteem or due to a lack of narcissistic self-enhancement. For the analyses, we used data from 6 longitudinal studies consisting of 2,717 individuals. In each study, we tested the prospective effects of self-esteem and narcissism on depression both separately for each construct and mutually controlling the constructs for each other (i.e., a strategy that informs about effects of genuine self-esteem and pure narcissism), and then meta-analytically aggregated the findings. The results indicated that the effect of low self-esteem holds when narcissism is controlled for (uncontrolled effect = -.26, controlled effect = -.27). In contrast, the effect of narcissism was close to zero when self-esteem was controlled for (uncontrolled effect = -.06, controlled effect = .01). Moreover, the analyses suggested that the self-esteem effect is linear across the continuum from low to high self-esteem (i.e., the effect was not weaker at very high levels of self-esteem). Finally, self-esteem and narcissism did not interact in their effect on depression; that is, individuals with high self-esteem have a lower risk for developing depression, regardless of whether or not they are narcissistic. The findings have significant theoretical implications because they strengthen the vulnerability model of low self-esteem and depression. (c) 2016 APA, all rights reserved.
Progress and challenges in coupled hydrodynamic-ecological estuarine modeling.
Ganju, Neil K; Brush, Mark J; Rashleigh, Brenda; Aretxabaleta, Alfredo L; Del Barrio, Pilar; Grear, Jason S; Harris, Lora A; Lake, Samuel J; McCardell, Grant; O'Donnell, James; Ralston, David K; Signell, Richard P; Testa, Jeremy M; Vaudrey, Jamie M P
2016-03-01
Numerical modeling has emerged over the last several decades as a widely accepted tool for investigations in environmental sciences. In estuarine research, hydrodynamic and ecological models have moved along parallel tracks with regard to complexity, refinement, computational power, and incorporation of uncertainty. Coupled hydrodynamic-ecological models have been used to assess ecosystem processes and interactions, simulate future scenarios, and evaluate remedial actions in response to eutrophication, habitat loss, and freshwater diversion. The need to couple hydrodynamic and ecological models to address research and management questions is clear, because dynamic feedbacks between biotic and physical processes are critical interactions within ecosystems. In this review we present historical and modern perspectives on estuarine hydrodynamic and ecological modeling, consider model limitations, and address aspects of model linkage, skill assessment, and complexity. We discuss the balance between spatial and temporal resolution and present examples using different spatiotemporal scales. Finally, we recommend future lines of inquiry, approaches to balance complexity and uncertainty, and model transparency and utility. It is idealistic to think we can pursue a "theory of everything" for estuarine models, but recent advances suggest that models for both scientific investigations and management applications will continue to improve in terms of realism, precision, and accuracy.
A Mathematical Model for Railway Control Systems
NASA Technical Reports Server (NTRS)
Hoover, D. N.
1996-01-01
We present a general method for modeling safety aspects of railway control systems. Using our modeling method, one can progressively refine an abstract railway safety model, successively adding layers of detail about how a real system actually operates, while maintaining a safety property that refines the original abstract safety property. This method supports a top-down approach to the specification of railway control systems and to proof of a variety of safety-related properties. We demonstrate our method by proving safety of the classical block control system.
Modeling of the Coupling of Microstructure and Macrosegregation in a Direct Chill Cast Al-Cu Billet
NASA Astrophysics Data System (ADS)
Heyvaert, Laurent; Bedel, Marie; Založnik, Miha; Combeau, Hervé
2017-10-01
The macroscopic multiphase flow and the growth of the solidification microstructures in the mushy zone of a direct chill (DC) casting are closely coupled. These couplings are key to understanding the formation of macrosegregation and of the non-uniform microstructure of the casting. In the present paper we use a multiphase and multiscale model to provide a fully coupled picture of the links between macrosegregation and microstructure in a DC cast billet. The model describes nucleation from inoculant particles and growth of dendritic and globular equiaxed crystal grains, fully coupled with macroscopic transport phenomena: fluid flow induced by natural convection and solidification shrinkage; heat, mass, and solute transport; and motion of free-floating equiaxed grains and of grain refiner particles. We compare our simulations to experiments on grain-refined and non-grain-refined industrial-size billets from the literature. We show that a transition between dendritic and globular grain morphology triggered by the grain refinement is the key to explaining the differences between the macrosegregation patterns in the two billets. We further show that the grain size and morphology are strongly affected by the macroscopic transport of free-floating equiaxed grains and of grain refiner particles.
NASA Astrophysics Data System (ADS)
Choi, Ho-Gil; Shim, Moonsoo; Lee, Jong-Hyeon; Yi, Kyung-Woo
2017-09-01
The waste salt treatment process is required for the reuse of purified salts, and for the disposal of the fission products contained in waste salt during pyroprocessing. As an alternative to existing fission product separation methods, the horizontal zone refining process is used in this study for the purification of waste salt. In order to evaluate the purification ability of the process, a three-dimensional simulation is conducted, considering heat transfer, melt flow, and mass transfer. Impurity distributions and decontamination factors are calculated as a function of the heater traverse rate, by applying a subroutine and the equilibrium segregation coefficient derived from the effective segregation coefficients. For multipass cases, 1-D solutions and the effective segregation coefficient obtained from the three-dimensional simulation are used. Although the present study does not deal with crystal growth, the numerical technique is nearly the same: zone refining was introduced into the treatment of waste salt from the nuclear power industry because of its simplicity and refining ability. This study can therefore be seen as a new application of single-crystal growth techniques to another field, taking advantage of the multipass capability of zone refining. The final goal is to achieve the same high degree of decontamination in the waste salt as in the zone freezing (or reverse Bridgman) method.
NASA Astrophysics Data System (ADS)
Simon, R. E.; Wright, C.; Kwadiba, M. T. O.; Kgaswane, E. M.
2003-12-01
Average one-dimensional P and S wavespeed models from the surface to depths of 800 km were derived for the southern African region using travel times and waveforms from earthquakes recorded at stations of the Kaapvaal and South African seismic networks. The Herglotz-Wiechert method combined with ray tracing was used to derive a preliminary P wavespeed model, followed by refinements using phase-weighted stacking and synthetic seismograms to yield the final model. Travel times combined with ray tracing were used to derive the S wavespeed model, which was also refined using phase-weighted stacking and synthetic seismograms. We inferred the presence of a high-wavespeed upper mantle lid in the S model, overlying a low-wavespeed zone (LWZ) at around 210- to ˜345-km depth, that is not observed in the P wavespeed model. The 410-km discontinuity shows similar characteristics to that in other continental regions, but occurs slightly deeper at 420 km. Depletion of iron and/or enrichment in aluminium relative to other regions are the preferred explanation, since the P wavespeeds throughout the transition zone are slightly higher than average. The average S wavespeed structure beneath southern Africa within and below the transition zone is similar to that of the IASP91 model. There is no evidence for a discontinuity at 520-km depth. The 660-km discontinuity also appears to be slightly deeper than average (668 km), although the estimated thickness of the transition zone is 248 km, similar to the global average of 241 km. The small size of the 660-km discontinuity for P waves, compared with many other regions, suggests that interpretation of the discontinuity as the transformation of spinel to perovskite and magnesiowüstite may require modification. Alternative explanations include the presence of garnetite-rich material or ilmenite-forming phase transformations above the 660-km discontinuity, and the garnet-perovskite transformation as the discontinuity.
Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Leng, W.; Zhong, S.
2008-12-01
In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. New techniques, adaptive mesh refinement (AMR), allow local mesh refinement wherever high resolution is needed, while leaving other regions with relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement AMR techniques in our 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements both for isoviscous cases (i.e. 7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (i.e. 14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method into the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results in van Keken et al. [1997] with far fewer elements and tracers compared with uniform-mesh models (i.e. 7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can be easily refined to the scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to precisely trace their evolution. Our AMR code is thus well suited to thermal-chemical convection problems that require high resolution to resolve the evolution of chemical boundaries, such as the entrainment problems [Sleep, 1988].
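The abstract does not spell out its refinement criterion, so as a minimal sketch of the flagging step common to AMR codes, the following Python snippet (an illustrative assumption, not the octree scheme of Tu et al. used by the authors) marks cells for subdivision wherever the local temperature gradient is steep:

    import numpy as np

    def refine_flags(temp, dx, grad_tol):
        """Flag cells whose temperature-gradient magnitude exceeds grad_tol.

        temp : 2-D array of cell-centred temperatures at the current level
        dx   : grid spacing at this level
        Returns a boolean mask of cells to subdivide; a lower 'coarsen'
        tolerance applied to the complement could mark cells to merge.
        """
        gy, gx = np.gradient(temp, dx)
        return np.hypot(gx, gy) > grad_tol

    # Example: a thermal boundary layer triggers refinement only near y = 0.
    y = np.linspace(0.0, 1.0, 65)
    temp = np.tile(np.exp(-20.0 * y)[:, None], (1, 65))
    mask = refine_flags(temp, dx=1.0 / 64, grad_tol=5.0)
    print(mask.sum(), "of", mask.size, "cells flagged")

In a real AMR cycle this flagging would run every few time steps, with tracers added or removed per cell as the abstract describes.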
NASA Astrophysics Data System (ADS)
Ebrahimi, Farzad; Barati, Mohammad Reza
2017-12-01
This paper develops a higher-order refined beam model with a parabolic shear strain function for vibration analysis of porous nanocrystalline nanobeams based on nonlocal couple stress theory. The nanocrystalline nanobeam is composed of three phases: nano-grains, nano-voids and an interface. Nano-voids, or porosities, inside the material have a stiffness-softening impact on the nanobeam. The nonlocal elasticity theory of Eringen is applied to the analysis of nanocrystalline nanobeams for the first time. Also, modified couple stress theory is employed to capture the rigid rotations of grains. The governing equations obtained from Hamilton's principle are solved by an analytical approach which satisfies various boundary conditions. The reliability of the present approach is verified by comparing the obtained results with those provided in the literature. Finally, the influences of the nonlocal parameter, couple stress, grain size, porosities and shear deformation on the vibration characteristics of nanocrystalline nanobeams are explored.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mushkatel, A.H.; Conway, S.; Navis, I.
2006-07-01
This paper focuses on the difficulties of projecting fiscal impacts to public safety agencies from the proposed high-level nuclear waste repository at Yucca Mountain, Nevada. The efforts made by Clark County, Nevada, to develop a fiscal model of impacts for public safety agencies are described in this paper. Some of the difficulties in constructing a fiscal model of impacts for the entire 24-year high-level nuclear waste transportation shipping campaign are identified, and a refined methodology is provided to accomplish this task. Finally, a comparison of the fiscal impact projections for public safety agencies that Clark County developed in 2001 with those done in 2005 is discussed, and the fiscal impact cost projections for the entire 24-year transportation campaign are provided.
Segmentation of the pectoral muscle in breast MR images using structure tensor and deformable model
NASA Astrophysics Data System (ADS)
Lee, Myungeun; Kim, Jong Hyo
2012-02-01
Recently, breast MR images have been used in a growing range of clinical applications, including diagnosis, treatment planning, and treatment response evaluation, which calls for quantitative analysis and breast tissue segmentation. Although several methods have been proposed for segmenting MR images, robustly separating breast tissues from surrounding structures across a wide range of anatomical diversity remains challenging. Therefore, in this paper, we propose a practical and general-purpose approach for segmenting the pectoral muscle boundary based on the structure tensor and a deformable model. The segmentation work flow comprises four key steps: preprocessing, detection of the region of interest (ROI) within the breast region, segmentation of the pectoral muscle, and finally extraction and refinement of the pectoral muscle boundary. Experimental results show that the proposed method can segment the pectoral muscle robustly in diverse patient cases. In addition, the proposed method will allow quantification research to be applied to various breast images.
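The structure tensor is the one concretely algorithmic ingredient named in the abstract; as a minimal sketch (with a generic smoothing scale and coherence measure, not necessarily the authors' choices), it can be computed for a 2-D image as follows:

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def structure_tensor(img, sigma=2.0):
        """2-D structure tensor: smoothed outer product of the image gradient."""
        ix = sobel(img, axis=1, mode='reflect')
        iy = sobel(img, axis=0, mode='reflect')
        jxx = gaussian_filter(ix * ix, sigma)
        jxy = gaussian_filter(ix * iy, sigma)
        jyy = gaussian_filter(iy * iy, sigma)
        return jxx, jxy, jyy

    def coherence(jxx, jxy, jyy):
        """Eigenvalue-based coherence: near 1 along sheet-like structures
        such as the pectoral muscle boundary, near 0 in isotropic regions."""
        tr = jxx + jyy
        det = jxx * jyy - jxy ** 2
        tmp = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
        l1, l2 = tr / 2.0 + tmp, tr / 2.0 - tmp
        return (l1 - l2) / np.maximum(l1 + l2, 1e-12)

A deformable model initialized in the ROI can then be attracted to ridges of high coherence to extract and refine the muscle boundary.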
Assimilation of NUCAPS Retrieved Profiles in GSI for Unique Forecasting Applications
NASA Technical Reports Server (NTRS)
Berndt, Emily Beth; Zavodsky, Bradley; Srikishen, Jayanthi; Blankenship, Clay
2015-01-01
Hyperspectral IR profiles can be assimilated in GSI as an observation type separate from radiosondes, with only changes to tables in the fix directory. Assimilation of profiles does produce changes to the analysis fields, as evidenced by the following: innovations larger than +/-2.0 K are present and show where individual profiles impact the final temperature analysis; the updated temperature analysis is colder behind the cold front and warmer in the warm sector; and the updated moisture analysis is modified more in the low levels and tends to be drier than the original model background. Analysis of model output shows that differences relative to 13-km RAP analyses are smaller when profiles are assimilated with NUCAPS errors, and that CAPE is under-forecasted when assimilating NUCAPS profiles, which could be problematic for severe weather forecasting. Refining the assimilation technique to incorporate an error covariance matrix and creating a separate GSI module to assimilate satellite profiles may improve results.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-07
... and refined analyses of data, producing observational findings, and creating other sources of research... Services announces priorities and definitions for the Disability and Rehabilitation Research Projects and Centers Program administered by the National Institute on Disability and Rehabilitation Research (NIDRR...
78 FR 46677 - Environmental Impact Statement; Calcasieu Parish, LA
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-01
... between the I-210 interchanges including the Calcasieu River Bridge. A feasibility and environmental study... project. The feasibility study involved four phases: (1) Information and Data Gathering; (2) Preliminary Study; (3) Refined Alternatives; and (4) Preparation and Submission of a Final Report. Based on the...
Diffraction-geometry refinement in the DIALS framework
Waterman, David G.; Winter, Graeme; Gildea, Richard J.; ...
2016-03-30
Rapid data collection and modern computing resources provide the opportunity to revisit the task of optimizing the model of diffraction geometry prior to integration. A comprehensive description is given of new software that builds upon established methods by performing a single global refinement procedure, utilizing a smoothly varying model of the crystal lattice where appropriate. This global refinement technique extends to multiple data sets, providing useful constraints to handle the problem of correlated parameters, particularly for small wedges of data. Examples of advanced uses of the software are given and the design is explained in detail, with particular emphasis on the flexibility and extensibility it entails.
NASA Astrophysics Data System (ADS)
Geža, V.; Venčels, J.; Zāģeris, Ģ.; Pavlovs, S.
2018-05-01
One of the most promising methods to produce SoG-Si is refinement via the metallurgical route. The most critical part of this route is refinement from boron and phosphorus; therefore, the approach under development addresses this problem. An approach of creating surface waves on the silicon melt's surface is proposed in order to enlarge its area and accelerate the removal of boron via chemical reactions and the evaporation of phosphorus. A two-dimensional numerical model is created which includes the coupling of electromagnetic and fluid dynamic simulations with free-surface dynamics. First results show behaviour similar to experimental results from the literature.
Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
1999-01-01
The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator, since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.
Humanoid Mobile Manipulation Using Controller Refinement
NASA Technical Reports Server (NTRS)
Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric; Brock, Oliver
2006-01-01
An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. This paper proposes that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.
NASA Astrophysics Data System (ADS)
Chou, Tzu-Ting; Chen, Wei-Yu; Fleshman, Collin Jordon; Duh, Jenq-Gong
2018-03-01
A fine-grain structure with random orientations of lead-free solder joints was successfully obtained in this study. Sn-Ag-Cu solder alloys doped with minor Ni were reflowed with Ni-based or Cu-based substrates to fabricate joints containing different Ni content. Adding 0.1 wt.% Ni into the solder effectively promoted the formation of fine Sn grains, and reflowing with Ni-based substrates further enhanced the effects of β-Sn grain refinement. The crystallographic characteristics and the microstructures were analyzed to identify the solidification mechanism of the different types of microstructure in the joints. The phase precipitation order in the joint altered as the solder composition was modified by elemental doping and the choice of substrate, which significantly affected the efficiency of grain refinement and the final grain structure. The formation mechanism of fine β-Sn grains in the Ni-doped joint with a Ni-based substrate is attributable to heterogeneous nucleation by Ni, whereas the Ni in the joint using a Cu-based substrate is consumed to form an intermetallic compound, thus retarding the effect of grain refining.
Research-Based Program Development: Refining the Service Model for a Geographic Alliance
ERIC Educational Resources Information Center
Rutherford, David J.; Lovorn, Carley
2018-01-01
Research conducted in 2013 identified the perceptions that K-12 teachers and administrators hold with respect to: (1) the perceived needs in education, (2) the professional audiences that are most important to reach, and (3) the service models that are most effective. The specific purpose of the research was to refine and improve the services that…
A method to estimate statistical errors of properties derived from charge-density modelling
Lecomte, Claude
2018-01-01
Estimating uncertainties of property values derived from a charge-density model is not straightforward. A methodology, based on calculation of sample standard deviations (SSD) of properties using randomly deviating charge-density models, is proposed with the MoPro software. The parameter shifts applied in the deviating models are generated in order to respect the variance–covariance matrix issued from the least-squares refinement. This ‘SSD methodology’ procedure can be applied to estimate uncertainties of any property related to a charge-density model obtained by least-squares fitting. This includes topological properties such as critical point coordinates, electron density, Laplacian and ellipticity at critical points and charges integrated over atomic basins. Errors on electrostatic potentials and interaction energies are also available now through this procedure. The method is exemplified with the charge density of compound (E)-5-phenylpent-1-enylboronic acid, refined at 0.45 Å resolution. The procedure is implemented in the freely available MoPro program dedicated to charge-density refinement and modelling. PMID:29724964
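The "SSD methodology" lends itself to a compact sketch: draw parameter sets from a multivariate normal distribution defined by the refined values and their variance-covariance matrix, recompute the derived property for each draw, and take the sample standard deviation. In the Python sketch below, derived_property is a hypothetical stand-in for, e.g., a topological property evaluated by MoPro:

    import numpy as np

    def ssd_uncertainty(p_hat, cov, derived_property, n_samples=1000, seed=0):
        """Sample standard deviation of a property derived from a model.

        p_hat            : least-squares estimates of the model parameters
        cov              : variance-covariance matrix from the refinement
        derived_property : callable mapping a parameter vector to a scalar
        """
        rng = np.random.default_rng(seed)
        draws = rng.multivariate_normal(p_hat, cov, size=n_samples)
        values = np.array([derived_property(p) for p in draws])
        return values.mean(), values.std(ddof=1)

    # Toy usage: the "property" is a nonlinear function of two parameters.
    mean, ssd = ssd_uncertainty(np.array([1.0, 2.0]),
                                np.array([[0.010, 0.002],
                                          [0.002, 0.040]]),
                                lambda p: p[0] * np.exp(-p[1]))
    print(mean, ssd)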
2016-09-01
Refinement of Out of Circularity and Thickness Measurements of a Cylinder for Finite Element Analysis ... significant effect on the collapse strength and must be accurately represented in finite element analysis to obtain accurate results. Often it is necessary ... to interpolate measurements from a relatively coarse grid to a refined finite element model, and methods that have wide general acceptance are
Zhang, Yang
2014-01-01
We develop and test a new pipeline in CASP10 to predict protein structures based on an interplay of I-TASSER and QUARK for both free-modeling (FM) and template-based modeling (TBM) targets. The most noteworthy observation is that sorting through the threading template pool using the QUARK-based ab initio models as probes allows the detection of distant-homology templates which might be ignored by the traditional sequence profile-based threading alignment algorithms. Further template assembly refinement by I-TASSER resulted in successful folding of two medium-sized FM targets with >150 residues. For TBM, the multiple threading alignments from LOMETS are, for the first time, incorporated into the ab initio QUARK simulations, which were further refined by I-TASSER assembly refinement. Compared with the traditional threading assembly refinement procedures, the inclusion of the threading-constrained ab initio folding models can consistently improve the quality of the full-length models as assessed by the GDT-HA and hydrogen-bonding scores. Despite the success, significant challenges still exist in domain boundary prediction and consistent folding of medium-size proteins (especially beta-proteins) for nonhomologous targets. Further developments of sensitive fold-recognition and ab initio folding methods are critical for solving these problems. PMID:23760925
NASA Technical Reports Server (NTRS)
Lee-Rausch, E. M.; Park, M. A.; Jones, W. T.; Hammond, D. P.; Nielsen, E. J.
2005-01-01
This paper demonstrates the extension of error estimation and adaptation methods to parallel computations, enabling larger, more realistic aerospace applications and the quantification of discretization errors for complex 3-D solutions. Results are shown for an inviscid sonic-boom prediction about a double-cone configuration and a wing/body segmented leading edge (SLE) configuration, where the output function of the adjoint was pressure integrated over a part of the cylinder in the near field. After multiple cycles of error estimation and surface/field adaptation, a significant improvement in the inviscid solution for the sonic boom signature of the double cone was observed. Although the double-cone adaptation was initiated from a very coarse mesh, the near-field pressure signature from the final adapted mesh compared very well with the wind-tunnel data, which illustrates that the adjoint-based error estimation and adaptation process requires no a priori refinement of the mesh. Similarly, the near-field pressure signature for the SLE wing/body sonic boom configuration showed a significant improvement from the initial coarse mesh to the final adapted mesh in comparison with the wind-tunnel results. Error estimation and field adaptation results are also presented for the viscous transonic drag prediction of the DLR-F6 wing/body configuration, and results are compared to a series of globally refined meshes. Two of these globally refined meshes were used as a starting point for the error estimation and field-adaptation process, where the output function for the adjoint was the total drag. The field-adapted results showed an improvement in the prediction of the drag in comparison with the finest globally refined mesh and a reduction in the estimate of the remaining drag error. The adjoint-based adaptation parameter showed a need for increased resolution on the surface of the wing/body as well as a need for wake resolution downstream of the fuselage and wing trailing edge in order to achieve the requested drag tolerance. Although further adaptation was required to meet the requested tolerance, no further cycles were computed in order to avoid large discrepancies between the surface mesh spacing and the refined field spacing.
Deutsch, Maxime; Claiser, Nicolas; Pillet, Sébastien; Chumakov, Yurii; Becker, Pierre; Gillet, Jean Michel; Gillon, Béatrice; Lecomte, Claude; Souhassou, Mohamed
2012-11-01
New crystallographic tools were developed to access a more precise description of the spin-dependent electron density of magnetic crystals. The method combines experimental information coming from high-resolution X-ray diffraction (XRD) and polarized neutron diffraction (PND) in a unified model. A new algorithm that allows for a simultaneous refinement of the charge- and spin-density parameters against XRD and PND data is described. The resulting software MOLLYNX is based on the well known Hansen-Coppens multipolar model, and makes it possible to differentiate the electron spins. This algorithm is validated and demonstrated with a molecular crystal formed by a bimetallic chain, MnCu(pba)(H(2)O)(3)·2H(2)O, for which XRD and PND data are available. The joint refinement provides a more detailed description of the spin density than the refinement from PND data alone.
Xu, Hong-Ping; Burbridge, Timothy J.; Ye, Meijun; Chen, Minggang; Ge, Xinxin; Zhou, Z. Jimmy
2016-01-01
Retinal waves are correlated bursts of spontaneous activity whose spatiotemporal patterns are critical for early activity-dependent circuit elaboration and refinement in the mammalian visual system. Three separate developmental wave epochs or stages have been described, but the mechanism(s) of pattern generation of each and their distinct roles in visual circuit development remain incompletely understood. We used neuroanatomical, in vitro and in vivo electrophysiological, and optical imaging techniques in genetically manipulated mice to examine the mechanisms of wave initiation and propagation and the role of wave patterns in visual circuit development. Through deletion of β2 subunits of nicotinic acetylcholine receptors (β2-nAChRs) selectively from starburst amacrine cells (SACs), we show that mutual excitation among SACs is critical for Stage II (cholinergic) retinal wave propagation, supporting models of wave initiation and pattern generation from within a single retinal cell type. We also demonstrate that β2-nAChRs in SACs, and normal wave patterns, are necessary for eye-specific segregation. Finally, we show that Stage III (glutamatergic) retinal waves are not themselves necessary for normal eye-specific segregation, but elimination of both Stage II and Stage III retinal waves dramatically disrupts eye-specific segregation. This suggests that persistent Stage II retinal waves can adequately compensate for Stage III retinal wave loss during the development and refinement of eye-specific segregation. These experiments confirm key features of the “recurrent network” model for retinal wave propagation and clarify the roles of Stage II and Stage III retinal wave patterns in visual circuit development. SIGNIFICANCE STATEMENT Spontaneous activity drives early mammalian circuit development, but the initiation and patterning of activity vary across development and among modalities. Cholinergic “retinal waves” are initiated in starburst amacrine cells and propagate to retinal ganglion cells and higher-order visual areas, but the mechanism responsible for creating their unique and critical activity pattern is incompletely understood. We demonstrate that cholinergic wave patterns are dictated by recurrent connectivity within starburst amacrine cells, and retinal ganglion cells act as “readouts” of patterned activity. We also show that eye-specific segregation occurs normally without glutamatergic waves, but elimination of both cholinergic and glutamatergic waves completely disrupts visual circuit development. These results suggest that each retinal wave pattern during development is optimized for concurrently refining multiple visual circuits. PMID:27030771
Refined views of multi-protein complexes in the erythrocyte membrane
Mankelow, TJ; Satchwell, TJ; Burton, NM
2015-01-01
The erythrocyte membrane has been extensively studied, both as a model membrane system and to investigate its role in gas exchange and transport. Much is now known about the protein components of the membrane, how they are organised into large multi-protein complexes and how they interact with each other within these complexes. Many links between the membrane and the cytoskeleton have also been delineated and have been demonstrated to be crucial for maintaining the deformability and integrity of the erythrocyte. In this study we have refined previous, highly speculative molecular models of these complexes by including the available data pertaining to known protein-protein interactions. While the refined models remain highly speculative, they provide an evolving framework for visualisation of these important cellular structures at the atomic level. PMID:22465511
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.
Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the “generalist” (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and “hedging” against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. Furthermore, this flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models.
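As a toy illustration of what a coarse-grained allocation constraint adds to a constraint-based model, the linear program below caps a weighted sum of fluxes by a proteome sector budget; the stoichiometry, per-flux proteome costs, and budget are invented for the example and are far simpler than a genome-scale ME model:

    import numpy as np
    from scipy.optimize import linprog

    # Toy flux-balance problem: maximize growth flux v3 subject to S v = 0,
    # flux bounds, and a sector-allocation constraint  w . v <= phi.
    S = np.array([[1, -1,  0],
                  [0,  1, -1]])         # 2 metabolites x 3 fluxes (assumed)
    c = np.array([0.0, 0.0, -1.0])      # linprog minimizes, so maximize v3
    w = np.array([0.1, 0.3, 0.5])       # proteome cost per unit flux (assumed)
    phi = 0.4                           # sector budget from proteomics (assumed)

    res = linprog(c, A_ub=[w], b_ub=[phi], A_eq=S, b_eq=np.zeros(2),
                  bounds=[(0.0, 10.0)] * 3)
    print(res.x)  # growth is limited by the proteome budget, not by the bounds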
NASA Astrophysics Data System (ADS)
Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli
2018-01-01
Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
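A generic two-stage variable-fidelity surrogate in the spirit of hierarchical kriging (but not the authors' ASM-IHK code) can be sketched as a polynomial-scaled low-fidelity trend plus a kriging correction on the residual; the two test functions and sample counts below are invented for illustration:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def f_lo(x):                  # cheap, biased low-fidelity model (assumed)
        return 0.5 * np.sin(8 * x) + 0.4 * x

    def f_hi(x):                  # expensive high-fidelity model (assumed)
        return np.sin(8 * x) + x

    x_hi = np.linspace(0.0, 1.0, 6)[:, None]   # few expensive samples

    # Stage 1: scale f_lo by a polynomial trend rho(x) = c0 + c1 x + c2 x^2.
    basis = lambda x: np.hstack([x**0, x, x**2]) * f_lo(x)
    coef, *_ = np.linalg.lstsq(basis(x_hi), f_hi(x_hi).ravel(), rcond=None)
    trend = lambda x: basis(x) @ coef

    # Stage 2: kriging (GP) correction fitted to the high-fidelity residual.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-8)
    gp.fit(x_hi, f_hi(x_hi).ravel() - trend(x_hi))

    f_vf = lambda x: trend(x) + gp.predict(x)  # variable-fidelity prediction
    x_test = np.linspace(0.0, 1.0, 5)[:, None]
    print(np.c_[f_vf(x_test), f_hi(x_test).ravel()])

An adaptive sampler such as ASM-IHK would then place the next high-fidelity sample where the predictor is least trustworthy, for example at the maximum of the kriging variance.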
Pharmacophore modeling, virtual screening and molecular docking of ATPase inhibitors of HSP70.
Sangeetha, K; Sasikala, R P; Meena, K S
2017-10-01
Heat shock protein 70 is an attractive anticancer target as it influences many signaling pathways. Hence this study investigated the important pharmacophore features required for ATPase inhibitors of HSP70 by generating a ligand-based pharmacophore model, followed by virtual screening and subsequent validation by molecular docking in Discovery Studio V4.0. The most predictive pharmacophore model (hypothesis 8) consisted of four hydrogen bond acceptors. Further validation by external test set prediction identified 200 hits from Mini Maybridge, Drug Diverse and SCPDB compounds and phytochemicals. Consequently, the screened compounds were refined by the rule of five, ADMET and molecular docking to retain the best competitive hits. Finally, the phytochemical compounds Muricatetrocin B, Diacetylphiladelphicalactone C, Eleutheroside B and 5-(3-{[1-(benzylsulfonyl)piperidin-4-yl]amino}phenyl)-4-bromo-3-(carboxymethoxy)thiophene-2-carboxylic acid were identified as leads to inhibit the ATPase activity of HSP70 and can thus be proposed for further in vitro and in vivo evaluation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Modeling and fabrication of 4H-SiC Schottky junction
NASA Astrophysics Data System (ADS)
Martychowiec, A.; Pedryc, A.; Kociubiński, A.
2017-08-01
The rapidly growing demand for electronic devices requires the use of alternative semiconductor materials that could replace conventional silicon. Silicon carbide has been proposed for harsh-environment applications (high-temperature, high-voltage, high-power conditions) because of its wide bandgap, its high-temperature operation ability, its excellent thermal and chemical stability, and its high breakdown electric field strength. The Schottky barrier diode (SBD) is known as one of the most refined SiC devices. This paper presents the model, simulations and a description of the technology of a 4H-SiC Schottky junction, as well as characterization of the fabricated structures. The intended application of the structures is optical detection of ultraviolet radiation. The model section contains a comparison of two different SBD constructions. Simulations, a crucial step in designing electronic devices, were performed using the ATLAS device simulator of the Silvaco TCAD software. Finally, the paper shows the I-V characteristics of the fabricated diodes.
Using Inverse Problem Methods with Surveillance Data in Pneumococcal Vaccination
Sutton, Karyn L.; Banks, H. T.; Castillo-Chavez, Carlos
2010-01-01
The design and evaluation of epidemiological control strategies is central to public health policy. While inverse problem methods are routinely used in many applications, this remains an area in which their use is relatively rare, although their potential impact is great. We describe methods particularly relevant to epidemiological modeling at the population level. These methods are then applied to the study of pneumococcal vaccination strategies as a relevant example, one which poses many challenges common to other infectious diseases. We demonstrate that relevant yet typically unknown parameters may be estimated, and show that a calibrated model may be used to assess implemented vaccine policies through the estimation of parameters if vaccine history is recorded along with infection and colonization information. Finally, we show how one might determine an appropriate level of refinement or aggregation in the age-structured model given age-stratified observations. These results illustrate ways in which the collection and analysis of surveillance data can be improved using inverse problem methods. PMID:20209093
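As a minimal sketch of the inverse-problem workflow (on a toy SIS-type colonization model, not the paper's pneumococcal model), one can simulate "surveillance data" and then recover the unknown transmission and clearance rates by least squares:

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def model(t, y, beta, gamma):
        """Toy SIS dynamics; beta and gamma are the unknown parameters."""
        s, i = y
        return [-beta * s * i + gamma * i, beta * s * i - gamma * i]

    def residuals(theta, t_obs, i_obs):
        beta, gamma = theta
        sol = solve_ivp(model, (t_obs[0], t_obs[-1]), [0.99, 0.01],
                        t_eval=t_obs, args=(beta, gamma))
        return sol.y[1] - i_obs

    # Synthetic "surveillance" data from known parameters plus noise.
    t_obs = np.linspace(0.0, 30.0, 16)
    i_true = solve_ivp(model, (0.0, 30.0), [0.99, 0.01],
                       t_eval=t_obs, args=(0.5, 0.1)).y[1]
    i_obs = i_true + 0.005 * np.random.default_rng(1).standard_normal(16)

    fit = least_squares(residuals, x0=[0.3, 0.05], args=(t_obs, i_obs))
    print(fit.x)  # estimated (beta, gamma), close to (0.5, 0.1)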
DISSECTING OCD CIRCUITS: FROM ANIMAL MODELS TO TARGETED TREATMENTS.
Ahmari, Susanne E; Dougherty, Darin D
2015-08-01
Obsessive-compulsive disorder (OCD) is a chronic, severe mental illness with up to 2-3% prevalence worldwide. In fact, OCD has been classified as one of the world's 10 leading causes of illness-related disability according to the World Health Organization, largely because of the chronic nature of disabling symptoms.([1]) Despite the severity and high prevalence of this chronic and disabling disorder, there is still relatively limited understanding of its pathophysiology. However, this is now rapidly changing due to development of powerful technologies that can be used to dissect the neural circuits underlying pathologic behaviors. In this article, we describe recent technical advances that have allowed neuroscientists to start identifying the circuits underlying complex repetitive behaviors using animal model systems. In addition, we review current surgical and stimulation-based treatments for OCD that target circuit dysfunction. Finally, we discuss how findings from animal models may be applied in the clinical arena to help inform and refine targeted brain stimulation-based treatment approaches. © 2015 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar
2016-01-01
This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all 4 variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.
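The sequential refinement loop can be sketched compactly: estimate first-order sensitivity indices, refine only the top-ranked epistemic variable, and repeat. In the sketch below, system_model is a hypothetical stand-in for the GTM performance metric, the pick-freeze Sobol estimator is a crude generic choice, and the Bayesian update performed at each step in the paper is reduced to a comment:

    import numpy as np

    rng = np.random.default_rng(2)

    def system_model(x):
        # Hypothetical response; stands in for the GTM performance metric.
        return x[..., 0]**2 + 2.0*x[..., 1] + 0.1*x[..., 2]*x[..., 3]

    def first_order_sobol(model, n_vars, n=20000):
        """Crude first-order Sobol indices via a pick-freeze estimator."""
        a = rng.uniform(size=(n, n_vars))
        b = rng.uniform(size=(n, n_vars))
        ya = model(a)
        s = np.empty(n_vars)
        for i in range(n_vars):
            ab = b.copy()
            ab[:, i] = a[:, i]         # share only variable i with sample a
            s[i] = np.cov(ya, model(ab))[0, 1] / ya.var()
        return s

    refined = []
    for _ in range(4):                  # refine one variable per iteration
        s = first_order_sobol(system_model, 4)
        s[refined] = -np.inf            # exclude already-refined variables
        refined.append(int(np.argmax(s)))
        # ...Bayesian inference would now update the chosen variable's
        # uncertainty model before sensitivities are recomputed...
    print(refined)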
SFESA: a web server for pairwise alignment refinement by secondary structure shifts.
Tong, Jing; Pei, Jimin; Grishin, Nick V
2015-09-03
Protein sequence alignment is essential for a variety of tasks such as homology modeling and active site prediction. Alignment errors remain the main cause of low-quality structure models. A bioinformatics tool to refine alignments is needed to make protein alignments more accurate. We developed the SFESA web server to refine pairwise protein sequence alignments. Compared to the previous version of SFESA, which required a set of 3D coordinates for a protein, the new server will search a sequence database for the closest homolog with an available 3D structure to be used as a template. For each alignment block defined by secondary structure elements in the template, SFESA evaluates alignment variants generated by local shifts and selects the best-scoring alignment variant. A scoring function that combines the sequence score of profile-profile comparison and the structure score of template-derived contact energy is used for evaluation of alignments. PROMALS pairwise alignments refined by SFESA are more accurate than those produced by current advanced alignment methods such as HHpred and CNFpred. In addition, SFESA also improves alignments generated by other software. SFESA is a web-based tool for alignment refinement, designed for researchers to compute, refine, and evaluate pairwise alignments with a combined sequence and structure scoring of alignment blocks. To our knowledge, the SFESA web server is the only tool that refines alignments by evaluating local shifts of secondary structure elements. The SFESA web server is available at http://prodata.swmed.edu/sfesa.
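The core loop suggested by the abstract can be sketched generically: for each template secondary-structure block, score local shifts of the aligned query segment and keep the best. The score callable below abstracts SFESA's combined profile-profile and contact-energy scoring, which is not reproduced here:

    def refine_block(query, template, start, end, score, max_shift=2):
        """Try local shifts of one alignment block; return the best one.

        query, template : aligned sequences of equal length ('-' for gaps)
        start, end      : block boundaries from the template's secondary
                          structure elements
        score           : callable scoring a (query_block, template_block) pair
        """
        best_shift = 0
        best_score = score(query[start:end], template[start:end])
        for shift in range(-max_shift, max_shift + 1):
            s, e = start + shift, end + shift
            if shift == 0 or s < 0 or e > len(query):
                continue
            cand = score(query[s:e], template[start:end])
            if cand > best_score:
                best_shift, best_score = shift, cand
        return best_shift, best_score

    # Toy score: fraction of identical residues (SFESA's real score is richer).
    identity = lambda q, t: sum(a == b for a, b in zip(q, t)) / len(t)
    print(refine_block("ACDEFGHIK", "ACDEFGHIK", 2, 6, identity))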
Plasma Vehicle Charging Analysis for Orion Flight Test 1
NASA Technical Reports Server (NTRS)
Lallement, L.; McDonald, T.; Norgard, J.; Scully, B.
2014-01-01
In preparation for the upcoming experimental test flight for the Orion crew module, considerable interest was raised over the possibility of exposure to elevated levels of plasma activity and vehicle charging both externally on surfaces and internally on dielectrics during the flight test orbital operations. Initial analysis using NASCAP-2K indicated very high levels of exposure, and this generated additional interest in refining/defining the plasma and spacecraft models used in the analysis. This refinement was pursued, resulting in the use of specific AE8 and AP8 models, rather than SCATHA models, as well as consideration of flight trajectory, time duration, and other parameters possibly affecting the levels of exposure and the magnitude of charge deposition. Analysis using these refined models strongly indicated that, for flight test operations, no special surface coatings were necessary for the thermal protection system, but would definitely be required for future GEO, trans-lunar, and extra-lunar missions...
Plasma Vehicle Charging Analysis for Orion Flight Test 1
NASA Technical Reports Server (NTRS)
Scully, B.; Norgard, J.
2015-01-01
In preparation for the upcoming experimental test flight for the Orion crew module, considerable interest was raised over the possibility of exposure to elevated levels of plasma activity and vehicle charging both externally on surfaces and internally on dielectrics during the flight test orbital operations. Initial analysis using NASCAP-2K indicated very high levels of exposure, and this generated additional interest in refining/defining the plasma and spacecraft models used in the analysis. This refinement was pursued, resulting in the use of specific AE8 and AP8 models, rather than SCATHA models, as well as consideration of flight trajectory, time duration, and other parameters possibly affecting the levels of exposure and the magnitude of charge deposition. Analysis using these refined models strongly indicated that, for flight test operations, no special surface coatings were necessary for the Thermal Protection System (TPS), but would definitely be required for future GEO, trans-lunar, and extra-lunar missions.
Zheng, Zhong-liang; Zuo, Zhen-yu; Liu, Zhi-gang; Tsai, Keng-chang; Liu, Ai-fu; Zou, Guo-lin
2005-01-01
A three-dimensional structural model of nattokinase (NK) from Bacillus natto was constructed by homology modeling. High-resolution X-ray structures of Subtilisin BPN' (SB), Subtilisin Carlsberg (SC), Subtilisin E (SE) and Subtilisin Savinase (SS), four proteins with sequence, structural and functional homology, were used as templates. Initial models of NK were built by MODELLER and analyzed by the PROCHECK programs. The best-quality model was chosen for further refinement by constrained molecular dynamics simulations. The overall quality of the refined model was evaluated. The refined model NKC1 was analyzed by different protein analysis programs, including PROCHECK for evaluation of Ramachandran plot quality, PROSA for testing interaction energies and WHATIF for calculation of packing quality. This structure was found to be satisfactory and also stable at room temperature, as demonstrated by a 300-ps unconstrained molecular dynamics (MD) simulation. Further docking analysis suggested a new nucleophilic catalytic mechanism for NK, induced by attack of hydroxyl groups abundant in the catalytic environment and by the positioning of S221.
Refining the treatment of membrane proteins by coarse-grained models.
Vorobyov, Igor; Kim, Ilsoo; Chu, Zhen T; Warshel, Arieh
2016-01-01
Obtaining a quantitative description of membrane protein stability is crucial for understanding many biological processes, yet progress in this direction has remained a major challenge for both experimental studies and molecular modeling. One possible direction is the use of coarse-grained models, but such models must be carefully calibrated and validated. Here we use recent progress in benchmark studies on the energetics of amino acid residue and peptide membrane insertion and membrane protein stability to refine our previously developed coarse-grained model (Vicatos et al., Proteins 2014;82:1168). Our refined model parameters were fitted and/or tested to reproduce the water/membrane partitioning energetics of amino acid side chains and a couple of model peptides. This new model provides reasonable agreement with experiment for the absolute folding free energies of several β-barrel membrane proteins, as well as for the effects of point mutations on the relative stability of one of those proteins, OmpLA. The consideration and ranking of different rotameric states for a mutated residue was found to be essential to achieve satisfactory agreement with the reference data. © 2015 Wiley Periodicals, Inc.
On the impact of a refined stochastic model for airborne LiDAR measurements
NASA Astrophysics Data System (ADS)
Bolkas, Dimitrios; Fotopoulos, Georgia; Glennie, Craig
2016-09-01
Accurate topographic information is critical for a number of applications in science and engineering. In recent years, airborne light detection and ranging (LiDAR) has become a standard tool for acquiring high quality topographic information. The assessment of airborne LiDAR derived DEMs is typically based on (i) independent ground control points and (ii) forward error propagation utilizing the LiDAR geo-referencing equation. The latter approach is dependent on the stochastic model information of the LiDAR observation components. In this paper, the well-known statistical tool of variance component estimation (VCE) is implemented for a dataset in Houston, Texas, in order to refine the initial stochastic information. Simulations demonstrate the impact of stochastic-model refinement for two practical applications, namely coastal inundation mapping and surface displacement estimation. Results highlight scenarios where erroneous stochastic information is detrimental. Furthermore, the refined stochastic information provides insights on the effect of each LiDAR measurement in the airborne LiDAR error budget. The latter is important for targeting future advancements in order to improve point cloud accuracy.
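Variance component estimation itself admits a short sketch: iteratively re-weight a least-squares adjustment by the current variance components and update each component from its group's residuals and redundancy. The toy linear model below (two observation groups with unknown noise levels) stands in for the LiDAR georeferencing equation:

    import numpy as np

    rng = np.random.default_rng(3)

    # Toy adjustment: y = A x + e, with two observation groups whose true
    # noise levels (0.02 and 0.10) are treated as unknown a priori.
    A = rng.uniform(size=(60, 3))
    groups = np.r_[np.zeros(30, int), np.ones(30, int)]
    y = A @ np.array([1.0, -2.0, 0.5]) \
        + np.array([0.02, 0.10])[groups] * rng.standard_normal(60)

    var = np.array([1.0, 1.0])              # initial variance components
    for _ in range(20):
        w = 1.0 / var[groups]               # weights from current estimates
        N = A.T @ (w[:, None] * A)
        x_hat = np.linalg.solve(N, A.T @ (w * y))
        v = y - A @ x_hat                   # residuals
        h = w * np.einsum('ij,ij->i', A @ np.linalg.inv(N), A)  # leverages
        for g in (0, 1):
            m = groups == g
            var[g] = (v[m] ** 2).sum() / (1.0 - h)[m].sum()
    print(np.sqrt(var))                     # recovers roughly [0.02, 0.10]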
Refining Students' Explanations of an Unfamiliar Physical Phenomenon-Microscopic Friction
NASA Astrophysics Data System (ADS)
Corpuz, Edgar De Guzman; Rebello, N. Sanjay
2017-08-01
The first phase of this multiphase study involves modeling college students' thinking about friction at the microscopic level. Diagnostic interviews were conducted with 11 students with different levels of physics background. A phenomenographic approach to data analysis was used to generate categories of responses, which subsequently were used to generate a model of explanation. Most of the students interviewed consistently used mechanical interactions in explaining microscopic friction. According to these students, friction is due to the interlocking or rubbing of atoms. Our data suggest that students' explanations of microscopic friction are predominantly influenced by their macroscopic experiences. In the second phase of the research, a teaching experiment was conducted with 18 college students to investigate how students' explanations of microscopic friction can be refined by a series of model-building activities. Data were analyzed using Redish's two-level transfer framework. Our results show that through sequences of hands-on and minds-on activities, including cognitive dissonance and resolution, it is possible to facilitate the refinement of students' explanations of microscopic friction. The activities appeared productive in helping students activate associations that refine their ideas about microscopic friction.
Molecular dynamics-based refinement and validation for sub-5 Å cryo-electron microscopy maps
Singharoy, Abhishek; Teo, Ivan; McGreevy, Ryan; Stone, John E; Zhao, Jianhua; Schulten, Klaus
2016-01-01
Two structure determination methods, based on the molecular dynamics flexible fitting (MDFF) paradigm, are presented that resolve sub-5 Å cryo-electron microscopy (EM) maps with either single structures or ensembles of such structures. The methods, denoted cascade MDFF and resolution exchange MDFF, sequentially re-refine a search model against a series of maps of progressively higher resolutions, which ends with the original experimental resolution. Application of sequential re-refinement enables MDFF to achieve a radius of convergence of ~25 Å demonstrated with the accurate modeling of β-galactosidase and TRPV1 proteins at 3.2 Å and 3.4 Å resolution, respectively. The MDFF refinements uniquely offer map-model validation and B-factor determination criteria based on the inherent dynamics of the macromolecules studied, captured by means of local root mean square fluctuations. The MDFF tools described are available to researchers through an easy-to-use and cost-effective cloud computing resource on Amazon Web Services. DOI: http://dx.doi.org/10.7554/eLife.16105.001 PMID:27383269
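The cascade idea, refining a search model against progressively sharper maps to enlarge the radius of convergence, can be illustrated in one dimension. The sketch below is a toy analogue, not MDFF itself: "atoms" are Gaussian peaks, the "experimental map" is synthetic, and the refinement is a generic least-squares fit with SciPy.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.linspace(0.0, 50.0, 500)
dx = x[1] - x[0]

def density(positions):
    """Unit-width Gaussian peak per 'atom', summed on the grid x."""
    return np.exp(-0.5 * (x[None, :] - np.asarray(positions)[:, None]) ** 2).sum(0)

true_pos = np.array([12.0, 25.0, 33.0, 41.0])
exp_map = density(true_pos)                      # stand-in for the experimental map

start = true_pos + rng.normal(0.0, 3.0, true_pos.size)  # distorted search model
model = start.copy()

# Cascade: fit against blurred (low-resolution) maps first, then sharper ones.
for blur in [8.0, 4.0, 2.0, 0.0]:                # blur widths in physical units
    target = gaussian_filter1d(exp_map, blur / dx) if blur else exp_map
    model = minimize(lambda p: ((density(p) - target) ** 2).sum(),
                     model, method="Nelder-Mead").x

print("start error:", np.abs(start - true_pos).round(2))
print("final error:", np.abs(model - true_pos).round(2))
```

Fitting directly against the sharp map from a badly distorted start can settle into a nearby false minimum; the blurred maps widen the basins of attraction, which is the intuition behind the cascade.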
Borghs, Simon; Tomaszewski, Erin L; Halling, Katarina; de la Loge, Christine
2016-10-01
For patients with uncontrolled epilepsy, the severity and postictal sequelae of seizures might be more impactful than their frequency. Seizure severity is often assessed using patient-reported outcome (PRO) instruments; however, evidence of content validity for existing instruments is lacking. Our aim was to understand the real-life experiences of patients with uncontrolled epilepsy. A preliminary conceptual model was developed. The model was refined through (1) a targeted literature review of qualitative research on seizure severity; (2) interviews with four clinical epilepsy experts to evaluate identified concepts; and (3) qualitative interviews with patients with uncontrolled epilepsy, gathering descriptions of symptoms and impacts of epilepsy, focusing on how patients experience and describe "seizure severity." Findings were summarized in a final conceptual model of seizure severity in epilepsy. Twenty-five patients (12 who experienced primary generalized tonic-clonic seizures and 13 who experienced partial-onset seizures) expressed 42 different symptoms and 26 different impacts related to seizures. The final conceptual model contained a wide range of concepts related to seizure frequency, symptoms, and duration. Our model identified several new concepts that characterize the patient experience of seizure severity. A seizure severity PRO instrument should cover a wide range of seizure symptoms alongside frequency and duration of seizures. This qualitative work reinforces the notion that measuring seizure frequency is insufficient and that seizure severity is important in defining the patient's experience of epilepsy. This model could be used to assess the content validity of existing PRO instruments, or could support the development of a new one.
DOT National Transportation Integrated Search
1994-12-01
THIS REPORT SUMMARIZES THE RESULTS OF A 3-YEAR RESEARCH PROJECT TO DEVELOP RELIABLE ALGORITHMS FOR THE DETECTION OF MOTOR VEHICLE DRIVER IMPAIRMENT DUE TO DROWSINESS. THESE ALGORITHMS ARE BASED ON DRIVING PERFORMANCE MEASURES THAT CAN POTENTIALLY BE ...
The Agency is promulgating an interim final rule to extend the compliance date of the Toxicity Characteristic rule for petroleum refining facilities, marketing terminals and bulk plants engaged in the recovery and remediation operation for 120 days.
ILLINOIS VOCATIONAL EDUCATION OCCUPATIONAL RESEARCH AND DEVELOPMENT COORDINATING UNIT FINAL REPORT.
ERIC Educational Resources Information Center
BURGENER, V.E.
AN OCCUPATIONAL RESEARCH AND DEVELOPMENT UNIT WAS CREATED TO PROVIDE ASSISTANCE IN A STATEWIDE PROGRAM OF VOCATIONAL RESEARCH TO DEVELOP RESEARCH PERSONNEL, TO EVALUATE EXPERIMENTAL CURRICULUM AND INSTRUCTIONAL PROCEDURES, TO DEVELOP AN OVERVIEW OF SURVEY PROCEDURES RELATED TO OCCUPATIONAL OPPORTUNITIES AND TRAINING NEEDS, TO REFINE THE OPERATING…
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-25
... solicitation, selection and negotiation process criteria set forth herein. The Commission is making these clarifications and refinements... requesting clarification noted in the discussion of specific elements of this final policy statement.
Refined carbohydrate intake in relation to non-verbal intelligence among Tehrani schoolchildren.
Abargouei, Amin Salehi; Kalantari, Naser; Omidvar, Nasrin; Rashidkhani, Bahram; Rad, Anahita Houshiar; Ebrahimi, Azizeh Afkham; Khosravi-Boroujeni, Hossein; Esmaillzadeh, Ahmad
2012-10-01
Nutrition has long been considered one of the most important environmental factors affecting human intelligence. Although carbohydrates are the most widely studied nutrient for their possible effects on cognition, limited data are available linking usual refined carbohydrate intake and intelligence. The present study was conducted to examine the relationship between long-term refined carbohydrate intake and non-verbal intelligence among schoolchildren. In this cross-sectional study, 245 students aged 6-7 years were selected from 129 elementary schools in two western regions of Tehran, Iran. Anthropometric measurements were carried out. Non-verbal intelligence and refined carbohydrate consumption were determined using Raven's Standard Progressive Matrices test and a modified sixty-seven-item FFQ, respectively. Data about potential confounding variables were collected. Linear regression analysis was applied to examine the relationship between non-verbal intelligence scores and refined carbohydrate consumption. Individuals in the top tertile of refined carbohydrate intake had lower mean non-verbal intelligence scores in the crude model (P < 0.038). This association remained significant after controlling for age, gender, birth date, birth order and breast-feeding pattern (P = 0.045). However, further adjustments for mother's age, mother's education, father's education, parental occupation and BMI made the association statistically non-significant. We found a significant inverse association between refined carbohydrate consumption and non-verbal intelligence scores in regression models (β = -11.359, P < 0.001). This relationship remained significant in multivariate analysis after controlling for potential confounders (β = -8.495, P = 0.038). The study provides evidence indicating an inverse relationship between refined carbohydrate consumption and non-verbal intelligence among Tehrani children aged 6-7 years. Prospective studies are needed to confirm our findings.
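The crude-versus-adjusted regression contrast that drives the conclusions above is a standard pattern. Below is a minimal sketch with synthetic data (all variable names, effect sizes, and scales are invented), showing how adding confounders to an OLS model changes the coefficient of interest.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 245

# Synthetic stand-in for the study design: maternal education influences both
# intake and the outcome, so the crude slope is confounded.
mother_edu = rng.integers(0, 16, n).astype(float)           # years of schooling
refined_carb = 180.0 - 3.0 * mother_edu + rng.normal(0, 30, n)  # g/day, hypothetical
bmi = rng.normal(16, 2, n)
iq = 100.0 - 0.05 * refined_carb + 0.8 * mother_edu + rng.normal(0, 10, n)
df = pd.DataFrame({"refined_carb": refined_carb, "mother_edu": mother_edu,
                   "bmi": bmi, "iq": iq})

# Crude model: IQ on intake alone
crude = sm.OLS(df["iq"], sm.add_constant(df[["refined_carb"]])).fit()
# Adjusted model: add confounders, mirroring the stepwise adjustment in the text
adjusted = sm.OLS(df["iq"],
                  sm.add_constant(df[["refined_carb", "mother_edu", "bmi"]])).fit()

print("crude slope   :", crude.params["refined_carb"].round(3),
      "p =", crude.pvalues["refined_carb"].round(4))
print("adjusted slope:", adjusted.params["refined_carb"].round(3),
      "p =", adjusted.pvalues["refined_carb"].round(4))
```

In this synthetic setup the crude coefficient is inflated by the confounding path through maternal education and moves toward the true value once the confounder is included, the same attenuation pattern the abstract reports.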
NASA Astrophysics Data System (ADS)
Torkelson, G. Q.; Stoll, R., II
2017-12-01
Large Eddy Simulation (LES) is a tool commonly used to study the turbulent transport of momentum, heat, and moisture in the Atmospheric Boundary Layer (ABL). For a wide range of ABL LES applications, representing the full range of turbulent length scales in the flow field is a challenge. This is an acute problem in regions of the ABL with strong velocity or scalar gradients, which are typically poorly resolved by standard computational grids (e.g., near the ground surface, in the entrainment zone). Most efforts to address this problem have focused on advanced sub-grid scale (SGS) turbulence model development, or on the use of massive computational resources. While some work exists using embedded meshes, very little has been done on the use of grid refinement. Here, we explore the benefits of grid refinement in a pseudo-spectral LES numerical code. The code utilizes both uniform refinement of the grid in horizontal directions, and stretching of the grid in the vertical direction. Combining the two techniques allows us to refine areas of the flow while maintaining an acceptable grid aspect ratio. In tests that used only refinement of the vertical grid spacing, large grid aspect ratios were found to cause a significant unphysical spike in the stream-wise velocity variance near the ground surface. This was especially problematic in simulations of stably-stratified ABL flows. The use of advanced SGS models was not sufficient to alleviate this issue. The new refinement technique is evaluated using a series of idealized simulation test cases of neutrally and stably stratified ABLs. These test cases illustrate the ability of grid refinement to increase computational efficiency without loss in the representation of statistical features of the flow field.
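The aspect-ratio problem described above is easy to quantify. The sketch below builds a tanh-stretched vertical grid of the sort commonly used in ABL codes (the stretching function, domain sizes, and grid counts are illustrative choices, not the authors' configuration) and reports the grid aspect ratio dx/dz near the surface and aloft.

```python
import numpy as np

def stretched_vertical_grid(nz, lz, ratio=3.0):
    """tanh-stretched vertical levels clustering resolution near the surface.
    'ratio' controls how much finer the near-surface spacing is; values are
    illustrative only."""
    s = np.linspace(0.0, 1.0, nz + 1)
    return lz * (1.0 + np.tanh(ratio * (s - 1.0)) / np.tanh(ratio))

nx, lx = 128, 2 * np.pi * 1000.0     # horizontal points / domain size (m)
nz, lz = 96, 1500.0                  # vertical levels / domain height (m)

z = stretched_vertical_grid(nz, lz)
dz = np.diff(z)
dx = lx / nx

# Grid aspect ratio dx/dz per level: large values near the surface are the
# regime the abstract flags as producing spurious streamwise variance.
print(f"near-surface dz = {dz[0]:.2f} m, aspect dx/dz = {dx / dz[0]:.0f}")
print(f"domain-top  dz = {dz[-1]:.1f} m, aspect dx/dz = {dx / dz[-1]:.1f}")
```

With these illustrative numbers the near-surface aspect ratio is roughly two orders of magnitude larger than aloft, which is exactly the regime where combining horizontal refinement with vertical stretching can bring the ratio back toward an acceptable value.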
NASA Astrophysics Data System (ADS)
Delle Monache, L.; Rodriguez, L. M.; Meech, S.; Hahn, D.; Betancourt, T.; Steinhoff, D.
2016-12-01
It is necessary to accurately estimate the initial source characteristics in the event of an accidental or intentional release of a Chemical, Biological, Radiological, or Nuclear (CBRN) agent into the atmosphere. The accurate estimation of the source characteristics are important because many times they are unknown and the Atmospheric Transport and Dispersion (AT&D) models rely heavily on these estimates to create hazard assessments. To correctly assess the source characteristics in an operational environment where time is critical, the National Center for Atmospheric Research (NCAR) has developed a Source Term Estimation (STE) method, known as the Variational Iterative Refinement STE algorithm (VIRSA). VIRSA consists of a combination of modeling systems. These systems include an AT&D model, its corresponding STE model, a Hybrid Lagrangian-Eulerian Plume Model (H-LEPM), and its mathematical adjoint model. In an operational scenario where we have information regarding the infrastructure of a city, the AT&D model used is the Urban Dispersion Model (UDM) and when using this model in VIRSA we refer to the system as uVIRSA. In all other scenarios where we do not have the city infrastructure information readily available, the AT&D model used is the Second-order Closure Integrated PUFF model (SCIPUFF) and the system is referred to as sVIRSA. VIRSA was originally developed using SCIPUFF 2.4 for the Defense Threat Reduction Agency and integrated into the Hazard Prediction and Assessment Capability and Joint Program for Information Systems Joint Effects Model. The results discussed here are the verification and validation of the upgraded system with SCIPUFF 3.0 and the newly implemented UDM capability. To verify uVIRSA and sVIRSA, synthetic concentration observation scenarios were created in urban and rural environments and the results of this verification are shown. Finally, we validate the STE performance of uVIRSA using scenarios from the Joint Urban 2003 (JU03) experiment, which was held in Oklahoma City and also validate the performance of sVIRSA using scenarios from the FUsing Sensor Integrated Observing Network (FUSION) Field Trial 2007 (FFT07), held at Dugway Proving Grounds in rural Utah.
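VIRSA's variational machinery is far more elaborate, but the core of source term estimation, adjusting source parameters until a dispersion model reproduces sensor readings, can be sketched with a steady Gaussian plume and a generic optimizer. Everything below (the constant wind, the linear dispersion growth, the receptor layout, the noise level) is an invented minimal example, not VIRSA, SCIPUFF, or UDM.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def gaussian_plume(q, xs, ys, xr, yr, u=3.0, h=2.0):
    """Ground-level concentration of a steady Gaussian plume.
    q: emission rate, (xs, ys): source location, (xr, yr): receptors;
    wind speed u, release height h, and the linear growth of the dispersion
    parameters with downwind distance are all simplifications."""
    dx, dy = xr - xs, yr - ys
    c = np.zeros_like(xr)
    m = dx > 1.0                        # receptors downwind of the source
    sy = 0.20 * dx[m]                   # illustrative lateral dispersion
    sz = 0.12 * dx[m]                   # illustrative vertical dispersion
    c[m] = (q / (np.pi * u * sy * sz)
            * np.exp(-0.5 * (dy[m] / sy) ** 2)
            * np.exp(-0.5 * (h / sz) ** 2))
    return c

# Synthetic "sensor network" observing a release of strength 5 at the origin
xr = rng.uniform(50.0, 500.0, 30)
yr = rng.uniform(-100.0, 100.0, 30)
obs = gaussian_plume(5.0, 0.0, 0.0, xr, yr) * (1 + 0.05 * rng.normal(size=xr.size))

def misfit(p):
    # Squared mismatch between modeled and observed concentrations
    return ((gaussian_plume(p[0], p[1], p[2], xr, yr) - obs) ** 2).sum()

est = minimize(misfit, x0=[1.0, -50.0, 20.0], method="Nelder-Mead")
print("estimated (q, xs, ys):", est.x.round(2))   # should approach (5, 0, 0)
```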
Enhancement of COPD biological networks using a web-based collaboration interface
Boue, Stephanie; Fields, Brett; Hoeng, Julia; Park, Jennifer; Peitsch, Manuel C.; Schlage, Walter K.; Talikka, Marja; Binenbaum, Ilona; Bondarenko, Vladimir; Bulgakov, Oleg V.; Cherkasova, Vera; Diaz-Diaz, Norberto; Fedorova, Larisa; Guryanova, Svetlana; Guzova, Julia; Igorevna Koroleva, Galina; Kozhemyakina, Elena; Kumar, Rahul; Lavid, Noa; Lu, Qingxian; Menon, Swapna; Ouliel, Yael; Peterson, Samantha C.; Prokhorov, Alexander; Sanders, Edward; Schrier, Sarah; Schwaitzer Neta, Golan; Shvydchenko, Irina; Tallam, Aravind; Villa-Fombuena, Gema; Wu, John; Yudkevich, Ilya; Zelikman, Mariya
2015-01-01
The construction and application of biological network models is an approach that offers a holistic way to understand biological processes involved in disease. Chronic obstructive pulmonary disease (COPD) is a progressive inflammatory disease of the airways for which therapeutic options currently are limited after diagnosis, even in its earliest stage. COPD network models are important tools to better understand the biological components and processes underlying initial disease development. With the increasing amounts of literature that are now available, crowdsourcing approaches offer new forms of collaboration for researchers to review biological findings, which can be applied to the construction and verification of complex biological networks. We report the construction of 50 biological network models relevant to lung biology and early COPD using an integrative systems biology and collaborative crowd-verification approach. By combining traditional literature curation with a data-driven approach that predicts molecular activities from transcriptomics data, we constructed an initial COPD network model set based on a previously published non-diseased lung-relevant model set. The crowd was given the opportunity to enhance and refine the networks on a website (https://bionet.sbvimprover.com/) and to add mechanistic detail, as well as critically review existing evidence and evidence added by other users, so as to enhance the accuracy of the biological representation of the processes captured in the networks. Finally, scientists and experts in the field discussed and refined the networks during an in-person jamboree meeting. Here, we describe examples of the changes made to three of these networks: Neutrophil Signaling, Macrophage Signaling, and Th1-Th2 Signaling. We describe an innovative approach to biological network construction that combines literature and data mining and a crowdsourcing approach to generate a comprehensive set of COPD-relevant models that can be used to help understand the mechanisms related to lung pathobiology. Registered users of the website can freely browse and download the networks. PMID:25767696
Application of high-order numerical schemes and Newton-Krylov method to two-phase drift-flux model
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2017-08-07
This study concerns the application and solver robustness of the Newton-Krylov method in solving two-phase flow drift-flux model problems using high-order numerical schemes. In our previous studies, the Newton-Krylov method has been proven as a promising solver for two-phase flow drift-flux model problems. However, these studies were limited to first-order numerical schemes only. Moreover, the previous approach to treating the drift-flux closure correlations was later revealed to cause deteriorated solver convergence performance when the mesh was highly refined, and also when higher-order numerical schemes were employed. In this study, a second-order spatial discretization scheme that had been tested with the two-fluid two-phase flow model was extended to solve drift-flux model problems. In order to improve solver robustness, and therefore efficiency, a new approach was proposed to treat the mean drift velocity of the gas phase as a primary nonlinear variable of the equation system. With this new approach, significant improvement in solver robustness was achieved. With highly refined mesh, the proposed treatment along with the Newton-Krylov solver were extensively tested with two-phase flow problems that cover a wide range of thermal-hydraulics conditions. Satisfactory convergence performances were observed for all test cases. Numerical verification was then performed in the form of mesh convergence studies, from which expected orders of accuracy were obtained for both the first-order and the second-order spatial discretization schemes. Finally, the drift-flux model, along with the numerical methods presented, was validated with three sets of flow boiling experiments that cover different flow channel geometries (round tube, rectangular tube, and rod bundle), and a wide range of test conditions (pressure, mass flux, wall heat flux, inlet subcooling and outlet void fraction).
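SciPy ships a Jacobian-free Newton-Krylov solver that conveys the flavor of the approach on a small nonlinear boundary-value problem. The Bratu-type equation below is a stand-in for the discretized drift-flux system, chosen only because it is compact and self-contained.

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 200
h = 1.0 / (n + 1)

def residual(u):
    """Residual of -u'' = exp(u) with u(0) = u(1) = 0 on a uniform grid."""
    uL = np.concatenate(([0.0], u[:-1]))   # left neighbors with boundary value
    uR = np.concatenate((u[1:], [0.0]))    # right neighbors with boundary value
    return -(uL - 2.0 * u + uR) / h**2 - np.exp(u)

u0 = np.zeros(n)                           # initial guess
sol = newton_krylov(residual, u0)          # Krylov-based inexact Newton iteration
print("max |residual|:", np.abs(residual(sol)).max())
```

The solver never forms the Jacobian explicitly; each Newton step is solved iteratively from residual evaluations alone, which is what makes the method attractive for large implicit two-phase flow systems.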
Parameter Estimation and Model Selection in Computational Biology
Lillacci, Gabriele; Khammash, Mustafa
2010-01-01
A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262
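The augmented-state trick behind extended Kalman filter parameter estimation can be shown on the smallest possible example: a first-order decay x' = -kx where the rate constant k is unknown and is appended to the state vector with a random-walk model. The system, noise levels, and tuning below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, n_steps = 0.1, 100
k_true, x0 = 0.5, 5.0

# Noisy measurements of x(t) = x0 * exp(-k t)
t = dt * np.arange(n_steps)
meas = x0 * np.exp(-k_true * t) + rng.normal(0.0, 0.1, n_steps)

# Augmented state z = [x, k]; forward-Euler transition f(z) = [x - dt*k*x, k]
z = np.array([meas[0], 1.0])            # crude initial guess for k
P = np.diag([0.1, 1.0])                 # initial covariance
Q = np.diag([1e-6, 1e-6])               # process noise (parameter random walk)
R = 0.1**2                              # measurement noise variance
H = np.array([[1.0, 0.0]])              # we observe x only

for y in meas[1:]:
    # Predict: propagate state and linearize the transition
    x, k = z
    z = np.array([x - dt * k * x, k])
    F = np.array([[1.0 - dt * k, -dt * x],
                  [0.0, 1.0]])           # Jacobian of the transition map
    P = F @ P @ F.T + Q
    # Update with the scalar measurement
    S = H @ P @ H.T + R
    K = P @ H.T / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated k = {z[1]:.3f} (true {k_true})")
```

The filter should recover k approximately while the signal is strong; the abstract's identifiability test and final optimization step address exactly the cases where such an estimate is unreliable.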
NASA Astrophysics Data System (ADS)
Leung, L.; Hagos, S. M.; Rauscher, S.; Ringler, T.
2012-12-01
This study compares two grid refinement approaches using global variable resolution model and nesting for high-resolution regional climate modeling. The global variable resolution model, Model for Prediction Across Scales (MPAS), and the limited area model, Weather Research and Forecasting (WRF) model, are compared in an idealized aqua-planet context with a focus on the spatial and temporal characteristics of tropical precipitation simulated by the models using the same physics package from the Community Atmosphere Model (CAM4). For MPAS, simulations have been performed with a quasi-uniform resolution global domain at coarse (1 degree) and high (0.25 degree) resolution, and a variable resolution domain with a high-resolution region at 0.25 degree configured inside a coarse resolution global domain at 1 degree resolution. Similarly, WRF has been configured to run on a coarse (1 degree) and high (0.25 degree) resolution tropical channel domain as well as a nested domain with a high-resolution region at 0.25 degree nested two-way inside the coarse resolution (1 degree) tropical channel. The variable resolution or nested simulations are compared against the high-resolution simulations that serve as virtual reality. Both MPAS and WRF simulate 20-day Kelvin waves propagating through the high-resolution domains fairly unaffected by the change in resolution. In addition, both models respond to increased resolution with enhanced precipitation. Grid refinement induces zonal asymmetry in precipitation (heating), accompanied by zonal anomalous Walker like circulations and standing Rossby wave signals. However, there are important differences between the anomalous patterns in MPAS and WRF due to differences in the grid refinement approaches and sensitivity of model physics to grid resolution. This study highlights the need for "scale aware" parameterizations in variable resolution and nested regional models.
NASA Astrophysics Data System (ADS)
Malikova, Yuliya
2005-07-01
Environmental Service-Learning (Env. S-L) appears to show great promise and practitioners tout its benefits, although there have been fewer than ten studies in this emerging area of environmental education. The overall study purpose was to describe the nature, status, and effects of Grade 9-16 Env. S-L programs in Florida, and develop descriptive models of those programs. The purpose of Phase I was to describe these programs and associated partnerships. Based on Phase I results, the purpose of Phase II was to develop, compare, and refine models for less and more established high school programs. This study involved: (1) defining the population of Florida 9-16 Env. S-L programs (Phase I); (2) developing and administering program surveys (Phase I, quantitative); (3) analyzing Phase I survey data and identifying options for Phase II (Intermediate stage); (4) designing and implementing methodology for further data collection (Phase II, qualitative); (5) refining and finalizing program models (Phase II, descriptive); and (6) summarizing program data, changes, and comparisons. This study revealed that Env. S-L has been practiced in a variety of ways at the high school and college levels in Florida. There, the number of high school programs, and participating teachers and students, has been growing. Among others, major program features include block scheduling, indirect S-L activities, external funding sources, and formal and ongoing community partnerships. Findings based on self-reported program assessment results indicate that S-L has had positive effects on students across Furco's S-L outcome domains (i.e., academic achievement/success, school participation/behavior, career development, personal development, interpersonal development, ethical/moral development, and development of civic responsibility). Differences existed between less established and more established Env. S-L programs. Less established programs had relatively few participating teachers, courses, projects, community partners, and service sites. Most S-L activities were offered as electives. Lead teachers used reflection to integrate academic learning with service experience to a moderate extent. More established programs had a larger number of participating teachers, courses, projects, community partners, partner representatives, and service sites. Students were consistently engaged in multiple forms of reflection. These teachers also practiced S-L before their exposure to the wider field of S-L.
3-D geoelectrical modelling using finite-difference: a new boundary conditions improvement
NASA Astrophysics Data System (ADS)
Maineult, A.; Schott, J.-J.; Ardiot, A.
2003-04-01
Geoelectrical prospecting is a well-known and frequently used method for quantitative and non-destructive subsurface exploration down to depths of a few hundred metres. Archaeological objects can thus be efficiently detected, as their resistivities often contrast with those of the surrounding media. Nevertheless, the use of the geoelectrical prospecting method was long restricted by the inability to model arbitrarily shaped structures correctly. One-dimensional modelling and inversion have long been classical, but are of no use for the majority of field data, since the natural distribution of resistivity is rarely homogeneous or tabular. Since the 1970s some authors have developed discrete methods to solve the two- and three-dimensional problem, using mathematical tools such as finite elements or finite differences. The finite-difference approach is quite simple, easily understandable and programmable. Since the work of Dey and Morrison (1979), this approach has become quite popular. Nevertheless, one of its major drawbacks is the difficulty of establishing satisfactory boundary conditions. Lowry et al. (1989) and Zhao and Yedlin (1996) suggested refinements to improve the boundary problem. We propose a further improvement, based on the splitting of the potential into two terms: the potential due to a reference tabular medium, and a secondary potential caused by a disturbance of this medium. The surface response of a tabular medium has long been known (see for example Koefoed 1979). Here we develop the analytical solution for the electrical tabular potential everywhere in the medium, in order to establish more satisfactory boundary conditions. The response of the perturbation, that is to say the object of interest, is then solved using volume-difference and preconditioned conjugate gradient methods. Finally, the grid is refined one or more times in the perturbed domain in order to improve the precision. This method of modelling is easy to implement and numerical computations run very fast. Thanks to the improved boundary conditions and refinement processes, edge effects are reduced. Moreover, one important conclusion of this work is the necessity of preferring three-dimensional prospecting, since in some cases a single profile can lead to misinterpretation, as shown by the comparison of transverse profiles through a buried cylinder and through a buried sphere.
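A minimal numerical sketch of the splitting idea: the primary potential of a reference homogeneous medium is computed analytically, and only the secondary potential caused by a conductivity anomaly is solved numerically with conjugate gradients, under homogeneous Dirichlet boundaries that are accurate precisely because the secondary field decays away from the anomaly. The 2D line-source geometry, the grid, the conductivities, and the first-order (sigma approximately sigma_ref on the left-hand side) approximation are all simplifications for illustration, not the authors' scheme.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import cg

# Small 2D section with uniform spacing (illustrative sizes only)
n, h = 101, 1.0
x = np.arange(n) * h
X, Z = np.meshgrid(x, x, indexing="ij")

sigma_ref = 0.01                                 # reference conductivity, S/m
I_src = 1.0
r = np.hypot(X - 50.0, Z - 0.0) + h              # offset avoids the singularity

# Primary potential of a 2D line source in the homogeneous reference medium
phi_p = I_src / (2.0 * np.pi * sigma_ref) * np.log(1.0 / r)

# Conductive block anomaly
dsigma = np.zeros_like(phi_p)
dsigma[(np.abs(X - 60) < 8) & (np.abs(Z - 30) < 8)] = 0.04

# RHS of the secondary-potential equation:
#   sigma_ref * lap(phi_s) = -div(dsigma * grad(phi_p))
gx, gz = np.gradient(phi_p, h)
rhs = -(np.gradient(dsigma * gx, h, axis=0)
        + np.gradient(dsigma * gz, h, axis=1)) / sigma_ref

# 5-point Laplacian with homogeneous Dirichlet boundaries: phi_s ~ 0 far away,
# which is the boundary condition the splitting makes accurate.
T = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
A = kron(identity(n), T) + kron(T, identity(n))

# Solve (-A) phi_s = -rhs with CG (-A is symmetric positive definite)
phi_s, info = cg(-A, -rhs.ravel())
print("CG converged:", info == 0, "| max |phi_s| =", np.abs(phi_s).max())
```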
NASA Astrophysics Data System (ADS)
Barajas-Solano, D. A.; Tartakovsky, A. M.
2017-12-01
We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine scale) flow and transport models with lower resolution (coarse) models to locally refine both spatial resolution and transport models. The fine scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and the capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.
Tsunami modelling with adaptively refined finite volume methods
LeVeque, R.J.; George, D.L.; Berger, M.J.
2011-01-01
Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
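The 'well-balanced' requirement has a simple operational reading: refinement flags should be driven by the perturbation from the ocean-at-rest steady state, so that an undisturbed ocean triggers no refinement at all. A schematic flagging routine in that spirit follows; it is a sketch of the concept, not GeoClaw's actual criterion.

```python
import numpy as np

def flag_cells_for_refinement(h, b, sea_level=0.0, tol=1e-3, dry_tol=1e-6):
    """Flag wet cells whose sea-surface elevation deviates from the
    ocean-at-rest steady state by more than tol.
    h: water depth per cell, b: bathymetry/topography (negative offshore)."""
    eta = h + b                          # sea-surface elevation
    wet = h > dry_tol
    return wet & (np.abs(eta - sea_level) > tol)

# Ocean at rest: h = -b offshore, so eta = 0 and nothing is flagged
b = np.linspace(-4000.0, 10.0, 400)      # profile from deep ocean to dry land
h = np.maximum(0.0, -b)                  # ocean at rest
h[100:120] += 0.5                        # a passing tsunami wave
print("flagged cells:", np.flatnonzero(flag_cells_for_refinement(h, b)))
```

Because the criterion is written in terms of the perturbation eta rather than the raw depth h, the enormous but steady variations in h across the ocean produce no spurious refinement, which is the essence of well-balancing.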
Salehpour, Mehdi; Behrad, Alireza
2017-10-01
This study proposes a new algorithm for nonrigid coregistration of synthetic aperture radar (SAR) and optical images. The proposed algorithm employs point features extracted by the binary robust invariant scalable keypoints algorithm and a new method called weighted bidirectional matching for initial correspondence. To filter out false matches, we assume that the transformation between SAR and optical images is locally rigid. This property is used to reject false matches by assigning scores to matched pairs and clustering local rigid transformations using a two-layer Kohonen network. Finally, the thin plate spline algorithm and mutual information are used for nonrigid coregistration of SAR and optical images.
Robust stereo matching with trinary cross color census and triple image-based refinements
NASA Astrophysics Data System (ADS)
Chang, Ting-An; Lu, Xiao; Yang, Jar-Ferr
2017-12-01
For future 3D TV broadcasting systems and navigation applications, it is necessary to have accurate stereo matching that can precisely estimate the depth map from two distanced cameras. In this paper, we first suggest a trinary cross color (TCC) census transform, which helps to achieve an accurate disparity raw matching cost with low computational cost. A two-pass cost aggregation (TPCA) is formed to compute the aggregation cost, and the disparity map can then be obtained by a range winner-take-all (RWTA) process and a white hole filling procedure. To further enhance the accuracy, a range left-right checking (RLRC) method is proposed to classify the results as correct, mismatched, or occluded pixels. Then, image-based refinements for the mismatched and occluded pixels are proposed to refine the classified errors. Finally, image-based cross voting and a median filter are employed to complete the fine depth estimation. Experimental results show that the proposed semi-global stereo matching system achieves considerably accurate disparity maps with reasonable computation cost.
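The trinary census idea, encoding each neighbor as darker / similar / brighter relative to the window center and scoring candidate disparities by Hamming distance, can be demonstrated in a few lines. The sketch below simplifies to grayscale square windows (the paper's version uses cross-shaped color supports) and synthetic images; the window size, threshold, and disparity range are arbitrary choices.

```python
import numpy as np

def trinary_census(img, win=5, eps=4):
    """Trinary census transform: each neighbor becomes 0/1/2 for
    darker-than / similar-to / brighter-than the window center (threshold eps)."""
    r = win // 2
    p = np.pad(img.astype(np.int32), r, mode="edge")
    h, w = img.shape
    codes = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            nb = p[r + dy : r + dy + h, r + dx : r + dx + w]
            codes.append(np.where(nb > img + eps, 2,
                         np.where(nb < img - eps, 0, 1)).astype(np.uint8))
    return np.stack(codes)                       # shape (win*win - 1, H, W)

def matching_cost(cl, cr, max_disp):
    """Per-pixel Hamming cost volume between left/right census codes."""
    n, h, w = cl.shape
    cost = np.full((max_disp, h, w), n, dtype=np.int32)
    for d in range(max_disp):
        cost[d, :, d:] = (cl[:, :, d:] != cr[:, :, : w - d]).sum(0)
    return cost

# Tiny synthetic pair: a bright square shifted right by 3 pixels
left = np.zeros((40, 60));  left[15:25, 30:40] = 100
right = np.zeros((40, 60)); right[15:25, 27:37] = 100
cost = matching_cost(trinary_census(left), trinary_census(right), max_disp=8)
disp = cost.argmin(0)
print("disparities across the square:", disp[20, 28:42])
```

Edge pixels of the shifted square recover the 3-pixel disparity, while the textureless interior stays ambiguous; resolving exactly those ambiguous and occluded pixels is what the aggregation, hole-filling, and refinement stages in the paper are for.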
DiffPy-CMI-Python libraries for Complex Modeling Initiative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Billinge, Simon; Juhas, Pavol; Farrow, Christopher
2014-02-01
Software to manipulate and describe crystal and molecular structures and to set up structural refinements from multiple experimental inputs. Calculation and simulation of structure-derived physical quantities. Library for creating customized refinements of atomic structures from available experimental and theoretical inputs.
Glicerina, Virginia; Balestra, Federica; Dalla Rosa, Marco; Bergenhstål, Bjorn; Tornberg, Eva; Romani, Santina
2014-07-01
The effect of different process stages on the microstructural and visual properties of dark chocolate was studied. Samples were obtained at each phase of the manufacturing process: mixing, prerefining, refining, conching, and tempering. A laser light diffraction technique and environmental scanning electron microscopy (ESEM) were used to study the particle size distribution (PSD) and to analyze modifications in the network structure. Moreover, colorimetric analyses (L*, h°, and C*) were performed on all samples. Each stage strongly influenced the microstructural characteristics of the products, above all the PSD. The Sauter diameter (D[3,2]) decreased from 5.44 μm in the mixed chocolate sample to 3.83 μm in the refined one. ESEM analysis also revealed wide variations in the network structure of samples during the process, with an increase in aggregation and in contact points between particles from the mixing to the refining stage. Samples obtained from conching and tempering were characterized by small particle sizes and a less dense aggregate structure. From the color results, samples with the finest particles, having a larger specific surface area and the smallest diameter, appeared lighter and more saturated than those with coarse particles. The final quality of food dispersions is affected by network and particle characteristics. Deep knowledge of the influence of each processing stage on chocolate microstructural properties is useful for improving or modifying final product characteristics. ESEM and laser diffraction are suitable techniques for studying changes in chocolate microstructure. © 2014 Institute of Food Technologists®
PREFMD: a web server for protein structure refinement via molecular dynamics simulations.
Heo, Lim; Feig, Michael
2018-03-15
Refinement of protein structure models is a long-standing problem in structural bioinformatics. Molecular dynamics-based methods have emerged as an avenue to achieve consistent refinement. The PREFMD web server implements an optimized protocol based on the method successfully tested in CASP11. Validation with recent CASP refinement targets shows consistent and more significant improvement in global structure accuracy over other state-of-the-art servers. PREFMD is freely available as a web server at http://feiglab.org/prefmd. Scripts for running PREFMD as a stand-alone package are available at https://github.com/feiglab/prefmd.git. feig@msu.edu. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Martinec, Zdeněk; Fullea, Javier
2015-03-01
We aim to interpret the vertical gravity and vertical gravity gradient of the GOCE-GRACE combined gravity model over the southeastern part of the Congo basin to refine the published model of sedimentary rock cover. We use the GOCO03S gravity model and evaluate its spherical harmonic representation at or near the Earth's surface. In this case, the gradiometry signals are enhanced as compared to the original measured GOCE gradients at satellite height and better emphasize the spatial pattern of sedimentary geology. To avoid aliasing, the omission error of the modelled gravity induced by the sedimentary rocks is adjusted to that of the GOCO03S gravity model. The mass-density Green's functions derived for the a priori structure of the sediments show a slightly greater sensitivity to the GOCO03S vertical gravity gradient than to the vertical gravity. Hence, the refinement of the sedimentary model is carried out for the vertical gravity gradient over the basin, such that a few anomalous values of the GOCO03S-derived vertical gravity gradient are adjusted by refining the model. We apply the 5-parameter Helmert's transformation, defined by 2 translations, 1 rotation and 2 scale parameters that are searched for by the steepest descent method. The refined sedimentary model is only slightly changed with respect to the original map, but it significantly improves the fit of the vertical gravity and vertical gravity gradient over the basin. However, there are still spatial features in the gravity and gradiometric data that remain unfitted by the refined model. These may be due to lateral density variation that is not contained in the model, a density contrast at the Moho discontinuity, lithospheric density stratifications or mantle convection. In a second step, the refined sedimentary model is used to find the vertical density stratification of sedimentary rocks. Although the gravity data can be interpreted by a constant sedimentary density, such a model does not correspond to the gravitational compaction of sedimentary rocks. Therefore, the density model is extended by including a linear increase in density with depth. Subsequent L2 and L∞ norm minimization procedures are applied to find the density parameters by adjusting both the vertical gravity and the vertical gravity gradient. We found that including the vertical gravity gradient in the interpretation of the GOCO03S-derived data reduces the non-uniqueness of the inverse gradiometric problem for density determination. The density structure of the sedimentary formations that provide the optimum predictions of the GOCO03S-derived gravity and vertical gradient of gravity consists of a surface density contrast with respect to surrounding rocks of 0.24-0.28 g/cm3 and its decrease with depth of 0.05-0.25 g/cm3 per 10 km. Moreover, the case where the sedimentary rocks are gravitationally completely compacted in the deepest parts of the basin is supported by L∞ norm minimization. However, this minimization also allows a remaining density contrast at the deepest parts of the sedimentary basin of about 0.1 g/cm3.
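The vertical density stratification used in the interpretation has a closed-form gravity effect under an infinite-slab approximation: for a contrast Δρ(z) = Δρ0 + kz over thickness t, Δg = 2πG(Δρ0 t + k t²/2), which is linear in (Δρ0, k) and can be fitted by least squares. The sketch below does this with synthetic thicknesses and noise; the numbers are loosely inspired by the ranges quoted in the abstract but are not the authors' data or method.

```python
import numpy as np

G = 6.674e-11                        # gravitational constant, m^3 kg^-1 s^-2

# Known sediment thickness along a profile (m) and synthetic "observed" anomaly
t = np.linspace(0.0, 9000.0, 50)
rho0_true, k_true = -260.0, 0.015    # kg/m^3 and kg/m^3 per m (~0.15 g/cm^3 per 10 km)
dg_obs = 2 * np.pi * G * (rho0_true * t + k_true * t**2 / 2.0)
dg_obs += np.random.default_rng(5).normal(0, 2e-6, t.size)   # ~0.2 mGal noise

# Infinite-slab model, linear in the two density parameters:
#   dg = 2*pi*G * (rho0 * t + k * t^2 / 2)
A = 2 * np.pi * G * np.column_stack([t, t**2 / 2.0])
params, *_ = np.linalg.lstsq(A, dg_obs, rcond=None)
print(f"surface contrast {params[0]:.0f} kg/m^3, "
      f"gradient {params[1] * 1e4:.0f} kg/m^3 per 10 km")
```

The recovered surface contrast is negative (sediments lighter than the surrounding rocks) and the positive gradient expresses compaction with depth, the same parameterization whose trade-off the abstract resolves by adding the gravity gradient data.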
Optimization Control of the Color-Coating Production Process for Model Uncertainty
He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong
2016-01-01
Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563
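The iterative learning control component can be sketched in isolation: a P-type ILC update u_{k+1}(t) = u_k(t) + L e_k(t+1) applied trial after trial to a toy first-order plant standing in for the coating line. The plant, gain, and setpoint profile below are invented; convergence requires |1 - Lb| < 1.

```python
import numpy as np

# Toy first-order plant standing in for the coating line: y(t+1) = a*y(t) + b*u(t)
a, b = 0.8, 0.5
T = 50
ref = np.where(np.arange(T) < 25, 20.0, 24.0)   # film-thickness setpoint profile

def run_trial(u):
    y = np.zeros(T)
    for t in range(T - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return y

u = np.zeros(T)
L = 0.8 / b                                     # learning gain, so 1 - L*b = 0.2
for trial in range(30):
    e = ref - run_trial(u)
    u[:-1] += L * e[1:]                         # classic P-type ILC update

e = ref - run_trial(u)
# y(0) is fixed by the initial condition and cannot be tracked, so exclude it
print("tracking RMS after learning:", np.sqrt((e[1:] ** 2).mean()).round(4))
```

Each trial re-runs the same repetitive process and feeds the recorded error back into the next trial's input, which is how ILC refines a process model or setpoint trajectory without requiring an accurate plant model up front.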
A hybrid-system model of the coagulation cascade: simulation, sensitivity, and validation.
Makin, Joseph G; Narayanan, Srini
2013-10-01
The process of human blood clotting involves a complex interaction of continuous-time/continuous-state processes and discrete-event/discrete-state phenomena, where the former comprise the various chemical rate equations and the latter comprise both threshold-limited behaviors and binary states (presence/absence of a chemical). Whereas previous blood-clotting models used only continuous dynamics and perforce addressed only portions of the coagulation cascade, we capture both continuous and discrete aspects by modeling it as a hybrid dynamical system. The model was implemented as a hybrid Petri net, a graphical modeling language that extends ordinary Petri nets to cover continuous quantities and continuous-time flows. The primary focus is simulation: (1) fidelity to the clinical data in terms of clotting-factor concentrations and elapsed time; (2) reproduction of known clotting pathologies; and (3) fine-grained predictions which may be used to refine clinical understanding of blood clotting. Next we examine sensitivity to rate-constant perturbation. Finally, we propose a method for titrating between reliance on the model and on prior clinical knowledge. For simplicity, we confine these last two analyses to a critical purely-continuous subsystem of the model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun
This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes only developed near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through large-scale circulation but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.
Chiral pathways in DNA dinucleotides using gradient optimized refinement along metastable borders
NASA Astrophysics Data System (ADS)
Romano, Pablo; Guenza, Marina
We present a study of DNA breathing fluctuations using Markov state models (MSM) with our novel refinement procedure. MSM have become a favored method for building kinetic models; however, their accuracy has always depended on using a significant number of microstates, making the method costly. We present a method which optimizes macrostates by refining their borders with respect to the gradient along the free energy surface. As the separation between macrostates contains the highest discretization errors, this method corrects for errors produced by limited microstate sampling. Using our refined MSM methods, we investigate DNA breathing fluctuations, thermally induced conformational changes in native B-form DNA. We ran several microsecond MD simulations of DNA dinucleotides of varying sequences, to include sequence and polarity effects, and analyzed them using our refined MSM to investigate conformational pathways inherent in the unstacking of DNA bases. Our kinetic analysis has shown preferential chirality in unstacking pathways that may be critical in how proteins interact with single-stranded regions of DNA. These breathing dynamics can help elucidate the connection between conformational changes and key mechanisms within protein-DNA recognition. NSF Chemistry Division (Theoretical Chemistry), the Division of Physics (Condensed Matter: Material Theory), XSEDE.
Firmin, Ruth L; Lysaker, Paul H; McGrew, John H; Minor, Kyle S; Luther, Lauren; Salyers, Michelle P
2017-12-01
Although associated with key recovery outcomes, stigma resistance remains under-studied largely due to limitations of existing measures. This study developed and validated a new measure of stigma resistance. Preliminary items, derived from qualitative interviews of people with lived experience, were pilot tested online with people self-reporting a mental illness diagnosis (n = 489). Best performing items were selected, and the refined measure was administered to an independent sample of people with mental illness at two state mental health consumer recovery conferences (n = 202). Confirmatory factor analyses (CFA) guided by theory were used to test item fit, correlations between the refined stigma resistance measure and theoretically relevant measures were examined for validity, and test-retest correlations of a subsample were examined for stability. CFA demonstrated strong fit for a 5-factor model. The final 20-item measure demonstrated good internal consistency for each of the 5 subscales, adequate test-retest reliability at 3 weeks, and strong construct validity (i.e., positive associations with quality of life, recovery, and self-efficacy, and negative associations with overall symptoms, defeatist beliefs, and self-stigma). The new measure offers a more reliable and nuanced assessment of stigma resistance. It may afford greater personalization of interventions targeting stigma resistance. Copyright © 2017 Elsevier B.V. All rights reserved.
An editor for the generation and customization of geometry restraints.
Moriarty, Nigel W; Draizen, Eli J; Adams, Paul D
2017-02-01
Chemical restraints for use in macromolecular structure refinement are produced by a variety of methods, including a number of programs that use chemical information to generate the required bond, angle, dihedral, chiral and planar restraints. These programs help to automate the process and therefore minimize the errors that could otherwise occur if it were performed manually. Furthermore, restraint-dictionary generation programs can incorporate chemical and other prior knowledge to provide reasonable choices of types and values. However, the use of restraints to define the geometry of a molecule is an approximation introduced with efficiency in mind. The representation of a bond as a parabolic function is a convenience and does not reflect the true variability in even the simplest of molecules. Another complicating factor is the interplay of the molecule with other parts of the macromolecular model. Finally, difficult situations arise from molecules with rare or unusual moieties that may not have their conformational space fully explored. These factors give rise to the need for an interactive editor for WYSIWYG interactions with the restraints and molecule. Restraints Editor, Especially Ligands (REEL) is a graphical user interface for simple and error-free editing along with additional features to provide greater control of the restraint dictionaries in macromolecular refinement.
NASA Astrophysics Data System (ADS)
Apel, M.; Eiken, J.; Hecht, U.
2014-02-01
This paper aims at briefly reviewing phase field models applied to the simulation of heterogeneous nucleation and subsequent growth, with special emphasis on grain refinement by inoculation. The spherical cap and free growth model (e.g. A.L. Greer, et al., Acta Mater. 48, 2823 (2000)) has proven its applicability for different metallic systems, e.g. Al or Mg based alloys, by computing the grain refinement effect achieved by inoculation of the melt with inert seeding particles. However, recent experiments with peritectic Ti-Al-B alloys revealed that the grain refinement by TiB2 is less effective than predicted by the model. Phase field simulations can be applied to validate the approximations of the spherical cap and free growth model, e.g. by computing explicitly the latent heat release associated with different nucleation and growth scenarios. Here, simulation results for point-shaped nucleation, as well as for partially and completely wetted plate-like seed particles will be discussed with respect to recalescence and impact on grain refinement. It will be shown that particularly for large seeding particles (up to 30 μm), the free growth morphology clearly deviates from the assumed spherical cap and the initial growth - until the free growth barrier is reached - significantly contributes to the latent heat release and determines the recalescence temperature.
2005-01-01
Interface Compatibility); the tool is written in OCaml [10], and the symbolic algorithms for interface compatibility and refinement are built on top...automata for a fire detection and reporting system. be encoded in the input language of the tool TIC. The refinement of sociable interfaces is discussed...are closely related to the I/O Automata Language (IOA) of [11]. Interface models are games between Input and Output, and in the models, it is es
Platania, Chiara Bianca Maria; Salomone, Salvatore; Leggio, Gian Marco; Drago, Filippo; Bucolo, Claudio
2012-01-01
Dopamine (DA) receptors, a class of G-protein coupled receptors (GPCRs), have been targeted for drug development for the treatment of neurological, psychiatric and ocular disorders. The lack of structural information about GPCRs and their ligand complexes has prompted the development of homology models of these proteins aimed at structure-based drug design. The crystal structure of the human dopamine D3 (hD3) receptor has recently been solved. Based on the hD3 receptor crystal structure, we generated dopamine D2 and D3 receptor models and refined them with a molecular dynamics (MD) protocol. Refined structures, obtained from the MD simulations in a membrane environment, were subsequently used in molecular docking studies in order to investigate potential sites of interaction. The structures of the hD3 and hD2L receptors were differentiated by means of MD simulations, and D3-selective ligands were discriminated, in terms of binding energy, by docking calculations. Robust correlation between computed and experimental Ki was obtained for hD3 and hD2L receptor ligands. In conclusion, the present computational approach seems suitable to build and refine structure models of homologous dopamine receptors that may be of value for structure-based drug discovery of selective dopaminergic ligands. PMID:22970199
Hsieh, Wen-Ting; Chiang, Been-Huang
2014-07-09
Stimulation of endogenous neurogenesis is a potential approach to compensate for the loss of dopaminergic neurons of the substantia nigra pars compacta (SNpc) in patients with Parkinson's disease (PD). The objective was to establish an in vitro model by differentiating pluripotent human embryonic stem cells (hESCs) into midbrain dopaminergic (mDA) neurons for screening phytochemicals with mDA neurogenesis-boosting potential. Consequently, a five-stage differentiation process was developed. The derived cells expressed many mDA markers including tyrosine hydroxylase (TH), β-III tubulin, and dopamine transporter (DAT). The voltage-gated ion channels and dopamine release were also examined to verify neuron function, and the dopamine receptor agonists bromocriptine and 7-hydroxy-2-(dipropylamino)tetralin (7-OH-DPAT) were used to validate our model. Then, several potential phytochemicals including green tea catechins and ginsenosides were tested using the model. Finally, ginsenoside Rb1 was identified as the most potent phytochemical, capable of upregulating neurotrophin expression and inducing mDA differentiation.
Protein homology model refinement by large-scale energy optimization.
Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David
2018-03-20
Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.
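To make the search component above concrete, the sketch below shows the skeleton of a perturb-and-minimize refinement loop in Python. It is an illustration only: the placeholder energy function, Gaussian move set and acceptance rule are assumptions for the sketch, not the authors' Rosetta-based protocol.

```python
import numpy as np

def energy(coords):
    # Placeholder energy: penalizes deviation of all pairwise distances
    # from 3.8 A; a real refinement would use a full physical force field.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.sum((d - 3.8) ** 2) / d.size

def refine(coords, n_trials=1000, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best, best_e = coords.copy(), energy(coords)
    for _ in range(n_trials):
        trial = best + rng.normal(scale=step, size=best.shape)  # perturb
        e = energy(trial)
        if e < best_e:                 # keep only energy-lowering moves
            best, best_e = trial, e
    return best, best_e

start = np.cumsum(np.random.default_rng(1).normal(size=(50, 3)), axis=0)
refined, e_final = refine(start)
```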
DPW-VI Results Using FUN3D with Focus on k-kL-MEAH2015 (k-kL) Turbulence Model
NASA Technical Reports Server (NTRS)
Abdol-Hamid, K. S.; Carlson, Jan-Renee; Rumsey, Christopher L.; Lee-Rausch, Elizabeth M.; Park, Michael A.
2017-01-01
The Common Research Model wing-body configuration is investigated with the k-kL-MEAH2015 turbulence model implemented in FUN3D. This includes results presented at the Sixth Drag Prediction Workshop and additional results generated after the workshop with a nonlinear Quadratic Constitutive Relation (QCR) variant of the same turbulence model. The workshop-provided grids are used, and a uniform grid refinement study is performed at the design condition. A large variation between results with and without a reconstruction limiter is exhibited on "medium" grid sizes, indicating that the medium grid size is too coarse for drawing conclusions in comparison with experiment. This variation is reduced with grid refinement. At a fixed angle of attack near design conditions, the QCR variant yielded decreased lift and drag compared with the linear eddy-viscosity model by an amount that was approximately constant with grid refinement. The k-kL-MEAH2015 turbulence model produced wing-root junction flow behavior consistent with wind tunnel observations.
Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.
2013-01-01
Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations, utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after an automated OSF-based lung segmentation was employed. The experiments showed a significant increase in performance in terms of mean absolute surface distance errors (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting a real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required to reach complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. PMID:23415254
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-06
... orders would be likely to lead to continuation or recurrence of dumping. \\1\\ With respect to the... off-white, non-toxic, odorless, biodegradable powder, comprising sodium CMC that has been refined and... these sunset reviews because in determining whether revocation of an order would likely lead to...
76 FR 42157 - Small Business Size Standards; Waiver of the Nonmanufacturer Rule
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-18
... AGENCY: U.S. Small Business Administration. ACTION: Notice of Retraction of a Class Waiver SUMMARY: The U... refiners. On August 4, 2009, SBA published a proposed Notice of Retraction of a Waiver from the... Refineries) seeking comments on the proposed retraction of waiver. 74 FR 18584. A final Notice of Retraction...
ERIC Educational Resources Information Center
Gerraughty, James F.; Shanafelt, Michael E.
2005-01-01
This prototype is a continuation of a series of wireless prototypes which began in August 2001 and was reported on again in August 2002. This is the final year of this prototype. This continuation allowed Saint Francis University's Center of Excellence for Remote and Medically Under-Served Areas (CERMUSA) to refine the existing WLAN for the Saint…
DOT National Transportation Integrated Search
2003-01-01
This final report describes a follow-on study to the previous Crash Avoidance Metrics Partnership (CAMP) human factors work addressing Forward Collision Warning (FCW) timing requirements. This research extends this work by gathering not only "last-se...
Graphic Design Education: A Revised Assessment Approach to Encourage Deep Learning
ERIC Educational Resources Information Center
Ellmers, Grant; Foley, Marius; Bennett, Sue
2008-01-01
In this paper we outline the review and iterative refinement of assessment procedures in a final year graphic design subject at the University of Wollongong. Our aim is to represent the main issues in assessing graphic design work, and informed by the literature, particularly "notions of creativity" (Cowdroy & de Graaff, 2005), to…
ERIC Educational Resources Information Center
Foley, John P., Jr.
A study was conducted to refine and coordinate occupational analysis, job performance aids, and elements of the instructional systems development process for task specific Air Force maintenance training. Techniques for task identification and analysis (TI & A) and data gathering techniques for occupational analysis were related. While TI &…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-01
...: Steel Concrete Reinforcing Bars From Poland, Indonesia, and Ukraine, 66 FR 8343, 8346 (January 30, 2001) (unchanged in Notice of Final Determinations of Sales at Less Than Fair Value: Steel Concrete Reinforcing..., heat exchangers, refining furnaces and feedwater heaters, whether or not cold drawn; (b) finished...
Exposing Students to the Idea that Theories Can Change
ERIC Educational Resources Information Center
Hoellwarth, Chance; Moelter, Matthew J.
2011-01-01
The scientific method is arguably the most reliable way to understand the physical world, yet this aspect of science is rarely addressed in introductory science courses. Students typically learn about the theory in its final, refined form, and seldom experience the experiment-to-theory cycle that goes into producing the theory. One exception to…
Why undertake a pilot in a qualitative PhD study? Lessons learned to promote success.
Wray, Jane; Archibong, Uduak; Walton, Sean
2017-01-23
Background: Pilot studies can play an important role in qualitative studies. Methodological and practical issues can be shaped and refined by undertaking pilots. Personal development and researchers' competence are enhanced, and lessons learned can inform the development and quality of the main study. However, pilot studies are rarely published, despite their potential to improve knowledge and understanding of the research. Aim: To present the main lessons learned from undertaking a pilot in a qualitative PhD study. Discussion: This paper draws together lessons learned when undertaking a pilot as part of a qualitative research project. Important methodological and practical issues identified during the pilot study are discussed, including access, recruitment, data collection and the personal development of the researcher. The resulting changes to the final study are also highlighted. Conclusion: Sharing experiences of and lessons learned in a pilot study enhances personal development, improves researchers' confidence and competence, and contributes to the understanding of research. Implications for practice: Pilots can be used effectively in qualitative studies to refine the final design, and provide the researcher with practical experience to enhance confidence and competence.
Assimilating Remote Ammonia Observations with a Refined Aerosol Thermodynamics Adjoint
Ammonia emissions parameters in North America can be refined in order to improve the evaluation of modeled concentrations against observations. Here, we seek to do so by developing and applying the GEOS-Chem adjoint nested over North America to conduct assimilation of observations...
Structure and atomic correlations in molecular systems probed by XAS reverse Monte Carlo refinement
NASA Astrophysics Data System (ADS)
Di Cicco, Andrea; Iesari, Fabio; Trapananti, Angela; D'Angelo, Paola; Filipponi, Adriano
2018-03-01
The Reverse Monte Carlo (RMC) algorithm for structure refinement has been applied to x-ray absorption spectroscopy (XAS) multiple-edge data sets for six gas phase molecular systems (SnI2, CdI2, BBr3, GaI3, GeBr4, GeI4). Sets of thousands of molecular replicas were involved in the refinement process, driven by the XAS data and constrained by available electron diffraction results. The equilibrated configurations were analysed to determine the average tridimensional structure and obtain reliable bond and bond-angle distributions. Detectable deviations from Gaussian models were found in some cases. This work shows that a RMC refinement of XAS data is able to provide geometrical models for molecular structures compatible with present experimental evidence. The validation of this approach on simple molecular systems is particularly important in view of its possible simple extension to more complex and extended systems including metal-organic complexes, biomolecules, or nanocrystalline systems.
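The core of an RMC refinement is a Metropolis-style accept/reject step driven by the data misfit rather than by an energy. A minimal single-move sketch is given below; calc_signal stands in for the XAS forward calculation, and sigma and step are assumed tuning parameters.

```python
import numpy as np

def rmc_step(coords, chi_exp, calc_signal, sigma=0.01, step=0.05, rng=None):
    """One RMC move: displace a random atom, accept via Metropolis on chi^2."""
    if rng is None:
        rng = np.random.default_rng()
    cost = np.sum((calc_signal(coords) - chi_exp) ** 2) / sigma**2
    trial = coords.copy()
    trial[rng.integers(len(coords))] += rng.normal(scale=step, size=3)
    new_cost = np.sum((calc_signal(trial) - chi_exp) ** 2) / sigma**2
    if new_cost < cost or rng.random() < np.exp(cost - new_cost):
        return trial, new_cost          # accepted move
    return coords, cost                 # rejected: keep configuration
```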
Formation and mechanism of nanocrystalline AZ91 powders during HDDR processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yafen; Fan, Jianfeng, E-mail: fanjianfeng@tyu
2017-03-15
Grain sizes of AZ91 alloy powders were markedly refined to about 15 nm from 100 to 160 μm by an optimized hydrogenation-disproportionation-desorption-recombination (HDDR) process. The effect of temperature, hydrogen pressure and processing time on phase and microstructure evolution of AZ91 alloy powders during the HDDR process was investigated systematically by X-ray diffraction, optical microscopy, scanning electron microscopy and transmission electron microscopy. The optimal HDDR process for preparing nanocrystalline Mg alloy powders is hydriding at a temperature of 350 °C under 4 MPa hydrogen pressure for 12 h and dehydriding at 350 °C for 3 h in vacuum. A modified unreacted core model was introduced to describe the mechanism of grain refinement during the HDDR process. - Highlights: • Grain size of the AZ91 alloy powders was significantly refined from 100 μm to 15 nm. • The optimal HDDR technology for nano Mg alloy powders is obtained. • A modified unreacted core model of the grain refinement mechanism was proposed.
Disaggregation and Refinement of System Dynamics Models via Agent-based Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nutaro, James J; Ozmen, Ozgur; Schryver, Jack C
System dynamics models are usually used to investigate aggregate level behavior, but these models can be decomposed into agents that have more realistic individual behaviors. Here we develop a simple model of the STEM workforce to illuminate the impacts that arise from the disaggregation and refinement of system dynamics models via agent-based modeling. Particularly, alteration of Poisson assumptions, adding heterogeneity to decision-making processes of agents, and discrete-time formulation are investigated and their impacts are illustrated. The goal is to demonstrate both the promise and danger of agent-based modeling in the context of a relatively simple model and to delineate the importance of modeling decisions that are often overlooked.
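The kind of disaggregation studied here can be illustrated with a toy workforce model: the aggregate stock-and-Poisson-outflow view of system dynamics next to an agent-based view in which each agent carries its own exit propensity. All rates and the gamma heterogeneity below are invented for illustration, not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(42)
N, T, base_rate = 10_000, 40, 0.05        # agents, time steps, mean exit rate

# System-dynamics view: one aggregate stock with a Poisson outflow.
stock, sd_series = N, []
for _ in range(T):
    stock = max(stock - rng.poisson(base_rate * stock), 0)
    sd_series.append(stock)

# Agent-based view: per-agent exit propensities (heterogeeity), discrete time.
propensity = rng.gamma(shape=2.0, scale=base_rate / 2.0, size=N)  # mean 0.05
active, ab_series = np.ones(N, dtype=bool), []
for _ in range(T):
    active &= ~(rng.random(N) < propensity)   # each agent decides separately
    ab_series.append(int(active.sum()))
```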
Rheological properties and bread quality of frozen yeast-dough with added wheat fiber.
Adams, Vivian; Ragaee, Sanaa M; Abdel-Aal, El-Sayed M
2017-01-01
The rheological characteristics of frozen dough are of great importance in bread-making quality. The effect of adding commercial wheat aleurone and bran on the rheological properties and final bread quality of frozen dough was studied. Wheat aleurone (A) and bran (B), containing 240 g kg⁻¹ and 200 g kg⁻¹ arabinoxylan (AX), respectively, were incorporated into refined wheat flour at a 150 g kg⁻¹ substitution level (composite A and B, respectively). Dough samples of composite A and B, in addition to two reference dough samples, refined flour (ref A) and whole wheat flour (ref B), were stored at -18°C for 9 weeks. Frozen stored composite dough samples contained higher amounts of bound water and less freezable water, and exhibited fewer modifications in the gluten network during frozen storage, based on data from differential scanning calorimetry and nuclear magnetic resonance spectroscopy. Bread made from composite frozen dough had higher loaf volume compared to ref A or ref B throughout the storage period. The incorporation of wheat fiber into refined wheat flour produced dough with minimal alterations in its rheological properties during 9 weeks of frozen storage compared to the refined and 100% wheat flour dough samples. © 2016 Society of Chemical Industry.
Integrated Process Modeling-A Process Validation Life Cycle Companion.
Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph
2017-10-17
During the regulatory requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is furthermore demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications will be shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipation of out-of-specification (OOS) events, identification of critical process parameters, and risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
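The MC idea behind an IPM can be caricatured in a few lines: sample process-parameter variation, propagate it through a chain of unit-operation models, and read off the fraction of simulated batches whose final CQA lands outside specification. The two transfer functions and the specification limits below are invented placeholders, not a real process model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_batches = 100_000

# Hypothetical unit operations: each maps (incoming quality, sampled PP)
# to an outgoing quality; real IPMs would fit these to development data.
def fermentation(q, pp):  return q * (1.00 + 0.02 * pp)
def purification(q, pp):  return q * (0.95 + 0.01 * pp)

quality = np.full(n_batches, 100.0)               # starting CQA value
for unit_op in (fermentation, purification):
    pp = rng.normal(size=n_batches)               # sampled PP variation
    quality = unit_op(quality, pp)                # stacked unit operations

spec_lo, spec_hi = 90.0, 105.0
p_oos = np.mean((quality < spec_lo) | (quality > spec_hi))
print(f"estimated OOS probability: {p_oos:.4%}")
```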
Refinement, Validation and Benchmarking of a Model for E-Government Service Quality
NASA Astrophysics Data System (ADS)
Magoutas, Babis; Mentzas, Gregoris
This paper presents the refinement and validation of a model for Quality of e-Government Services (QeGS). We built upon our previous work, in which a conceptualized model was identified, and focused on the confirmatory phase of the model development process in order to arrive at a valid and reliable QeGS model. The validated model, which was benchmarked with very positive results against similar models found in the literature, can be used for measuring QeGS in a reliable and valid manner. This forms the basis for a continuous quality improvement process, unleashing the full potential of e-government services for both citizens and public administrations.
NASA Astrophysics Data System (ADS)
Liu, Ying; Xu, Zhenhuan; Li, Yuguo
2018-04-01
We present a goal-oriented adaptive finite element (FE) modelling algorithm for 3-D magnetotelluric fields in generally anisotropic conductivity media. The model consists of a background layered structure, containing anisotropic blocks. Each block and layer might be anisotropic by assigning to them 3 × 3 conductivity tensors. The second-order partial differential equations are solved using the adaptive finite element method (FEM). The computational domain is subdivided into unstructured tetrahedral elements, which allow for complex geometries including bathymetry and dipping interfaces. The grid refinement process is guided by a global a posteriori error estimator and is performed iteratively. The system of linear FE equations for the electric field E is solved with the direct solver MUMPS. Then the magnetic field H can be found, in which the required derivatives are computed numerically using cubic spline interpolation. The 3-D FE algorithm has been validated by comparisons with both a 3-D finite-difference solution and 2-D FE results. Two model types are used to demonstrate the effects of anisotropy upon 3-D magnetotelluric responses: horizontal and dipping anisotropy. Finally, a 3-D sea hill model is simulated to study the effect of oblique interfaces and dipping anisotropy.
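The iterative estimate-mark-refine cycle that drives such an adaptive solver is sketched below in one dimension. The solution and the simple jump indicator are stand-ins for the FE solve and the goal-oriented a posteriori estimator of the paper.

```python
import numpy as np

def solve(nodes):
    # Placeholder for the FE solve: a steep profile needing local resolution.
    return np.tanh(50.0 * (nodes - 0.5))

nodes = np.linspace(0.0, 1.0, 11)
for _ in range(6):                                  # refinement iterations
    u = solve(nodes)
    eta = np.abs(np.diff(u))                        # per-element indicator
    marked = eta > 0.5 * eta.max()                  # mark the worst elements
    mids = 0.5 * (nodes[:-1] + nodes[1:])[marked]
    nodes = np.sort(np.concatenate([nodes, mids]))  # bisect marked elements
print(len(nodes), "nodes concentrated near the steep gradient")
```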
Velocity Models of the Sedimentary Cover and Acoustic Basement, Central Arctic
NASA Astrophysics Data System (ADS)
Bezumov, D. V.; Butsenko, V.
2017-12-01
As part of the Russian Federation's application to the Commission on the Limits of the Continental Shelf for extension of the outer limit of the continental shelf in the Arctic Ocean, regional 2D seismic reflection and sonobuoy data were acquired in 2011, 2012 and 2014. The structure and thickness of the sedimentary cover and acoustic basement of the central Arctic Ocean can be refined using these data. "VNIIOkeangeologia" created a methodology for matching 2D velocity models of the sedimentary cover based on the vertical velocity spectrum calculated from wide-angle reflection sonobuoy data and on the results of ray tracing of reflected and refracted waves. Matched 2D velocity models of the sedimentary cover in the Russian part of the Arctic Ocean were computed along several seismic profiles (see Figure). Figure caption: a) vertical velocity spectrum calculated from wide-angle reflection sonobuoy data; the RMS velocity curve was picked in accordance with the interpreted MCS section, and interval velocities within sedimentary units are shown (interval velocities from the Seiswide model in brackets); b) interpreted sonobuoy record overlain with time-distance curves calculated by ray-tracing modelling; c) final depth-velocity model specified by means of the Seiswide software.
Facet Annotation by Extending CNN with a Matching Strategy.
Wu, Bei; Wei, Bifan; Liu, Jun; Guo, Zhaotong; Zheng, Yuanhao; Chen, Yihe
2018-06-01
Most community question answering (CQA) websites manage large numbers of question-answer pairs (QAPs) through topic-based organizations, which may not satisfy users' fine-grained search demands. Facets of topics serve as a powerful tool to navigate, refine, and group the QAPs. In this work, we propose FACM, a model to annotate QAPs with facets by extending convolutional neural networks (CNNs) with a matching strategy. First, phrase information is incorporated into the text representation by CNNs with different kernel sizes. Then, through a matching strategy among QAPs and facet label texts (FaLTs) acquired from Wikipedia, we generate similarity matrices to deal with the facet heterogeneity. Finally, a three-channel CNN is trained for facet label assignment of QAPs. Experiments on three real-world data sets show that FACM outperforms the state-of-the-art methods.
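The multi-kernel convolutional step described above follows the familiar text-CNN pattern of parallel 1-D convolutions over word embeddings, one per n-gram width. The PyTorch sketch below shows that pattern only; the dimensions, kernel sizes and vocabulary are arbitrary, and it is not the FACM implementation.

```python
import torch
import torch.nn as nn

class MultiKernelTextCNN(nn.Module):
    def __init__(self, vocab=10_000, dim=128, n_filters=64, kernels=(2, 3, 4)):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        # One convolution per kernel size; each sees n-grams of that width.
        self.convs = nn.ModuleList(nn.Conv1d(dim, n_filters, k) for k in kernels)

    def forward(self, token_ids):                    # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)    # (batch, dim, seq_len)
        # Max-over-time pooling per kernel size, then concatenate.
        feats = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return torch.cat(feats, dim=1)               # (batch, n_filters * 3)

text_repr = MultiKernelTextCNN()(torch.randint(0, 10_000, (8, 40)))
```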
Global polar geospatial information service retrieval based on search engine and ontology reasoning
Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang
2007-01-01
In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: the geospatial service search finds coarse candidate services on the web, and the ontology reasoning is designed to find the refined service among the coarse candidates. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine, and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.
Dictionary Indexing of Electron Channeling Patterns.
Singh, Saransh; De Graef, Marc
2017-02-01
The dictionary-based approach to the indexing of diffraction patterns is applied to electron channeling patterns (ECPs). The main ingredients of the dictionary method are introduced, including the generalized forward projector (GFP), the relevant detector model, and a scheme to uniformly sample orientation space using the "cubochoric" representation. The GFP is used to compute an ECP "master" pattern. Derivative free optimization algorithms, including the Nelder-Mead simplex and the bound optimization by quadratic approximation are used to determine the correct detector parameters and to refine the orientation obtained from the dictionary approach. The indexing method is applied to poly-silicon and shows excellent agreement with the calibrated values. Finally, it is shown that the method results in a mean disorientation error of 1.0° with 0.5° SD for a range of detector parameters.
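Derivative-free refinement of detector parameters of the kind described here can be sketched with SciPy's Nelder-Mead implementation: adjust the parameters to maximize the normalized correlation between an experimental pattern and one simulated from the master pattern. The Gaussian forward model below is a placeholder for the actual pattern simulation.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(params, grid):
    # Placeholder forward model standing in for the master-pattern projector:
    # a Gaussian whose centre/width mimic detector-geometry parameters.
    centre, width = params
    return np.exp(-((grid - centre) ** 2) / (2 * width ** 2))

def neg_ncc(params, experimental, grid):
    sim = simulate(params, grid)
    return -np.dot(sim, experimental) / (
        np.linalg.norm(sim) * np.linalg.norm(experimental))

grid = np.linspace(-1.0, 1.0, 256)
experimental = simulate((0.12, 0.20), grid)         # synthetic "measurement"

fit = minimize(neg_ncc, x0=(0.0, 0.30), args=(experimental, grid),
               method="Nelder-Mead")
print(fit.x)    # refined parameters, close to (0.12, 0.20)
```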
Adjoint tomography of Empirical Green's functions from ambient noise in Southern California
NASA Astrophysics Data System (ADS)
Wang, K.; Liu, Q.; Yang, Y.; Basini, P.; Tape, C.
2017-12-01
We construct a new shear-wave velocity (Vsv) model in Southern California by adjoint tomography of Rayleigh-wave Empirical Green's functions at 5-50 s period from Z-Z component ambient noise cross-correlations. The initial model of our adjoint tomography is the isotropic Vs model M16 from Tape et al. [2010], which was generated from three-component body and surface waves at 2-30 s period from local earthquake data. Synthetic Green's functions (SGFs) from M16 show a good agreement with the Empirical Green's functions (EGFs) from ambient noise at the 5-50 s and 10-50 s period bands, but have an average 1.75 s advance in time at 20-50 s. By minimizing the traveltime differences between the EGFs and SGFs using a gradient-based algorithm, the initial model is refined and improved, and the total misfit is reduced from the initial 1.75 s to a convergent value of 0.33 s after five iterations. The final Vsv model fits the EGF waveforms better than the initial model at all three frequency bands, with smaller misfit distributions. Our new Vsv model reveals some new features in the mid- and lower crust, mainly including: (1) the mean speed of the lower crust is reduced by about 5%; (2) in the Los Angeles Basin and the area to its north, the speed is higher than in the initial model throughout the crust; (3) beneath the westernmost Peninsular Range Batholith (PRB) and Sierra Nevada Batholith (SNB), we observe high shear velocities in the lower crust; (4) a shallow high-velocity zone in the mid-crust is observed beneath the Salton Trough Basin. Our model also shows refined lateral velocity gradients across the PRB, SNB and San Andreas Fault (SAF), which helps to constrain the west-east compositional boundary in the PRB and SNB, and the dip angle and depth extent of the SAF. Our study demonstrates the feasibility of adjoint tomography of ambient noise data in Southern California, which is an important complement to earthquake data. The numerical solver used in adjoint tomography can provide more accurate structure sensitivity kernels than the analytical methods used in traditional ambient noise tomography.
Scheduler for monitoring objects orbiting earth using satellite-based telescopes
Olivier, Scot S; Pertica, Alexander J; Riot, Vincent J; De Vries, Willem H; Bauman, Brian J; Nikolaev, Sergei; Henderson, John R; Phillion, Donald W
2015-04-28
An ephemeris refinement system includes satellites with imaging devices in earth orbit to make observations of space-based objects ("target objects") and a ground-based controller that controls the scheduling of the satellites to make the observations of the target objects and refines orbital models of the target objects. The ground-based controller determines when the target objects of interest will be near enough to a satellite for that satellite to collect an image of the target object based on an initial orbital model for the target objects. The ground-based controller directs the schedules to be uploaded to the satellites, and the satellites make observations as scheduled and download the observations to the ground-based controller. The ground-based controller then refines the initial orbital models of the target objects based on the locations of the target objects that are derived from the observations.
Monitoring objects orbiting earth using satellite-based telescopes
Olivier, Scot S.; Pertica, Alexander J.; Riot, Vincent J.; De Vries, Willem H.; Bauman, Brian J.; Nikolaev, Sergei; Henderson, John R.; Phillion, Donald W.
2015-06-30
An ephemeris refinement system includes satellites with imaging devices in earth orbit to make observations of space-based objects ("target objects") and a ground-based controller that controls the scheduling of the satellites to make the observations of the target objects and refines orbital models of the target objects. The ground-based controller determines when the target objects of interest will be near enough to a satellite for that satellite to collect an image of the target object based on an initial orbital model for the target objects. The ground-based controller directs the schedules to be uploaded to the satellites, and the satellites make observations as scheduled and download the observations to the ground-based controller. The ground-based controller then refines the initial orbital models of the target objects based on the locations of the target objects that are derived from the observations.
NASA Technical Reports Server (NTRS)
Arnold, William R.
2015-01-01
Since last year, a number of expanded capabilities have been added to the modeler. To support integration with thermal modeling, the program can now produce simplified thermal models with the same geometric parameters as the more detailed dynamic and even more refined stress models. The local mesh refinement and mesh improvement tools have been expanded and made more user friendly. The goal is to provide a means of evaluating both monolithic and segmented mirrors to the same level of fidelity and loading conditions at reasonable man-power effort. The paper will demonstrate most of these new capabilities.
Structural analysis of glycoproteins: building N-linked glycans with Coot.
Emsley, Paul; Crispin, Max
2018-04-01
Coot is a graphics application that is used to build or manipulate macromolecular models; its particular forte is manipulation of the model at the residue level. The model-building tools of Coot have been combined and extended to assist or automate the building of N-linked glycans. The model is built by the addition of monosaccharides, placed by variation of internal coordinates. The subsequent model is refined by real-space refinement, which is stabilized with modified and additional restraints. It is hoped that these enhanced building tools will help to reduce building errors of N-linked glycans and improve our knowledge of the structures of glycoproteins.
NASA Technical Reports Server (NTRS)
Arnold, William R., Sr.
2015-01-01
Since last year, a number of expanded capabilities have been added to the modeler. To support integration with thermal modeling, the program can now produce simplified thermal models with the same geometric parameters as the more detailed dynamic and even more refined stress models. The local mesh refinement and mesh improvement tools have been expanded and made more user friendly. The goal is to provide a means of evaluating both monolithic and segmented mirrors to the same level of fidelity and loading conditions at reasonable man-power effort. The paper will demonstrate most of these new capabilities.
Sekar, K.; Rajakannan, V.; Gayathri, D.; Velmurugan, D.; Poi, M.-J.; Dauter, M.; Dauter, Z.; Tsai, M.-D.
2005-01-01
The enzyme phospholipase A2 catalyzes the hydrolysis of the sn-2 acyl chain of phospholipids, forming fatty acids and lysophospholipids. The crystal structure of a triple mutant (K53,56,121M) of bovine pancreatic phospholipase A2, in which the lysine residues at positions 53, 56 and 121 are replaced recombinantly by methionines, has been determined at atomic resolution (0.97 Å). The crystal is monoclinic (space group P2), with unit-cell parameters a = 36.934, b = 23.863, c = 65.931 Å, β = 101.47°. The structure was solved by molecular replacement and has been refined to a final R factor of 10.6% (Rfree = 13.4%) using 63 926 unique reflections. The final protein model consists of 123 amino-acid residues, two calcium ions, one chloride ion, 243 water molecules and six 2-methyl-2,4-pentanediol molecules. The surface-loop residues 60–70 are ordered and have clear electron density. PMID:16508077
An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.
1993-01-01
We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to qualify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
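The Richardson-extrapolation error estimate mentioned above compares one coarse step with two half-steps of the same scheme; cells whose estimate exceeds the tolerance are flagged for local refinement. The one-step update below is a generic placeholder, not the MacCormack scheme of the paper.

```python
import numpy as np

def step(u, dt, flux):
    # Placeholder one-step update of a 1-D conservation-law solver.
    return u - dt * flux(u)

def richardson_indicator(u, dt, flux, order=2):
    coarse = step(u, dt, flux)                        # one step of size dt
    fine = step(step(u, dt / 2, flux), dt / 2, flux)  # two steps of size dt/2
    return np.abs(fine - coarse) / (2**order - 1)     # local error estimate

u = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
est = richardson_indicator(u, dt=0.05, flux=lambda v: v**2 / 2)
refine_mask = est > 1e-4                  # cells exceeding the tolerance
print(refine_mask.sum(), "cells flagged for refinement")
```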
Assessing food allergy risks from residual peanut protein in highly refined vegetable oil.
Blom, W Marty; Kruizinga, Astrid G; Rubingh, Carina M; Remington, Ben C; Crevel, René W R; Houben, Geert F
2017-08-01
Refined vegetable oils including refined peanut oil are widely used in foods. Due to shared production processes, refined non-peanut vegetable oils can contain residual peanut proteins. We estimated the predicted number of allergic reactions to residual peanut proteins using probabilistic risk assessment applied to several scenarios involving food products made with vegetable oils. Variables considered were: a) the estimated production scale of refined peanut oil, b) estimated cross-contact between refined vegetable oils during production, c) the proportion of fat in representative food products and d) the peanut protein concentration in refined peanut oil. For all products examined, the predicted risk of objective allergic reactions in peanut-allergic users of the food products was extremely low. The number of predicted reactions ranged, depending on the dose-distribution model, from a high of 3 per 1000 eating occasions (Weibull) to no reactions (LogNormal). Significantly, all reactions were predicted for allergen intakes well below the amounts reported for the most sensitive individual described in the clinical literature. We conclude that the health risk from cross-contact between vegetable oils and refined peanut oil is negligible. None of the food products would warrant precautionary labelling for peanut according to the VITAL® programme of the Allergen Bureau. Copyright © 2017 Elsevier Ltd. All rights reserved.
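Structurally, such a probabilistic assessment samples an allergen intake per eating occasion and an individual threshold dose, then counts the occasions where intake exceeds threshold. The sketch below shows that structure only; every distribution parameter is invented and does not reproduce the study's inputs or results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000                                    # simulated eating occasions

# Hypothetical exposure model: protein in oil (mg/kg) x fat fraction x portion.
protein_in_oil = rng.lognormal(mean=-1.0, sigma=1.0, size=n)  # mg per kg oil
oil_fraction = 0.25                                           # fat in product
portion_kg = rng.normal(0.05, 0.01, size=n).clip(min=0.0)
intake_mg = protein_in_oil * oil_fraction * portion_kg

# Hypothetical log-normal threshold-dose distribution for allergic users.
threshold_mg = rng.lognormal(mean=1.5, sigma=1.5, size=n)

p_reaction = np.mean(intake_mg > threshold_mg)
print(f"predicted reactions per 1000 occasions: {1000 * p_reaction:.3f}")
```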
Theory of a refined earth model
NASA Technical Reports Server (NTRS)
Krause, H. G. L.
1968-01-01
Refined equations are derived relating the variations of the earth's gravity and radius as functions of longitude and latitude. They particularly relate the oblateness coefficients of the odd harmonics and the difference of the polar radii (respectively, ellipticities and polar gravity accelerations) in the Northern and Southern Hemispheres.
Refining King and Baxter Magolda's Model of Intercultural Maturity
ERIC Educational Resources Information Center
Perez, Rosemary J.; Shim, Woojeong; King, Patricia M.; Baxter Magolda, Marcia B.
2015-01-01
This study examined 110 intercultural experiences from 82 students attending six colleges and universities to explore how students' interpretations of their intercultural experiences reflected their developmental capacities for intercultural maturity. Our analysis of students' experiences confirmed as well as refined and expanded King and Baxter…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aristos Aristidou (NatureWorks); Robert Kean; Tom Schechinger
2007-10-01
The two main objectives of this project were: 1) to develop and test technologies to harvest, transport, store, and separate corn stover to supply a clean raw material to the bioproducts industry, and 2) engineer fermentation systems to meet performance targets for lactic acid and ethanol manufacturers. Significant progress was made in testing methods to harvest corn stover in a "single pass" harvest mode (collect corn grain and stover at the same time). This is technically feasible on a small scale, but additional equipment refinements will be needed to facilitate cost-effective harvest on a larger scale. Transportation models were developed, which indicate that at a corn stover yield of 2.8 tons/acre and a purchase price of $35/ton stover, it would be unprofitable to transport stover more than about 25 miles; this suggests the development of many regional collection centers. Therefore, collection centers should be located within about 30 miles of the farm to keep transportation costs at an acceptable level. These collection centers could then potentially do some preprocessing (to fractionate or increase bulk density) and/or ship the biomass by rail or barge to the final customers. Wet storage of stover via ensilage was tested, but no clear economic advantages were evident. Wet storage eliminates fire risk, but increases the complexity of component separation and may result in a small loss of carbohydrate content (fermentation potential). A study of possible supplier-producer relationships concluded that a "quasi-vertical" integration model would be best suited for new bioproducts industries based on stover. In this model, the relationship would involve a multiyear supply contract (processor with purchase guarantees, producer group with supply guarantees). Price will likely be fixed or calculated based on some formula (possibly cost plus). Initial quality requirements will be specified (but subject to refinement). Producers would invest in harvest/storage/transportation equipment and the processor would build and operate the plant. Pilot fermentation studies demonstrated dramatic improvements in yields and rates with optimization of batch fermentor parameters. Demonstrated yields and rates are approaching those necessary for profitable commercial operation for production of ethanol or lactic acid. The ability of the biocatalyst to adapt to biomass hydrolysate (both biomass sugars and toxins in the hydrolysate) was demonstrated and points towards ultimate successful commercialization of the technology. However, some of this work will need to be repeated and possibly extended to adapt the final selected biocatalyst to the specific commercial hydrolysate composition. The path from corn stover in the farm field to final products involves a number of steps. Each of these steps has options, problems, and uncertainties, creating a very complex multidimensional obstacle to successful commercial development. Through the tasks of this project, the technical and commercial uncertainties of many of these steps have been addressed, providing a clearer understanding of paths forward and the commercial viability of a corn stover-based biorefinery.
Principles of proteome allocation are revealed using proteomic data and genome-scale models
Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.; Ebrahim, Ali; Saunders, Michael A.; Palsson, Bernhard O.
2016-01-01
Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the “generalist” (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and “hedging” against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. This flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models. PMID:27857205
Lift-Off: Using Reference Imagery and Freehand Sketching to Create 3D Models in VR.
Jackson, Bret; Keefe, Daniel F
2016-04-01
Three-dimensional modeling has long been regarded as an ideal application for virtual reality (VR), but current VR-based 3D modeling tools suffer from two problems that limit creativity and applicability: (1) the lack of control for freehand modeling, and (2) the difficulty of starting from scratch. To address these challenges, we present Lift-Off, an immersive 3D interface for creating complex models with a controlled, handcrafted style. Artists start outside of VR with 2D sketches, which are then imported and positioned in VR. Then, using a VR interface built on top of image processing algorithms, 2D curves within the sketches are selected interactively and "lifted" into space to create a 3D scaffolding for the model. Finally, artists sweep surfaces along these curves to create 3D models. Evaluations are presented for both long-term users and for novices who each created a 3D sailboat model from the same starting sketch. Qualitative results are positive, with the visual style of the resulting models of animals and other organic subjects as well as architectural models matching what is possible with traditional fine art media. In addition, quantitative data from logging features built into the software are used to characterize typical tool use and suggest areas for further refinement of the interface.
NASA Astrophysics Data System (ADS)
Hansen, K. M.; Christensen, J. H.; Brandt, J.; Frohn, L. M.; Geels, C.
2004-07-01
The Danish Eulerian Hemispheric Model (DEHM) is a 3-D dynamical atmospheric transport model originally developed to describe the atmospheric transport of sulphur into the Arctic. A new version of the model, DEHM-POP, developed to study the atmospheric transport and environmental fate of persistent organic pollutants (POPs) is presented. During environmental cycling, POPs can be deposited and re-emitted several times before reaching a final destination. A description of the exchange processes between the land/ocean surfaces and the atmosphere is included in the model to account for this multi-hop transport. The α-isomer of the pesticide hexachlorocyclohexane (α-HCH) is used as tracer in the model development. The structure of the model and processes included are described in detail. The results from a model simulation showing the atmospheric transport for the years 1991 to 1998 are presented and evaluated against measurements. The annual averaged atmospheric concentration of α-HCH for the 1990s is well described by the model; however, the shorter-term average concentration for most of the stations is not well captured. This indicates that the present simple surface description needs to be refined to get a better description of the air-surface exchange processes of POPs.
NASA Astrophysics Data System (ADS)
Hansen, K. M.; Christensen, J. H.; Brandt, J.; Frohn, L. M.; Geels, C.
2004-03-01
The Danish Eulerian Hemispheric Model (DEHM) is a 3-D dynamical atmospheric transport model originally developed to describe the atmospheric transport of sulphur into the Arctic. A new version of the model, DEHM-POP, developed to study the atmospheric transport and environmental fate of persistent organic pollutants (POPs) is presented. During environmental cycling, POPs can be deposited and re-emitted several times before reaching a final destination. A description of the exchange processes between the land/ocean surfaces and the atmosphere is included in the model to account for this multi-hop transport. The α-isomer of the pesticide hexachlorocyclohexane (α-HCH) is used as tracer in the model development. The structure of the model and processes included are described in detail. The results from a model simulation showing the atmospheric transport for the years 1991 to 1998 are presented and evaluated against measurements. The annual averaged atmospheric concentration of α-HCH for the 1990s is well described by the model; however, the shorter-term average concentration for most of the stations is not well captured. This indicates that the present simple surface description needs to be refined to get a better description of the air-surface exchange processes of POPs.
Principles of proteome allocation are revealed using proteomic data and genome-scale models
Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.; ...
2016-11-18
Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the “generalist” (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and “hedging” against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. Furthermore, this flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models.
Modeling as an Anchoring Scientific Practice for Explaining Friction Phenomena
NASA Astrophysics Data System (ADS)
Neilson, Drew; Campbell, Todd
2017-12-01
Through examining the day-to-day work of scientists, researchers in science studies have revealed how models are a central sense-making practice of scientists as they construct and critique explanations about how the universe works. Additionally, they allow predictions to be made using the tenets of the model. Given this, alongside research suggesting that engaging students in developing and using models can have a positive effect on learning in science classrooms, the recent national standards documents in science education have identified developing and using models as an important practice students should engage in as they apply and refine their ideas with peers and teachers in explaining phenomena or solving problems in classrooms. This article details how students can be engaged in developing and using models to help them make sense of friction phenomena in a high school conceptual physics classroom in ways that align with visions for teaching and learning outlined in the Next Generation Science Standards. This particular unit has been refined over several years to build on what was initially an inquiry-based unit we have described previously. In this latest iteration of the friction unit, students developed and refined models through engaging in small group and whole class discussions and investigations.
Convergence characteristics of nonlinear vortex-lattice methods for configuration aerodynamics
NASA Technical Reports Server (NTRS)
Seginer, A.; Rusak, Z.; Wasserstrom, E.
1983-01-01
Nonlinear panel methods lack a proof of the existence and uniqueness of their solutions. The convergence characteristics of an iterative, nonlinear vortex-lattice method are, therefore, carefully investigated. The effects of several parameters, including (1) the surface-paneling method, (2) the integration method for the trajectories of the wake vortices, (3) vortex-grid refinement, and (4) the initial conditions for the first iteration, on the computed aerodynamic coefficients and on the flow-field details are presented. The convergence of the iterative-solution procedure is usually rapid. The solution converges with grid refinement to a constant value, but the final value is not unique and varies with the wing surface-paneling and wake-discretization methods within some range in the vicinity of the experimental result.
Image segmentation via foreground and background semantic descriptors
NASA Astrophysics Data System (ADS)
Yuan, Ding; Qiang, Jingjing; Yin, Jihao
2017-09-01
In the field of image processing, it has been a challenging task to obtain a complete foreground that is not uniform in color or texture. Unlike other methods, which segment the image using only low-level features, we present a segmentation framework in which high-level visual features, such as semantic information, are used. First, initial semantic labels are obtained using a nonparametric method. Then, a subset of the training images, with foregrounds similar to that of the input image, is selected, and the semantic labels are further refined according to this subset. Finally, the input image is segmented by integrating the object affinity and the refined semantic labels. State-of-the-art performance was achieved in experiments on the challenging MSRC 21 dataset.
Rosetta Structure Prediction as a Tool for Solving Difficult Molecular Replacement Problems.
DiMaio, Frank
2017-01-01
Molecular replacement (MR), a method for solving the crystallographic phase problem using phases derived from a model of the target structure, has proven extremely valuable, accounting for the vast majority of structures solved by X-ray crystallography. However, when the resolution of data is low, or the starting model is very dissimilar to the target protein, solving structures via molecular replacement may be very challenging. In recent years, protein structure prediction methodology has emerged as a powerful tool in model building and model refinement for difficult molecular replacement problems. This chapter describes some of the tools available in Rosetta for model building and model refinement specifically geared toward difficult molecular replacement cases.
Modelling dynamics in protein crystal structures by ensemble refinement
Burnley, B Tom; Afonine, Pavel V; Adams, Paul D; Gros, Piet
2012-01-01
Single-structure models derived from X-ray data do not adequately account for the inherent, functionally important dynamics of protein molecules. We generated ensembles of structures by time-averaged refinement, where local molecular vibrations were sampled by molecular-dynamics (MD) simulation whilst global disorder was partitioned into an underlying overall translation–libration–screw (TLS) model. Modeling of 20 protein datasets at 1.1–3.1 Å resolution reduced cross-validated Rfree values by 0.3–4.9%, indicating that ensemble models fit the X-ray data better than single structures. The ensembles revealed that, while most proteins display a well-ordered core, some proteins exhibit a ‘molten core’ likely supporting functionally important dynamics in ligand binding, enzyme activity and protomer assembly. Order–disorder changes in HIV protease indicate a mechanism of entropy compensation for ordering the catalytic residues upon ligand binding by disordering specific core residues. Thus, ensemble refinement extracts dynamical details from the X-ray data that allow a more comprehensive understanding of structure–dynamics–function relationships. DOI: http://dx.doi.org/10.7554/eLife.00311.001 PMID:23251785
Khoury, George A; Smadbeck, James; Kieslich, Chris A; Koskosidis, Alexandra J; Guzman, Yannis A; Tamamis, Phanourios; Floudas, Christodoulos A
2017-06-01
Protein structure refinement is the challenging problem of operating on any protein structure prediction to improve its accuracy with respect to the native structure in a blind fashion. Although many approaches have been developed and tested during the last four CASP experiments, a majority of the methods continue to degrade models rather than improve them. Princeton_TIGRESS (Khoury et al., Proteins 2014;82:794-814) was developed previously and utilizes separate sampling and selection stages involving Monte Carlo and molecular dynamics simulations and classification using an SVM predictor. The initial implementation was shown to consistently refine protein structures 76% of the time in our own internal benchmarking on CASP 7-10 targets. In this work, we improved the sampling and selection stages and tested the method in blind predictions during CASP11. We added a decomposition of physics-based and hybrid energy functions, as well as a coordinate-free representation of the protein structure through distance-binning Cα-Cα distances to capture fine-grained movements. We performed parameter estimation to optimize the adjustable SVM parameters to maximize precision while balancing sensitivity and specificity across all cross-validated data sets, finding enrichment in our ability to select models from the populations of similar decoys generated for targets in CASPs 7-10. The MD stage was enhanced such that larger structures could be further refined. Among refinement methods that are currently implemented as web-servers, Princeton_TIGRESS 2.0 demonstrated the most consistent and most substantial net refinement in blind predictions during CASP11. The enhanced refinement protocol Princeton_TIGRESS 2.0 is freely available as a web server at http://atlas.engr.tamu.edu/refinement/. Proteins 2017; 85:1078-1098. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.
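The selection stage described above can be caricatured with scikit-learn: decoys are featurized in a coordinate-free way (here, a histogram of binned pairwise Cα-Cα distances, echoing the representation mentioned in the abstract) and an SVM is trained to predict which decoys improved on the starting model. The features and labels below are synthetic stand-ins, not the Princeton_TIGRESS training data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.svm import SVC

def distance_histogram(ca_coords, bins=np.arange(0.0, 20.0, 0.5)):
    """Coordinate-free decoy feature: binned pairwise CA-CA distances."""
    h, _ = np.histogram(pdist(ca_coords), bins=bins)
    return h / h.sum()

rng = np.random.default_rng(0)
decoys = [np.cumsum(rng.normal(size=(80, 3)), axis=0) for _ in range(200)]
X = np.array([distance_histogram(d) for d in decoys])
y = rng.integers(0, 2, size=len(decoys))   # synthetic improved/degraded label

clf = SVC(kernel="rbf", C=1.0, probability=True).fit(X, y)
keep = clf.predict(X)                      # decoys predicted as "improved"
```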
A Dialogic Inquiry Approach to Working with Teachers in Developing Classroom Dialogue
ERIC Educational Resources Information Center
Hennessy, Sara; Mercer, Neil; Warwick, Paul
2011-01-01
Background/Context: This article describes how we refined an innovative methodology for equitable collaboration between university researchers and classroom practitioners building and refining theory together. The work builds on other coinquiry models in which complementary professional expertise is respected and deliberately exploited in order to…
This paper examines the use of Moderate Resolution Imaging Spectroradiometer (MODIS) observed active fire data (pixel counts) to refine the National Emissions Inventory (NEI) fire emission estimates for major wildfire events. This study was motivated by the extremely limited info...
Adaptive mesh refinement and front-tracking for shear bands in an antiplane shear model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garaizar, F.X.; Trangenstein, J.
1998-09-01
In this paper the authors describe a numerical algorithm for the study of shear-band formation and growth in a two-dimensional antiplane shear of granular materials. The algorithm combines front-tracking techniques and adaptive mesh refinement. Tracking provides a more careful evolution of the band when coupled with special techniques to advance the ends of the shear band in the presence of a loss of hyperbolicity. The adaptive mesh refinement allows the computational effort to be concentrated in important areas of the deformation, such as the shear band and the elastic relief wave. The main challenges are the problems related to shear bands that extend across several grid patches and the effects that a nonhyperbolic growth rate of the shear bands has on the refinement process. They give examples of the success of the algorithm for various levels of refinement.
Implementation of a Parallel Protein Structure Alignment Service on Cloud
Hung, Che-Lun; Lin, Yaw-Ling
2013-01-01
Protein structure alignment has become an important strategy by which to identify evolutionary relationships between protein sequences. Several alignment tools are currently available for online comparison of protein structures. In this paper, we propose a parallel protein structure alignment service based on the Hadoop distribution framework. This service includes a protein structure alignment algorithm, a refinement algorithm, and a MapReduce programming model. The refinement algorithm refines the result of alignment. To process vast numbers of protein structures in parallel, the alignment and refinement algorithms are implemented using MapReduce. We analyzed and compared the structure alignments produced by different methods using a dataset randomly selected from the PDB database. The experimental results verify that the proposed algorithm refines the resulting alignments more accurately than existing algorithms. Meanwhile, the computational performance of the proposed service is proportional to the number of processors used in our cloud platform. PMID:23671842
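A minimal sketch of how pairwise alignment work decomposes under the MapReduce model is shown below; the scoring stub and all function names are hypothetical stand-ins, not the service's actual code.

```python
from itertools import groupby
from operator import itemgetter

def align_pair(query_id, target_id):
    # Hypothetical stub standing in for a real structure-alignment
    # kernel; returns a dummy score.
    return {"pair": (query_id, target_id), "score": 0.0}

def map_phase(query_id, target_ids):
    # Each map task aligns the query against one target independently,
    # so tasks distribute naturally across Hadoop workers.
    return [(query_id, align_pair(query_id, t)) for t in target_ids]

def reduce_phase(mapped):
    # Group emitted (key, value) pairs by query and keep the best hit.
    mapped = sorted(mapped, key=itemgetter(0))
    return {key: max(group, key=lambda kv: kv[1]["score"])[1]
            for key, group in groupby(mapped, key=itemgetter(0))}
```

Because every map task is independent, throughput scales with the number of workers, which matches the reported proportionality between performance and processor count.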
Meshfree truncated hierarchical refinement for isogeometric analysis
NASA Astrophysics Data System (ADS)
Atri, H. R.; Shojaee, S.
2018-05-01
In this paper the truncated hierarchical B-spline (THB-spline) is coupled with the reproducing kernel particle method (RKPM) to blend the advantages of isogeometric analysis and meshfree methods. Since, under certain conditions, the isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, the recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can be easily defined, which provides a genuinely meshfree approach to refining the model locally in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method provides efficient approximation schemes for numerical simulations and shows promising performance in the adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach to adaptive local refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.
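For context, the reproducing kernel shape functions referred to above take the following standard form (standard RKPM notation, assumed here rather than quoted from the paper): with kernel Φ_a, monomial basis vector H, and moment matrix M,

```latex
\Psi_I(\mathbf{x}) =
  \mathbf{H}^{T}(\mathbf{0})\,\mathbf{M}^{-1}(\mathbf{x})\,
  \mathbf{H}(\mathbf{x}-\mathbf{x}_I)\,\Phi_a(\mathbf{x}-\mathbf{x}_I),
\qquad
\mathbf{M}(\mathbf{x}) = \sum_{I}
  \mathbf{H}(\mathbf{x}-\mathbf{x}_I)\,
  \mathbf{H}^{T}(\mathbf{x}-\mathbf{x}_I)\,
  \Phi_a(\mathbf{x}-\mathbf{x}_I)
```

The correction built from M enforces the reproducing conditions Σ_I Ψ_I(x) x_I^k = x^k up to the chosen order, which is what allows B-spline and NURBS bases to be recovered exactly under suitable conditions.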
Determination of the optimal level for combining area and yield estimates
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Hixson, M. M.; Jobusch, C. D.
1981-01-01
Several levels for obtaining both area and yield estimates of corn and soybeans in Iowa were considered: county, refined strata, refined/split strata, crop reporting district (CRD), and state. Using the CCEA model form and smoothed weather data, regression coefficients at each level were derived to compute yield and its variance. Variances were also computed at the stratum level. The variance of the yield estimates was largest at the state level and smallest at the county level for both crops. The refined strata had somewhat larger variances than those associated with the refined/split strata and the CRD. For production estimates, the difference in standard deviations among levels was not large for corn, but for soybeans the standard deviation at the state level was more than 50% greater than for the other levels. The refined strata had the smallest standard deviations. The county level was not considered in the evaluation of production estimates due to the lack of county area variances.
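For orientation, a standard way such area and yield estimates combine into a production variance is the delta-method approximation (assuming independent area and yield estimates; the report's exact formulas may differ):

```latex
\hat{P} = \hat{A}\,\hat{Y},
\qquad
\operatorname{Var}(\hat{P}) \approx
  \hat{A}^{2}\operatorname{Var}(\hat{Y})
  + \hat{Y}^{2}\operatorname{Var}(\hat{A}),
\qquad
\operatorname{Var}\Bigl(\sum_{s}\hat{P}_{s}\Bigr)
  = \sum_{s}\operatorname{Var}(\hat{P}_{s})
```

Aggregating at a finer level and summing independent strata can therefore yield a smaller total variance than estimating directly at a coarse level, consistent with the comparison above.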
i3Drefine software for protein 3D structure refinement and its assessment in CASP10.
Bhattacharya, Debswapna; Cheng, Jianlin
2013-01-01
Protein structure refinement refers to the process of improving the quality of protein structures during structure modeling to bring them closer to their native states. Structure refinement has drawn increasing attention in the community-wide Critical Assessment of techniques for Protein Structure prediction (CASP) experiments since its addition in the 8th CASP experiment. During the 9th and the recently concluded 10th CASP experiments, the number of refinement targets and participating groups grew consistently. Yet protein structure refinement remains a largely unsolved problem, with the majority of groups in the CASP refinement category failing to consistently improve the quality of the structures issued for refinement. To address this need, we developed a completely automated and computationally efficient protein 3D structure refinement method, i3Drefine, based on an iterative and highly convergent energy minimization algorithm combining a powerful all-atom composite physics- and knowledge-based force field with a hydrogen bonding (HB) network optimization technique. In the recent community-wide blind experiment, CASP10, i3Drefine (as 'MULTICOM-CONSTRUCT') was ranked the best method in the server section per the official CASP10 assessment. Here we provide the community with free access to the i3Drefine software, systematically analyse its performance in strict blind mode on the targets issued in the CASP10 refinement category, and compare it with other state-of-the-art refinement methods participating in CASP10. Our analysis demonstrates that i3Drefine was the only fully automated server in CASP10 to exhibit consistent improvement over the initial structures in both global and local structural quality metrics. An executable version of i3Drefine is freely available at http://protein.rnet.missouri.edu/i3drefine/.
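The iterative, convergence-driven minimization can be pictured with a short sketch; every name and the stopping tolerance below are hypothetical illustrations, not i3Drefine's actual API.

```python
def iterative_refine(structure, energy_fn, minimize_step,
                     max_iters=1000, tol=1e-4):
    """Generic iterative energy minimization with a convergence check
    (illustrative sketch, not i3Drefine's implementation)."""
    prev = energy_fn(structure)
    for _ in range(max_iters):
        structure = minimize_step(structure)  # one all-atom minimization move
        cur = energy_fn(structure)
        if abs(prev - cur) < tol:             # stop once the composite
            break                             # energy change stalls
        prev = cur
    return structure
```

A composite score of physics- and knowledge-based terms would be supplied as `energy_fn`, with hydrogen-bond network optimization folded into the minimization step.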
Faundeen, John L.; Hutchison, Vivian
2017-01-01
This paper details how the United States Geological Survey (USGS) Community for Data Integration (CDI) Data Management Working Group developed a Science Data Lifecycle Model, and the role the Model plays in shaping agency-wide policies. Starting with an extensive literature review of existing data lifecycle models, representatives from various backgrounds in USGS attended a two-day meeting where the basic elements of the Science Data Lifecycle Model were determined. Refinements and reviews spanned two years, leading to finalization of the model and documentation in a formal agency publication. The Model serves as a critical framework for data management policy, instructional resources, and tools. The Model helps the USGS address both the Office of Science and Technology Policy (OSTP) mandate for increased public access to federally funded research and the Office of Management and Budget (OMB) 2013 Open Data directives, as the foundation for a series of agency policies related to data management planning, metadata development, data release procedures, and the long-term preservation of data. Additionally, the agency website devoted to data management instruction and best practices (www2.usgs.gov/datamanagement) is designed around the Model's structure and concepts. This paper also illustrates how the Model is being used to develop tools for supporting USGS research and data management processes.
NASA Astrophysics Data System (ADS)
Clegg, R. A.; White, D. M.; Hayhurst, C.; Ridel, W.; Harwick, W.; Hiermaier, S.
2003-09-01
The development and validation of an advanced material model for orthotropic materials, such as fibre-reinforced composites, is described. The model is specifically designed to facilitate the numerical simulation of impact and shock wave propagation through orthotropic materials and the prediction of subsequent material damage. Initial development of the model concentrated on correctly representing shock wave propagation in composite materials under high and hypervelocity impact conditions [1]. This work has now been extended to concentrate on the development of improved numerical models and material characterisation techniques for the prediction of damage, including residual strength, in fibre-reinforced composite materials. The work is focused on Kevlar-epoxy; however, materials such as CFRP are also being considered. The paper describes our most recent activities in relation to the implementation of advanced material modelling options in this area. These enable refined non-linear directional characteristics of composite materials to be modelled, in addition to the correct thermodynamic response under shock wave loading. The numerical work is backed by an extensive experimental programme covering a wide range of static and dynamic tests to facilitate derivation of model input data and to validate the predicted material response. Finally, the capability of the developing composite material model is discussed in relation to a hypervelocity impact problem.
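For reference, the linear-elastic orthotropic relation that such models generalize can be written in compliance form (normal components only; standard engineering notation, not the paper's constitutive equations):

```latex
\begin{pmatrix}\varepsilon_{11}\\ \varepsilon_{22}\\ \varepsilon_{33}\end{pmatrix}
=
\begin{pmatrix}
 1/E_{1} & -\nu_{21}/E_{2} & -\nu_{31}/E_{3}\\
 -\nu_{12}/E_{1} & 1/E_{2} & -\nu_{32}/E_{3}\\
 -\nu_{13}/E_{1} & -\nu_{23}/E_{2} & 1/E_{3}
\end{pmatrix}
\begin{pmatrix}\sigma_{11}\\ \sigma_{22}\\ \sigma_{33}\end{pmatrix}
```

The modelling challenge described above lies in coupling this directional stiffness with a nonlinear equation of state so that the volumetric (shock) and deviatoric (strength) responses remain consistent.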
Gesenhues, Jonas; Hein, Marc; Ketelhut, Maike; Habigt, Moriz; Rüschen, Daniel; Mechelinck, Mare; Albin, Thivaharan; Leonhardt, Steffen; Schmitz-Rode, Thomas; Rossaint, Rolf; Autschbach, Rüdiger; Abel, Dirk
2017-04-01
Computational models of biophysical systems generally constitute an essential component in the realization of smart biomedical applications. Typically, the development process of such models is characterized by a great extent of collaboration between different interdisciplinary parties. Furthermore, because many underlying mechanisms and the necessary degree of abstraction of biophysical system models are unknown beforehand, the steps of the development process are iteratively repeated as the model is refined. This paper presents methods and tools to facilitate that development process. First, the principle of object-oriented (OO) modeling is presented and its advantages over classical signal-oriented modeling are emphasized. Second, our self-developed simulation tool ModeliChart is presented. ModeliChart was designed specifically for clinical users and allows them to independently perform in silico studies in real time, including intuitive interaction with the model. Furthermore, ModeliChart is capable of interacting with hardware such as sensors and actuators. Finally, it is shown how optimal control methods in combination with OO models can be used to realize clinically motivated control applications. All methods presented are illustrated on an exemplary clinically oriented use case: the artificial perfusion of the systemic circulation.
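To illustrate the object-oriented style, here is a toy lumped-parameter circulation component in Python; this is a generic two-element Windkessel element, not ModeliChart code, and all names are hypothetical.

```python
class Windkessel2:
    """Toy two-element Windkessel compartment: C dp/dt = q_in - p/R.
    Encapsulates its own state and equations -- the key OO advantage
    over wiring up signal blocks by hand."""

    def __init__(self, R, C, p0=80.0):
        self.R, self.C = R, C   # peripheral resistance, compliance
        self.p = p0             # state: pressure (e.g., mmHg)

    def step(self, q_in, dt):
        # Explicit Euler update of the pressure state.
        self.p += (q_in - self.p / self.R) / self.C * dt
        return self.p
```

Components like this can be composed and exchanged without rewiring the whole model, which is what makes iterative refinement across interdisciplinary teams more tractable.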
DOE Office of Scientific and Technical Information (OSTI.GOV)
Litowitz, L.S.
The unit of instruction was developed and refined in three stages. In the first stage, potential objectives for the unit of instruction were identified from appropriate literature. The objectives were mailed to a group primarily consisting of IA/TE state supervisors for critiquing. Upon receiving responses from the critique group, a revised list of objectives was created. From this revised list a content framework was developed. The content framework was then submitted for critiquing to a second group primarily consisting of energy and power curriculum developers. The responses from the second critique group were summarized, and a refined content framework was produced. A series of twelve hands-on learning activities for junior high school students was then developed to fulfill the unit objectives and concepts identified in the content framework. These activities were critiqued by a final group of experts representing in-service junior high school IA/TE teachers. Revisions to the unit of instruction were made based on feedback from the final critique group. The unit of instruction was then implemented in a junior high school setting as a final means of obtaining formative evaluation. Results of the formative evaluation indicated that the students who participated in the field test had met many of the unit objectives. Recommendations as to the use of the unit of instruction were made.
Data-directed RNA secondary structure prediction using probabilistic modeling
Deng, Fei; Ledda, Mirko; Vaziri, Sana; Aviran, Sharon
2016-01-01
Structure dictates the function of many RNAs, but secondary RNA structure analysis is either labor intensive and costly or relies on computational predictions that are often inaccurate. These limitations are alleviated by integration of structure probing data into prediction algorithms. However, existing algorithms are optimized for a specific type of probing data. Recently, new chemistries combined with advances in sequencing have facilitated structure probing at unprecedented scale and sensitivity. These novel technologies and anticipated wealth of data highlight a need for algorithms that readily accommodate more complex and diverse input sources. We implemented and investigated a recently outlined probabilistic framework for RNA secondary structure prediction and extended it to accommodate further refinement of structural information. This framework utilizes direct likelihood-based calculations of pseudo-energy terms per considered structural context and can readily accommodate diverse data types and complex data dependencies. We use real data in conjunction with simulations to evaluate performances of several implementations and to show that proper integration of structural contexts can lead to improvements. Our tests also reveal discrepancies between real data and simulations, which we show can be alleviated by refined modeling. We then propose statistical preprocessing approaches to standardize data interpretation and integration into such a generic framework. We further systematically quantify the information content of data subsets, demonstrating that high reactivities are major drivers of SHAPE-directed predictions and that better understanding of less informative reactivities is key to further improvements. Finally, we provide evidence for the adaptive capability of our framework using mock probe simulations. PMID:27251549
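The likelihood-based pseudo-energy terms mentioned above typically take a log-likelihood-ratio form; shown below alongside the classic fixed-form SHAPE term of Deigan et al., both as orientation rather than the exact formulation of this framework:

```latex
\Delta G_{\mathrm{pseudo}}(i) =
  -RT\,\ln\frac{P(d_i \mid \mathrm{paired})}{P(d_i \mid \mathrm{unpaired})},
\qquad
\Delta G_{\mathrm{SHAPE}}(i) = m\,\ln\bigl(\mathrm{SHAPE}(i)+1\bigr) + b,
\quad m \approx 2.6,\ b \approx -0.8\ \mathrm{kcal/mol}
```

The likelihood form is what lets such a framework absorb diverse probe types: each data source only needs paired/unpaired reactivity distributions rather than a hand-tuned energy conversion.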
Functional Zonation of the Adult Mammalian Adrenal Cortex
Vinson, Gavin P.
2016-01-01
The standard model of adrenocortical zonation holds that the three main zones, glomerulosa, fasciculata, and reticularis each have a distinct function, producing mineralocorticoids (in fact just aldosterone), glucocorticoids, and androgens respectively. Moreover, each zone has its specific mechanism of regulation, though ACTH has actions throughout. Finally, the cells of the cortex originate from a stem cell population in the outer cortex or capsule, and migrate centripetally, changing their phenotype as they progress through the zones. Recent progress in understanding the development of the gland and the distribution of steroidogenic enzymes, trophic hormone receptors, and other factors suggests that this model needs refinement. Firstly, proliferation can take place throughout the gland, and although the stem cells are certainly located in the periphery, zonal replenishment can take place within zones. Perhaps more importantly, neither the distribution of enzymes nor receptors suggest that the individual zones are necessarily autonomous in their production of steroid. This is particularly true of the glomerulosa, which does not seem to have the full suite of enzymes required for aldosterone biosynthesis. Nor, in the rat anyway, does it express MC2R to account for the response of aldosterone to ACTH. It is known that in development, recruitment of stem cells is stimulated by signals from within the glomerulosa. Furthermore, throughout the cortex local regulatory factors, including cytokines, catecholamines and the tissue renin-angiotensin system, modify and refine the effects of the systemic trophic factors. In these and other ways it more and more appears that the functions of the gland should be viewed as an integrated whole, greater than the sum of its component parts. PMID:27378832
2014-05-01
solver to treat the spray process. An Adaptive Mesh Refinement (AMR) and fixed embedding technique is employed to capture the gas-liquid interface with high fidelity while keeping the cell...in single and multi-hole nozzle configurations. The models were added to the present CONVERGE liquid fuel database and validated extensively
BPS States, Crystals, and Matrices
Sułkowski, Piotr
2011-01-01
We review free fermion, melting crystal, and matrix model representations of wall-crossing phenomena on local, toric Calabi-Yau manifolds. We consider both unrefined and refined BPS counting of closed BPS states involving D2- and D0-branes bound to a D6-brane, as well as open BPS states involving open D2-branes ending on an additional D4-brane. Appropriate limits of these constructions provide, among others, matrix model representations of refined and unrefined topological string amplitudes.
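For orientation, the unrefined crystal-melting counting referred to above is generated by the MacMahon function, the generating function of plane partitions, which reproduces the closed topological string partition function of C³ (a standard result; the refined counting introduces a second expansion parameter):

```latex
Z_{\mathrm{crystal}}(q) \;=\; \sum_{\pi} q^{|\pi|}
  \;=\; M(q) \;=\; \prod_{n=1}^{\infty}\bigl(1 - q^{n}\bigr)^{-n}
```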
Liu, Hao; Liu, Haodong; Lapidus, Saul H.; ...
2017-06-21
Lithium transition metal oxides are an important class of electrode materials for lithium-ion batteries. Binary or ternary (transition) metal doping brings about new opportunities to improve the electrode's performance and often leads to more complex stoichiometries and atomic structures than the archetypal LiCoO2. Rietveld structural analysis of X-ray and neutron diffraction data is a widely used approach for the structural characterization of crystalline materials. However, different structural models and refinement approaches can lead to differing results, and some parameters can be difficult to quantify due to the inherent limitations of the data. Here, through the example of LiNi0.8Co0.15Al0.05O2 (NCA), we demonstrate the sensitivity of various structural parameters in Rietveld structural analysis to different refinement approaches and structural models, and propose an approach to reduce refinement uncertainties due to the inexact X-ray scattering factors of the constituent atoms within the lattice. Furthermore, this refinement approach was implemented for electrochemically cycled NCA samples and yielded accurate structural parameters using only X-ray diffraction data. The present work provides best practices for performing structural refinement of lithium transition metal oxides.
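For orientation, Rietveld refinement minimizes the weighted residual between observed and calculated diffraction profiles over all pattern points i, with fit quality commonly reported as R_wp (standard definitions, with weights w_i = 1/σ_i²):

```latex
S = \sum_{i} w_{i}\bigl(y_{i}^{\mathrm{obs}} - y_{i}^{\mathrm{calc}}\bigr)^{2},
\qquad
R_{\mathrm{wp}} = \left[
  \frac{\sum_{i} w_{i}\bigl(y_{i}^{\mathrm{obs}} - y_{i}^{\mathrm{calc}}\bigr)^{2}}
       {\sum_{i} w_{i}\bigl(y_{i}^{\mathrm{obs}}\bigr)^{2}}
\right]^{1/2}
```

Because several parameter sets can produce similar residuals, the choice of structural model and refinement strategy matters, which is exactly the sensitivity the study above quantifies.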
Technology, Division of Labor and Alienation From Work. Final Report.
ERIC Educational Resources Information Center
Shepard, Jon M.
This study investigated the theory that a worker's relationship to technology instills in him identifiable attitudes about work. Using samples of office workers from a bank and five insurance companies, and samples of factory workers from the oil refining and automobile industries, a total of 1,888 workers were divided into (1) office and factory…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-27
... likely lead to continuation or recurrence of dumping at the levels listed below in the section entitled... to CMC from Mexico would likely lead to a continuation or recurrence of dumping at the margins... to off-white, non-toxic, odorless, biodegradable powder, comprising sodium CMC that has been refined...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-06
... areas in the energy industry, including coal, oil, natural gas, and nuclear energy, as well as in... higher power ratings. 12. In processing and refining crude oil into petroleum products, oil refineries... energy industry, including coal, oil, natural gas, and nuclear energy, as well as in renewable resources...
The Introduction and Refinement of the Assessment of Digitally Recorded Audio Presentations
ERIC Educational Resources Information Center
Sinclair, Stefanie
2016-01-01
This case study critically evaluates benefits and challenges of a form of assessment included in a final year undergraduate Religious Studies Open University module, which combines a written essay task with a digital audio recording of a short oral presentation. Based on the analysis of student and tutor feedback and sample assignments, this study…