Sample records for "refine existing models"

  1. Locally refined block-centred finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are: (a) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (b) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods and the effect of the accuracy of sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  2. Locally refined block-centered finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling and predictions

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are (1) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed and (2) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods and the effect of the accuracy of sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.
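
A minimal, purely illustrative sketch (not from either paper) of the quantity at stake in both records above: the sensitivity of simulated heads to a hydraulic-conductivity parameter, computed by forward-difference perturbation for a toy two-zone, one-dimensional finite-difference model. Grid size, boundary heads, and conductivities are hypothetical.

```python
import numpy as np

def solve_heads(K1, K2, n=11, h_left=10.0, h_right=8.0):
    """Steady 1-D flow across two conductivity zones with Dirichlet ends."""
    K = np.where(np.arange(n) < n // 2, K1, K2)
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = h_left, h_right
    for i in range(1, n - 1):
        # harmonic-mean interblock conductances, as in block-centred codes
        tw = 2.0 * K[i - 1] * K[i] / (K[i - 1] + K[i])
        te = 2.0 * K[i] * K[i + 1] / (K[i] + K[i + 1])
        A[i, i - 1], A[i, i + 1], A[i, i] = tw, te, -(tw + te)
    return np.linalg.solve(A, b)

K1, K2 = 5.0, 1.0      # zone conductivities (hypothetical units)
dK = 1e-6 * K1         # forward-difference perturbation
sens = (solve_heads(K1 + dK, K2) - solve_heads(K1, K2)) / dK
print(np.round(sens, 4))   # dh/dK1 at each node
```

When such sensitivities are computed on a locally refined grid, any head or flux imbalance at the grid interface (the TMR issue described above) propagates directly into dh/dK and hence into the regression.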

  3. Refining mass formulas for astrophysical applications: A Bayesian neural network approach

    NASA Astrophysics Data System (ADS)

    Utama, R.; Piekarewicz, J.

    2017-10-01

    Background: Exotic nuclei, particularly those near the drip lines, are at the core of one of the fundamental questions driving nuclear structure and astrophysics today: What are the limits of nuclear binding? Exotic nuclei play a critical role in both informing theoretical models as well as in our understanding of the origin of the heavy elements. Purpose: Our aim is to refine existing mass models through the training of an artificial neural network that will mitigate the large model discrepancies far away from stability. Methods: The basic paradigm of our two-pronged approach is an existing mass model that captures as much as possible of the underlying physics followed by the implementation of a Bayesian neural network (BNN) refinement to account for the missing physics. Bayesian inference is employed to determine the parameters of the neural network so that model predictions may be accompanied by theoretical uncertainties. Results: Despite the undeniable quality of the mass models adopted in this work, we observe a significant improvement (of about 40%) after the BNN refinement is implemented. Indeed, in the specific case of the Duflo-Zuker mass formula, we find that the rms deviation relative to experiment is reduced from σrms=0.503 MeV to σrms=0.286 MeV. These newly refined mass tables are used to map the neutron drip lines (or rather "drip bands") and to study a few critical r-process nuclei. Conclusions: The BNN approach is highly successful in refining the predictions of existing mass models. In particular, the large discrepancy displayed by the original "bare" models in regions where experimental data are unavailable is considerably quenched after the BNN refinement. This lends credence to our approach and has motivated us to publish refined mass tables that we trust will be helpful for future astrophysical applications.
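
The refinement strategy is easy to state in code: learn the residual between experimental masses and a base ("bare") mass model, then add the learned correction to the base prediction. The sketch below is a deliberately simplified, non-Bayesian stand-in (a scikit-learn MLP point estimate instead of a BNN with posterior uncertainties), trained on synthetic data; the base model and "experimental" masses are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
Z = rng.integers(20, 100, 500)
N = rng.integers(20, 150, 500)
X = np.column_stack([Z / 100.0, N / 100.0])        # scaled (Z, N) inputs

m_base = 8.0 * (Z + N)                             # stand-in "bare" mass model (MeV)
m_exp = m_base + 0.5 * np.sin(Z / 10.0) + rng.normal(0.0, 0.1, Z.size)  # synthetic

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=8000, random_state=0)
net.fit(X, m_exp - m_base)                         # learn the residual ("missing physics")

m_refined = m_base + net.predict(X)
print("rms before:", np.sqrt(np.mean((m_exp - m_base) ** 2)))
print("rms after: ", np.sqrt(np.mean((m_exp - m_refined) ** 2)))
```

The paper's actual approach additionally samples the network's posterior so that each refined mass carries a theoretical uncertainty.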

  4. Re-refinement from deposited X-ray data can deliver improved models for most PDB entries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joosten, Robbie P.; Womack, Thomas; Vriend, Gert

    2009-02-01

    An evaluation of validation and real-space intervention possibilities for improving existing automated (re-)refinement methods. The deposition of X-ray data along with the customary structural models defining PDB entries makes it possible to apply large-scale re-refinement protocols to these entries, thus giving users the benefit of improvements in X-ray methods that have occurred since the structure was deposited. Automated gradient refinement is an effective method to achieve this goal, but real-space intervention is most often required in order to adequately address problems detected by structure-validation software. In order to improve the existing protocol, automated re-refinement was combined with structure validation and difference-density peak analysis to produce a catalogue of problems in PDB entries that are amenable to automatic correction. It is shown that re-refinement can be effective in producing improvements, which are often associated with the systematic use of the TLS parameterization of B factors, even for relatively new and high-resolution PDB entries, while the accompanying manual or semi-manual map analysis and fitting steps show good prospects for eventual automation. It is proposed that the potential for simultaneous improvements in methods and in re-refinement results be further encouraged by broadening the scope of depositions to include refinement metadata and ultimately primary rather than reduced X-ray data.

  5. GalaxyRefineComplex: Refinement of protein-protein complex model structures driven by interface repacking.

    PubMed

    Heo, Lim; Lee, Hasup; Seok, Chaok

    2016-08-18

    Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex.

  6. Hiproofs

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Power, John

    2003-01-01

    We introduce a hierarchical notion of formal proof, useful in the implementation of theorem provers, which we call hiproofs. Two alternative definitions are given, motivated by existing notations used in theorem proving research. We define transformations between these two forms of hiproof, develop notions of underlying proof, and give a suitable definition of refinement in order to model incremental proof development. We show that our transformations preserve both underlying proofs and refinement. The relationship of our theory to existing theorem proving systems is discussed, as is its future extension.

  7. Grid-size dependence of Cauchy boundary conditions used to simulate stream-aquifer interactions

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2010-01-01

    This work examines the simulation of stream–aquifer interactions as grids are refined vertically and horizontally and suggests that traditional methods for calculating conductance can produce inappropriate values when the grid size is changed. Instead, different grid resolutions require different estimated values. Grid refinement strategies considered include global refinement of the entire model and local refinement of part of the stream. Three methods of calculating the conductance of the Cauchy boundary conditions are investigated. Single- and multi-layer models with narrow and wide streams produced stream leakages that differ by as much as 122% as the grid is refined. Similar results occur for globally and locally refined grids, but the latter required as little as one-quarter the computer execution time and memory and thus are useful for addressing some scale issues of stream–aquifer interactions. Results suggest that existing grid-size criteria for simulating stream–aquifer interactions are useful for one-layer models, but inadequate for three-dimensional models. The grid dependence of the conductance terms suggests that values for refined models using, for example, finite difference or finite-element methods, cannot be determined from previous coarse-grid models or field measurements. Our examples demonstrate the need for a method of obtaining conductances that can be translated to different grid resolutions and provide definitive test cases for investigating alternative conductance formulations.
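
The grid dependence discussed here is visible in the standard conductance formula itself. A minimal sketch, assuming the common form C = K·L·W/b for streambed conductance and Q = C·(h_stream − h_aquifer) for leakage (all values hypothetical): the per-cell conductance changes with the reach length per cell, which is one reason a value calibrated on a coarse grid cannot simply be carried to a refined one.

```python
def conductance(K, L, W, b):
    """Streambed conductance: K conductivity, L reach length in the cell,
    W stream width, b streambed thickness."""
    return K * L * W / b

K, W, b = 0.1, 5.0, 1.0           # m/d, m, m (hypothetical)
h_stream, h_aquifer = 12.0, 11.4  # m

for dx in (100.0, 50.0, 25.0):    # reach length per cell as the grid is refined
    C = conductance(K, dx, W, b)
    Q = C * (h_stream - h_aquifer)
    print(f"dx={dx:6.1f} m   C={C:7.2f} m^2/d   Q={Q:6.2f} m^3/d per cell")
```

The subtler effect reported in the paper is that even after this per-cell rescaling, the head difference resolved by the model changes with grid resolution, so the appropriate conductance value is itself grid dependent.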

  8. PDB_REDO: automated re-refinement of X-ray structure models in the PDB.

    PubMed

    Joosten, Robbie P; Salzemann, Jean; Bloch, Vincent; Stockinger, Heinz; Berglund, Ann-Charlott; Blanchet, Christophe; Bongcam-Rudloff, Erik; Combet, Christophe; Da Costa, Ana L; Deleage, Gilbert; Diarena, Matteo; Fabbretti, Roberto; Fettahi, Géraldine; Flegel, Volker; Gisel, Andreas; Kasam, Vinod; Kervinen, Timo; Korpelainen, Eija; Mattila, Kimmo; Pagni, Marco; Reichstadt, Matthieu; Breton, Vincent; Tickle, Ian J; Vriend, Gert

    2009-06-01

    Structural biology, homology modelling and rational drug design require accurate three-dimensional macromolecular coordinates. However, the coordinates in the Protein Data Bank (PDB) have not all been obtained using the latest experimental and computational methods. In this study a method is presented for automated re-refinement of existing structure models in the PDB. A large-scale benchmark with 16 807 PDB entries showed that they can be improved in terms of fit to the deposited experimental X-ray data as well as in terms of geometric quality. The re-refinement protocol uses TLS models to describe concerted atom movement. The resulting structure models are made available through the PDB_REDO databank (http://www.cmbi.ru.nl/pdb_redo/). Grid computing techniques were used to overcome the computational requirements of this endeavour.

  9. Improved cryoEM-Guided Iterative Molecular Dynamics–Rosetta Protein Structure Refinement Protocol for High Precision Protein Structure Prediction

    PubMed Central

    2016-01-01

    Many excellent methods exist that incorporate cryo-electron microscopy (cryoEM) data to constrain computational protein structure prediction and refinement. Previously, it was shown that iteration of two such orthogonal sampling and scoring methods – Rosetta and molecular dynamics (MD) simulations – facilitated exploration of conformational space in principle. Here, we go beyond a proof-of-concept study and address significant remaining limitations of the iterative MD–Rosetta protein structure refinement protocol. Specifically, all parts of the iterative refinement protocol are now guided by medium-resolution cryoEM density maps, and previous knowledge about the native structure of the protein is no longer necessary. Models are identified solely based on score or simulation time. All four benchmark proteins showed substantial improvement through three rounds of the iterative refinement protocol. The best-scoring final models of two proteins had sub-Ångstrom RMSD to the native structure over residues in secondary structure elements. Molecular dynamics was most efficient in refining secondary structure elements and was thus highly complementary to the Rosetta refinement which is most powerful in refining side chains and loop regions. PMID:25883538

  10. Structural Health Monitoring of Large Structures

    NASA Technical Reports Server (NTRS)

    Kim, Hyoung M.; Bartkowicz, Theodore J.; Smith, Suzanne Weaver; Zimmerman, David C.

    1994-01-01

    This paper describes a damage detection and health monitoring method that was developed for large space structures using on-orbit modal identification. After evaluating several existing model refinement and model reduction/expansion techniques, a new approach was developed to identify the location and extent of structural damage with a limited number of measurements. A general area of structural damage is first identified and, subsequently, a specific damaged structural component is located. This approach takes advantage of two different model refinement methods (optimal-update and design sensitivity) and two different model size matching methods (model reduction and eigenvector expansion). Performance of the proposed damage detection approach was demonstrated with test data from two different laboratory truss structures. This space technology can also be applied to structural inspection of aircraft, offshore platforms, oil tankers, bridges, and buildings. In addition, its applications to model refinement will improve the design of structural systems such as automobiles and electronic packaging.

  11. Kinetic Modeling of a Silicon Refining Process in a Moist Hydrogen Atmosphere

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyuan; Morita, Kazuki

    2018-03-01

    We developed a kinetic model that considers both silicon loss and boron removal in a metallurgical grade silicon refining process. The model was based on the hypothesis of reversible reactions; the reaction rate coefficient keeps the same form, but errors in the terminal boron concentration can be introduced when irreversible reactions are assumed instead. Experimental data from published studies were used to develop a model that fit the existing data. At 1500 °C, our kinetic analysis suggested that refining silicon in a moist hydrogen atmosphere generates several primary volatile species, including SiO, SiH, HBO, and HBO2. Using the experimental data and the kinetic analysis of volatile species, we developed a model that predicts a linear relationship between the reaction rate coefficient k and both the quadratic function of p(H2O) and the square root of p(H2). Moreover, the model predicted the partial pressure values for the predominant volatile species and the prediction was confirmed by thermodynamic calculations, indicating the reliability of the model. We believe this model provides a foundation for designing a silicon refining process with a fast boron removal rate and low silicon loss.
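
The predicted rate law lends itself to a compact implementation. Below, the rate coefficient scales linearly with p(H2O)² and with √p(H2), as the abstract states, and boron removal is treated as first-order decay; the prefactor, partial pressures, and initial boron content are hypothetical placeholders, not fitted values from the paper.

```python
import numpy as np

def k_boron(p_h2o, p_h2, alpha=1.0):
    """Removal rate coefficient: linear in p(H2O)^2 and in sqrt(p(H2))."""
    return alpha * p_h2o ** 2 * np.sqrt(p_h2)

t = np.linspace(0.0, 3600.0, 7)           # s
C0 = 20.0                                 # ppmw boron (hypothetical)
k = k_boron(p_h2o=0.03, p_h2=0.5)         # atm (hypothetical)
print(np.round(C0 * np.exp(-k * t), 3))   # first-order decay of boron content
```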

  12. Kinetic Modeling of a Silicon Refining Process in a Moist Hydrogen Atmosphere

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyuan; Morita, Kazuki

    2018-06-01

    We developed a kinetic model that considers both silicon loss and boron removal in a metallurgical grade silicon refining process. The model was based on the hypothesis of reversible reactions; the reaction rate coefficient keeps the same form, but errors in the terminal boron concentration can be introduced when irreversible reactions are assumed instead. Experimental data from published studies were used to develop a model that fit the existing data. At 1500 °C, our kinetic analysis suggested that refining silicon in a moist hydrogen atmosphere generates several primary volatile species, including SiO, SiH, HBO, and HBO2. Using the experimental data and the kinetic analysis of volatile species, we developed a model that predicts a linear relationship between the reaction rate coefficient k and both the quadratic function of p(H2O) and the square root of p(H2). Moreover, the model predicted the partial pressure values for the predominant volatile species and the prediction was confirmed by thermodynamic calculations, indicating the reliability of the model. We believe this model provides a foundation for designing a silicon refining process with a fast boron removal rate and low silicon loss.

  13. PDB_REDO: constructive validation, more than just looking for errors.

    PubMed

    Joosten, Robbie P; Joosten, Krista; Murshudov, Garib N; Perrakis, Anastassis

    2012-04-01

    Developments of the PDB_REDO procedure that combine re-refinement and rebuilding within a unique decision-making framework to improve structures in the PDB are presented. PDB_REDO uses a variety of existing and custom-built software modules to choose an optimal refinement protocol (e.g. anisotropic, isotropic or overall B-factor refinement, TLS model) and to optimize the geometry versus data-refinement weights. Next, it proceeds to rebuild side chains and peptide planes before a final optimization round. PDB_REDO works fully automatically without the need for intervention by a crystallographic expert. The pipeline was tested on 12 000 PDB entries and the great majority of the test cases improved both in terms of crystallographic criteria such as R(free) and in terms of widely accepted geometric validation criteria. It is concluded that PDB_REDO is useful to update the otherwise `static' structures in the PDB to modern crystallographic standards. The publicly available PDB_REDO database provides better model statistics and contributes to better refinement and validation targets.
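
To make the "decision-making framework" concrete, here is a toy sketch of one such choice: selecting a B-factor parameterization by weighing the R-free of a trial refinement against model complexity. The numbers and the crude penalty are invented; the real pipeline uses proper statistical tests and many more criteria.

```python
# candidate protocol -> (R-free after a trial refinement, parameters per atom)
candidates = {
    "overall":     (0.260, 0.01),
    "isotropic":   (0.245, 1.0),
    "anisotropic": (0.243, 6.0),
}
n_atoms, n_reflections = 2500, 30000

def penalized(r_free, params_per_atom):
    # crude parsimony penalty: extra parameters per reflection must "pay"
    # for themselves with a correspondingly lower R-free
    return r_free + 0.05 * params_per_atom * n_atoms / n_reflections

best = min(candidates, key=lambda name: penalized(*candidates[name]))
print("chosen B-factor model:", best)   # -> isotropic for these toy numbers
```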

  14. PDB_REDO: constructive validation, more than just looking for errors

    PubMed Central

    Joosten, Robbie P.; Joosten, Krista; Murshudov, Garib N.; Perrakis, Anastassis

    2012-01-01

    Developments of the PDB_REDO procedure that combine re-refinement and rebuilding within a unique decision-making framework to improve structures in the PDB are presented. PDB_REDO uses a variety of existing and custom-built software modules to choose an optimal refinement protocol (e.g. anisotropic, isotropic or overall B-factor refinement, TLS model) and to optimize the geometry versus data-refinement weights. Next, it proceeds to rebuild side chains and peptide planes before a final optimization round. PDB_REDO works fully automatically without the need for intervention by a crystallographic expert. The pipeline was tested on 12 000 PDB entries and the great majority of the test cases improved both in terms of crystallographic criteria such as R free and in terms of widely accepted geometric validation criteria. It is concluded that PDB_REDO is useful to update the otherwise ‘static’ structures in the PDB to modern crystallographic standards. The publicly available PDB_REDO database provides better model statistics and contributes to better refinement and validation targets. PMID:22505269

  15. Green Infrastructure Models and Tools

    EPA Science Inventory

    The objective of this project is to modify and refine existing models and develop new tools to support decision making for the complete green infrastructure (GI) project lifecycle, including the planning and implementation of stormwater control in urban and agricultural settings,...

  16. Microstructures and Grain Refinement of Additive-Manufactured Ti- xW Alloys

    NASA Astrophysics Data System (ADS)

    Mendoza, Michael Y.; Samimi, Peyman; Brice, David A.; Martin, Brian W.; Rolchigo, Matt R.; LeSar, Richard; Collins, Peter C.

    2017-07-01

    It is necessary to better understand the composition-processing-microstructure relationships that exist for materials produced by additive manufacturing. To this end, Laser Engineered Net Shaping (LENS™), a type of additive manufacturing, was used to produce a compositionally graded titanium binary model alloy specimen (Ti-xW, 0 ≤ x ≤ 30 wt pct), so that relationships could be established between composition, processing, and prior beta grain size. Importantly, the thermophysical properties of Ti-xW, specifically its supercooling parameter (P) and growth restriction factor (Q), are such that grain refinement is expected, and it was observed. The systematic, combinatorial study of this binary system provides an opportunity to assess the mechanisms by which grain refinement occurs in Ti-based alloys in general, and in additive manufacturing in particular. The operating mechanisms that govern the relationship between composition and grain size are interpreted using a model originally developed for aluminum and magnesium alloys and subsequently applied to titanium alloys. The prior beta grain sizes observed, and the interpretation of their correlations, indicate that tungsten is a good grain refiner and that such models are valid for explaining the grain-refinement process. By extension, other binary elements or higher-order alloy systems with similar thermophysical properties should exhibit similar grain refinement.
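
The growth restriction factor invoked above has a standard closed form, Q = m·c0·(k − 1), with m the liquidus slope, k the partition coefficient, and c0 the solute content. The sketch below evaluates it across the graded tungsten contents; m and k are placeholders, not the Ti-W thermophysical data used in the paper.

```python
def growth_restriction(m, k, c0):
    """Q = m * c0 * (k - 1): larger Q implies stronger growth restriction."""
    return m * c0 * (k - 1.0)

m, k = 2.5, 2.0   # hypothetical liquidus slope (K / wt pct) and partition coefficient
for c0 in (5.0, 10.0, 20.0, 30.0):   # wt pct W along the graded specimen
    print(f"c0 = {c0:4.1f} wt pct  ->  Q = {growth_restriction(m, k, c0):6.1f} K")
```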

  17. Geometric model of pseudo-distance measurement in satellite location systems

    NASA Astrophysics Data System (ADS)

    Panchuk, K. L.; Lyashkov, A. A.; Lyubchinov, E. V.

    2018-04-01

    The existing mathematical model of pseudo-distance measurement in satellite location systems does not provide a precise solution of the problem, but rather an approximate one. This inaccuracy, together with bias in the measured distance from satellite to receiver, results in position errors of several meters. Refinement of the current mathematical model is therefore clearly relevant. The solution of the system of quadratic equations used in the current mathematical model is based on linearization. The objective of the paper is to refine the current mathematical model and to derive an analytical solution of the system of equations on its basis. To attain this objective, a geometric analysis is performed and a geometric interpretation of the equations is given. As a result, an equivalent system of equations that admits an analytical solution is derived. An example of the analytical solution's implementation is presented. Applying the analytical solution algorithm to the problem of pseudo-distance measurement in satellite location systems makes it possible to improve the accuracy of such measurements.
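
For orientation, the system being refined is the familiar pseudorange model ρ_i = ‖x − s_i‖ + c·Δt for satellites s_i. The sketch below solves it by Gauss-Newton linearization, i.e., the approximate approach the authors aim to supersede with an analytical solution; satellite positions and ranges are synthetic.

```python
import numpy as np

c = 299_792_458.0                                   # m/s
sats = np.array([[15e6, 0.0, 21e6], [-10e6, 12e6, 20e6],
                 [5e6, -14e6, 22e6], [0.0, 8e6, 24e6]])
truth, bias = np.array([1.2e6, -0.8e6, 0.3e6]), 1e-4           # receiver, clock (s)
rho = np.linalg.norm(sats - truth, axis=1) + c * bias          # pseudoranges

x = np.zeros(4)                                     # unknowns: [x, y, z, c*dt]
for _ in range(10):                                 # Gauss-Newton iterations
    d = np.linalg.norm(sats - x[:3], axis=1)
    r = rho - (d + x[3])                            # residuals
    J = np.hstack([(x[:3] - sats) / d[:, None], np.ones((len(sats), 1))])
    x += np.linalg.lstsq(J, r, rcond=None)[0]

print(np.round(x[:3], 1), x[3] / c)                 # position (m) and clock bias (s)
```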

  18. A BRDF statistical model applying to space target materials modeling

    NASA Astrophysics Data System (ADS)

    Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen

    2017-10-01

    To address the poor performance of the five-parameter semi-empirical model in fitting densely measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space-target materials is proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, it contains six simple parameters that approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model achieves fast parameter inversion with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 samples of materials commonly used on space targets; the fitting errors of all materials were below 6%, much lower than those of the five-parameter model. The performance of the refined model is verified by comparing the fitting results of three samples at different incident zenith angles at 0° azimuth. Finally, three-dimensional visualizations of these samples over the upper hemisphere are given, clearly showing the optical scattering strength of the different materials and further demonstrating the refined model's ability to characterize materials.

  19. Re-refinement from deposited X-ray data can deliver improved models for most PDB entries.

    PubMed

    Joosten, Robbie P; Womack, Thomas; Vriend, Gert; Bricogne, Gérard

    2009-02-01

    The deposition of X-ray data along with the customary structural models defining PDB entries makes it possible to apply large-scale re-refinement protocols to these entries, thus giving users the benefit of improvements in X-ray methods that have occurred since the structure was deposited. Automated gradient refinement is an effective method to achieve this goal, but real-space intervention is most often required in order to adequately address problems detected by structure-validation software. In order to improve the existing protocol, automated re-refinement was combined with structure validation and difference-density peak analysis to produce a catalogue of problems in PDB entries that are amenable to automatic correction. It is shown that re-refinement can be effective in producing improvements, which are often associated with the systematic use of the TLS parameterization of B factors, even for relatively new and high-resolution PDB entries, while the accompanying manual or semi-manual map analysis and fitting steps show good prospects for eventual automation. It is proposed that the potential for simultaneous improvements in methods and in re-refinement results be further encouraged by broadening the scope of depositions to include refinement metadata and ultimately primary rather than reduced X-ray data.

  20. Validating neural-network refinements of nuclear mass models

    NASA Astrophysics Data System (ADS)

    Utama, R.; Piekarewicz, J.

    2018-01-01

    Background: Nuclear astrophysics centers on the role of nuclear physics in the cosmos. In particular, nuclear masses at the limits of stability are critical in the development of stellar structure and the origin of the elements. Purpose: We aim to test and validate the predictions of recently refined nuclear mass models against the newly published AME2016 compilation. Methods: The basic paradigm underlying the recently refined nuclear mass models is based on existing state-of-the-art models that are subsequently refined through the training of an artificial neural network. Bayesian inference is used to determine the parameters of the neural network so that statistical uncertainties are provided for all model predictions. Results: We observe a significant improvement in the Bayesian neural network (BNN) predictions relative to the corresponding "bare" models when compared to the nearly 50 new masses reported in the AME2016 compilation. Further, AME2016 estimates for the handful of impactful isotopes in the determination of r-process abundances are found to be in fairly good agreement with our theoretical predictions. Indeed, the BNN-improved Duflo-Zuker model predicts a root-mean-square deviation relative to experiment of σrms≃400 keV. Conclusions: Given the excellent performance of the BNN refinement in confronting the recently published AME2016 compilation, we are confident of its critical role in our quest for mass models of the highest quality. Moreover, as uncertainty quantification is at the core of the BNN approach, the improved mass models are in a unique position to identify those nuclei that will have the strongest impact in resolving some of the outstanding questions in nuclear astrophysics.

  1. Implementation of a parallel protein structure alignment service on cloud.

    PubMed

    Hung, Che-Lun; Lin, Yaw-Ling

    2013-01-01

    Protein structure alignment has become an important strategy by which to identify evolutionary relationships between protein sequences. Several alignment tools are currently available for online comparison of protein structures. In this paper, we propose a parallel protein structure alignment service based on the Hadoop distribution framework. This service includes a protein structure alignment algorithm, a refinement algorithm, and a MapReduce programming model. The refinement algorithm refines the result of alignment. To process vast numbers of protein structures in parallel, the alignment and refinement algorithms are implemented using MapReduce. We analyzed and compared the structure alignments produced by different methods using a dataset randomly selected from the PDB database. The experimental results verify that the proposed algorithm refines the resulting alignments more accurately than existing algorithms. Meanwhile, the computational performance of the proposed service is proportional to the number of processors used in our cloud platform.
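
The MapReduce decomposition described here can be mimicked in a few lines of plain Python: the map phase scores each structure pair independently, and the reduce phase aggregates and "refines" per target. The scoring below is a deterministic stand-in for the service's actual alignment and refinement algorithms, and no Hadoop machinery is involved.

```python
from itertools import groupby

def mapper(pair):
    """Map: align one (target, candidate) pair independently of all others."""
    target, candidate = pair
    score = (sum(map(ord, candidate)) % 100) / 100.0   # stand-in alignment score
    return (target, (candidate, score))

def reducer(target, values):
    """Reduce: keep the best-scoring alignment per target ('refinement')."""
    return (target, max(values, key=lambda v: v[1]))

pairs = [("1abc", "2xyz"), ("1abc", "3pqr"), ("4def", "2xyz")]
mapped = sorted(map(mapper, pairs), key=lambda kv: kv[0])      # shuffle/sort phase
results = [reducer(t, [v for _, v in grp])
           for t, grp in groupby(mapped, key=lambda kv: kv[0])]
print(results)
```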

  2. Implementation of a Parallel Protein Structure Alignment Service on Cloud

    PubMed Central

    Hung, Che-Lun; Lin, Yaw-Ling

    2013-01-01

    Protein structure alignment has become an important strategy by which to identify evolutionary relationships between protein sequences. Several alignment tools are currently available for online comparison of protein structures. In this paper, we propose a parallel protein structure alignment service based on the Hadoop distribution framework. This service includes a protein structure alignment algorithm, a refinement algorithm, and a MapReduce programming model. The refinement algorithm refines the result of alignment. To process vast numbers of protein structures in parallel, the alignment and refinement algorithms are implemented using MapReduce. We analyzed and compared the structure alignments produced by different methods using a dataset randomly selected from the PDB database. The experimental results verify that the proposed algorithm refines the resulting alignments more accurately than existing algorithms. Meanwhile, the computational performance of the proposed service is proportional to the number of processors used in our cloud platform. PMID:23671842

  3. Description of Data Acquisition Efforts

    DOT National Transportation Integrated Search

    1999-09-01

    As part of the overall strategy of refining and improving the existing transportation and air-quality modeling framework, the current project focuses extensively on acquiring disaggregate and reliable data for analysis. In this report, we discuss the...

  4. Stepwise construction of a metabolic network in Event-B: The heat shock response.

    PubMed

    Sanwal, Usman; Petre, Luigia; Petre, Ion

    2017-12-01

    There is a high interest in constructing large, detailed computational models for biological processes. This is often done by putting together existing submodels and adding to them extra details/knowledge. The result of such approaches is usually a model that can only answer questions on a very specific level of detail, and thus, ultimately, is of limited use. We focus instead on an approach to systematically add details to a model, with formal verification of its consistency at each step. In this way, one obtains a set of reusable models, at different levels of abstraction, to be used for different purposes depending on the question to address. We demonstrate this approach using Event-B, a computational framework introduced to develop formal specifications of distributed software systems. We first describe how to model generic metabolic networks in Event-B. Then, we apply this method for modeling the biological heat shock response in eukaryotic cells, using Event-B refinement techniques. The advantage of using Event-B is that refinement is an intrinsic feature; this provides as a final result not only a correct model, but a chain of models automatically linked by refinement, each of which is provably correct and reusable. This is a proof of concept that refinement in Event-B is suitable for biomodeling, serving to master biological complexity. Copyright © 2017 Elsevier Ltd. All rights reserved.
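
A minimal illustration of the Event-B flavour of refinement used here: an abstract event (guard plus action) for a binding step, a refined event that adds detail, and the guard-strengthening proof obligation checked explicitly. Real Event-B development discharges such obligations in the Rodin toolset; the Python below only mimics the idea, and the reaction and states are invented.

```python
state = {"S": 5, "E": 1, "ES": 0}     # substrate, enzyme, complex (invented)

abstract_bind = {
    "guard":  lambda s: s["S"] > 0 and s["E"] > 0,
    "action": lambda s: {**s, "S": s["S"] - 1, "E": s["E"] - 1, "ES": s["ES"] + 1},
}
# The refinement adds a capacity limit; its guard must imply the abstract guard.
refined_bind = {
    "guard":  lambda s: s["S"] > 0 and s["E"] > 0 and s["ES"] < 3,
    "action": abstract_bind["action"],
}

# Guard-strengthening obligation, checked on a few sample states:
samples = [state, {"S": 0, "E": 1, "ES": 0}, {"S": 2, "E": 1, "ES": 3}]
assert all(not refined_bind["guard"](s) or abstract_bind["guard"](s) for s in samples)

if refined_bind["guard"](state):
    state = refined_bind["action"](state)
print(state)
```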

  5. Refining the aggregate exposure pathway

    EPA Science Inventory

    Advancements in measurement technologies and modeling capabilities continue to result in an abundance of exposure information, adding to that currently in existence. However, fragmentation within the exposure science community acts as an obstacle for realizing the vision set fort...

  6. Interplay of I-TASSER and QUARK for template-based and ab initio protein structure prediction in CASP10

    PubMed Central

    Zhang, Yang

    2014-01-01

    We develop and test a new pipeline in CASP10 to predict protein structures based on an interplay of I-TASSER and QUARK for both free-modeling (FM) and template-based modeling (TBM) targets. The most noteworthy observation is that sorting through the threading template pool using the QUARK-based ab initio models as probes allows the detection of distant-homology templates which might be ignored by the traditional sequence profile-based threading alignment algorithms. Further template assembly refinement by I-TASSER resulted in successful folding of two medium-sized FM targets with >150 residues. For TBM, the multiple threading alignments from LOMETS are, for the first time, incorporated into the ab initio QUARK simulations, which were further refined by I-TASSER assembly refinement. Compared with the traditional threading assembly refinement procedures, the inclusion of the threading-constrained ab initio folding models can consistently improve the quality of the full-length models as assessed by the GDT-HA and hydrogen-bonding scores. Despite the success, significant challenges still exist in domain boundary prediction and consistent folding of medium-size proteins (especially beta-proteins) for nonhomologous targets. Further developments of sensitive fold-recognition and ab initio folding methods are critical for solving these problems. PMID:23760925

  7. Interplay of I-TASSER and QUARK for template-based and ab initio protein structure prediction in CASP10.

    PubMed

    Zhang, Yang

    2014-02-01

    We develop and test a new pipeline in CASP10 to predict protein structures based on an interplay of I-TASSER and QUARK for both free-modeling (FM) and template-based modeling (TBM) targets. The most noteworthy observation is that sorting through the threading template pool using the QUARK-based ab initio models as probes allows the detection of distant-homology templates which might be ignored by the traditional sequence profile-based threading alignment algorithms. Further template assembly refinement by I-TASSER resulted in successful folding of two medium-sized FM targets with >150 residues. For TBM, the multiple threading alignments from LOMETS are, for the first time, incorporated into the ab initio QUARK simulations, which were further refined by I-TASSER assembly refinement. Compared with the traditional threading assembly refinement procedures, the inclusion of the threading-constrained ab initio folding models can consistently improve the quality of the full-length models as assessed by the GDT-HA and hydrogen-bonding scores. Despite the success, significant challenges still exist in domain boundary prediction and consistent folding of medium-size proteins (especially beta-proteins) for nonhomologous targets. Further developments of sensitive fold-recognition and ab initio folding methods are critical for solving these problems. Copyright © 2013 Wiley Periodicals, Inc.

  8. Community College Presidents' Decision-Making Processes during a Potential Crisis

    ERIC Educational Resources Information Center

    Berry, Judith Kaye

    2013-01-01

    This case study addressed how community college presidents make decisions under conditions that can escalate to full-scale crises. The purpose of this study was to gather data to support the development of alternative models or refinement of existing models for crisis decision making on community college campuses, using an abbreviated…

  9. Using toxicokinetic-toxicodynamic modeling as an acute risk assessment refinement approach in vertebrate ecological risk assessment.

    PubMed

    Ducrot, Virginie; Ashauer, Roman; Bednarska, Agnieszka J; Hinarejos, Silvia; Thorbek, Pernille; Weyman, Gabriel

    2016-01-01

    Recent guidance identified toxicokinetic-toxicodynamic (TK-TD) modeling as a relevant approach for risk assessment refinement. Yet, its added value compared to other refinement options is not detailed, and how to conduct the modeling appropriately is not explained. This case study addresses these issues through 2 examples of individual-level risk assessment for 2 hypothetical plant protection products: 1) evaluating the risk for small granivorous birds and small omnivorous mammals of a single application, as a seed treatment in winter cereals, and 2) evaluating the risk for fish after a pulsed treatment in the edge-of-field zone. Using acute test data, we conducted the first tier risk assessment as defined in the European Food Safety Authority (EFSA) guidance. When first tier risk assessment highlighted a concern, refinement options were discussed. Cases where the use of models should be preferred over other existing refinement approaches were highlighted. We then practically conducted the risk assessment refinement by using 2 different models as examples. In example 1, a TK model accounting for toxicokinetics and relevant feeding patterns in the skylark and in the wood mouse was used to predict internal doses of the hypothetical active ingredient in individuals, based on relevant feeding patterns in an in-crop situation, and identify the residue levels leading to mortality. In example 2, a TK-TD model accounting for toxicokinetics, toxicodynamics, and relevant exposure patterns in the fathead minnow was used to predict the time-course of fish survival for relevant FOCUS SW exposure scenarios and identify which scenarios might lead to mortality. Models were calibrated using available standard data and implemented to simulate the time-course of internal dose of active ingredient or survival for different exposure scenarios. Simulation results were discussed and used to derive the risk assessment refinement endpoints used for decision. Finally, we compared the "classical" risk assessment approach with the model-based approach. These comparisons showed that TK and TK-TD models can bring more realism to the risk assessment through the possibility to study realistic exposure scenarios and to simulate relevant mechanisms of effects (including delayed toxicity and recovery). Noticeably, using TK-TD models is currently the most relevant way to directly connect realistic exposure patterns to effects. We conclude with recommendations on how to properly use TK and TK-TD model in acute risk assessment for vertebrates. © 2015 SETAC.
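
In their simplest form, the TK and TK-TD models discussed reduce to a one-compartment uptake-elimination equation driven by the exposure profile, dCi/dt = ku·Cw(t) − ke·Ci, with survival computed from a hazard once the internal concentration exceeds a threshold. The sketch below implements that minimal GUTS-like stand-in for a pulsed exposure; every rate constant and threshold is hypothetical.

```python
import numpy as np
from scipy.integrate import odeint

ku, ke = 0.5, 0.2                                  # uptake, elimination (1/d)
cw = lambda t: 1.0 if 1.0 <= t <= 3.0 else 0.0     # pulsed water concentration

def dci(ci, t):
    return ku * cw(t) - ke * ci                    # one-compartment TK

t = np.linspace(0.0, 10.0, 401)
ci = odeint(dci, 0.0, t).ravel()

kk, threshold = 0.8, 0.2                           # killing rate, internal threshold
hazard = kk * np.maximum(ci - threshold, 0.0)      # stochastic-death-style hazard
survival = np.exp(-np.cumsum(hazard) * (t[1] - t[0]))
print(f"peak internal conc: {ci.max():.3f}   survival at day 10: {survival[-1]:.3f}")
```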

  10. Status of growth and yield information for northern forest types

    Treesearch

    Dale S. Solomon

    1977-01-01

    Existing regional growth-and-yield information for most of the northern forest types is summarized by species. Present research is concentrated on growth-simulation models, constructed either by aggregating available information or through individual tree growth studies. A uniformity of more refined measurements is needed so that future growth models can be tried for...

  11. Theoretical models of parental HIV disclosure: a critical review.

    PubMed

    Qiao, Shan; Li, Xiaoming; Stanton, Bonita

    2013-01-01

    This study critically examined three major theoretical models related to parental HIV disclosure (i.e., the Four-Phase Model [FPM], the Disclosure Decision Making Model [DDMM], and the Disclosure Process Model [DPM]), and the existing studies that could provide empirical support to these models or their components. For each model, we briefly reviewed its theoretical background, described its components and/or mechanisms, and discussed its strengths and limitations. The existing empirical studies supported most theoretical components in these models. However, hypotheses related to the mechanisms proposed in the models have not yet been tested due to a lack of empirical evidence. This study also synthesized alternative theoretical perspectives and new issues in disclosure research and clinical practice that may challenge the existing models. The current study underscores the importance of including components related to social and cultural contexts in theoretical frameworks, and calls for more adequately designed empirical studies in order to test and refine existing theories and to develop new ones.

  12. Refinements in a viscoplastic model

    NASA Technical Reports Server (NTRS)

    Freed, A. D.; Walker, K. P.

    1989-01-01

    A thermodynamically admissible theory of viscoplasticity with two internal variables (a back stress and a drag strength) is presented. Six material functions characterize a specific viscoplastic model. In the pursuit of compromise between accuracy and simplicity, a model is developed that is a hybrid of two existing viscoplastic models. A limited number of applications of the model to Al, Cu, and Ni are presented. A novel implicit integration method is also discussed. Applications are made to obtain solutions using this viscoplastic model.
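
Schematically, a two-internal-variable viscoplastic model of this kind couples an overstress-driven flow rule to hardening-minus-recovery evolution equations for the back stress B and drag strength D. The generic uniaxial form below reflects the standard structure of such theories and is not copied from the paper:

```latex
\begin{align*}
  \dot{\varepsilon}^{p} &= f\!\left(\frac{\sigma - B}{D}\right)
      && \text{flow rule driven by overstress,} \\
  \dot{B} &= h_B\,\dot{\varepsilon}^{p} - r_B(B, T)
      && \text{back stress: hardening minus recovery,} \\
  \dot{D} &= h_D\,\lvert\dot{\varepsilon}^{p}\rvert - r_D(D, T)
      && \text{drag-strength evolution.}
\end{align*}
```

The six material functions mentioned in the abstract would plausibly correspond to choices such as f, the hardening moduli, and the recovery terms, but the paper's specific forms are not reproduced here.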

  13. mrtailor: a tool for PDB-file preparation for the generation of external restraints.

    PubMed

    Gruene, Tim

    2013-09-01

    Model building starting from, for example, a molecular-replacement solution with low sequence similarity introduces model bias, which can be difficult to detect, especially at low resolution. The program mrtailor removes low-similarity regions from a template PDB file according to sequence similarity between the target sequence and the template sequence and maps the target sequence onto the PDB file. The modified PDB file can be used to generate external restraints for low-resolution refinement with reduced model bias and can be used as a starting point for model building and refinement. The program can call ProSMART [Nicholls et al. (2012), Acta Cryst. D68, 404-417] directly in order to create external restraints suitable for REFMAC5 [Murshudov et al. (2011), Acta Cryst. D67, 355-367]. Both a command-line version and a GUI exist.

  14. Enhanced Representation of Turbulent Flow Phenomena in Large-Eddy Simulations of the Atmospheric Boundary Layer using Grid Refinement with Pseudo-Spectral Numerics

    NASA Astrophysics Data System (ADS)

    Torkelson, G. Q.; Stoll, R., II

    2017-12-01

    Large Eddy Simulation (LES) is a tool commonly used to study the turbulent transport of momentum, heat, and moisture in the Atmospheric Boundary Layer (ABL). For a wide range of ABL LES applications, representing the full range of turbulent length scales in the flow field is a challenge. This is an acute problem in regions of the ABL with strong velocity or scalar gradients, which are typically poorly resolved by standard computational grids (e.g., near the ground surface, in the entrainment zone). Most efforts to address this problem have focused on advanced sub-grid scale (SGS) turbulence model development, or on the use of massive computational resources. While some work exists using embedded meshes, very little has been done on the use of grid refinement. Here, we explore the benefits of grid refinement in a pseudo-spectral LES numerical code. The code utilizes both uniform refinement of the grid in horizontal directions, and stretching of the grid in the vertical direction. Combining the two techniques allows us to refine areas of the flow while maintaining an acceptable grid aspect ratio. In tests that used only refinement of the vertical grid spacing, large grid aspect ratios were found to cause a significant unphysical spike in the stream-wise velocity variance near the ground surface. This was especially problematic in simulations of stably-stratified ABL flows. The use of advanced SGS models was not sufficient to alleviate this issue. The new refinement technique is evaluated using a series of idealized simulation test cases of neutrally and stably stratified ABLs. These test cases illustrate the ability of grid refinement to increase computational efficiency without loss in the representation of statistical features of the flow field.
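
Vertical grid stretching of the kind described can be generated with a simple mapping that clusters levels near the surface while the spacing grows aloft. A minimal sketch, assuming a tanh stretching (the specific mapping and factor are illustrative, not necessarily the ones used by the authors):

```python
import numpy as np

def stretched_levels(n, depth, beta=2.0):
    """n+1 levels on [0, depth], clustered toward z = 0 (the surface)."""
    eta = np.linspace(0.0, 1.0, n + 1)
    return depth * (1.0 - np.tanh(beta * (1.0 - eta)) / np.tanh(beta))

z = stretched_levels(16, 1000.0)      # 1 km domain, 16 cells (hypothetical)
print(np.round(np.diff(z), 1))        # spacings grow monotonically with height
```

Combined with uniform horizontal refinement, such a mapping keeps the near-surface cells closer to an acceptable aspect ratio, which is the balance the abstract describes.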

  15. Improving quality of science through better animal welfare: the NC3Rs strategy.

    PubMed

    Prescott, Mark J; Lidster, Katie

    2017-03-22

    Good animal welfare is linked to the quality of research data derived from laboratory animals, their validity as models of human disease, the number of animals required to reach statistical significance and the reproducibility of in vivo studies. Identifying new ways of understanding and improving animal welfare, and promoting these in the scientific community, is therefore a key part of the work of the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs). Our strategy for animal welfare includes funding research to generate an evidence base to support refinements, office-led data sharing to challenge existing practices, events and networks to raise awareness of the evidence base, and the creation of online and other resources to support practical implementation of refinement opportunities.

  16. The Empathy Cycle: Refinement of a Nuclear Concept.

    ERIC Educational Resources Information Center

    Barrett-Lennard, G.T.

    1981-01-01

    Delineates a sequence of distinct stages involved in empathic interaction. There is room for considerable slippage between the inner resonation, communication, and reception levels, and measures based on data from different phases would at best be moderately associated. Principal existing (state) scales are located in reference to the model.…

  17. From translational research to open technology innovation systems.

    PubMed

    Savory, Clive; Fortune, Joyce

    2015-01-01

    The purpose of this paper is to question whether the emphasis placed within translational research on a linear model of innovation provides the most effective model for managing health technology innovation. Several alternative perspectives are presented that have potential to enhance the existing model of translational research. A case study is presented of innovation of a clinical decision support system. The paper concludes from the case study that extending the triple helix model of technology transfer to one based on a quadruple helix presents a basis for improving the performance of translational research. A case study approach is used to help understand development of an innovative technology within a teaching hospital. The case is then used to develop and refine a model of the health technology innovation system. The paper concludes from the case study that existing models of translational research could be refined further through the development of a quadruple helix model of health technology innovation that encompasses greater emphasis on user-led and open innovation perspectives. The paper presents several implications for future research based on the need to enhance the model of health technology innovation used to guide policy and practice. The quadruple helix model of innovation that is proposed can potentially guide alterations to the existing model of translational research in the healthcare sector. Several suggestions are made for how innovation activity can be better supported at both a policy and operational level. This paper presents a synthesis of the innovation literature applied to a theoretically important case of open innovation in the UK National Health Service. It draws in perspectives from other industrial sectors and applies them specifically to the management and organisation of innovation activities around health technology and the services in which they are embedded.

  18. Estimation of phosphorus loss from agricultural land in the Heartland region using the APEX model: a first step to evaluating phosphorus indices

    USDA-ARS?s Scientific Manuscript database

    Purpose. Phosphorus (P) indices are a key tool to minimize P loss from agricultural fields but there is insufficient water quality data to fully test them. Our goal is to use the Agricultural Policy/Environmental eXtender Model (APEX), calibrated with existing edge-of-field runoff data, to refine P...

  19. Refinement of the tripartite influence model for men: dual body image pathways to body change behaviors.

    PubMed

    Tylka, Tracy L

    2011-06-01

    Although muscularity and body fat concerns are central to conceptualizing men's body image, they have not been examined together within existing structural models. This study refined the tripartite influence model (Thompson, Heinberg, Altabe, & Tantleff-Dunn, 1999) by including dual body image pathways (muscularity and body fat dissatisfaction) to engagement in muscular enhancement and disordered eating behaviors, respectively, and added dating partners as a source of social influence. Latent variable structural equation modeling analyses supported this quadripartite model in 473 undergraduate men. Nonsignificant paths were trimmed and two unanticipated paths were added. Muscularity dissatisfaction and body fat dissatisfaction represented dual body image pathways to men's engagement in muscularity enhancement behaviors and disordered eating behaviors, respectively. Pressures to be mesomorphic from friends, family, media, and dating partners made unique contributions to the model. Internalization of the mesomorphic ideal, muscularity dissatisfaction, and body fat dissatisfaction played key mediational roles within the model. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.

  1. Evaluating models of healthcare delivery using the Model of Care Evaluation Tool (MCET).

    PubMed

    Hudspeth, Randall S; Vogt, Marjorie; Wysocki, Ken; Pittman, Oralea; Smith, Susan; Cooke, Cindy; Dello Stritto, Rita; Hoyt, Karen Sue; Merritt, T Jeanne

    2016-08-01

    Our aim was to provide the outcome of a structured Model of Care (MoC) Evaluation Tool (MCET), developed by an FAANP Best-practices Workgroup, that can be used to guide the evaluation of existing MoCs being considered for use in clinical practice. Multiple MoCs are available, but deciding which model of health care delivery to use can be confusing. This five-component tool provides a structured assessment approach to model selection and has universal application. A literature review using CINAHL, PubMed, Ovid, and EBSCO was conducted. The MCET evaluation process includes five sequential components with a feedback loop from component 5 back to component 3 for reevaluation of any refinements. The components are as follows: (1) Background, (2) Selection of an MoC, (3) Implementation, (4) Evaluation, and (5) Sustainability and Future Refinement. This practical resource considers an evidence-based approach to use in determining the best model to implement based on need, stakeholder considerations, and feasibility. ©2015 American Association of Nurse Practitioners.

  2. Field Test of a Hybrid Finite-Difference and Analytic Element Regional Model.

    PubMed

    Abrams, D B; Haitjema, H M; Feinstein, D T; Hunt, R J

    2016-01-01

    Regional finite-difference models often have cell sizes that are too large to sufficiently model well-stream interactions. Here, a steady-state hybrid model is applied whereby the upper layer or layers of a coarse MODFLOW model are replaced by the analytic element model GFLOW, which represents surface waters and wells as line and point sinks. The two models are coupled by transferring cell-by-cell leakage obtained from the original MODFLOW model to the bottom of the GFLOW model. A real-world test of the hybrid model approach is applied on a subdomain of an existing model of the Lake Michigan Basin. The original (coarse) MODFLOW model consists of six layers, the top four of which are aggregated into GFLOW as a single layer, while the bottom two layers remain part of MODFLOW in the hybrid model. The hybrid model and a refined "benchmark" MODFLOW model simulate similar baseflows. The hybrid and benchmark models also simulate similar baseflow reductions due to nearby pumping when the well is located within the layers represented by GFLOW. However, the benchmark model requires refinement of the model grid in the local area of interest, while the hybrid approach uses a gridless top layer and is thus unaffected by grid discretization errors. The hybrid approach is well suited to facilitate cost-effective retrofitting of existing coarse grid MODFLOW models commonly used for regional studies because it leverages the strengths of both finite-difference and analytic element methods for predictions in mildly heterogeneous systems that can be simulated with steady-state conditions. © 2015, National Ground Water Association.
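
The coupling step itself is simple to express: cell-by-cell leakage through the bottom of the aggregated upper layers, extracted from the original coarse MODFLOW model, becomes the specified flux at the bottom of the GFLOW layer, and the remaining lower MODFLOW layers see the same exchange with opposite sign. A schematic sketch with hypothetical numbers:

```python
import numpy as np

# Cell-by-cell leakage (m/d) through the bottom of the aggregated upper layers,
# taken from the original coarse MODFLOW model (values hypothetical):
leakage = np.array([[0.002, 0.003],
                    [0.001, 0.004]])
cell_area = 1.0e6                        # m^2 (1 km x 1 km coarse cells)

gflow_bottom_flux = leakage * cell_area  # m^3/d handed to GFLOW's gridless layer
modflow_top_flux = -gflow_bottom_flux    # the lower layers receive the opposite
print(modflow_top_flux)
```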

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chrisochoides, N.; Sukup, F.

    In this paper we present a parallel implementation of the Bowyer-Watson (BW) algorithm using the task-parallel programming model. The BW algorithm constitutes an ideal mesh refinement strategy for implementing a large class of unstructured mesh generation techniques on both sequential and parallel computers, by preventing the need for global mesh refinement. Its implementation on distributed memory multicomputers using the traditional data-parallel model has been proven very inefficient due to excessive synchronization needed among processors. In this paper we demonstrate that with the task-parallel model we can tolerate synchronization costs inherent to data-parallel methods by exploiting concurrency at the processor level. Our preliminary performance data indicate that the task-parallel approach: (i) is almost four times faster than the existing data-parallel methods, (ii) scales linearly, and (iii) introduces minimum overheads compared to the "best" sequential implementation of the BW algorithm.
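
For reference, the sequential kernel that this work parallelizes is the Bowyer-Watson insertion step: collect the triangles whose circumcircles contain the new point, delete that cavity, and fan new triangles from the point to the cavity boundary. A compact 2-D sketch (orientation bookkeeping is omitted; the demo triangle is counter-clockwise):

```python
import numpy as np

def in_circumcircle(tri, p, pts):
    """True if p lies inside the circumcircle of a counter-clockwise triangle."""
    a, b, c = (pts[i] for i in tri)
    m = np.array([[a[0]-p[0], a[1]-p[1], (a[0]-p[0])**2 + (a[1]-p[1])**2],
                  [b[0]-p[0], b[1]-p[1], (b[0]-p[0])**2 + (b[1]-p[1])**2],
                  [c[0]-p[0], c[1]-p[1], (c[0]-p[0])**2 + (c[1]-p[1])**2]])
    return np.linalg.det(m) > 0.0

def insert_point(tris, pts, p):
    pts.append(p); ip = len(pts) - 1
    bad = [t for t in tris if in_circumcircle(t, p, pts)]       # the cavity
    edges = [tuple(sorted((t[i], t[(i + 1) % 3]))) for t in bad for i in range(3)]
    boundary = [e for e in edges if edges.count(e) == 1]        # cavity rim
    return [t for t in tris if t not in bad] + [(e[0], e[1], ip) for e in boundary]

pts = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
tris = [(0, 1, 2)]
print(insert_point(tris, pts, (2.0, 1.0)))   # cavity retriangulated around the point
```

The parallelization difficulty the paper addresses is that concurrent insertions whose cavities overlap must be synchronized, which is what makes the task-parallel formulation attractive.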

  4. Single crystal X-ray diffraction study of the HgBa2CuO4+δ superconducting compound

    NASA Astrophysics Data System (ADS)

    Bordet, P.; Duc, F.; Lefloch, S.; Capponi, J. J.; Alexandre, E.; Rosa-Nunes, M.; Antipov, E. V.; Putilin, S.

    1996-02-01

    A high precision X-ray diffraction analysis up to sin θ/λ = 1.15 of a HgBa2CuO4+δ single crystal having a Tc of ~ 90 K is presented. The cell parameters are a = 3.8815(4), c = 9.485(7) Å. The refinements indicate the existence of a split barium site due to the presence of excess oxygen in the mercury layer. The position of this excess oxygen might be slightly displaced from the (1/2 1/2 0) position. A 6% mercury deficiency is observed. Models including mercury defects, substitution by copper cations, or carbonate groups are compared. However, we obtain no definite evidence for any of the three models. A possible disorder of the Hg position, due to the formation of chemical bonds with neighbouring extra oxygen anions, could correlate with the refinement of mixed species at the Hg site. A low-temperature single-crystal X-ray diffraction study, and comparison of refinements for the same single crystal with different extra oxygen contents, are in progress to help clarify this problem.

  5. Hospitality Industry Technology Training (HITT). Final Performance Report, April 1, 1989-December 31, 1990.

    ERIC Educational Resources Information Center

    Mount Hood Community Coll., Gresham, OR.

    This final performance report includes a third-party evaluation and a replication guide. The first section describes a project to develop and implement an articulated curriculum for grades 8-14 to prepare young people for entry into hospitality/tourism-related occupations. It discusses the refinement of existing models, pilot test, curriculum…

  6. Automated Camera Array Fine Calibration

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang

    2008-01-01

    Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.
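
    The correspondence-gathering step that drives this kind of refinement can be sketched with standard tools. The snippet below is not JPL FineCal: it merely illustrates, with OpenCV and hypothetical image files, how matched features in the overlap region of adjacent cameras yield the point pairs that a camera-model refinement stage would consume.

```python
# Sketch of the correspondence-gathering step for camera-array refinement.
# Not the JPL FineCal code; image paths are hypothetical placeholders.
import cv2

img_a = cv2.imread("camera_03.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("camera_04.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Cross-checked brute-force matching keeps only mutually best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

# Each match gives a pixel correspondence in the overlap region; a
# refinement stage would then adjust the two camera models (CAHVOR or
# otherwise) to minimize reprojection disagreement over all such pairs.
pairs = [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in matches[:200]]
print(f"{len(pairs)} candidate correspondences")
```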

  7. Hazards and Possibilities of Optical Breakdown Effects Below the Threshold for Shockwave and Bubble Formation

    DTIC Science & Technology

    2006-07-01

    precision of the determination of Rmax, we established a refined method based on the model of bubble formation described above in section 3.6.1 and the...development can be modeled by hydrodynamic codes based on tabulated equation-of-state data. This has previously been demonstrated for ps optical breakdown...

  8. Global Existence Analysis of Cross-Diffusion Population Systems for Multiple Species

    NASA Astrophysics Data System (ADS)

    Chen, Xiuqing; Daus, Esther S.; Jüngel, Ansgar

    2018-02-01

    The existence of global-in-time weak solutions to reaction-cross-diffusion systems for an arbitrary number of competing population species is proved. The equations can be derived from an on-lattice random-walk model with general transition rates. In the case of linear transition rates, it extends the two-species population model of Shigesada, Kawasaki, and Teramoto. The equations are considered in a bounded domain with homogeneous Neumann boundary conditions. The existence proof is based on a refined entropy method and a new approximation scheme. Global existence follows under a detailed balance or weak cross-diffusion condition. The detailed balance condition is related to the symmetry of the mobility matrix, which mirrors Onsager's principle in thermodynamics. Under detailed balance (and without reaction) the entropy is nonincreasing in time, but counter-examples show that the entropy may increase initially if detailed balance does not hold.
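
    For orientation, the class of systems in question can be written down compactly. The display below is a sketch based on the abstract and the standard Shigesada-Kawasaki-Teramoto form; the precise hypotheses on the coefficients are those of the paper and are not reproduced here.

```latex
% n-species reaction-cross-diffusion system of SKT type (sketch):
\begin{equation*}
  \partial_t u_i \;-\; \Delta\!\Big( u_i \big( a_{i0} + \sum_{j=1}^n a_{ij} u_j \big) \Big) \;=\; f_i(u),
  \qquad i = 1,\dots,n,
\end{equation*}
% posed in a bounded domain with homogeneous Neumann boundary conditions.
% The detailed-balance condition asks for weights \pi_1,\dots,\pi_n > 0 with
\begin{equation*}
  \pi_i\, a_{ij} \;=\; \pi_j\, a_{ji} \qquad \text{for all } i \neq j,
\end{equation*}
% which makes the mobility matrix symmetrizable, mirroring Onsager's
% principle as described in the abstract.
```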

  9. Process Time Refinement for Reusable Launch Vehicle Regeneration Modeling

    DTIC Science & Technology

    2008-03-01

    predicted to fail, or have failed. 3) Augmenting existing space systems with redundant or additional capability to enhance space system performance or...Canopies, External Tanks/Pods/Pylon Ejectors, Armament Bay Doors, Missile Launchers, Wing and Fuselage Center Line Racks, Bomb Bay Release...Systems Test 04583 Thrust Maintenance Operation 04584 Silo Door Operation 04650 Initial Build-up-Recovery Vehicle (RV) 04610 Nondestructive

  10. Strategy for the management of substance use disorders in the State of Punjab: Developing a structural model of state-level de-addiction services in the health sector (the “Punjab model”)

    PubMed Central

    Basu, Debasish; Avasthi, Ajit

    2015-01-01

    Background: Substance use disorders are believed to have become rampant in the State of Punjab, causing substantive loss to the person, the family, the society, and the state. The situation is likely to worsen further if a structured, government-level, state-wide de-addiction service is not put into place. Aims: The aim was to describe a comprehensive structural model of de-addiction service in the State of Punjab (the “Pyramid model” or “Punjab model”), which is primarily concerned with demand reduction, particularly that part which is concerned with identification, treatment, and aftercare of substance users. Materials and Methods: At the behest of the Punjab Government, this model was developed by the authors after a detailed study of the current scenario, critical and exhaustive look at the existing guidelines, policies, books, web resources, government documents, and the like in this area, a check of the ground reality in terms of existing infrastructural and manpower resources, and keeping pragmatism and practicability in mind. Several rounds of meetings with the government officials and other important stakeholders helped to refine the model further. Results: Our model envisages structural innovation and renovations within the existing state healthcare infrastructure. We formulated a “Pyramid model,” later renamed as “Punjab model,” where there is a broad community base for early identification and outpatient level treatment at the primary care level, both outpatient and inpatient care at the secondary care level, and comprehensive management for more difficult cases at the tertiary care level. A separate de-addiction system for the prisons was also developed. Each of these structural elements was described and refined in details, with the aim of uniform, standardized, and easily accessible care across the state. Conclusions: If the “Punjab model” succeeds, it can provide useful models for other states or even at the national level. PMID:25657452

  11. A sensitivity study of the ground hydrologic model using data generated by an atmospheric general circulation model. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Sun, S. F.

    1985-01-01

    The Ground Hydrologic Model (GHM) developed for use in an atmospheric general circulation model (GCM) has been refined. A series of sensitivity studies of the new version of the GHM was conducted for the purpose of understanding the role played by various physical parameters in the GHM. The following refinements have been made: (1) the GHM is coupled directly with the planetary boundary layer (PBL); (2) a bulk vegetation layer is added with a more realistic large-scale parameterization; and (3) the infiltration rate is modified. This version of the GHM has been tested using input data derived from a GCM simulation run for eight North American regions for 45 days. The results are compared with those of the resident GHM in the GCM. The daily averages of grid surface temperatures from both models agree reasonably well in phase and magnitude. However, large differences exist in one or two regions on some days. The daily average evapotranspiration is in general 10 to 30% less than the corresponding value given by the resident GHM.

  12. Embodied Agents, E-SQ and Stickiness: Improving Existing Cognitive and Affective Models

    NASA Astrophysics Data System (ADS)

    de Diesbach, Pablo Brice

    This paper synthesizes results from two previous studies of embodied virtual agents on commercial websites. We analyze and critique the proposed models and discuss the limits of the experimental findings. Results from other important research in the literature are integrated. We also integrate concepts from deeper, more business-oriented analyses of the mechanisms of rhetoric in marketing and communication and of the possible role of E-SQ in human-agent interaction. We finally suggest a refined model for the impacts of these agents on website users, and the limits of the improved model are discussed.

  13. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.

    PubMed

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-02-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-"one-click" experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/.
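
    The dead-end elimination idea at the core of the algorithm is easy to state: a rotamer can be discarded when some competitor beats it even in the most favorable case. Below is a minimal sketch of the classic elimination criterion on a generic, randomly generated energy table; the dense conformer libraries and the density-derived hybrid energy function of Fitmunk itself are not modeled, and all names are illustrative.

```python
# Minimal dead-end elimination (DEE) sweep over a rotamer energy table.
# e_self[i][r]: self energy of rotamer r at residue i.
# e_pair[i][j][r][s]: interaction of rotamer r at i with rotamer s at j
# (assumed symmetric). Textbook criterion only, not Fitmunk's energies.
import random

def dee_sweep(e_self, e_pair, alive):
    """Eliminate rotamer r at residue i if some alternative t beats it:
    the best case for r is still worse than the worst case for t."""
    changed = False
    n = len(alive)
    for i in range(n):
        for r in list(alive[i]):
            for t in alive[i]:
                if t == r:
                    continue
                lhs = e_self[i][r] + sum(
                    min(e_pair[i][j][r][s] for s in alive[j])
                    for j in range(n) if j != i)
                rhs = e_self[i][t] + sum(
                    max(e_pair[i][j][t][s] for s in alive[j])
                    for j in range(n) if j != i)
                if lhs > rhs:
                    alive[i].discard(r)
                    changed = True
                    break
    return changed

# Tiny random instance: 3 residues, 4 rotamers each, symmetric pair table.
random.seed(1)
R = 4
e_self = [[random.uniform(0, 5) for _ in range(R)] for _ in range(3)]
e_pair = [[[[0.0] * R for _ in range(R)] for _ in range(3)] for _ in range(3)]
for i in range(3):
    for j in range(i + 1, 3):
        for r in range(R):
            for s in range(R):
                e_pair[i][j][r][s] = random.uniform(0, 2)
                e_pair[j][i][s][r] = e_pair[i][j][r][s]

alive = [set(range(R)) for _ in range(3)]
while dee_sweep(e_self, e_pair, alive):
    pass
print("surviving rotamers per residue:", [sorted(a) for a in alive])
```

    Iterating the sweep until nothing changes prunes the combinatorial search space; whatever survives is then resolved by enumeration or another search method.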

  14. Grain-refining heat treatments to improve cryogenic toughness of high-strength steels

    NASA Technical Reports Server (NTRS)

    Rush, H. F.

    1984-01-01

    The development of two high Reynolds number wind tunnels at NASA Langley Research Center which operate at cryogenic temperatures with high dynamic pressures has imposed severe requirements on materials for model construction. Existing commercial high strength steels lack sufficient toughness to permit their safe use at temperatures approaching that of liquid nitrogen (-320 F). Therefore, a program to improve the cryogenic toughness of commercial high strength steels was conducted. Significant improvement in the cryogenic toughness of commercial high strength martensitic and maraging steels was demonstrated through the use of grain refining heat treatments. Charpy impact strength at -320 F was increased by 50 to 180 percent for the various alloys without significant loss in tensile strength. The grain sizes of the 9 percent Ni-Co alloys and 200 grade maraging steels were reduced to 1/10 of the original size or smaller, with the added benefit of improved machinability. This grain refining technique should permit these alloys with ultimate strengths of 220 to 270 ksi to receive consideration for cryogenic service.

  15. On macromolecular refinement at subatomic resolution with interatomic scatterers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2007-11-09

    A study of the accurate electron density distribution in molecular crystals at subatomic resolution, better than ~1.0 Å, requires more detailed models than those based on independent spherical atoms. A tool conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark datasets gave results comparable in quality with results of multipolar refinement and superior to those for conventional models. Applications to several datasets of both small- and macro-molecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.

  16. On macromolecular refinement at subatomic resolution with interatomic scatterers

    PubMed Central

    Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.; Lunin, Vladimir Y.; Urzhumtsev, Alexandre

    2007-01-01

    A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package. PMID:18007035

  17. On macromolecular refinement at subatomic resolution with interatomic scatterers.

    PubMed

    Afonine, Pavel V; Grosse-Kunstleve, Ralf W; Adams, Paul D; Lunin, Vladimir Y; Urzhumtsev, Alexandre

    2007-11-01

    A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than approximately 1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.

  18. The blind leading the blind: Mutual refinement of approximate theories

    NASA Technical Reports Server (NTRS)

    Kedar, Smadar T.; Bresina, John L.; Dent, C. Lisa

    1991-01-01

    The mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models which cause the failures. The approach focuses on using one approximate model to refine another.

  19. K-Means Subject Matter Expert Refined Topic Model Methodology

    DTIC Science & Technology

    2017-01-01

    K-means Subject Matter Expert Refined Topic Model Methodology: Topic Model Estimation via K-means. U.S. Army TRADOC Analysis Center-Monterey, 700 Dyer Road..., January 2017. Theodore T. Allen, Ph.D.; Zhenhuan... Contract number: W9124N-15-P-0022.

  20. Text Extraction from Scene Images by Character Appearance and Structure Modeling

    PubMed Central

    Yi, Chucai; Tian, Yingli

    2012-01-01

    In this paper, we propose a novel algorithm to detect text information from natural scene images. Scene text classification and detection are still open research topics. Our proposed algorithm is able to model both character appearance and structure to generate representative and discriminative text descriptors. The contributions of this paper include three aspects: 1) a new character appearance model by a structure correlation algorithm which extracts discriminative appearance features from detected interest points of character samples; 2) a new text descriptor based on structons and correlatons, which model character structure by structure differences among character samples and structure component co-occurrence; and 3) a new text region localization method by combining color decomposition, character contour refinement, and string line alignment to localize character candidates and refine detected text regions. We perform three groups of experiments to evaluate the effectiveness of our proposed algorithm, including text classification, text detection, and character identification. The evaluation results on benchmark datasets demonstrate that our algorithm achieves the state-of-the-art performance on scene text classification and detection, and significantly outperforms the existing algorithms for character identification. PMID:23316111

  1. Implementing technical refinement in high-level athletics: exploring the knowledge schemas of coaches.

    PubMed

    Kearney, Philip E; Carson, Howie J; Collins, Dave

    2018-05-01

    This paper explores the approaches adopted by high-level field athletics coaches when attempting to refine an athlete's already well-established technique (long and triple jump and javelin throwing). Six coaches, who had all coached multiple athletes to multiple major championships, took part in semi-structured interviews focused upon a recent example of technique refinement. Data were analysed using a thematic content analysis. The coaching tools reported were generally consistent with those advised by the existing literature, focusing on attaining "buy-in", utilising part-practice, restoring movement automaticity and securing performance under pressure. Five of the six coaches reported using a systematic sequence of stages to implement the refinement, although the number and content of these stages varied between them. Notably, however, there were no formal sources of knowledge (e.g., coach education or training) provided to inform coaches' decision making. Instead, coaches' decisions were largely based on experience both within and outside the sporting domain. Data offer a useful stimulus for reflection amongst sport practitioners confronted by the problem of technique refinement. Certainly the limited awareness of existing guidelines on technique refinement expressed by the coaches emphasises a need for further collaborative work by researchers and coach educators to disseminate best practice.

  2. Initial Assessment of U.S. Refineries for Purposes of Potential Bio-Based Oil Insertions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Charles J.; Jones, Susanne B.; Padmaperuma, Asanga B.

    2013-04-01

    In order to meet U.S. biofuel objectives over the coming decade, the conversion of a broad range of biomass feedstocks, using diverse processing options, will be required. Further, the production of both gasoline and diesel biofuels will employ biomass conversion methods that produce wide-boiling-range intermediate oils requiring treatment similar to conventional refining processes (i.e., fluid catalytic cracking, hydrocracking, and hydrotreating). As such, it is widely recognized that leveraging existing U.S. petroleum refining infrastructure is key to reducing overall capital demands. This study examines how existing U.S. refining locations, capacities, and conversion capabilities match in geography and processing capabilities with the needs projected from anticipated biofuels production.

  3. Structural Validation of Nursing Terminologies

    PubMed Central

    Hardiker, Nicholas R.; Rector, Alan L.

    2001-01-01

    Objective: The purpose of the study is twofold: 1) to explore the applicability of combinatorial terminologies as the basis for building enumerated classifications, and 2) to investigate the usefulness of formal terminological systems for performing such classification and for assisting in the refinement of both combinatorial terminologies and enumerated classifications. Design: A formal model of the beta version of the International Classification for Nursing Practice (ICNP) was constructed in the compositional terminological language GRAIL (GALEN Representation and Integration Language). Terms drawn from the North American Nursing Diagnosis Association Taxonomy I (NANDA taxonomy) were mapped into the model and classified automatically using GALEN technology. Measurements: The resulting generated hierarchy was compared with the NANDA taxonomy to assess coverage and accuracy of classification. Results: In terms of coverage, in this study ICNP was able to capture 77 percent of NANDA terms using concepts drawn from five of its eight axes. Three axes—Body Site, Topology, and Frequency—were not needed. In terms of accuracy, where hierarchic relationships existed in the generated hierarchy or the NANDA taxonomy, or both, 6 were identical, 19 existed in the generated hierarchy alone (2 of these were considered suitable for incorporation into the NANDA taxonomy and 17 were considered inaccurate), and 23 appeared in the NANDA taxonomy alone (8 of these were considered suitable for incorporation into ICNP, 9 were considered inaccurate, and 6 reflected different, equally valid perspectives). Sixty terms appeared at the top level, with no indenting, in both the generated hierarchy and the NANDA taxonomy. Conclusions: With appropriate refinement, combinatorial terminologies such as ICNP have the potential to provide a useful foundation for representing enumerated classifications such as NANDA. Technologies such as GALEN make possible the process of building automatically enumerated classifications while providing a useful means of validating and refining both combinatorial terminologies and enumerated classifications. PMID:11320066

  4. Real-space refinement in PHENIX for cryo-EM and crystallography

    DOE PAGES

    Afonine, Pavel V.; Poon, Billy K.; Read, Randy J.; ...

    2018-06-01

    This work describes the implementation of real-space refinement in the phenix.real_space_refine program from the PHENIX suite. The use of a simplified refinement target function enables very fast calculation, which in turn makes it possible to identify optimal data-restraint weights as part of routine refinements with little runtime cost. Refinement of atomic models against low-resolution data benefits from the inclusion of as much additional information as is available. In addition to standard restraints on covalent geometry, phenix.real_space_refine makes use of extra information such as secondary-structure and rotamer-specific restraints, as well as restraints or constraints on internal molecular symmetry. The re-refinement of 385 cryo-EM-derived models available in the Protein Data Bank at resolutions of 6 Å or better shows significant improvement of the models and of the fit of these models to the target maps.
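
    The simplified target mentioned in the abstract has the generic shape T = w * T_data + T_restraints, and cheap evaluation is what makes a routine scan over the weight w affordable. The toy below illustrates that weight trade-off on a one-dimensional caricature (two "atoms", two observed peak positions, one bonded restraint); every name and number is invented for illustration and none of it is phenix code.

```python
# Toy illustration of scanning the data-restraint weight w in a target
# T(x) = w * T_data(x) + T_restraints(x). Purely illustrative.
import numpy as np

PEAKS = np.array([1.0, 2.2])           # observed peak centers (the "map")
IDEAL_BOND = 1.0                       # restraint target distance

def t_data(x):                         # misfit of the atoms to the peaks
    return np.sum((x - PEAKS) ** 2)

def t_restraints(x):                   # harmonic bond restraint
    return (abs(x[1] - x[0]) - IDEAL_BOND) ** 2

def refine(w, x0, lr=0.01, steps=2000):
    x = x0.copy()
    for _ in range(steps):             # numerical gradient descent
        g = np.zeros_like(x)
        for k in range(x.size):
            e = np.zeros_like(x); e[k] = 1e-6
            g[k] = (w * (t_data(x + e) - t_data(x - e))
                    + (t_restraints(x + e) - t_restraints(x - e))) / 2e-6
        x -= lr * g
    return x

# Small w lets the restraint dominate (bond -> 1.0); large w pulls the
# atoms onto the peaks (bond -> 1.2). Choosing w balances the two.
for w in [0.1, 1.0, 10.0]:
    x = refine(w, np.array([0.8, 2.5]))
    print(f"w={w:5.1f}  positions={x.round(3)}  bond={abs(x[1]-x[0]):.3f}")
```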

  5. Real-space refinement in PHENIX for cryo-EM and crystallography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afonine, Pavel V.; Poon, Billy K.; Read, Randy J.

    This work describes the implementation of real-space refinement in the phenix.real_space_refine program from the PHENIX suite. The use of a simplified refinement target function enables very fast calculation, which in turn makes it possible to identify optimal data-restraint weights as part of routine refinements with little runtime cost. Refinement of atomic models against low-resolution data benefits from the inclusion of as much additional information as is available. In addition to standard restraints on covalent geometry, phenix.real_space_refine makes use of extra information such as secondary-structure and rotamer-specific restraints, as well as restraints or constraints on internal molecular symmetry. The re-refinement of 385 cryo-EM-derived models available in the Protein Data Bank at resolutions of 6 Å or better shows significant improvement of the models and of the fit of these models to the target maps.

  6. Deformable complex network for refining low-resolution X-ray structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chong; Wang, Qinghua; Ma, Jianpeng, E-mail: jpma@bcm.edu

    2015-10-27

    A new refinement algorithm called the deformable complex network that combines a novel angular network-based restraint with a deformable elastic network model in the target function has been developed to aid in structural refinement in macromolecular X-ray crystallography. In macromolecular X-ray crystallography, building more accurate atomic models based on lower resolution experimental diffraction data remains a great challenge. Previous studies have used a deformable elastic network (DEN) model to aid in low-resolution structural refinement. In this study, the development of a new refinement algorithm called the deformable complex network (DCN) is reported that combines a novel angular network-based restraint with the DEN model in the target function. Testing of DCN on a wide range of low-resolution structures demonstrated that it consistently leads to significantly improved structural models as judged by multiple refinement criteria, thus representing a new effective refinement tool for low-resolution structure determination.

  7. On macromolecular refinement at subatomic resolution with interatomic scatterers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afonine, Pavel V., E-mail: pafonine@lbl.gov; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2007-11-01

    Modelling deformation electron density using interatomic scatterers is simpler than multipolar methods, produces comparable results at subatomic resolution and can easily be applied to macromolecules. A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.

  8. Exploring changes in vertical ice extent along the margin of the East Antarctic Ice Sheet in western Dronning Maud Land - initial results of the MAGIC-DML collaboration

    NASA Astrophysics Data System (ADS)

    Lifton, N. A.; Newall, J. C.; Fredin, O.; Glasser, N. F.; Fabel, D.; Rogozhina, I.; Bernales, J.; Prange, M.; Sams, S.; Eisen, O.; Hättestrand, C.; Harbor, J.; Stroeven, A. P.

    2017-12-01

    Numerical ice sheet models constrained by theory and refined by comparisons with observational data are a central component of work to address the interactions between the cryosphere and changing climate at a wide range of scales. Such models are tested and refined by comparing model predictions of past ice geometries with field-based reconstructions from geological, geomorphological, and ice core data. However, on the East Antarctic Ice Sheet there are few empirical data with which to reconstruct changes in ice sheet geometry in the Dronning Maud Land (DML) region. In addition, there is poor control on the regional climate history of the ice sheet margin, because the ice cores that provide detailed climate reconstructions are located on high inland domes. This leaves numerical models of regional glaciation history in this near-coastal area largely unconstrained. MAGIC-DML is an ongoing Swedish-US-Norwegian-German-UK collaboration focused on improving ice sheet models by combining advances in numerical modeling with efforts to fill critical gaps in our knowledge of the timing and pattern of ice-surface changes along the western Dronning Maud Land margin. A combination of geomorphological mapping using remote sensing data, field investigations, cosmogenic nuclide surface exposure dating, and numerical ice-sheet modeling is being used in an iterative manner to produce a comprehensive reconstruction of the glacial history of western Dronning Maud Land. We will present an overview of the project, as well as field observations and preliminary in situ cosmogenic nuclide measurements from the 2016/17 expedition.

  9. Refining the aggregate exposure pathway.

    PubMed

    Tan, Yu-Mei; Leonard, Jeremy A; Edwards, Stephen; Teeguarden, Justin; Egeghy, Peter

    2018-03-01

    Advancements in measurement technologies and modeling capabilities continue to produce an abundance of exposure information, adding to that already in existence. However, fragmentation within the exposure science community acts as an obstacle to realizing the vision set forth in the National Research Council's report on Exposure Science in the 21st Century of considering exposures from source to dose, on multiple levels of integration, and to multiple stressors. The concept of an Aggregate Exposure Pathway (AEP) was proposed as a framework for organizing and integrating diverse exposure information that exists across numerous repositories and among multiple scientific fields. A workshop held in May 2016 followed the introduction of the AEP concept, allowing members of the exposure science community to provide extensive evaluation and feedback regarding the framework's structure, key components, and applications. The current work briefly introduces the topics discussed at the workshop and attempts to address key challenges involved in refining this framework. The resulting evolution of the AEP framework's features facilitates the acquisition, integration, organization, and transparent application and communication of exposure knowledge in a manner that is independent of its ultimate use, thereby enabling reuse of such information in many applications.

  10. Combining density functional theory (DFT) and pair distribution function (PDF) analysis to solve the structure of metastable materials: the case of metakaolin.

    PubMed

    White, Claire E; Provis, John L; Proffen, Thomas; Riley, Daniel P; van Deventer, Jannie S J

    2010-04-07

    Understanding the atomic structure of complex metastable (including glassy) materials is of great importance in research and industry; however, such materials resist solution by most standard techniques. Here, a novel technique combining thermodynamics and local structure is presented to solve the structure of the metastable aluminosilicate material metakaolin (calcined kaolinite) without the use of chemical constraints. The structure is elucidated by iterating between least-squares real-space refinement using neutron pair distribution function data and geometry optimisation using density functional modelling. The resulting structural representation is both energetically feasible and in excellent agreement with experimental data. This accurate structural representation of metakaolin provides new insight into the local environment of the aluminium atoms, with evidence of the existence of tri-coordinated aluminium. The availability of this detailed, chemically feasible atomic description, obtained without the need to artificially impose constraints during the refinement process, opens the opportunity to tailor chemical and mechanical processes involving metakaolin and other complex metastable materials at the atomic level to obtain optimal performance at the macro scale.

  11. Application of DEN refinement and automated model building to a difficult case of molecular-replacement phasing: the structure of a putative succinyl-diaminopimelate desuccinylase from Corynebacterium glutamicum.

    PubMed

    Brunger, Axel T; Das, Debanu; Deacon, Ashley M; Grant, Joanna; Terwilliger, Thomas C; Read, Randy J; Adams, Paul D; Levitt, Michael; Schröder, Gunnar F

    2012-04-01

    Phasing by molecular replacement remains difficult for targets that are far from the search model or in situations where the crystal diffracts only weakly or to low resolution. Here, the process of determining and refining the structure of Cgl1109, a putative succinyl-diaminopimelate desuccinylase from Corynebacterium glutamicum, at ∼3 Å resolution is described using a combination of homology modeling with MODELLER, molecular-replacement phasing with Phaser, deformable elastic network (DEN) refinement and automated model building using AutoBuild in a semi-automated fashion, followed by final refinement cycles with phenix.refine and Coot. This difficult molecular-replacement case illustrates the power of including DEN restraints derived from a starting model to guide the movements of the model during refinement. The resulting improved model phases provide better starting points for automated model building and produce more significant difference peaks in anomalous difference Fourier maps to locate anomalous scatterers than does standard refinement. This example also illustrates a current limitation of automated procedures that require manual adjustment of local sequence misalignments between the homology model and the target sequence.
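
    A compact way to see what the DEN restraints add is to write out their energy term. The display below follows the published DEN scheme in spirit; the exact parameterization used in the cited study should be taken from the paper itself.

```latex
% DEN restraint energy over a sparse network N of atom pairs (sketch):
\begin{equation*}
  E_{\mathrm{DEN}} \;=\; w_{\mathrm{DEN}} \sum_{(i,j)\in N} \bigl( d_{ij} - d_{ij}^{0} \bigr)^{2},
\end{equation*}
% where d_{ij} is the current interatomic distance. The equilibrium
% distances adapt during refinement, blending the starting-model and
% current distances through parameters \kappa (rate) and \gamma (balance):
\begin{equation*}
  d_{ij}^{0} \;\leftarrow\; (1-\kappa)\, d_{ij}^{0}
      \;+\; \kappa \bigl[ \gamma\, d_{ij} + (1-\gamma)\, d_{ij}^{\mathrm{start}} \bigr].
\end{equation*}
```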

  12. Application of DEN refinement and automated model building to a difficult case of molecular-replacement phasing: the structure of a putative succinyl-diaminopimelate desuccinylase from Corynebacterium glutamicum

    PubMed Central

    Brunger, Axel T.; Das, Debanu; Deacon, Ashley M.; Grant, Joanna; Terwilliger, Thomas C.; Read, Randy J.; Adams, Paul D.; Levitt, Michael; Schröder, Gunnar F.

    2012-01-01

    Phasing by molecular replacement remains difficult for targets that are far from the search model or in situations where the crystal diffracts only weakly or to low resolution. Here, the process of determining and refining the structure of Cgl1109, a putative succinyl-diaminopimelate desuccinylase from Corynebacterium glutamicum, at ∼3 Å resolution is described using a combination of homology modeling with MODELLER, molecular-replacement phasing with Phaser, deformable elastic network (DEN) refinement and automated model building using AutoBuild in a semi-automated fashion, followed by final refinement cycles with phenix.refine and Coot. This difficult molecular-replacement case illustrates the power of including DEN restraints derived from a starting model to guide the movements of the model during refinement. The resulting improved model phases provide better starting points for automated model building and produce more significant difference peaks in anomalous difference Fourier maps to locate anomalous scatterers than does standard refinement. This example also illustrates a current limitation of automated procedures that require manual adjustment of local sequence misalignments between the homology model and the target sequence. PMID:22505259

  13. Systematic approaches to toxicology in the zebrafish.

    PubMed

    Peterson, Randall T; Macrae, Calum A

    2012-01-01

    As the current paradigms of drug discovery evolve, it has become clear that a more comprehensive understanding of the interactions between small molecules and organismal biology will be vital. The zebrafish is emerging as a complement to existing in vitro technologies and established preclinical in vivo models that can be scaled for high-throughput. In this review, we highlight the current status of zebrafish toxicology studies, identify potential future niches for the model in the drug development pipeline, and define the hurdles that must be overcome as zebrafish technologies are refined for systematic toxicology.

  14. A systematic community-based participatory approach to refining an evidence-based community-level intervention: The HOLA intervention for Latino men who have sex with men

    PubMed Central

    Rhodes, Scott D.; Daniel, Jason; Alonzo, Jorge; Duck, Stacy; Garcia, Manuel; Downs, Mario; Hergenrather, Kenneth C.; Alegria-Ortega, Jose; Miller, Cindy; Boeving Allen, Alex; Gilbert, Paul A.; Marsiglia, Flavio F.

    2014-01-01

    Our community-based participatory research (CBPR) partnership engaged in a multi-step process to refine a culturally congruent intervention that builds on existing community strengths to promote sexual health among immigrant Latino men who have sex with men (MSM). The steps were: (1) increase Latino MSM participation in the existing partnership; (2) establish an Intervention Team; (3) review the existing sexual health literature; (4) explore needs and priorities of Latino MSM; (5) narrow priorities based on what is important and changeable; (6) blend health behavior theory with Latino MSM’s lived experiences; (7) design an intervention conceptual model; (8) develop training modules and (9) resource materials; and (10) pretest and (11) revise the intervention. The developed intervention contains four modules to train Latino MSM to serve as lay health advisors (LHAs) known as “Navegantes”. These modules synthesize locally collected data with other local and national data; blend health behavior theory, the lived experiences, and cultural values of immigrant Latino MSM; and harness the informal social support Latino MSM provide one another. This community-level intervention is designed to meet the expressed sexual health priorities of Latino MSM. It frames disease prevention within sexual health promotion. PMID:23075504

  15. A systematic community-based participatory approach to refining an evidence-based community-level intervention: the HOLA intervention for Latino men who have sex with men.

    PubMed

    Rhodes, Scott D; Daniel, Jason; Alonzo, Jorge; Duck, Stacy; García, Manuel; Downs, Mario; Hergenrather, Kenneth C; Alegría-Ortega, José; Miller, Cindy; Boeving Allen, Alex; Gilbert, Paul A; Marsiglia, Flavio F

    2013-07-01

    Our community-based participatory research partnership engaged in a multistep process to refine a culturally congruent intervention that builds on existing community strengths to promote sexual health among immigrant Latino men who have sex with men (MSM). The steps were the following: (1) increase Latino MSM participation in the existing partnership, (2) establish an Intervention Team, (3) review the existing sexual health literature, (4) explore needs and priorities of Latino MSM, (5) narrow priorities based on what is important and changeable, (6) blend health behavior theory with Latino MSM's lived experiences, (7) design an intervention conceptual model, (8) develop training modules and (9) resource materials, and (10) pretest and (11) revise the intervention. The developed intervention contains four modules to train Latino MSM to serve as lay health advisors known as Navegantes. These modules synthesize locally collected data with other local and national data; blend health behavior theory, the lived experiences, and cultural values of immigrant Latino MSM; and harness the informal social support Latino MSM provide one another. This community-level intervention is designed to meet the expressed sexual health priorities of Latino MSM. It frames disease prevention within sexual health promotion.

  16. Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model

    NASA Astrophysics Data System (ADS)

    O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.

    2015-12-01

    Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.
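
    The step of focusing computational resources only where needed begins, in a block-structured AMR code, with a flagging criterion. Here is a deliberately simple numpy sketch, assuming a gradient-magnitude threshold and a fixed block size; production solvers, including the one described here, use more elaborate and scale-aware criteria.

```python
# Sketch: flag coarse-grid blocks for refinement where the field gradient
# is steep. Threshold, block size, and the test field are illustrative.
import numpy as np

def blocks_to_refine(field, block=8, threshold=0.5):
    gy, gx = np.gradient(field)          # per-index derivatives
    mag = np.hypot(gx, gy)
    ny, nx = field.shape
    flagged = []
    for j in range(0, ny, block):
        for i in range(0, nx, block):
            if mag[j:j + block, i:i + block].max() > threshold:
                flagged.append((j // block, i // block))
    return flagged

# A smooth background with one sharp "front" for the criterion to pick up.
x = np.linspace(0, 1, 64)
field = np.tanh((x[None, :] - 0.5) * 80) + 0.1 * np.sin(4 * np.pi * x)[:, None]
print(blocks_to_refine(field))
```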

  17. Testing MODFLOW-LGR for simulating flow around buried Quaternary valleys - synthetic test cases

    NASA Astrophysics Data System (ADS)

    Vilhelmsen, T. N.; Christensen, S.

    2009-12-01

    In this study the Local Grid Refinement (LGR) method developed for MODFLOW-2005 (Mehl and Hill, 2005) is utilized to describe groundwater flow in areas containing buried Quaternary valley structures. The tests are conducted as comparative analyses between simulations run with a globally refined model, a locally refined model, and a globally coarse model, respectively. The models vary from simple one-layer models to more complex ones with up to 25 model layers. The comparisons of accuracy are conducted within the locally refined area and focus on water budgets, simulated heads, and simulated particle traces. Simulations made with the globally refined model are used as reference (regarded as “true” values). As expected, for all test cases the application of local grid refinement resulted in more accurate results than when using the globally coarse model. A significant advantage of utilizing MODFLOW-LGR was that it allows increased numbers of model layers to better resolve complex geology within local areas. This resulted in more accurate simulations than when using either a globally coarse model grid or a locally refined model with lower geological resolution. Improved accuracy in the latter case could not be expected beforehand because the difference in geological resolution between the coarse parent model and the refined child model contradicts the assumptions of the Darcy-weighted interpolation used in MODFLOW-LGR. With respect to model runtimes, it was sometimes found that the runtime for the locally refined model was much longer than for the globally refined model. This was the case even when the closure criteria were relaxed compared to the globally refined model. These results are contradictory to those presented by Mehl and Hill (2005). Furthermore, in the complex cases it took some testing (model runs) to identify the closure criteria and the damping factor that secured convergence, accurate solutions, and reasonable runtimes. For our cases this is judged to be a serious disadvantage of applying MODFLOW-LGR. Another disadvantage in the studied cases was that the MODFLOW-LGR results proved to be somewhat dependent on the correction method used at the parent-child model interface. This indicates that when applying MODFLOW-LGR there is a need for thorough, case-specific consideration of the choice of correction method. Reference: Mehl, S. and M. C. Hill (2005). "MODFLOW-2005, the U.S. Geological Survey modular ground-water model - Documentation of shared node Local Grid Refinement (LGR) and the Boundary Flow and Head (BFH) Package." U.S. Geological Survey Techniques and Methods 6-A12.

  18. Refined Dummy Atom Model of Mg(2+) by Simple Parameter Screening Strategy with Revised Experimental Solvation Free Energy.

    PubMed

    Jiang, Yang; Zhang, Haiyang; Feng, Wei; Tan, Tianwei

    2015-12-28

    Metal ions play an important role in the catalysis of metalloenzymes. To investigate metalloenzymes via molecular modeling, a set of accurate force field parameters for metal ions is essential. To extend its application range and improve its performance, the dummy atom model of metal ions was refined through a simple parameter screening strategy using the Mg(2+) ion as an example. Using the AMBER ff03 force field with the TIP3P water model, the refined model accurately reproduced the experimental geometric and thermodynamic properties of Mg(2+). Compared with point charge models and previous dummy atom models, the refined dummy atom model yields enhanced performance in producing reliable ATP/GTP-Mg(2+)-protein conformations in three metalloenzyme systems with single or double metal centers. Like other unbonded models, the refined model failed to reproduce the Mg-Mg distance and favored a monodentate binding of carboxylate groups, and these drawbacks need to be considered with care. The outperformance of the refined model is mainly attributed to the use of a revised (more accurate) experimental solvation free energy and a suitable free energy correction protocol. This work provides a parameter screening strategy that can be readily applied to refine the dummy atom models for metal ions.

  19. Developing a Nuclear Grade of Alloy 617 for Gen IV Nuclear Energy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Weiju; Swindeman, Robert W; Santella, Michael L

    2010-01-01

    Alloy 617, an attractive material not specifically developed for nuclear use, is now being considered as a leading candidate alloy by several countries for applications in the Gen IV Nuclear Energy Systems. An extensive review of its existing data suggests that it would be beneficial to refine the alloy's specification to a nuclear grade for the intended Gen IV systems. In this paper, the rationale for developing a nuclear grade of the alloy is first discussed through an analysis of existing data from various countries. Then initial experiments for refining the alloy specification are described. Preliminary results have suggested the feasibility of the refinement approach, as well as the possibility of achieving a desirable nuclear grade. Based on the results, further research activities are recommended.

  20. Expanding the landscape of N = 2 rank 1 SCFTs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Argyres, Philip C.; Lotito, Matteo; Lu, Yongchao

    Here, we refine our previous proposal [1-3] for systematically classifying 4d rank-1 N = 2 SCFTs by constructing their possible Coulomb branch geometries. Four new recently discussed rank-1 theories [4, 5], including novel N = 3 SCFTs, sit beautifully in our refined classification framework. By arguing for the consistency of their RG flows we can make a strong case for the existence of at least four additional rank-1 SCFTs, nearly doubling the number of known rank-1 SCFTs. The refinement consists of relaxing the assumption that the flavor symmetries of the SCFTs have no discrete factors. This results in an enlarged (but finite) set of possible rank-1 SCFTs. Their existence can be further constrained using consistency of their central charges and RG flows.

  1. Expanding the landscape of N = 2 rank 1 SCFTs

    DOE PAGES

    Argyres, Philip C.; Lotito, Matteo; Lu, Yongchao; ...

    2016-05-16

    Here, we refine our previous proposal [1-3] for systematically classifying 4d rank-1 N = 2 SCFTs by constructing their possible Coulomb branch geometries. Four new recently discussed rank-1 theories [4, 5], including novel N = 3 SCFTs, sit beautifully in our refined classification framework. By arguing for the consistency of their RG flows we can make a strong case for the existence of at least four additional rank-1 SCFTs, nearly doubling the number of known rank-1 SCFTs. The refinement consists of relaxing the assumption that the flavor symmetries of the SCFTs have no discrete factors. This results in an enlarged (but finite) set of possible rank-1 SCFTs. Their existence can be further constrained using consistency of their central charges and RG flows.

  2. Eddy Effects in the General Circulation, Spanning Mean Currents, Mesoscale Eddies, and Topographic Generation, Including Submesoscale Nests

    DTIC Science & Technology

    2014-09-30

    against real-world data in cooperation with William S. Kessler and Hristina Hristova from PMEL (Solomon Sea), and Satoshi Mitarai and Taichi Sakagami from...refined grids, starting with basin-wide eddy-permitting resolutions (although substantially finer than that used in climate modeling), and downscaling it...

  3. CIRSS vertical data integration, San Bernardino study

    NASA Technical Reports Server (NTRS)

    Hodson, W.; Christenson, J.; Michel, R. (Principal Investigator)

    1982-01-01

    The creation and use of a vertically integrated data base, including LANDSAT data, for local planning purposes in a portion of San Bernardino County, California are described. The project illustrates that a vertically integrated approach can benefit local users, can be used to identify and rectify discrepancies in various data sources, and that the LANDSAT component can be effectively used to identify change, perform initial capability/suitability modeling, update existing data, and refine existing data in a geographic information system. Local analyses were developed which produced data of value to planners in the San Bernardino County Planning Department and the San Bernardino National Forest staff.

  4. Use of nutrient self selection as a diet refining tool in Tenebrio molitor (Coleoptera: Tenebrionidae)

    USDA-ARS?s Scientific Manuscript database

    A new method to refine existing dietary supplements for improving production of the yellow mealworm, Tenebrio molitor L. (Coleoptera: Tenebrionidae), was tested. Self-selected ratios of 6 dietary ingredients by T. molitor larvae were used to produce a dietary supplement. This supplement was compared...

  5. Improving the accuracy of macromolecular structure refinement at 7 Å resolution.

    PubMed

    Brunger, Axel T; Adams, Paul D; Fromme, Petra; Fromme, Raimund; Levitt, Michael; Schröder, Gunnar F

    2012-06-06

    In X-ray crystallography, molecular replacement and subsequent refinement is challenging at low resolution. We compared refinement methods using synchrotron diffraction data of photosystem I at 7.4 Å resolution, starting from different initial models with increasing deviations from the known high-resolution structure. Standard refinement spoiled the initial models, moving them further away from the true structure and leading to high R(free)-values. In contrast, DEN refinement improved even the most distant starting model as judged by R(free), atomic root-mean-square differences to the true structure, significance of features not included in the initial model, and connectivity of electron density. The best protocol was DEN refinement with initial segmented rigid-body refinement. For the most distant initial model, the fraction of atoms within 2 Å of the true structure improved from 24% to 60%. We also found a significant correlation between R(free) values and the accuracy of the model, suggesting that R(free) is useful even at low resolution.
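
    Since the comparison above leans on R(free), it is worth recalling the standard definition. This is conventional crystallographic practice, not anything specific to the cited study:

```latex
% Conventional R factor over a set S of reflections:
\begin{equation*}
  R \;=\; \frac{\sum_{h \in S} \bigl|\, |F_{\mathrm{obs}}(h)| - k\,|F_{\mathrm{calc}}(h)| \,\bigr|}
               {\sum_{h \in S} |F_{\mathrm{obs}}(h)|}.
\end{equation*}
% R_work takes S to be the reflections used in refinement; R_free takes a
% small randomly chosen test set withheld from refinement, so it measures
% predictive quality rather than fit to the working data.
```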

  6. Landmark Image Retrieval by Jointing Feature Refinement and Multimodal Classifier Learning.

    PubMed

    Zhang, Xiaoming; Wang, Senzhang; Li, Zhoujun; Ma, Shuai

    2018-06-01

    Landmark retrieval is to return a set of images whose landmarks are similar to those of the query images. Existing studies on landmark retrieval focus on exploiting the geometries of landmarks for visual similarity matching. However, the visual content of social images is highly diverse within many landmarks, and some images share common patterns across different landmarks. On the other hand, it has been observed that social images usually contain multimodal content, i.e., visual content and text tags, and each landmark has unique characteristics in both its visual content and its text content. Therefore, approaches based on similarity matching may not be effective in this environment. In this paper, we investigate whether the geographical correlation among the visual content and the text content can be exploited for landmark retrieval. In particular, we propose an effective multimodal landmark classification paradigm that leverages the multimodal content of social images for landmark retrieval, integrating feature refinement and landmark classification with multimodal content in a joint model. Geo-tagged images are automatically labeled for classifier learning. Visual features are refined based on low-rank matrix recovery, and a multimodal classifier combined with group sparsity is learned from the automatically labeled images. Finally, candidate images are ranked by combining the classification result with a measure of semantic consistency between the visual content and the text content. Experiments on real-world datasets demonstrate the superiority of the proposed approach compared with existing methods.
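
    The feature-refinement component based on low-rank matrix recovery can be illustrated independently of the full system. Below is a generic singular-value-thresholding step, a common building block for low-rank recovery; it is a stand-in sketch on invented data, not the authors' formulation or their joint classifier.

```python
# Generic singular-value thresholding (SVT), a common building block for
# low-rank matrix recovery. Stand-in sketch, not the paper's exact method.
import numpy as np

def svt(X, tau):
    """Soft-threshold the singular values of X by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
clean = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 64))  # rank 5
noisy = clean + 0.3 * rng.standard_normal((100, 64))

# tau is chosen just above the largest singular value expected from the
# noise alone (roughly sigma * (sqrt(m) + sqrt(n)) ~ 5.4 here).
refined = svt(noisy, tau=6.0)
print(f"error before: {np.linalg.norm(noisy - clean):.1f}, "
      f"after: {np.linalg.norm(refined - clean):.1f}")
```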

  7. Refinement and Pattern Formation in Neural Circuits by the Interaction of Traveling Waves with Spike-Timing Dependent Plasticity

    PubMed Central

    Bennett, James E. M.; Bair, Wyeth

    2015-01-01

    Traveling waves in the developing brain are a prominent source of highly correlated spiking activity that may instruct the refinement of neural circuits. A candidate mechanism for mediating such refinement is spike-timing dependent plasticity (STDP), which translates correlated activity patterns into changes in synaptic strength. To assess the potential of these phenomena to build useful structure in developing neural circuits, we examined the interaction of wave activity with STDP rules in simple, biologically plausible models of spiking neurons. We derive an expression for the synaptic strength dynamics showing that, by mapping the time dependence of STDP into spatial interactions, traveling waves can build periodic synaptic connectivity patterns into feedforward circuits with a broad class of experimentally observed STDP rules. The spatial scale of the connectivity patterns increases with wave speed and STDP time constants. We verify these results with simulations and demonstrate their robustness to likely sources of noise. We show how this pattern formation ability, which is analogous to solutions of reaction-diffusion systems that have been widely applied to biological pattern formation, can be harnessed to instruct the refinement of postsynaptic receptive fields. Our results hold for rich, complex wave patterns in two dimensions and over several orders of magnitude in wave speeds and STDP time constants, and they provide predictions that can be tested under existing experimental paradigms. Our model generalizes across brain areas and STDP rules, allowing broad application to the ubiquitous occurrence of traveling waves and to wave-like activity patterns induced by moving stimuli. PMID:26308406
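
    The central "time into space" argument lends itself to a very small simulation. The sketch below shows how a wave sweeping across a line of presynaptic cells converts an exponential STDP kernel into a spatial weight profile around a postsynaptic cell; all parameters are illustrative assumptions, not values from the modeling study.

```python
# Sketch: a traveling wave maps the STDP time kernel into a spatial weight
# profile. One postsynaptic cell fires when the wave passes x_post; a
# presynaptic cell at position x fires when the wave passes x.
import numpy as np

def stdp(dt, a_plus=1.0, a_minus=1.0, tau_plus=20.0, tau_minus=20.0):
    """Weight change for dt = t_post - t_pre (ms): potentiation for
    pre-before-post spike pairs, depression otherwise."""
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

v = 0.1                            # wave speed in mm/ms (assumed)
x = np.linspace(0.0, 20.0, 201)    # presynaptic positions (mm)
x_post = 10.0                      # position of the postsynaptic cell

dw = stdp((x_post - x) / v)        # spike-time differences set by the wave

# Presynaptic cells the wave reaches before x_post are potentiated, cells
# reached after are depressed; the spatial scale of the profile grows with
# the wave speed and the STDP time constants, as the abstract states.
for xi, wi in zip(x[::50], dw[::50]):
    print(f"x = {xi:5.1f} mm  dw = {wi:+.3f}")
```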

  8. Refinement and Pattern Formation in Neural Circuits by the Interaction of Traveling Waves with Spike-Timing Dependent Plasticity.

    PubMed

    Bennett, James E M; Bair, Wyeth

    2015-08-01

    Traveling waves in the developing brain are a prominent source of highly correlated spiking activity that may instruct the refinement of neural circuits. A candidate mechanism for mediating such refinement is spike-timing dependent plasticity (STDP), which translates correlated activity patterns into changes in synaptic strength. To assess the potential of these phenomena to build useful structure in developing neural circuits, we examined the interaction of wave activity with STDP rules in simple, biologically plausible models of spiking neurons. We derive an expression for the synaptic strength dynamics showing that, by mapping the time dependence of STDP into spatial interactions, traveling waves can build periodic synaptic connectivity patterns into feedforward circuits with a broad class of experimentally observed STDP rules. The spatial scale of the connectivity patterns increases with wave speed and STDP time constants. We verify these results with simulations and demonstrate their robustness to likely sources of noise. We show how this pattern formation ability, which is analogous to solutions of reaction-diffusion systems that have been widely applied to biological pattern formation, can be harnessed to instruct the refinement of postsynaptic receptive fields. Our results hold for rich, complex wave patterns in two dimensions and over several orders of magnitude in wave speeds and STDP time constants, and they provide predictions that can be tested under existing experimental paradigms. Our model generalizes across brain areas and STDP rules, allowing broad application to the ubiquitous occurrence of traveling waves and to wave-like activity patterns induced by moving stimuli.

  9. An investigation into the vertical axis control power requirements for landing VTOL type aircraft onboard nonaviation ships in various sea states

    NASA Technical Reports Server (NTRS)

    Stevens, M. E.; Roskam, J.

    1985-01-01

    The problem of determining the vertical axis control requirements for landing a VTOL aircraft on a moving ship deck in various sea states is examined. Both a fixed-base piloted simulation and a nonpiloted simulation were used to determine the landing performance as influenced by thrust-to-weight ratio, vertical damping, and engine lags. The piloted simulation was run using a fixed-base simulator at Ames Research Center. Simplified versions of an existing AV-8A Harrier model and an existing head-up display format were used. The ship model used was that of a DD963 class destroyer. Simplified linear models of the pilot, aircraft, ship motion, and ship air-wake turbulence were developed for the nonpiloted simulation. A unique aspect of the nonpiloted simulation was the development of a model of the piloting strategy used for shipboard landing. This model was refined during the piloted simulation until it provided a reasonably good representation of observed pilot behavior.

  10. Use of Dynamic Models and Operational Architecture to Solve Complex Navy Challenges

    NASA Technical Reports Server (NTRS)

    Grande, Darby; Black, J. Todd; Freeman, Jared; Sorber, Tim; Serfaty, Daniel

    2010-01-01

    The United States Navy established 8 Maritime Operations Centers (MOC) to enhance the command and control of forces at the operational level of warfare. Each MOC is a headquarters manned by qualified joint operational-level staffs, and enabled by globally interoperable C4I systems. To assess and refine MOC staffing, equipment, and schedules, a dynamic software model was developed. The model leverages pre-existing operational process architecture, joint military task lists that define activities and their precedence relations, as well as Navy documents that specify manning and roles per activity. The software model serves as a "computational wind-tunnel" in which to test a MOC on a mission, and to refine its structure, staffing, processes, and schedules. More generally, the model supports resource allocation decisions concerning Doctrine, Organization, Training, Material, Leadership, Personnel and Facilities (DOTMLPF) at MOCs around the world. A rapid prototype effort efficiently produced this software in less than five months, using an integrated process team consisting of MOC military and civilian staff, modeling experts, and software developers. The work reported here was conducted for Commander, United States Fleet Forces Command in Norfolk, Virginia, code N5-0LW (Operational Level of War), which facilitates the identification, consolidation, and prioritization of MOC capabilities requirements, and the implementation and delivery of MOC solutions.

  11. The Apollo passive seismic experiment

    NASA Technical Reports Server (NTRS)

    Latham, G. V.; Dorman, H. J.; Horvath, P.; Ibrahim, A. K.; Koyama, J.; Nakamura, Y.

    1979-01-01

    The completed data set obtained from the 4-station Apollo seismic network includes signals from approximately 11,800 events of various types. Four data sets for use by other investigators, through the NSSDC, are in preparation. Some refinement of the lunar model based on seismic data can be expected, but its gross features remain as presented two years ago. The existence of a small, molten core remains dependent upon the analysis of signals from a single, far-side impact. Analysis of secondary arrivals from other sources may eventually resolve this issue, as may continued refinement of the magnetic field measurements. Evidence of considerable lateral heterogeneity within the moon continues to build. The mystery of the much lower meteoroid flux estimate derived from lunar seismic measurements, as compared with earth-based estimates, remains, although significant correlations between terrestrial and lunar observations are beginning to emerge.

  12. Computer Models Simulate Fine Particle Dispersion

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Through a NASA Seed Fund partnership with DEM Solutions Inc., of Lebanon, New Hampshire, scientists at Kennedy Space Center refined existing software to study the electrostatic phenomena of granular and bulk materials as they apply to planetary surfaces. The software, EDEM, allows users to import particles and obtain accurate representations of their shapes for modeling purposes, such as simulating bulk solids behavior, and was enhanced to be able to more accurately model fine, abrasive, cohesive particles. These new EDEM capabilities can be applied in many industries unrelated to space exploration and have been adopted by several prominent U.S. companies, including John Deere, Pfizer, and Procter & Gamble.

  13. Strengthening and Improving Yield Asymmetry of Magnesium Alloys by Second Phase Particle Refinement Under the Guidance of Integrated Computational Materials Engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dongsheng; Lavender, Curt

    2015-05-08

    Improving yield strength and asymmetry is critical to expand applications of magnesium alloys in industry for higher fuel efficiency and lower CO2 production. Grain refinement is an efficient method for strengthening low symmetry magnesium alloys, achievable by precipitate refinement. This study provides guidance on how precipitate engineering will improve mechanical properties through grain refinement. Precipitate refinement for improving yield strengths and asymmetry is simulated quantitatively by coupling a stochastic second phase grain refinement model and a modified polycrystalline crystal viscoplasticity φ-model. Using the stochastic second phase grain refinement model, grain size is quantitatively determined from the precipitate size and volume fraction. Yield strengths, yield asymmetry, and deformation behavior are calculated from the modified φ-model. If the precipitate shape and size remain constant, grain size decreases with increasing precipitate volume fraction. If the precipitate volume fraction is kept constant, grain size decreases with decreasing precipitate size during precipitate refinement. Yield strengths increase and asymmetry approaches one with decreasing grain size, driven by increasing precipitate volume fraction or decreasing precipitate size.
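
    The link between grain size and yield strength invoked throughout this abstract is conventionally captured by the Hall-Petch relation (standard background, not quoted from the study):

        \sigma_y = \sigma_0 + k_y \, d^{-1/2},

    where \sigma_y is the yield strength, \sigma_0 the friction stress, k_y the Hall-Petch coefficient, and d the grain size; decreasing d through precipitate refinement therefore raises \sigma_y.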

  14. An adaptive mesh refinement-multiphase lattice Boltzmann flux solver for simulation of complex binary fluid flows

    NASA Astrophysics Data System (ADS)

    Yuan, H. Z.; Wang, Y.; Shu, C.

    2017-12-01

    This paper presents an adaptive mesh refinement-multiphase lattice Boltzmann flux solver (AMR-MLBFS) for effective simulation of complex binary fluid flows at large density ratios. In this method, an AMR algorithm is proposed by introducing a simple indicator on the root block for grid refinement and two possible statuses for each block. Unlike available block-structured AMR methods, which refine their mesh by spawning or removing four child blocks simultaneously, the present method is able to refine its mesh locally by spawning or removing one to four child blocks independently when the refinement indicator is triggered. As a result, the AMR mesh used in this work can be more focused on the flow region near the phase interface and its size is further reduced. In each block of mesh, the recently proposed MLBFS is applied for the solution of the flow field and the level-set method is used for capturing the fluid interface. As compared with existing AMR-lattice Boltzmann models, the present method avoids both spatial and temporal interpolations of density distribution functions so that converged solutions on different AMR meshes and uniform grids can be obtained. The proposed method has been successfully validated by simulating a static bubble immersed in another fluid, a falling droplet, instabilities of two-layered fluids, a bubble rising in a box, and a droplet splashing on a thin film with large density ratios and high Reynolds numbers. Good agreement with the theoretical solution, the uniform-grid result, and/or the published data has been achieved. Numerical results also show its effectiveness in saving computational time and virtual memory as compared with computations on uniform meshes.
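
    To make the selective-spawning idea concrete, here is a minimal Python sketch of a quadtree block that refines one to four quadrants independently when an interface indicator fires. The level-set test and all names are illustrative assumptions, not the AMR-MLBFS implementation:

        class Block:
            """Quadtree block in a block-structured AMR mesh (illustrative)."""
            def __init__(self, x0, y0, size, level):
                self.x0, self.y0, self.size, self.level = x0, y0, size, level
                self.children = {}               # quadrant index -> Block

            def quadrants(self):
                h = self.size / 2
                return {0: (self.x0, self.y0), 1: (self.x0 + h, self.y0),
                        2: (self.x0, self.y0 + h), 3: (self.x0 + h, self.y0 + h)}

        def refine(block, indicator):
            """Spawn only those child blocks whose quadrant triggers the
            indicator, rather than always spawning all four at once."""
            h = block.size / 2
            for q, (cx, cy) in block.quadrants().items():
                if q not in block.children and indicator(cx, cy, h):
                    block.children[q] = Block(cx, cy, h, block.level + 1)

        # Indicator: the quadrant straddles a circular level-set interface.
        phi = lambda x, y: (x - 0.5) ** 2 + (y - 0.5) ** 2 - 0.1
        def straddles_interface(cx, cy, h):
            corners = [phi(cx, cy), phi(cx + h, cy),
                       phi(cx, cy + h), phi(cx + h, cy + h)]
            return min(corners) < 0.0 < max(corners)

        root = Block(0.0, 0.0, 1.0, 0)
        refine(root, straddles_interface)        # spawns 1-4 children as needed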

  15. Iterative model building, structure refinement and density modification with the PHENIX AutoBuild wizard.

    PubMed

    Terwilliger, Thomas C; Grosse-Kunstleve, Ralf W; Afonine, Pavel V; Moriarty, Nigel W; Zwart, Peter H; Hung, Li Wei; Read, Randy J; Adams, Paul D

    2008-01-01

    The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution.
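
    For reference, the crystallographic R factor quoted here is

        R = \frac{\sum_{hkl} \bigl|\, |F_{\mathrm{obs}}| - |F_{\mathrm{calc}}| \,\bigr|}{\sum_{hkl} |F_{\mathrm{obs}}|},

    where F_obs and F_calc are the observed and model-calculated structure-factor amplitudes; the free R factor is the same quantity evaluated over a small test set of reflections excluded from refinement.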

  16. 40 CFR 421.75 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (g) Subpart G—Hard Lead Refining Slag Granulation. PSES Pollutant or pollutant property Maximum for any 1 day Maximum for monthly average mg/kkg (pounds per billion pounds) of hard lead produced Lead .000 .000 Zinc .000 .000 (h) Subpart G—Hard Lead Refining Wet Air Pollution Control. PSES Pollutant or...

  17. 40 CFR 421.75 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (g) Subpart G—Hard Lead Refining Slag Granulation. PSES Pollutant or pollutant property Maximum for any 1 day Maximum for monthly average mg/kkg (pounds per billion pounds) of hard lead produced Lead .000 .000 Zinc .000 .000 (h) Subpart G—Hard Lead Refining Wet Air Pollution Control. PSES Pollutant or...

  18. 76 FR 2263 - Approval and Promulgation of Air Quality Implementation Plans; Minnesota; Gopher Resource, LLC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-13

    ... Gopher Smelting and Refining Company, and the change to Gopher Resource, LLC will be discussed in Section... removed contingency measures from the maintenance plan. On November 19, 2007, MPCA formally withdrew the... Conditions The existing Order refers to the facility as ``Gopher Smelting and Refining Company,'' whereas the...

  19. Investigation on temporal evolution of the grain refinement in copper under high strain rate loading via in-situ synchrotron measurement and predictive modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Pooja Nitin; Shin, Yung C.; Sun, Tao

    Synchrotron X-rays are integrated with a modified Kolsky tension bar to conduct in situ tracking of the grain refinement mechanism operating during the dynamic deformation of metals. Copper with an initial average grain size of 36 μm is refined to 6.3 μm when loaded at a constant high strain rate of 1200 s^-1. The synchrotron measurements revealed the temporal evolution of the grain refinement mechanism in terms of the initiation and rate of refinement throughout the loading test. A multiscale coupled probabilistic cellular automata based recrystallization model has been developed to predict the microstructural evolution occurring during dynamic deformation processes. The model accurately predicts the initiation of the grain refinement mechanism with a predicted final average grain size of 2.4 μm. As a result, the model also accurately predicts the temporal evolution in terms of the initiation and extent of refinement when compared with the experimental results.

  20. Investigation on temporal evolution of the grain refinement in copper under high strain rate loading via in-situ synchrotron measurement and predictive modeling

    DOE PAGES

    Shah, Pooja Nitin; Shin, Yung C.; Sun, Tao

    2017-10-03

    Synchrotron X-rays are integrated with a modified Kolsky tension bar to conduct in situ tracking of the grain refinement mechanism operating during the dynamic deformation of metals. Copper with an initial average grain size of 36 μm is refined to 6.3 μm when loaded at a constant high strain rate of 1200 s^-1. The synchrotron measurements revealed the temporal evolution of the grain refinement mechanism in terms of the initiation and rate of refinement throughout the loading test. A multiscale coupled probabilistic cellular automata based recrystallization model has been developed to predict the microstructural evolution occurring during dynamic deformation processes. The model accurately predicts the initiation of the grain refinement mechanism with a predicted final average grain size of 2.4 μm. As a result, the model also accurately predicts the temporal evolution in terms of the initiation and extent of refinement when compared with the experimental results.

  1. A fuzzy case based reasoning tool for model based approach to rocket engine health monitoring

    NASA Technical Reports Server (NTRS)

    Krovvidy, Srinivas; Nolan, Adam; Hu, Yong-Lin; Wee, William G.

    1992-01-01

    In this system we develop a fuzzy case based reasoner that can build a case representation for several past anomalies detected, and we develop case retrieval methods that use fuzzy sets to index a relevant case when a new problem (case) is presented. The choice of fuzzy sets is justified by the uncertainty in the data. The new problem can be solved using knowledge of the model along with the old cases. This system can then be used to generalize the knowledge from previous cases and use this generalization to refine the existing model definition. This in turn can help to detect failures using the model based algorithms.
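
    As a rough sketch of fuzzy case retrieval of the kind described, consider matching a new anomaly against stored cases by comparing fuzzy memberships per sensor feature. Everything here (feature names, fuzzy set shapes, the similarity measure) is a hypothetical illustration, not the paper's method:

        import numpy as np

        def tri(x, a, b, c):
            """Triangular fuzzy membership on [a, c], peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def similarity(query, case, fuzzy_sets):
            """Mean agreement of fuzzy memberships across sensor features."""
            agree = [1.0 - abs(tri(query[f], *s) - tri(case[f], *s))
                     for f, s in fuzzy_sets.items()]
            return float(np.mean(agree))

        # Hypothetical normalized engine-sensor features; each fuzzy set
        # encodes the "nominal" range for that sensor.
        fuzzy_sets = {"chamber_pressure": (0.8, 1.0, 1.2),
                      "turbine_temp": (0.7, 1.0, 1.3)}
        query = {"chamber_pressure": 1.15, "turbine_temp": 1.05}
        cases = [{"chamber_pressure": 1.10, "turbine_temp": 1.00},
                 {"chamber_pressure": 0.85, "turbine_temp": 1.25}]
        best = max(cases, key=lambda c: similarity(query, c, fuzzy_sets))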

  2. Ethanol toxicokinetics resulting from inhalation exposure in human volunteers and toxicokinetic modeling.

    PubMed

    Dumas-Campagna, Josée; Tardif, Robert; Charest-Tardif, Ginette; Haddad, Sami

    2014-02-01

    Uncertainty exists regarding the validity of a previously developed physiologically-based pharmacokinetic (PBPK) model for inhaled ethanol in humans to predict the blood levels of ethanol (BLE) at low-level exposures (<1000 ppm). Thus, the objective of this study is to document the BLE resulting from low-level exposures in order to refine/validate this PBPK model. Human volunteers were exposed to ethanol vapors for 4 h at 5 different concentrations (125-1000 ppm), at rest, in an inhalation chamber. Blood and exhaled air were sampled. Also, the impact of light exercise (50 W) on the BLE was investigated. There is a linear relationship between the ethanol concentrations in inhaled air and (i) BLE (women: r² = 0.98/men: r² = 0.99), as well as (ii) ethanol concentrations in the exhaled air at the end of the exposure period (men: r² = 0.99/women: r² = 0.99). Furthermore, the exercise resulted in a marked and significant increase of BLE (2-3 fold). Overall, the original model predictions overestimated the BLE for all low exposures performed in this study. To properly simulate the toxicokinetic data, the model was refined by adding a description of an extra-hepatic biotransformation of high affinity and low capacity in the richly perfused tissues compartment. This is based on the observation that total clearance observed at low exposure levels was much greater than liver blood flow. The results of this study will facilitate the refinement of the risk assessment associated with chronic inhalation of low levels of ethanol in the general population and especially among workers.
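
    The refinement described, a high-affinity, low-capacity extra-hepatic pathway, corresponds to a saturable Michaelis-Menten clearance term. A schematic Python fragment follows; the parameter values are purely illustrative, not the study's fitted constants:

        def mm_rate(c, vmax=1.0, km=0.05):
            """Michaelis-Menten metabolism rate at tissue concentration c:
            near-linear (high affinity) for c << km, saturating (low
            capacity) for c >> km."""
            return vmax * c / (km + c)

        # Forward-Euler balance for one well-stirred compartment with a
        # constant uptake inflow; all values are illustrative.
        dt, c, inflow, volume = 0.01, 0.0, 0.5, 10.0
        for _ in range(10_000):
            c += dt * (inflow - mm_rate(c)) / volume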

  3. Global classical solvability and stabilization in a two-dimensional chemotaxis-Navier-Stokes system modeling coral fertilization

    NASA Astrophysics Data System (ADS)

    Espejo, Elio; Winkler, Michael

    2018-04-01

    The interplay of chemotaxis, convection and reaction terms is studied in the particular framework of a refined model for coral broadcast spawning, consisting of three equations describing the population densities of unfertilized sperms and eggs and the concentration of a chemical released by the latter, coupled to the incompressible Navier-Stokes equations. Under mild assumptions on the initial data, global existence of classical solutions to an associated initial-boundary value problem in bounded planar domains is established. Moreover, all these solutions are shown to approach a spatially homogeneous equilibrium in the large time limit.

  4. Insights into channel dysfunction from modelling and molecular dynamics simulations.

    PubMed

    Musgaard, Maria; Paramo, Teresa; Domicevica, Laura; Andersen, Ole Juul; Biggin, Philip C

    2018-04-01

    Developments in structural biology mean that the number of different ion channel structures has increased significantly in recent years. Structures of ion channels enable us to rationalize how mutations may lead to channelopathies. However, determining the structures of ion channels is still not trivial, especially as they necessarily exist in many distinct functional states. Therefore, the use of computational modelling can provide complementary information that can refine working hypotheses of both wild type and mutant ion channels. The simplest but still powerful tool is homology modelling. Many structures are available now that can provide suitable templates for many different types of ion channels, allowing a full three-dimensional interpretation of mutational effects. These structural models, and indeed the structures themselves obtained by X-ray crystallography, and more recently cryo-electron microscopy, can be subjected to molecular dynamics simulations, either as a tool to help explore the conformational dynamics in detail or simply as a means to refine the models further. Here we review how these approaches have been used to improve our understanding of how diseases might be linked to specific mutations in ion channel proteins. This article is part of the Special Issue entitled 'Channelopathies.' Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. The Structure of Psychopathology: Toward an Expanded Quantitative Empirical Model

    PubMed Central

    Wright, Aidan G.C.; Krueger, Robert F.; Hobbs, Megan J.; Markon, Kristian E.; Eaton, Nicholas R.; Slade, Tim

    2013-01-01

    There has been substantial recent interest in the development of a quantitative, empirically based model of psychopathology. However, the majority of pertinent research has focused on analyses of diagnoses, as described in current official nosologies. This is a significant limitation because existing diagnostic categories are often heterogeneous. In the current research, we aimed to redress this limitation of the existing literature, and to directly compare the fit of categorical, continuous, and hybrid (i.e., combined categorical and continuous) models of syndromes derived from indicators more fine-grained than diagnoses. We analyzed data from a large representative epidemiologic sample (the 2007 Australian National Survey of Mental Health and Wellbeing; N = 8,841). Continuous models provided the best fit for each syndrome we observed (Distress, Obsessive Compulsivity, Fear, Alcohol Problems, Drug Problems, and Psychotic Experiences). In addition, the best fitting higher-order model of these syndromes grouped them into three broad spectra: Internalizing, Externalizing, and Psychotic Experiences. We discuss these results in terms of future efforts to refine the emerging empirically based, dimensional-spectrum model of psychopathology, and to use the model to frame psychopathology research more broadly. PMID:23067258

  6. A Novel Approach to Veterinary Spatial Epidemiology: Dasymetric Refinement of the Swiss Dog Tumor Registry Data

    NASA Astrophysics Data System (ADS)

    Boo, G.; Fabrikant, S. I.; Leyk, S.

    2015-08-01

    In spatial epidemiology, disease incidence and demographic data are commonly summarized within larger regions such as administrative units because of privacy concerns. As a consequence, analyses using these aggregated data are subject to the Modifiable Areal Unit Problem (MAUP) as the geographical manifestation of the ecological fallacy. In this study, we create small-area disease estimates through dasymetric refinement, and investigate the effects on predictive epidemiological models. We perform a binary dasymetric refinement of municipality-aggregated dog tumor incidence counts in Switzerland for the year 2008 using residential land as a limiting ancillary variable. This refinement is expected to improve the quality of spatial data originally aggregated within arbitrary administrative units by deconstructing them into discontinuous subregions that better reflect the underlying population distribution. To shed light on the effects of this refinement, we compare a predictive statistical model that uses unrefined administrative units with one that uses dasymetrically refined spatial units. Model diagnostics and spatial distributions of model residuals are assessed to evaluate the model performances in different regions. In particular, we explore changes in the spatial autocorrelation of the model residuals due to spatial refinement of the enumeration units in a selected mountainous region, where the rugged topography induces great shifts of the analytical units, i.e., residential land. Such spatial data quality refinement results in a more realistic estimation of the population distribution within administrative units, and thus in a more accurate modeling of dog tumor incidence patterns. Our results emphasize the benefits of implementing a dasymetric modeling framework in veterinary spatial epidemiology.
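
    The core of binary dasymetric refinement is redistributing a count aggregated over an administrative unit onto only those cells flagged by the ancillary layer. A minimal Python sketch, with toy arrays standing in for real geodata layers:

        import numpy as np

        unit_id = np.array([[1, 1, 2], [1, 2, 2], [1, 2, 2]])     # admin unit per cell
        residential = np.array([[1, 0, 1], [1, 0, 1], [0, 0, 1]]) # ancillary mask
        counts = {1: 40, 2: 90}                                   # incidence per unit

        density = np.zeros_like(unit_id, dtype=float)
        for uid, total in counts.items():
            mask = (unit_id == uid) & (residential == 1)
            if mask.any():
                # uniform redistribution within residential cells only
                density[mask] = total / mask.sum()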

  7. The Use of a Block Diagram Simulation Language for Rapid Model Prototyping

    NASA Technical Reports Server (NTRS)

    Whitlow, Johnathan E.; Engrand, Peter

    1996-01-01

    The research performed this summer was a continuation of work performed during the 1995 NASA/ASEE Summer Fellowship. The focus of the work was to expand previously generated predictive models for liquid oxygen (LOX) loading into the external fuel tank of the shuttle. The models, which were developed using a block diagram simulation language known as VisSim, were evaluated on numerous shuttle flights and found to perform well in most cases. Once the models were refined and validated, the predictive methods were integrated into the existing Rockwell software propulsion advisory tool (PAT). Although time was not sufficient to completely integrate the models developed into PAT, the ability to predict flows and pressures in the orbiter section and graphically display the results was accomplished.

  8. Chemical and physical aspects of refining coal liquids

    NASA Astrophysics Data System (ADS)

    Shah, Y. T.; Stiegel, G. J.; Krishnamurthy, S.

    1981-02-01

    Increasing costs and declining reserves of petroleum are forcing oil importing countries to develop alternate energy sources. The direct liquefaction of coal is currently being investigated as a viable means of producing substitute liquid fuels. The coal liquids derived from such processes are typically high in nitrogen, oxygen and sulfur besides having a high aromatic and metals content. It is therefore envisaged that modifications to existing petroleum refining technology will be necessary in order to economically upgrade coal liquids. In this review, compositional data for various coal liquids are presented and compared with those for petroleum fuels. Studies reported on the stability of coal liquids are discussed. The feasibility of processing blends of coal liquids with petroleum feedstocks in existing refineries is evaluated. The chemistry of hydroprocessing is discussed through kinetic and mechanistic studies using compounds which are commonly detected in coal liquids. The pros and cons of using conventional petroleum refining catalysts for upgrading coal liquids are discussed.

  9. Refinement of the probability density function model for preferential concentration of aerosol particles in isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Zaichik, Leonid I.; Alipchenkov, Vladimir M.

    2007-11-01

    The purposes of the paper are threefold: (i) to refine the statistical model of preferential particle concentration in isotropic turbulence that was previously proposed by Zaichik and Alipchenkov [Phys. Fluids 15, 1776 (2003)], (ii) to investigate the effect of clustering of low-inertia particles using the refined model, and (iii) to advance a simple model for predicting the collision rate of aerosol particles. The model developed is based on a kinetic equation for the two-point probability density function of the relative velocity distribution of particle pairs. Improvements in predicting the preferential concentration of low-inertia particles are attained by refining the description of the turbulent velocity field of the carrier fluid to include a difference between the time scales of the strain and rotation rate correlations. The refined model results in better agreement with direct numerical simulations for aerosol particles.

  10. Microalgal Metabolic Network Model Refinement through High-Throughput Functional Metabolic Profiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaiboonchoe, Amphun; Dohai, Bushra Saeed; Cai, Hong

    2014-12-10

    Metabolic modeling provides the means to define metabolic processes at a systems level; however, genome-scale metabolic models often remain incomplete in their description of metabolic networks and may include reactions that are experimentally unverified. This shortcoming is exacerbated in reconstructed models of newly isolated algal species, as there may be little to no biochemical evidence available for the metabolism of such isolates. The phenotype microarray (PM) technology (Biolog, Hayward, CA, USA) provides an efficient, high-throughput method to functionally define cellular metabolic activities in response to a large array of entry metabolites. The platform can experimentally verify many of the unverified reactions in a network model as well as identify missing or new reactions in the reconstructed metabolic model. The PM technology has been used for metabolic phenotyping of non-photosynthetic bacteria and fungi, but it has not been reported for the phenotyping of microalgae. Here, we introduce the use of PM assays in a systematic way to the study of microalgae, applying it specifically to the green microalgal model species Chlamydomonas reinhardtii. The results obtained in this study validate a number of existing annotated metabolic reactions and identify a number of novel and unexpected metabolites. The obtained information was used to expand and refine the existing COBRA-based C. reinhardtii metabolic network model iRC1080. Over 254 reactions were added to the network, and the effects of these additions on flux distribution within the network are described. The novel reactions include the support of metabolism by a number of d-amino acids, l-dipeptides, and l-tripeptides as nitrogen sources, as well as support of cellular respiration by cysteamine-S-phosphate as a phosphorus source. The protocol developed here can be used as a foundation to functionally profile other microalgae such as known microalgae mutants and novel isolates.
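
    Network expansion of this kind can be scripted with the COBRApy toolkit. The sketch below shows the general pattern of adding a PM-supported uptake reaction; the reaction and metabolite identifiers are hypothetical placeholders, not actual iRC1080 entries:

        import cobra

        # Toy model standing in for a genome-scale reconstruction.
        model = cobra.Model("toy")
        met = cobra.Metabolite("ala_gly_e", name="L-alanyl-glycine",
                               compartment="e")
        rxn = cobra.Reaction("DPEPT_uptake")       # hypothetical identifier
        rxn.name = "L-dipeptide uptake (PM-supported)"
        rxn.lower_bound, rxn.upper_bound = 0.0, 1000.0
        rxn.add_metabolites({met: -1.0})
        model.add_reactions([rxn])
        # With a full model such as iRC1080, one would re-run flux balance
        # analysis (model.optimize()) to see how added reactions
        # redistribute flux through the network.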

  11. Microalgal Metabolic Network Model Refinement through High-Throughput Functional Metabolic Profiling

    PubMed Central

    Chaiboonchoe, Amphun; Dohai, Bushra Saeed; Cai, Hong; Nelson, David R.; Jijakli, Kenan; Salehi-Ashtiani, Kourosh

    2014-01-01

    Metabolic modeling provides the means to define metabolic processes at a systems level; however, genome-scale metabolic models often remain incomplete in their description of metabolic networks and may include reactions that are experimentally unverified. This shortcoming is exacerbated in reconstructed models of newly isolated algal species, as there may be little to no biochemical evidence available for the metabolism of such isolates. The phenotype microarray (PM) technology (Biolog, Hayward, CA, USA) provides an efficient, high-throughput method to functionally define cellular metabolic activities in response to a large array of entry metabolites. The platform can experimentally verify many of the unverified reactions in a network model as well as identify missing or new reactions in the reconstructed metabolic model. The PM technology has been used for metabolic phenotyping of non-photosynthetic bacteria and fungi, but it has not been reported for the phenotyping of microalgae. Here, we introduce the use of PM assays in a systematic way to the study of microalgae, applying it specifically to the green microalgal model species Chlamydomonas reinhardtii. The results obtained in this study validate a number of existing annotated metabolic reactions and identify a number of novel and unexpected metabolites. The obtained information was used to expand and refine the existing COBRA-based C. reinhardtii metabolic network model iRC1080. Over 254 reactions were added to the network, and the effects of these additions on flux distribution within the network are described. The novel reactions include the support of metabolism by a number of d-amino acids, l-dipeptides, and l-tripeptides as nitrogen sources, as well as support of cellular respiration by cysteamine-S-phosphate as a phosphorus source. The protocol developed here can be used as a foundation to functionally profile other microalgae such as known microalgae mutants and novel isolates. PMID:25540776

  12. Microalgal Metabolic Network Model Refinement through High-Throughput Functional Metabolic Profiling.

    PubMed

    Chaiboonchoe, Amphun; Dohai, Bushra Saeed; Cai, Hong; Nelson, David R; Jijakli, Kenan; Salehi-Ashtiani, Kourosh

    2014-01-01

    Metabolic modeling provides the means to define metabolic processes at a systems level; however, genome-scale metabolic models often remain incomplete in their description of metabolic networks and may include reactions that are experimentally unverified. This shortcoming is exacerbated in reconstructed models of newly isolated algal species, as there may be little to no biochemical evidence available for the metabolism of such isolates. The phenotype microarray (PM) technology (Biolog, Hayward, CA, USA) provides an efficient, high-throughput method to functionally define cellular metabolic activities in response to a large array of entry metabolites. The platform can experimentally verify many of the unverified reactions in a network model as well as identify missing or new reactions in the reconstructed metabolic model. The PM technology has been used for metabolic phenotyping of non-photosynthetic bacteria and fungi, but it has not been reported for the phenotyping of microalgae. Here, we introduce the use of PM assays in a systematic way to the study of microalgae, applying it specifically to the green microalgal model species Chlamydomonas reinhardtii. The results obtained in this study validate a number of existing annotated metabolic reactions and identify a number of novel and unexpected metabolites. The obtained information was used to expand and refine the existing COBRA-based C. reinhardtii metabolic network model iRC1080. Over 254 reactions were added to the network, and the effects of these additions on flux distribution within the network are described. The novel reactions include the support of metabolism by a number of d-amino acids, l-dipeptides, and l-tripeptides as nitrogen sources, as well as support of cellular respiration by cysteamine-S-phosphate as a phosphorus source. The protocol developed here can be used as a foundation to functionally profile other microalgae such as known microalgae mutants and novel isolates.

  13. Integration of ALS and TLS for calibration and validation of LAI profiles from large footprint lidar

    NASA Astrophysics Data System (ADS)

    Armston, J.; Tang, H.; Hancock, S.; Hofton, M. A.; Dubayah, R.; Duncanson, L.; Fatoyinbo, T. E.; Blair, J. B.; Disney, M.

    2016-12-01

    The Global Ecosystem Dynamics Investigation (GEDI) is designed to provide measurements of forest vertical structure and above-ground biomass density (AGBD) over tropical and temperate regions. The GEDI is a multi-beam waveform lidar that will acquire transects of forest canopy vertical profiles in conditions of up to 99% canopy cover. These are used to produce a number of canopy height and profile metrics to model habitat suitability and AGBD. These metrics include vertical leaf area index (LAI) profiles, which require some pre-launch refinement of large-footprint waveform processing methods for separating canopy and ground returns and estimating their reflectance. Previous research developments in modelling canopy gap probability to derive canopy and ground reflectance from waveforms have primarily used data from small-footprint instruments; however, development of a generalized spatial model with uncertainty will be useful for interpreting and modelling waveforms from large-footprint instruments such as the NASA Land Vegetation and Ice Sensor (LVIS), with a view to implementation for GEDI. Here we present an analysis of waveform lidar data from LVIS acquired in Gabon in February 2016 to support the NASA/ESA AfriSAR campaign. AfriSAR presents a unique opportunity to test refined methods for retrieval of LAI profiles in high above-ground biomass rainforests (up to 600 Mg/ha) with dense canopies (>90% cover), where the greatest uncertainty exists. Airborne (ALS) and Terrestrial Laser Scanning (TLS) data were also collected, enabling quantification of algorithm performance in plots of dense canopy cover. Refinement of canopy gap probability and LAI profile modelling from large-footprint lidar was based on solving for canopy and ground reflectance parameters spatially by penalized least-squares. The sensitivities of retrieved cover and LAI profiles to variation in canopy and ground reflectance showed improvement compared to assuming a constant ratio. We evaluated the use of spatially proximate simple waveforms to interpret more complex waveforms with poor separation of canopy and ground returns. This work has direct implications for GEDI algorithm refinement.

  14. Iterative model building, structure refinement and density modification with the PHENIX AutoBuild wizard

    PubMed Central

    Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf W.; Afonine, Pavel V.; Moriarty, Nigel W.; Zwart, Peter H.; Hung, Li-Wei; Read, Randy J.; Adams, Paul D.

    2008-01-01

    The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution. PMID:18094468

  15. Towards feasible and effective predictive wavefront control for adaptive optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poyneer, L A; Veran, J

    We have recently proposed Predictive Fourier Control, a computationally efficient and adaptive algorithm for predictive wavefront control that assumes frozen-flow turbulence. We summarize refinements to the state-space model that allow operation with arbitrary computational delays and reduce the computational cost of solving for new control. We present initial atmospheric characterization using observations with Gemini North's Altair AO system. These observations, taken over 1 year, indicate that frozen flow exists, contains substantial power, and is strongly detected 94% of the time.

  16. Development and initial validation of the Pharmacist Frequency of Interprofessional Collaboration Instrument (FICI-P) in primary care.

    PubMed

    Van, Connie; Costa, Daniel; Mitchell, Bernadette; Abbott, Penny; Krass, Ines

    2012-01-01

    Existing validated measures of pharmacist-physician collaboration focus on measuring attitudes toward collaboration and do not measure frequency of collaborative interactions. To develop and validate an instrument to measure the frequency of collaboration between pharmacists and general practitioners (GPs) from the pharmacist's perspective, an 11-item Pharmacist Frequency of Interprofessional Collaboration Instrument (FICI-P) was developed and administered to 586 pharmacists in 8 divisions of general practice in New South Wales, Australia. The initial items were informed by a review of the literature in addition to interviews of pharmacists and GPs. Items were subjected to principal component and Rasch analyses to determine each item's and the overall measure's psychometric properties and any needed refinements. Two hundred and twenty-four (38%) of the pharmacist surveys were completed and returned. Principal component analysis suggested removal of 1 item for a final 1-factor solution. The refined 10-item FICI-P demonstrated internal consistency reliability at Cronbach's alpha = 0.90. After collapsing the original 5-point response scale to a 4-point response scale, the refined FICI-P demonstrated fit to the Rasch model. Criterion validity of the FICI-P was supported by the correlation of FICI-P scores with scores on a previously validated Physician-Pharmacist Collaboration Instrument. Validity was also supported by predicted differences in FICI-P scores between subgroups of respondents stratified on age, colocation with GPs, and interactions during the intern-training period. The refined 10-item FICI-P was shown to have good internal consistency, criterion validity, and fit to the Rasch model. The creation of such a tool may allow for the measurement of impact in the evaluation of interventions designed to improve interprofessional collaboration between GPs and pharmacists. Copyright © 2012 Elsevier Inc. All rights reserved.
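
    The internal-consistency statistic reported here is Cronbach's alpha, which for k items is

        \alpha = \frac{k}{k - 1} \left( 1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_T^2} \right),

    where \sigma_i^2 is the variance of item i and \sigma_T^2 the variance of the total score; for the refined FICI-P, k = 10.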

  17. Computer simulation of refining process of a high consistency disc refiner based on CFD

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Yang, Jianwei; Wang, Jiahui

    2017-08-01

    In order to reduce refining energy consumption, ANSYS CFX was used to simulate the refining process of a high consistency disc refiner. First, the pulp in the disc refiner was assumed to be a uniform Newtonian fluid in a turbulent state, described with the k-ɛ flow model; the 3-D model of the disc refiner was then meshed and the boundary conditions were set; the refining process was then simulated and analyzed; finally, the viscosity of the pulp was measured. The results show that the CFD method can be used to analyze the pressure and torque on the disc plate, and thus to calculate the refining power, and that streamlines and velocity vectors can also be observed. CFD simulation can optimize the parameters of the bar and groove, which is of great significance for reducing experimental cost and cycle time.
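
    The refining power follows directly from the simulated torque on the rotating disc via the standard relation

        P = T\,\omega = 2\pi n T,

    where T is the torque on the disc plate, \omega the angular velocity, and n the rotational speed in revolutions per second.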

  18. Template-based structure modeling of protein-protein interactions

    PubMed Central

    Szilagyi, Andras; Zhang, Yang

    2014-01-01

    The structure of protein-protein complexes can be constructed by using the known structure of other protein complexes as a template. The complex structure templates are generally detected either by homology-based sequence alignments or, given the structure of monomer components, by structure-based comparisons. Critical improvements have been made in recent years by utilizing interface recognition and by recombining monomer and complex template libraries. Encouraging progress has also been witnessed in genome-wide applications of template-based modeling, with modeling accuracy comparable to high-throughput experimental data. Nevertheless, bottlenecks exist due to the incompleteness of the protein-protein complex structure library and the lack of methods for distant homologous template identification and full-length complex structure refinement. PMID:24721449

  19. Refinements in the Los Alamos model of the prompt fission neutron spectrum

    DOE PAGES

    Madland, D. G.; Kahler, A. C.

    2017-01-01

    This paper presents a number of refinements to the original Los Alamos model of the prompt fission neutron spectrum and average prompt neutron multiplicity as derived in 1982. The four refinements are due to new measurements of the spectrum and related fission observables, many of which were not available in 1982. Here, they are also due to a number of detailed studies and comparisons of the model with previous and present experimental results including not only the differential spectrum, but also integral cross sections measured in the field of the differential spectrum. The four refinements are (a) separate neutron contributions in binary fission, (b) departure from statistical equilibrium at scission, (c) fission-fragment nuclear level-density models, and (d) center-of-mass anisotropy. With these refinements, for the first time, good agreement has been obtained for both differential and integral measurements using the same Los Alamos model spectrum.

  20. Structure refinement of membrane proteins via molecular dynamics simulations.

    PubMed

    Dutagaci, Bercem; Heo, Lim; Feig, Michael

    2018-07-01

    A refinement protocol based on physics-based techniques established for water-soluble proteins is tested for membrane protein structures. Initial structures were generated by homology modeling and sampled via molecular dynamics simulations in explicit lipid bilayer and aqueous solvent systems. Snapshots from the simulations were selected based on scoring with either knowledge-based or implicit membrane-based scoring functions and averaged to obtain refined models. The protocol resulted in consistent and significant refinement of the membrane protein structures, similar to the performance of refinement methods for soluble proteins. Refinement success was similar between sampling in the presence of lipid bilayers and in aqueous solvent, but the presence of lipid bilayers may benefit the improvement of lipid-facing residues. Scoring with knowledge-based functions (DFIRE and RWplus) was found to be as good as scoring using implicit membrane-based scoring functions, suggesting that differences in internal packing are more important than orientations relative to the membrane during the refinement of membrane protein homology models. © 2018 Wiley Periodicals, Inc.

  1. A Clinical Nurse Leader (CNL) practice development model to support integration of the CNL role into microsystem care delivery.

    PubMed

    Kaack, Lorraine; Bender, Miriam; Finch, Michael; Borns, Linda; Grasham, Katherine; Avolio, Alice; Clausen, Shawna; Terese, Nadine A; Johnstone, Diane; Williams, Marjory

    The Veterans Health Administration (VHA) Office of Nursing Services (ONS) was an early adopter of Clinical Nurse Leader (CNL) practice, generating some of the earliest pilot data of CNL practice effectiveness. In 2011 the VHA ONS CNL Implementation & Evaluation Service (CNL I&E) piloted a curriculum to facilitate CNL transition to effective practice at local VHA settings. In 2015, the CNL I&E and local VHA setting stakeholders collaborated to refine the program, based on lessons learned at the national and local level. The workgroup reviewed the literature to identify theoretical frameworks for CNL practice and practice development. The workgroup selected Benner et al.'s Novice-to-Expert model as the defining framework for CNL practice development, and Bender et al.'s CNL Practice Model as the defining framework for CNL practice integration. The selected frameworks were cross-walked against existing curriculum elements to identify and clarify additional practice development needs. The work generated key insights into: core stages of transition to effective practice; CNL progress and expectations for each stage; and organizational support structures necessary for CNL success at each stage. The refined CNL development model is a robust tool that can be applied to support consistent and effective integration of CNL practice into care delivery. Published by Elsevier Inc.

  2. Model of Silicon Refining During Tapping: Removal of Ca, Al, and Other Selected Element Groups

    NASA Astrophysics Data System (ADS)

    Olsen, Jan Erik; Kero, Ida T.; Engh, Thorvald A.; Tranell, Gabriella

    2017-04-01

    A mathematical model for industrial refining of silicon alloys has been developed for the so-called oxidative ladle refining process. It is a lumped (zero-dimensional) model, based on the mass balances of metal, slag, and gas in the ladle, developed to operate with relatively short computational times for the sake of industrial relevance. The model accounts for a semi-continuous process which includes both the tapping and post-tapping refining stages. It predicts the concentrations of Ca, Al, and trace elements, most notably the alkali metals, alkaline earth metals, and rare earth metals. The predictive power of the model depends on the quality of the model coefficients, the kinetic coefficient, τ, and the equilibrium partition coefficient, L, for a given element. A sensitivity analysis indicates that the model results are most sensitive to L. The model has been compared to industrial measurement data and found to be able to qualitatively, and to some extent quantitatively, predict the data. The model is very well suited for alkali and alkaline earth metals, which respond relatively quickly to the refining process. The model is less well suited for elements such as the lanthanides and Al, which are refined more slowly. A major challenge for the prediction of the behavior of the rare earth metals is that reliable thermodynamic data for true equilibrium conditions relevant to the industrial process are not typically available in the literature.
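
    A lumped model of this kind typically treats each element as relaxing toward slag-metal equilibrium at a rate set by the kinetic coefficient. A schematic form consistent with the coefficients named above (an assumed illustration, not the authors' exact equations) is

        \frac{dC}{dt} = -\frac{1}{\tau} \left( C - \frac{C_{\mathrm{slag}}}{L} \right),

    where C is the concentration of the element in the metal, C_slag its concentration in the slag, and L the equilibrium partition coefficient. Elements with small \tau (the alkali and alkaline earth metals) then approach equilibrium well within the process time, while slowly refined elements such as Al and the lanthanides do not.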

  3. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure (University of Primorska); Turk, Dušan, E-mail: dusan.turk@ijs.si

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.

  4. Conditional Random Field-Based Offline Map Matching for Indoor Environments

    PubMed Central

    Bataineh, Safaa; Bahillo, Alfonso; Díez, Luis Enrique; Onieva, Enrique; Bataineh, Ikram

    2016-01-01

    In this paper, we present an offline map matching technique designed for indoor localization systems based on conditional random fields (CRF). The proposed algorithm can refine the results of existing indoor localization systems and match them with the map, using loose coupling between the existing localization system and the proposed map matching technique. The purpose of this research is to investigate the efficiency of using the CRF technique in offline map matching problems for different scenarios and parameters. The algorithm was applied to several real and simulated trajectories of different lengths. The results were then refined and matched with the map using the CRF algorithm. PMID:27537892
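
    For readers unfamiliar with sequence decoding, the sketch below shows the dynamic-programming (Viterbi) step that any linear-chain matcher of this kind relies on; a CRF additionally scores sequences with learned feature weights, and the scores and sizes here are illustrative assumptions rather than the paper's model:

        import numpy as np

        def viterbi(emission, transition):
            """MAP state sequence for a linear-chain model in log scores.
            emission: (T, S) log score of each candidate map state per step;
            transition: (S, S) log score of moving between map states."""
            T, S = emission.shape
            score = emission[0].copy()
            back = np.zeros((T, S), dtype=int)
            for t in range(1, T):
                cand = score[:, None] + transition + emission[t][None, :]
                back[t] = cand.argmax(axis=0)
                score = cand.max(axis=0)
            path = [int(score.argmax())]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t, path[-1]]))
            return path[::-1]

        rng = np.random.default_rng(0)
        em = np.log(rng.dirichlet(np.ones(4), size=6))   # 6 fixes, 4 map states
        tr = np.log(np.full((4, 4), 0.25))               # uniform transitions
        matched = viterbi(em, tr)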

  5. Conditional Random Field-Based Offline Map Matching for Indoor Environments.

    PubMed

    Bataineh, Safaa; Bahillo, Alfonso; Díez, Luis Enrique; Onieva, Enrique; Bataineh, Ikram

    2016-08-16

    In this paper, we present an offline map matching technique designed for indoor localization systems based on conditional random fields (CRF). The proposed algorithm can refine the results of existing indoor localization systems and match them with the map, using loose coupling between the existing localization system and the proposed map matching technique. The purpose of this research is to investigate the efficiency of using the CRF technique in offline map matching problems for different scenarios and parameters. The algorithm was applied to several real and simulated trajectories of different lengths. The results were then refined and matched with the map using the CRF algorithm.

  6. Redefining the Practice of Peer Review Through Intelligent Automation Part 1: Creation of a Standardized Methodology and Referenceable Database.

    PubMed

    Reiner, Bruce I

    2017-10-01

    Conventional peer review practice is compromised by a number of well-documented biases, which in turn limit standard of care analysis, which is fundamental to determination of medical malpractice. In addition to these intrinsic biases, current peer review suffers from other deficiencies, including lack of standardization and objectivity, retrospective practice, and lack of automation. An alternative model to address these deficiencies would be one which is completely blinded to the peer reviewer, requires independent reporting from both parties, utilizes automated data mining techniques for neutral and objective report analysis, and provides data reconciliation for resolution of finding-specific report differences. If properly implemented, this peer review model could result in creation of a standardized referenceable peer review database which could further assist in customizable education, technology refinement, and implementation of real-time context- and user-specific decision support.

  7. Refining Models of L1527-IRS

    NASA Astrophysics Data System (ADS)

    Baker Metzler-Winslow, Elizabeth; Terebey, Susan

    2018-06-01

    This project examines the Class 0/Class I protostar L1527-IRS (hereafter referred to as L1527) in the interest of creating a more accurate computational model. In a Class 0/Class I protostar like L1527, the envelope is massive, the protostar is growing in mass, and the disk is a small fraction of the protostar mass. Recent work based on ALMA data indicates that L1527, located in the constellation Taurus (about 140 parsecs from Earth), is ~0.44 solar masses. Existing models were able to fit the spectral energy distribution of L1527 by assuming a puffed-up inner disk. However, the inclusion of the puffed-up disk results in a portion of the disk coinciding with the outflow cavities, a physically unsatisfying arrangement. This project tests models which decrease the size of the disk and increase the density of the outflow cavities (hypothesizing that some dust from the walls of the outflow cavities is swept up into the cavity itself) against existing observational data, and finds that these models fit the data relatively well.

  8. Orthogonal polynomials for refinable linear functionals

    NASA Astrophysics Data System (ADS)

    Laurie, Dirk; de Villiers, Johan

    2006-12-01

    A refinable linear functional is one that can be expressed as a convex combination, defined by a finite number of mask coefficients, of certain stretched and shifted replicas of itself. The notion generalizes an integral weighted by a refinable function. The key to calculating a Gaussian quadrature formula for such a functional is to find the three-term recursion coefficients for the polynomials orthogonal with respect to that functional. We show how to obtain the recursion coefficients by using only the mask coefficients, and without the aid of modified moments. Our result implies the existence of the corresponding refinable functional whenever the mask coefficients are nonnegative, even when the same mask does not define a refinable function. The algorithm requires O(n^2) rational operations and, thus, can in principle deliver exact results. Numerical evidence suggests that it is also effective in floating-point arithmetic.
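
    In symbols, and in one common normalization (stated here as background rather than quoted from the paper), a linear functional \Lambda is refinable with nonnegative weights w_0, ..., w_N derived from the mask when

        \Lambda f = \sum_{k=0}^{N} w_k \, \Lambda\!\left[ f\!\left( \frac{\cdot + k}{2} \right) \right], \qquad w_k \ge 0, \quad \sum_{k=0}^{N} w_k = 1,

    and the Gaussian quadrature rule follows once the coefficients a_n, b_n of the three-term recurrence p_{n+1}(x) = (x - a_n)\,p_n(x) - b_n\,p_{n-1}(x) for the orthogonal polynomials are known.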

  9. Investigation of different simulation approaches on a high-head Francis turbine and comparison with model test data: Francis-99

    NASA Astrophysics Data System (ADS)

    Mössinger, Peter; Jester-Zürker, Roland; Jung, Alexander

    2015-01-01

    Numerical investigations of hydraulic turbomachines under steady-state conditions are state of the art in current product development processes. Nevertheless, increasing computational resources allow refined discretization methods and more sophisticated turbulence models, and therefore better prediction of results as well as quantification of existing uncertainties. Single-stage investigations are done using in-house tools for the meshing and set-up procedure. Besides different model domains and a mesh study to reduce mesh dependencies, several eddy viscosity and Reynolds stress turbulence models are investigated. All obtained results are compared with available model test data. In addition to global values, magnitudes measured in the vaneless space and at runner blade and draft tube positions, in terms of pressure and velocity, are considered. From these it is possible to estimate the influence and relevance of the various model domains for different operating points and numerical variations. Good agreement is found for pressure and velocity measurements with all model configurations and with all turbulence models except the BSL-RSM model. At part load, deviations in hydraulic efficiency are large, whereas at the best efficiency and high load operating points efficiencies are close to the measurement. Consideration of the runner side gap geometry as well as a refined mesh improves the results either in relation to hydraulic efficiency or velocity distribution, with the drawbacks of less stable numerics and increased computational time.

  10. Breaking out of the biomed box: an audit assessment and recommendations for an in-house biomedical engineering program.

    PubMed

    Dickey, David M; Jagiela, Steven; Fetters, Dennis

    2003-01-01

    In order to assess the current performance and to identify future growth opportunities of an in-house biomedical engineering (BME) program, senior management of Lehigh Valley Hospital (Allentown, Penn) engaged (in July 2001) the services of a clinical engineering consultant. Although the current in-house program was both functionally and financially sound, an independent audit had not been performed in over 4 years, and there were growing concerns by the BME staff related to the department's future leadership and long-term support from senior management. After an initial 2-month audit of the existing program, the consultant presented 41 separate recommendations for management's consideration. In order to refine and implement these recommendations, 5 separate committees were established to further evaluate a consolidated version of them, with the consultant acting as the facilitator for each group. Outcomes from each of the committees were used in the development of a formal business plan, which, upon full implementation, would not only strengthen and refine the current in-house service model but could also result in a substantial 3-year cost savings for the organization ($1,100,000 from existing operations, $500,000 in cost avoidance by in-sourcing postwarranty support of future capital equipment acquisitions). Another key outcome of the project was related to the development of a new master policy, titled the "Medical Equipment Management Program," complete with a newly defined state-of-the-art equipment scheduled inspection frequency model.

  11. Adaptive h-refinement for reduced-order models

    DOE PAGES

    Carlberg, Kevin T.

    2014-11-05

    Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
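
    A minimal sketch of the splitting idea (not the paper's full offline tree construction or dual-weighted-residual selection): state variables are grouped by k-means on snapshot data, and one basis vector is split into children with disjoint support that sum back to the original vector. All names and sizes below are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_basis_vector(v, snapshots, n_children):
    """Split one reduced-basis vector into children with disjoint support.

    Rows of `snapshots` (one per state variable) are clustered by k-means;
    each child keeps v's entries on one cluster and is zero elsewhere, so
    the children reproduce v exactly when summed.
    """
    labels = KMeans(n_clusters=n_children, n_init=10).fit_predict(snapshots)
    return [np.where(labels == c, v, 0.0) for c in range(n_children)]

# toy usage: 100 state variables, 20 snapshots, split into 3 children
rng = np.random.default_rng(0)
snapshots = rng.normal(size=(100, 20))
v = rng.normal(size=100)
children = split_basis_vector(v, snapshots, 3)
assert np.allclose(sum(children), v)     # disjoint supports, exact partition
```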

  12. Epoch Lifetimes in the Dynamics of a Competing Population

    NASA Astrophysics Data System (ADS)

    Yeung, C. H.; Ma, Y. P.; Wong, K. Y. Michael

    We propose a dynamical model of a competing population whose agents have a tendency to balance their decisions in time. The model is applicable to financial markets in which the agents trade with finite capital, or to other multiagent systems such as routers in communication networks attempting to transmit multiclass traffic in a fair way. We find an oscillatory behavior due to the segregation of agents into two groups, each of which remains the winning side for entire epochs. The aggregation of smart agents is able to explain the lifetime distribution of epochs over eight decades of probability. The existence of the super agents further refines the lifetime distribution of short epochs.

  13. Geophysics of Martian Periglacial Processes

    NASA Technical Reports Server (NTRS)

    Mellon, Michael T.

    2004-01-01

    The goals of this project are: through the examination of small-scale geologic features potentially related to water and ice in the martian subsurface (specifically, small-scale polygonal ground and young gully-like features), to determine the state, distribution, and recent history of subsurface water and ice on Mars; to refine existing models and develop new models of near-surface water and ice, and to develop new insights about the nature of water on Mars as manifested by these geologic features; and, through an improved understanding of potentially water-related geologic features, to utilize these features in addressing questions about where best to search for present-day water and what spacecraft may encounter that might facilitate or inhibit the search for water.

  14. Examining the integrity of measurement of cognitive abilities in the prediction of achievement: Comparisons and contrasts across variables from higher-order and bifactor models.

    PubMed

    Benson, Nicholas F; Kranzler, John H; Floyd, Randy G

    2016-10-01

    Prior research examining cognitive ability and academic achievement relations has been based on different theoretical models, has employed both latent and observed variables, and has used a variety of analytic methods. Not surprisingly, results have been inconsistent across studies. The aims of this study were to (a) examine how relations between psychometric g, Cattell-Horn-Carroll (CHC) broad abilities, and academic achievement differ across higher-order and bifactor models; (b) examine how well various types of observed scores corresponded with latent variables; and (c) compare two types of observed scores (i.e., refined and non-refined factor scores) as predictors of academic achievement. Results suggest that cognitive-achievement relations vary across theoretical models and that both types of factor scores tend to correspond well with the models on which they are based. However, orthogonal refined factor scores (derived from a bifactor model) have the advantage of controlling for multicollinearity arising from the measurement of psychometric g across all measures of cognitive abilities. Results indicate that the refined factor scores provide more precise representations of their targeted constructs than non-refined factor scores and maintain close correspondence with the cognitive-achievement relations observed for latent variables. Thus, we argue that orthogonal refined factor scores provide more accurate representations of the relations between CHC broad abilities and achievement outcomes than non-refined scores do. Further, the use of refined factor scores addresses calls for the application of scores based on latent variable models. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  15. REFMAC5 for the refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murshudov, Garib N., E-mail: garib@ysbl.york.ac.uk; Skubák, Pavol; Lebedev, Andrey A.

    The general principles behind the macromolecular crystal structure refinement program REFMAC5 are described. This paper describes various components of the macromolecular crystallographic refinement program REFMAC5, which is distributed as part of the CCP4 suite. REFMAC5 utilizes different likelihood functions depending on the diffraction data employed (amplitudes or intensities), the presence of twinning and the availability of SAD/SIRAS experimental diffraction data. To ensure chemical and structural integrity of the refined model, REFMAC5 offers several classes of restraints and choices of model parameterization. Reliable models at resolutions at least as low as 4 Å can be achieved thanks to low-resolution refinement tools such as secondary-structure restraints, restraints to known homologous structures, automatic global and local NCS restraints, ‘jelly-body’ restraints and the use of novel long-range restraints on atomic displacement parameters (ADPs) based on the Kullback–Leibler divergence. REFMAC5 additionally offers TLS parameterization and, when high-resolution data are available, fast refinement of anisotropic ADPs. Refinement in the presence of twinning is performed in a fully automated fashion. REFMAC5 is a flexible and highly optimized refinement package that is ideally suited for refinement across the entire resolution spectrum encountered in macromolecular crystallography.

  16. Silicon nanoporous membranes as a rigorous platform for validation of biomolecular transport models

    PubMed Central

    Feinberg, Benjamin J.; Hsiao, Jeff C.; Park, Jaehyun; Zydney, Andrew L.; Fissell, William H.; Roy, Shuvo

    2017-01-01

    Microelectromechanical systems (MEMS), a technology that resulted from significant innovation in semiconductor fabrication, have recently been applied to the development of silicon nanopore membranes (SNM). In contrast to membranes fabricated from polymeric materials, SNM exhibit slit-shaped pores, monodisperse pore size, constant surface porosity, zero pore overlap, and sub-micron thickness. This development in membrane fabrication is applied herein for the validation of the XDLVO (extended Derjaguin, Landau, Verwey, and Overbeek) theory of membrane transport within the context of hemofiltration. In this work, the XDLVO model has been derived for the unique slit pore structure of SNM. Beta-2-microglobulin (B2M), a clinically relevant “middle molecular weight” solute in kidney disease, is highlighted in this study as the solute of interest. In order to determine interaction parameters within the XDLVO model for B2M and SNM, goniometric measurements were conducted, yielding a Hamaker constant of 4.61× 10−21 J and an acid-base Gibbs free energy at contact of 41 mJ/m2. The XDLVO model was combined with existing models for membrane sieving, with predictions of the refined model in good agreement with experimental data. Furthermore, the results show a significant difference between the XDLVO model and the simpler steric predictions typically applied in membrane transport. The refined model can be used as a tool to tailor membrane chemistry and maximize sieving or rejection of different biomolecules. PMID:28936029
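
    The abstract contrasts XDLVO predictions with "simpler steric predictions". For orientation only, the sketch below evaluates the classical steric partition coefficient for a hard sphere in a slit-shaped pore, a standard hindered-transport result, not the paper's XDLVO model; the B2M Stokes radius and pore half-width in the demo are assumed values.

```python
def steric_partition_slit(solute_radius_nm, pore_half_width_nm):
    """Classical steric partition coefficient for a hard sphere in a slit
    pore: Phi = 1 - lambda, with lambda = solute radius / pore half-width
    (for cylindrical pores the analogue is (1 - lambda)**2).  XDLVO adds
    electrostatic and acid-base energy terms on top of this."""
    lam = solute_radius_nm / pore_half_width_nm
    if lam >= 1.0:
        return 0.0   # solute larger than the pore: fully excluded
    return 1.0 - lam

# e.g. beta-2-microglobulin (Stokes radius ~1.6 nm, an assumed value)
# in a slit pore of 5 nm half-width:
print(steric_partition_slit(1.6, 5.0))   # 0.68
```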

  17. CASP10-BCL::Fold efficiently samples topologies of large proteins.

    PubMed

    Heinze, Sten; Putnam, Daniel K; Fischer, Axel W; Kohlmann, Tim; Weiner, Brian E; Meiler, Jens

    2015-03-01

    During CASP10 in summer 2012, we tested BCL::Fold for prediction of free modeling (FM) and template-based modeling (TBM) targets. BCL::Fold assembles the tertiary structure of a protein from predicted secondary structure elements (SSEs), omitting more flexible loop regions early on. This approach enables the sampling of conformational space for larger proteins with more complex topologies. In preparation of CASP11, we analyzed the quality of CASP10 models throughout the prediction pipeline to understand BCL::Fold's ability to sample the native topology, identify native-like models by scoring and/or clustering approaches, and our ability to add loop regions and side chains to initial SSE-only models. The standout observation is that BCL::Fold sampled topologies with a GDT_TS score > 33% for 12 of 18 and with a topology score > 0.8 for 11 of 18 test cases de novo. Despite the sampling success of BCL::Fold, significant challenges still exist in the clustering and loop generation stages of the pipeline. The clustering approach employed for model selection often failed to identify the most native-like assembly of SSEs for further refinement and submission. It was also observed that for some β-strand proteins model refinement failed, as β-strands were not properly aligned to form hydrogen bonds, removing otherwise accurate models from the pool. Further, BCL::Fold frequently samples non-natural topologies that require loop regions to pass through the center of the protein. © 2015 Wiley Periodicals, Inc.

  18. 40 CFR 409.24 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 29 2011-07-01 2009-07-01 true Pretreatment standards for existing sources. 409.24 Section 409.24 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining...

  19. 40 CFR 409.24 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 28 2010-07-01 2010-07-01 true Pretreatment standards for existing sources. 409.24 Section 409.24 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining...

  20. 40 CFR 409.34 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 29 2011-07-01 2009-07-01 true Pretreatment standards for existing sources. 409.34 Section 409.34 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining...

  1. 40 CFR 409.34 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 28 2010-07-01 2010-07-01 true Pretreatment standards for existing sources. 409.34 Section 409.34 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining...

  2. 40 CFR 409.24 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 30 2013-07-01 2012-07-01 true Pretreatment standards for existing sources. 409.24 Section 409.24 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining...

  3. 40 CFR 409.34 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 29 2014-07-01 2012-07-01 true Pretreatment standards for existing sources. 409.34 Section 409.34 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining...

  4. 40 CFR 409.34 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 30 2013-07-01 2012-07-01 true Pretreatment standards for existing sources. 409.34 Section 409.34 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining...

  5. 40 CFR 409.24 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 30 2012-07-01 2012-07-01 false Pretreatment standards for existing sources. 409.24 Section 409.24 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining...

  6. 40 CFR 409.24 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 29 2014-07-01 2012-07-01 true Pretreatment standards for existing sources. 409.24 Section 409.24 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Crystalline Cane Sugar Refining...

  7. 40 CFR 409.34 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 30 2012-07-01 2012-07-01 false Pretreatment standards for existing sources. 409.34 Section 409.34 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Liquid Cane Sugar Refining...

  8. Integrating Climate Change Resilience Features into the Incremental Refinement of an Existing Marine Park

    PubMed Central

    Beckley, Lynnath E.; Kobryn, Halina T.; Lombard, Amanda T.; Radford, Ben; Heyward, Andrew

    2016-01-01

    Marine protected area (MPA) designs are likely to require iterative refinement as new knowledge is gained. In particular, there is an increasing need to consider the effects of climate change, especially the ability of ecosystems to resist and/or recover from climate-related disturbances, within the MPA planning process. However, there has been limited research addressing the incorporation of climate change resilience into MPA design. This study used Marxan conservation planning software with fine-scale shallow water (<20 m) bathymetry and habitat maps, models of major benthic communities for deeper water, and comprehensive human use information from Ningaloo Marine Park in Western Australia to identify climate change resilience features to integrate into the incremental refinement of the marine park. The study assessed the representation of benthic habitats within the current marine park zones, identified priority areas of high resilience for inclusion within no-take zones and examined if any iterative refinements to the current no-take zones are necessary. Of the 65 habitat classes, 16 did not meet representation targets within the current no-take zones, most of which were in deeper offshore waters. These deeper areas also demonstrated the highest resilience values and, as such, Marxan outputs suggested minor increases to the current no-take zones in the deeper offshore areas. This work demonstrates that inclusion of fine-scale climate change resilience features within the design process for MPAs is feasible, and can be applied to future marine spatial planning practices globally. PMID:27529820

  9. 75 FR 4963 - Federal Housing Administration (FHA): Hospital Mortgage Insurance Program-Refinancing Hospital Loans

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-29

    ... impact on hospitals across the Nation. At a time when the demand for health care services is on the rise... capital to help hospitals refinance debt was sufficiently available, and that the demand for this type of... nursing home, existing assisted living facility, existing intermediate care facility, existing board and...

  10. Experimental and Theoretical Study of Propeller Spinner/Shank Interference. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Cornell, C. C.

    1986-01-01

    A fundamental experimental and theoretical investigation into the aerodynamic interference associated with propeller spinner and shank regions was conducted. The research program involved a theoretical assessment of solutions previously proposed, followed by a systematic experimental study to supplement the existing data base. As a result, a refined computational procedure was established for prediction of interference effects in terms of interference drag, resolved into propeller thrust and torque components. These quantities were examined with attention to engineering parameters such as two spinner fineness ratios, three blade shank forms, and two/three/four/six/eight blades. Consideration of the physics of the phenomena aided in the logical deduction of two individual interference quantities (cascade effects and spinner/shank juncture interference). These interference effects were semi-empirically modeled using existing theories and placed into a form compatible with an existing propeller performance scheme, which provided the basis for examples of application.

  11. Upgrading and Refining of Crude Oils and Petroleum Products by Ionizing Irradiation.

    PubMed

    Zaikin, Yuriy A; Zaikina, Raissa F

    2016-06-01

    A general trend in the oil industry is a decrease in the proven reserves of light crude oils, so that any increase in future oil exploration is associated with high-viscosity, sulfur-rich oils and bitumen. Although the world reserves of heavy oil are much greater than those of sweet light oils, their exploitation at present accounts for less than 12% of total oil recovery. One of the main constraints is the very high expense of the existing technologies for heavy oil recovery, upgrading, transportation, and refining. Heavy oil processing by conventional methods is difficult and requires high power inputs and capital investments. Effective and economic processing of highly viscous oil and oil residues needs not only improvements of existing methods, such as thermal, catalytic and hydro-cracking, but also the development of new technological approaches for upgrading and refining of any type of problem oil feedstock. One promising approach to this problem is the application of ionizing irradiation for high-viscosity oil processing. Radiation methods for upgrading and refining high-viscosity crude oils and petroleum products in a wide temperature range, oil desulfurization, radiation technology for refining used oil products, and a promising method for gasoline radiation isomerization are discussed in this paper. The advantages of radiation technology are the simple configuration of radiation facilities, low capital and operational costs, processing at lowered temperatures and nearly atmospheric pressure without the use of any catalysts, high production rates, relatively low energy consumption, and flexibility with respect to the type of oil feedstock.

  12. Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    A new method of local grid refinement for two-dimensional block-centered finite-difference meshes is presented in the context of steady-state groundwater-flow modeling. The method uses an iteration-based feedback with shared nodes to couple two separate grids. The new method is evaluated by comparison with results using a uniform fine mesh, a variably spaced mesh, and a traditional method of local grid refinement without a feedback. Results indicate: (1) The new method exhibits quadratic convergence for homogeneous systems and convergence equivalent to uniform-grid refinement for heterogeneous systems. (2) Coupling the coarse grid with the refined grid in a numerically rigorous way allowed for improvement in the coarse-grid results. (3) For heterogeneous systems, commonly used linear interpolation of heads from the large model onto the boundary of the refined model produced heads that are inconsistent with the physics of the flow field. (4) The traditional method works well in situations where the better resolution of the locally refined grid has little influence on the overall flow-system dynamics, but if this is not true, lack of a feedback mechanism produced errors in head up to 3.6% and errors in cell-to-cell flows up to 25%. © 2002 Elsevier Science Ltd. All rights reserved.
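
    A minimal one-dimensional sketch of the iterative shared-node coupling may help fix ideas (the paper's method is two-dimensional and block-centered; this toy uses node-centered differences and made-up conductivities): the child grid receives specified heads at the shared nodes, the parent receives the child's interface fluxes, and the two solves repeat until the interface flux stabilizes. The converged flux can be checked against the analytic series-resistance value.

```python
import numpy as np

H, dx = 0.1, 0.02                      # parent / child node spacing
xp = np.arange(0.0, 1.0 + H / 2, H)    # 11 parent nodes; shared nodes i=4, i=6
xc = np.arange(0.4, 0.6 + dx / 2, dx)  # 11 child nodes on the refined zone
Kc = 0.2                               # conductivity in [0.4, 0.6]; K=1 outside

def solve_child(h_l, h_r):
    """Child solve with parent-supplied heads at the shared nodes; returns
    the left-to-right Darcy flux (per unit area) through the refined zone."""
    n = xc.size
    A, b = np.zeros((n, n)), np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = h_l, h_r
    for i in range(1, n - 1):          # interior balance, uniform Kc
        A[i, i - 1] = A[i, i + 1] = Kc
        A[i, i] = -2.0 * Kc
    hc = np.linalg.solve(A, b)
    return Kc * (hc[0] - hc[1]) / dx

def solve_parent(F):
    """Parent solve; at the shared nodes the inner-face term is replaced
    by the child-supplied flux F (the specified-flux half of the coupling)."""
    n = xp.size
    A, b = np.zeros((n, n)), np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = 1.0, 0.0             # fixed heads at x=0 and x=1
    for i in range(1, n - 1):
        if i == 4:                     # left shared node: child flux exits right
            A[i, i - 1], A[i, i], b[i] = 1.0 / H, -1.0 / H, F
        elif i == 6:                   # right shared node: child flux enters left
            A[i, i], A[i, i + 1], b[i] = -1.0 / H, 1.0 / H, -F
        else:                          # ordinary balance (Kc under the child)
            K = Kc if i == 5 else 1.0
            A[i, i - 1] = A[i, i + 1] = K
            A[i, i] = -2.0 * K
    return np.linalg.solve(A, b)

F = 0.0
for _ in range(300):                   # Picard iteration between the grids;
    hp = solve_parent(F)               # under-relaxation may be needed when
    F_new = solve_child(hp[4], hp[6])  # the refined zone is highly conductive
    if abs(F_new - F) < 1e-12:
        break
    F = F_new
print(F, "vs analytic", 1.0 / 1.8)     # resistances: 0.4/1 + 0.2/0.2 + 0.4/1
```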

  13. Performance measurement for people with multiple chronic conditions: conceptual model.

    PubMed

    Giovannetti, Erin R; Dy, Sydney; Leff, Bruce; Weston, Christine; Adams, Karen; Valuck, Tom B; Pittman, Aisha T; Blaum, Caroline S; McCann, Barbara A; Boyd, Cynthia M

    2013-10-01

    Improving quality of care for people with multiple chronic conditions (MCCs) requires performance measures reflecting the heterogeneity and scope of their care. Since most existing measures are disease specific, performance measures must be refined and new measures must be developed to address the complexity of care for those with MCCs. Objective: to describe the development of the Performance Measurement for People with Multiple Chronic Conditions (PM-MCC) conceptual model. Design: framework development and a national stakeholder panel. Methods: we used reviews of existing conceptual frameworks of performance measurement, a review of the literature on MCCs, input from experts in the multistakeholder Steering Committee, and public comment. Results: the resulting model centers on the patient and family goals and preferences for care, in the context of multiple care sites and providers, the type of care they are receiving, and the national priority domains for healthcare quality measurement. This model organizes measures into a comprehensive framework and identifies areas where measures are lacking. In this context, performance measures can be prioritized and implemented at different levels, in the context of patients' overall healthcare needs.

  14. Refinement of a limit cycle oscillator model of the effects of light on the human circadian pacemaker

    NASA Technical Reports Server (NTRS)

    Jewett, M. E.; Kronauer, R. E.; Brown, E. N. (Principal Investigator)

    1998-01-01

    In 1990, Kronauer proposed a mathematical model of the effects of light on the human circadian pacemaker. Although this model predicted many general features of the response of the human circadian pacemaker to light exposure, additional data now available enable us to refine the original model. We first refined the original model by incorporating the results of a dose response curve to light into the model's predicted relationship between light intensity and the strength of the drive onto the pacemaker. Data from three bright light phase resetting experiments were then used to refine the amplitude recovery characteristics of the model. Finally, the model was tested and further refined using data from an extensive phase resetting experiment in which a 3-cycle bright light stimulus was presented against a background of dim light. In order to describe the results of the four resetting experiments, the following major refinements to the original model were necessary: (i) the relationship between light intensity (I) and drive onto the pacemaker was reduced from I^(1/3) to I^(0.23) for light levels between 150 and 10,000 lux; (ii) the van der Pol oscillator from the original model was replaced with a higher-order limit cycle oscillator so that amplitude recovery is slower near the singularity and faster near the limit cycle; (iii) a direct effect of light on circadian period (τ_x) was incorporated into the model such that as I increases, τ_x decreases, in accordance with "Aschoff's rule". This refined model generates the following testable predictions: it should be difficult to enhance normal circadian amplitude via bright light; near the critical point of a type 0 phase response curve (PRC) the slope should be steeper than it is in a type 1 PRC; and circadian period measured during forced desynchrony should be directly affected by ambient light intensity.
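
    A small sketch of refinement (i), the flattened intensity-to-drive compression: the exponent 0.23 is the paper's, while the scaling constants below are illustrative placeholders, not the fitted values.

```python
import numpy as np

def light_drive(I, p=0.23, I0=9500.0, a0=0.1):
    """Strength of the light drive onto the pacemaker as a function of
    illuminance I (lux), using the refined compression I**0.23 (the 1990
    model used I**(1/3)).  a0 and I0 are illustrative scaling constants."""
    return a0 * (np.asarray(I, dtype=float) / I0) ** p

# the flatter exponent means a tenfold increase in illuminance raises
# the drive by only 10**0.23, i.e. about 1.7x
for lux in (150, 1500, 10000):
    print(lux, round(float(light_drive(lux)), 4))
```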

  15. 50 CFR 253.11 - Guarantee policy.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., except: (1) Vessel construction. The Program will not finance this project cost. The Program will only refinance this project cost for an existing vessel whose previous construction cost has already been financed (or otherwise paid). Refinancing this project cost for a vessel that already exists is not...

  16. 40 CFR 419.35 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 29 2011-07-01 2009-07-01 true Pretreatment standards for existing sources (PSES). 419.35 Section 419.35 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS PETROLEUM REFINING POINT SOURCE CATEGORY Petrochemical Subcategory § 419...

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Los Alamos National Laboratory, Mailstop M888, Los Alamos, NM 87545, USA; Lawrence Berkeley National Laboratory, One Cyclotron Road, Building 64R0121, Berkeley, CA 94720, USA; Department of Haematology, University of Cambridge, Cambridge CB2 0XY, England

    The PHENIX AutoBuild Wizard is a highly automated tool for iterative model-building, structure refinement and density modification using RESOLVE or TEXTAL model-building, RESOLVE statistical density modification, and phenix.refine structure refinement. Recent advances in the AutoBuild Wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model completion algorithms, and automated solvent molecule picking. Model completion algorithms in the AutoBuild Wizard include loop-building, crossovers between chains in different models of a structure, and side-chain optimization. The AutoBuild Wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 {angstrom} tomore » 3.2 {angstrom}, resulting in a mean R-factor of 0.24 and a mean free R factor of 0.29. The R-factor of the final model is dependent on the quality of the starting electron density, and relatively independent of resolution.« less

  18. The stock-flow model of spatial data infrastructure development refined by fuzzy logic.

    PubMed

    Abdolmajidi, Ehsan; Harrie, Lars; Mansourian, Ali

    2016-01-01

    The system dynamics technique has been demonstrated to be a proper method by which to model and simulate the development of spatial data infrastructures (SDI). An SDI is a collaborative effort to manage and share spatial data at different political and administrative levels. It is comprised of various dynamically interacting quantitative and qualitative (linguistic) variables. To incorporate linguistic variables and their joint effects in an SDI-development model more effectively, we suggest employing fuzzy logic. Not all fuzzy models are able to model the dynamic behavior of SDIs properly. Therefore, this paper aims to investigate different fuzzy models and their suitability for modeling SDIs. To that end, two inference and two defuzzification methods were used for the fuzzification of the joint effect of two variables in an existing SDI model. The results show that the Average-Average inference and Center of Area defuzzification can better model the dynamics of SDI development.
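
    A minimal sketch of the winning combination reported here, Average-Average inference with Center-of-Area defuzzification, applied to the joint effect of two fuzzified inputs; the membership shapes, universes, and rule base below are hypothetical, not taken from the SDI model.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b over support [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def joint_effect(mu1, mu2, universe, out_sets, rules):
    """Average-Average inference with Center-of-Area defuzzification:
    rule firing strength = average of the two antecedent memberships
    (instead of the usual min), the clipped consequents are averaged,
    and the crisp joint effect is the centroid of the aggregate."""
    clipped = [np.minimum(0.5 * (mu1[i] + mu2[j]), out_sets[name])
               for (i, j, name) in rules]
    agg = np.mean(clipped, axis=0)
    return np.sum(universe * agg) / np.sum(agg)   # Center of Area

# hypothetical universe, output sets and rules:
x = np.linspace(0.0, 1.0, 201)
out_sets = {"weak": tri(x, -0.5, 0.0, 0.5), "strong": tri(x, 0.5, 1.0, 1.5)}
mu_a = {"low": 0.3, "high": 0.7}      # fuzzified qualitative input A
mu_b = {"low": 0.6, "high": 0.4}      # fuzzified qualitative input B
rules = [("low", "low", "weak"), ("low", "high", "weak"),
         ("high", "low", "strong"), ("high", "high", "strong")]
print(joint_effect(mu_a, mu_b, x, out_sets, rules))
```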

  19. Satellite SAR geocoding with refined RPC model

    NASA Astrophysics Data System (ADS)

    Zhang, Lu; Balz, Timo; Liao, Mingsheng

    2012-04-01

    Recent studies have proved that the Rational Polynomial Camera (RPC) model can act as a reliable replacement for the rigorous Range-Doppler (RD) model in the geometric processing of satellite SAR datasets, but its capability for absolute geolocation of SAR images has not been evaluated quantitatively. Therefore, in this article the problems of error analysis and refinement of the SAR RPC model are investigated to improve the absolute accuracy of SAR geolocation. Range propagation delay and azimuth timing error are identified as the two major error sources for SAR geolocation. An approach based on SAR image simulation and real-to-simulated image matching is developed to estimate and correct these two errors. Afterwards, a refined RPC model can be built from the error-corrected RD model and then used in satellite SAR geocoding. Three experiments with different settings are designed and conducted to comprehensively evaluate the accuracy of SAR geolocation with both ordinary and refined RPC models. All the experimental results demonstrate that with RPC model refinement the absolute location accuracy of geocoded SAR images can be improved significantly, particularly in the easting direction. In another experiment the computational efficiencies of SAR geocoding with the RD and RPC models are compared quantitatively. The results show that by using the RPC model such efficiency can be improved by at least a factor of 16. In addition, the problem of DEM data selection for SAR image simulation in RPC model refinement is studied via a comparative experiment. The results reveal that the best choice is to use DEM datasets of spatial resolution comparable to that of the SAR images.
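
    For readers unfamiliar with RPCs, the sketch below shows the generic ground-to-image mapping as a ratio of polynomials in normalized coordinates; the term ordering follows the common RPC00B convention, and the demo coefficients are synthetic. The paper's refinement corrects the underlying RD model (range delay, azimuth timing) before an RPC like this is refit.

```python
import numpy as np

def rpc_terms(P, L, H):
    """The 20 cubic monomials in the common RPC00B ordering
    (P, L, H = normalized latitude, longitude, height)."""
    return np.array([1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
                     P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3,
                     P*H*H, L*L*H, P*P*H, H**3])

def rpc_project(lat, lon, h, meta):
    """Ground-to-image projection with a rational polynomial model:
    normalize, evaluate numerator/denominator ratios, de-normalize.
    `meta` holds offsets/scales and four 20-term coefficient vectors,
    normally read from the image's RPC metadata."""
    P = (lat - meta["lat_off"]) / meta["lat_scale"]
    L = (lon - meta["lon_off"]) / meta["lon_scale"]
    H = (h - meta["h_off"]) / meta["h_scale"]
    t = rpc_terms(P, L, H)
    row = (meta["row_num"] @ t) / (meta["row_den"] @ t)
    col = (meta["col_num"] @ t) / (meta["col_den"] @ t)
    return (row * meta["row_scale"] + meta["row_off"],
            col * meta["col_scale"] + meta["col_off"])

# toy demo: identity-like model (offsets 0, scales 1, purely linear terms)
z = np.zeros(20)
num_row = z.copy(); num_row[2] = 1.0      # row ~ normalized latitude
num_col = z.copy(); num_col[1] = 1.0      # col ~ normalized longitude
den = z.copy(); den[0] = 1.0              # denominator = 1
meta = dict(lat_off=0, lat_scale=1, lon_off=0, lon_scale=1, h_off=0, h_scale=1,
            row_off=0, row_scale=1, col_off=0, col_scale=1,
            row_num=num_row, row_den=den, col_num=num_col, col_den=den)
print(rpc_project(0.25, -0.5, 0.0, meta))  # (0.25, -0.5)
```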

  20. Refining the threshold of toxicological concern (TTC) for risk prioritization of trace chemicals in food.

    PubMed

    Felter, Susan; Lane, Richard W; Latulippe, Marie E; Llewellyn, G Craig; Olin, Stephen S; Scimeca, Joseph A; Trautman, Thomas D

    2009-09-01

    Due to ever-improving analytical capabilities, very low levels of unexpected chemicals can now be detected in foods. Although these may be toxicologically insignificant, such incidents often garner significant attention. The threshold of toxicological concern (TTC) methodology provides a scientifically defensible, transparent approach for putting low-level exposures in the context of potential risk, as a tool to facilitate prioritization of responses, including potential mitigation. The TTC method supports the establishment of tiered, health-protective exposure limits for chemicals lacking a full toxicity database, based on evaluation of the known toxicity of chemicals which share similar structural characteristics. The approach supports the view that prudent actions towards public health protection are based on evaluation of safety as opposed to detection chemistry. This paper builds on the existing TTC literature and recommends refinements that address two key areas. The first describes the inclusion of genotoxicity data as a way to refine the TTC limit for chemicals that have structural alerts for genotoxicity. The second area addresses duration of exposure. Whereas the existing TTC exposure limits assume a lifetime of exposure, human exposure to unintended chemicals in food is often only for a limited time. Recommendations are made to refine the approach for less-than-lifetime exposures.
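
    As a minimal sketch of how the tiered TTC scheme is applied in practice: the lookup below uses exposure limits commonly cited in the TTC literature (e.g., the Kroes et al. tiers). The values are shown for illustration only and should be checked against current guidance; the paper's proposed refinements (genotoxicity data, less-than-lifetime exposure) would adjust exactly these numbers.

```python
# tiered TTC exposure limits (ug/day), as commonly cited; illustrative only
TTC_LIMITS_UG_PER_DAY = {
    "genotoxicity structural alert": 0.15,
    "organophosphate/carbamate": 18.0,
    "Cramer class III": 90.0,
    "Cramer class II": 540.0,
    "Cramer class I": 1800.0,
}

def exceeds_ttc(category, exposure_ug_per_day):
    """True if the estimated dietary exposure exceeds the tier's TTC limit."""
    return exposure_ug_per_day > TTC_LIMITS_UG_PER_DAY[category]

# a trace contaminant carrying a genotoxicity alert, at 0.05 ug/day:
print(exceeds_ttc("genotoxicity structural alert", 0.05))   # False
```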

  1. Refining the site conceptual model at a former uranium mill site in Riverton, Wyoming, USA

    DOE PAGES

    Dam, William; Campbell, Sam; Johnson, Ray; ...

    2015-07-07

    Milling activities at a former uranium mill site near Riverton, Wyoming, USA, contaminated the shallow groundwater beneath and downgradient of the site. Although the mill operated for <6 years (1958-1963), its impact remains an environmental liability. Groundwater modeling predicted that contaminant concentrations were declining steadily, which confirmed the conceptual site model (CSM). However, local flooding in 2010 mobilized contaminants that migrated downgradient from the Riverton site and resulted in a dramatic increase in groundwater contaminant concentrations. This observation indicated that the original CSM was inadequate to explain site conditions and needed to be refined. In response to the new observations after the flood, a collaborative investigation to better understand site conditions and processes commenced. This investigation included installing 103 boreholes to collect soil and groundwater samples, sampling and analysis of evaporite minerals along the bank of the Little Wind River, an analysis of evapotranspiration in the shallow aquifer, and sampling of naturally organic-rich sediments near groundwater discharge areas. The enhanced characterization revealed that the existing CSM did not account for high uranium concentrations in groundwater remaining on the former mill site and groundwater plume stagnation near the Little Wind River. Observations from the flood and subsequent investigations indicate that additional characterization is still needed to continue refining the CSM and determine the viability of the natural flushing compliance strategy. Additional sampling, analysis, and testing of soil and groundwater are necessary to investigate secondary contaminant sources, mobilization of contaminants during floods, geochemical processes, contaminant plume stagnation, distribution of evaporite minerals and organic-rich sediments, and mechanisms and rates of contaminant transfer from soil to groundwater. Future data collection will be used to continually revise the CSM and evaluate the compliance strategy at the site.

  2. Structure Refinement of Protein Low Resolution Models Using the GNEIMO Constrained Dynamics Method

    PubMed Central

    Park, In-Hee; Gangupomu, Vamshi; Wagner, Jeffrey; Jain, Abhinandan; Vaidehi, Nagarajan

    2012-01-01

    The challenge in protein structure prediction using homology modeling is the lack of reliable methods to refine the low resolution homology models. Unconstrained all-atom molecular dynamics (MD) does not serve well for structure refinement due to its limited conformational search. We have developed and tested the constrained MD method, based on the Generalized Newton-Euler Inverse Mass Operator (GNEIMO) algorithm, for protein structure refinement. In this method, the high-frequency degrees of freedom are replaced with hard holonomic constraints and a protein is modeled as a collection of rigid body clusters connected by flexible torsional hinges. This allows larger integration time steps and enhances the conformational search space. In this work, we have demonstrated the use of a constraint free GNEIMO method for protein structure refinement that starts from low-resolution decoy sets derived from homology methods. For each of eight proteins with three decoys each, we observed an improvement of ~2 Å in the RMSD to the known experimental structures of these proteins. The GNEIMO method also showed enrichment in the population density of native-like conformations. In addition, we demonstrated structural refinement using a “Freeze and Thaw” clustering scheme with the GNEIMO framework as a viable tool for enhancing localized conformational search. We have derived a robust protocol based on the GNEIMO replica exchange method for protein structure refinement that can be readily extended to other proteins and possibly applicable for high throughput protein structure refinement. PMID:22260550

  3. Advanced Computational Methods for High-accuracy Refinement of Protein Low-quality Models

    NASA Astrophysics Data System (ADS)

    Zang, Tianwu

    Predicting the three-dimensional structure of a protein has been a major interest in modern computational biology. While many successful methods can generate models within 3-5 Å root-mean-square deviation (RMSD) of the solution, progress in refining these models is quite slow. There is therefore an urgent need for effective methods to bring low-quality models into higher-accuracy ranges (e.g., less than 2 Å RMSD). In this thesis, I present several novel computational methods to address the high-accuracy refinement problem. First, an enhanced sampling method, named parallel continuous simulated tempering (PCST), is developed to accelerate molecular dynamics (MD) simulation. Second, two energy biasing methods, the Structure-Based Model (SBM) and the Ensemble-Based Model (EBM), are introduced to perform targeted sampling around important conformations. Third, a three-step method is developed to blindly select high-quality models along the MD simulation. These methods work together to achieve significant refinement of low-quality models without any knowledge of the solution. The effectiveness of these methods is examined in different applications. Using the PCST-SBM method, models with higher global distance test scores (GDT_TS) are generated and selected in the MD simulation of 18 targets from the refinement category of the 10th Critical Assessment of Structure Prediction (CASP10). In addition, the refinement test of two CASP10 targets using the PCST-EBM method indicates that EBM may bring the initial model to even higher quality levels. Furthermore, a multi-round refinement protocol of PCST-SBM improves the model quality of a protein to a level sufficiently high for molecular replacement in X-ray crystallography. Our results justify the crucial position of enhanced sampling in protein structure prediction and demonstrate that considerable improvement of low-accuracy structures is still achievable with current force fields.

  4. Hydrodynamic Modeling of Air Blast Propagation from the Humble Redwood Chemical High Explosive Detonations Using GEODYN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chipman, V D

    Two-dimensional axisymmetric hydrodynamic models were developed using GEODYN to simulate the propagation of air blasts resulting from a series of high explosive detonations conducted at Kirtland Air Force Base in August and September of 2007. Dubbed Humble Redwood I (HR-1), these near-surface chemical high explosive detonations consisted of seven shots of varying height or depth of burst. Each shot was simulated numerically using GEODYN. An adaptive mesh refinement scheme based on air pressure gradients was employed such that the mesh refinement tracked the advancing shock front where sharp discontinuities existed in the state variables, but allowed the mesh to sufficiently relax behind the shock front for runtime efficiency. Comparisons of overpressure, sound speed, and positive phase impulse from the GEODYN simulations were made to the recorded data taken from each HR-1 shot. Where the detonations occurred above ground or were shallowly buried (no deeper than 1 m), the GEODYN model was able to simulate the sound speeds, peak overpressures, and positive phase impulses to within approximately 1%, 23%, and 6%, respectively, of the actual recorded data, supporting the use of numerical simulation of the air blast as a forensic tool in determining the yield of an otherwise unknown explosion.
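
    A minimal sketch of the refinement criterion described here: flag cells where the pressure-gradient magnitude exceeds a threshold, so that refinement tracks the shock front. The field, grid, and threshold below are made up, and GEODYN's actual tagging logic is certainly more involved.

```python
import numpy as np

def flag_for_refinement(p, dx, threshold):
    """Flag cells whose pressure-gradient magnitude exceeds a threshold,
    the kind of criterion used to keep refinement locked onto an
    advancing shock front (2-D, uniform spacing dx)."""
    gy, gx = np.gradient(p, dx)           # d/dy (axis 0), d/dx (axis 1)
    return np.hypot(gx, gy) > threshold

# toy field with a sharp front along x = 0.5 (a smeared step)
x = np.linspace(0.0, 1.0, 200)
X, Y = np.meshgrid(x, x)
p = 1.0 / (1.0 + np.exp(-(X - 0.5) / 0.005))
mask = flag_for_refinement(p, x[1] - x[0], 10.0)
print(mask.sum(), "of", mask.size, "cells flagged")   # a band around x=0.5
```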

  5. MODFLOW-2005, The U.S. Geological Survey Modular Ground-Water Model - Documentation of the Multiple-Refined-Areas Capability of Local Grid Refinement (LGR) and the Boundary Flow and Head (BFH) Package

    USGS Publications Warehouse

    Mehl, Steffen W.; Hill, Mary C.

    2007-01-01

    This report documents the addition of the multiple-refined-areas capability to shared node Local Grid Refinement (LGR) and the Boundary Flow and Head (BFH) Package of MODFLOW-2005, the U.S. Geological Survey modular, three-dimensional, finite-difference ground-water flow model. LGR now provides the capability to simulate ground-water flow by using one or more block-shaped, higher resolution local grids (child model) within a coarser grid (parent model). LGR accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundaries. The ability to have multiple, nonoverlapping areas of refinement is important in situations where there is more than one area of concern within a regional model. In this circumstance, LGR can be used to simulate these distinct areas with higher resolution grids. LGR can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined ground-water systems. The BFH Package can be used to simulate these situations by using either the parent or child models independently.

  6. 40 CFR 419.25 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 28 2010-07-01 2010-07-01 true Pretreatment standards for existing sources (PSES). 419.25 Section 419.25 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS PETROLEUM REFINING POINT SOURCE CATEGORY Cracking Subcategory § 419.25...

  7. 40 CFR 419.25 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 29 2011-07-01 2009-07-01 true Pretreatment standards for existing sources (PSES). 419.25 Section 419.25 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS PETROLEUM REFINING POINT SOURCE CATEGORY Cracking Subcategory § 419.25...

  8. A Theoretical Framework for the Associations between Identity and Psychopathology

    ERIC Educational Resources Information Center

    Klimstra, Theo A.; Denissen, Jaap J. A.

    2017-01-01

    Identity research largely emerged from clinical observations. Decades of empirical work advanced the field in refining existing approaches and adding new approaches. Furthermore, the existence of linkages of identity with psychopathology is now well established. Unfortunately, both the directionality of effects between identity aspects and…

  9. Refining a self-assessment of informatics competency scale using Mokken scaling analysis.

    PubMed

    Yoon, Sunmoo; Shaffer, Jonathan A; Bakken, Suzanne

    2015-01-01

    Healthcare environments are increasingly implementing health information technology (HIT) and those from various professions must be competent to use HIT in meaningful ways. In addition, HIT has been shown to enable interprofessional approaches to health care. The purpose of this article is to describe the refinement of the Self-Assessment of Nursing Informatics Competencies Scale (SANICS) using analytic techniques based upon item response theory (IRT) and discuss its relevance to interprofessional education and practice. In a sample of 604 nursing students, the 93-item version of SANICS was examined using non-parametric IRT. The iterative modeling procedure included 31 steps comprising: (1) assessing scalability, (2) assessing monotonicity, (3) assessing invariant item ordering, and (4) expert input. SANICS was reduced to an 18-item hierarchical scale with excellent reliability. Fundamental skills for team functioning and shared decision making among team members (e.g. "using monitoring systems appropriately," "describing general systems to support clinical care") had the highest level of difficulty, and "demonstrating basic technology skills" had the lowest difficulty level. Most items reflect informatics competencies relevant to all health professionals. Further, the approaches can be applied to construct a new hierarchical scale or refine an existing scale related to informatics attitudes or competencies for various health professions.

  10. Refinement of protein termini in template-based modeling using conformational space annealing.

    PubMed

    Park, Hahnbeom; Ko, Junsu; Joo, Keehyoung; Lee, Julian; Seok, Chaok; Lee, Jooyoung

    2011-09-01

    The rapid increase in the number of experimentally determined protein structures in recent years enables us to obtain more reliable protein tertiary structure models than ever by template-based modeling. However, refinement of template-based models beyond the limit available from the best templates is still needed for understanding protein function in atomic detail. In this work, we develop a new method for protein terminus modeling that can be applied to refinement of models with unreliable terminus structures. The energy function for terminus modeling consists of both physics-based and knowledge-based potential terms with carefully optimized relative weights. Effective sampling of both the framework and terminus is performed using the conformational space annealing technique. This method has been tested on a set of termini derived from a nonredundant structure database and two sets of termini from the CASP8 targets. The performance of the terminus modeling method is significantly improved over our previous method that does not employ terminus refinement. It is also comparable or superior to the best server methods tested in CASP8. The success of the current approach suggests that similar strategy may be applied to other types of refinement problems such as loop modeling or secondary structure rearrangement. Copyright © 2011 Wiley-Liss, Inc.

  11. Variability of Protein Structure Models from Electron Microscopy.

    PubMed

    Monroe, Lyman; Terashi, Genki; Kihara, Daisuke

    2017-04-04

    An increasing number of biomolecular structures are solved by electron microscopy (EM). However, the quality of structure models determined from EM maps varies substantially. To understand to what extent structure models are supported by information embedded in EM maps, we used two computational structure refinement methods to examine how much structures can be refined, using a dataset of 49 maps with accompanying structure models. The extent of structure modification, as well as the disagreement between refinement models produced by the two computational methods, scaled inversely with the global and the local map resolutions. A general quantitative estimate of the deviation of structures at particular map resolutions is provided. Our results indicate that the observed discrepancy between the deposited map and the refined models is due to the lack of structural information present in EM maps and thus these annotations must be used with caution for further applications. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. X-ray structure determination at low resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunger, Axel T., E-mail: brunger@stanford.edu; Department of Molecular and Cellular Physiology, Stanford University; Department of Neurology and Neurological Sciences, Stanford University

    2009-02-01

    Refinement is meaningful even at 4 Å or lower, but with present methodologies it should start from high-resolution crystal structures whenever possible. As an example of structure determination in the 3.5–4.5 Å resolution range, crystal structures of the ATPase p97/VCP, consisting of an N-terminal domain followed by a tandem pair of ATPase domains (D1 and D2), are discussed. The structures were originally solved by molecular replacement with the high-resolution structure of the N-D1 fragment of p97/VCP, whereas the D2 domain was manually built using its homology to the D1 domain as a guide. The structure of the D2 domain alone was subsequently solved at 3 Å resolution. The refined model of D2 and the high-resolution structure of the N-D1 fragment were then used as starting models for re-refinement against the low-resolution diffraction data for full-length p97. The re-refined full-length models showed significant improvement in both secondary structure and R values. The free R values dropped by as much as 5% compared with the original structure refinements, indicating that refinement is meaningful at low resolution and that there is information in the diffraction data even at ∼4 Å resolution that objectively assesses the quality of the model. It is concluded that de novo model building is problematic at low resolution and refinement should start from high-resolution crystal structures whenever possible.
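
    Since the argument turns on R and free R values, a minimal sketch of how those metrics are computed from observed and calculated structure-factor amplitudes may help; the single linear scale factor and the synthetic data are simplifying assumptions (real refinement programs use resolution-dependent scaling).

```python
import numpy as np

def r_factors(f_obs, f_calc, free_flags):
    """Crystallographic R and free R:
        R = sum | |Fobs| - k*|Fcalc| | / sum |Fobs|
    with a single least-squares scale k; free_flags marks the
    cross-validation reflections excluded from refinement."""
    f_obs, f_calc = np.abs(f_obs), np.abs(f_calc)
    k = np.sum(f_obs * f_calc) / np.sum(f_calc**2)   # linear scale factor
    def r(sel):
        return np.sum(np.abs(f_obs[sel] - k * f_calc[sel])) / np.sum(f_obs[sel])
    return r(~free_flags), r(free_flags)             # (R_work, R_free)

# synthetic demo: 5% noisy amplitudes, 10% free set
rng = np.random.default_rng(1)
fc = rng.gamma(2.0, 100.0, size=5000)
fo = fc * (1.0 + 0.05 * rng.normal(size=fc.size))
free = rng.random(fc.size) < 0.10
print(r_factors(fo, fc, free))   # both around 0.04 for this noise level
```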

  13. Active vibration attenuating seat suspension for an armored helicopter crew seat

    NASA Astrophysics Data System (ADS)

    Sztein, Pablo Javier

    An Active Vibration Attenuating Seat Suspension (AVASS) for an MH-60S helicopter crew seat is designed to protect the occupants from harmful whole-body vibration (WBV). Magnetorheological (MR) suspension units are designed, fabricated, and installed in a helicopter crew seat. These MR isolators are built to work in series with the existing Variable Load Energy Absorbers (VLEAs), add minimal weight, and maintain crashworthiness for the seat system. Refinements are discussed, based on testing, to minimize friction observed in the system; these include the addition of roller bearings to replace friction bearings in the existing seat. Additionally, semi-active control of the MR dampers is achieved using special-purpose custom electronics integrated into the seat system. Experimental testing shows that an MH-60S retrofitted with AVASS provides up to 70.65% more vibration attenuation than the existing seat configuration, as well as up to an 81.1% reduction in vibration transmitted from the floor.

  14. Applying an Empirical Hydropathic Forcefield in Refinement May Improve Low-Resolution Protein X-Ray Crystal Structures

    PubMed Central

    Koparde, Vishal N.; Scarsdale, J. Neel; Kellogg, Glen E.

    2011-01-01

    Background: The quality of X-ray crystallographic models for biomacromolecules refined from data obtained at high resolution is assured by the data itself. However, at low resolution, >3.0 Å, additional information is supplied by a forcefield coupled with an associated refinement protocol. The resulting structures are often of lower quality and thus unsuitable for downstream activities like structure-based drug discovery. Methodology: An X-ray crystallography refinement protocol that enhances standard methodology by incorporating energy terms from the HINT (Hydropathic INTeractions) empirical forcefield is described. This protocol was tested by refining synthetic low-resolution structural data derived from 25 diverse high-resolution structures, and referencing the resulting models to these structures. The models were also evaluated with global structural quality metrics, e.g., Ramachandran score and MolProbity clashscore. Three additional structures, for which only low-resolution data are available, were also re-refined with this methodology. Results: The enhanced refinement protocol is most beneficial for reflection data at resolutions of 3.0 Å or worse. At the low-resolution limit, ≥4.0 Å, the new protocol generated models with Cα positions that have RMSDs 0.18 Å more similar to the reference high-resolution structure, Ramachandran scores improved by 13%, and clashscores improved by 51%, all in comparison to models generated with the standard refinement protocol. The hydropathic forcefield terms are at least as effective as Coulombic electrostatic terms in maintaining polar interaction networks, and significantly more effective in maintaining hydrophobic networks, as synthetic resolution is decremented. Even at resolutions ≥4.0 Å, these latter networks are generally native-like, as measured with a hydropathic interactions scoring tool. PMID:21246043
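
    The headline numbers here are Cα RMSDs to a reference structure; a standard way to compute such an RMSD after optimal superposition is the Kabsch algorithm, sketched generically below (not the paper's code).

```python
import numpy as np

def ca_rmsd(P, Q):
    """RMSD between two Nx3 C-alpha coordinate sets after optimal
    superposition (Kabsch algorithm): center, find the rotation that
    best maps P onto Q, then measure the residual deviation."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T         # optimal rotation
    return float(np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1))))

# demo: a rotated, shifted, noisy copy of 50 synthetic C-alpha positions
rng = np.random.default_rng(2)
Q = rng.normal(size=(50, 3))
t = 0.7
Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
               [np.sin(t),  np.cos(t), 0.0],
               [0.0, 0.0, 1.0]])
P = Q @ Rz.T + 0.1 * rng.normal(size=Q.shape) + 5.0
print(ca_rmsd(P, Q))   # ~0.17, i.e. the 0.1*sqrt(3) noise level
```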

  15. Preliminary Experiments for the Assessment of V/W-band Links for Space-Earth Communications

    NASA Technical Reports Server (NTRS)

    Nessel, James A.; Acosta, Roberto J.; Miranda, Felix A.

    2013-01-01

    Since September 2012, NASA Glenn Research Center has deployed a microwave profiling radiometer at White Sands, NM, to estimate atmospheric propagation effects on communications links in the V and W bands (71-86 GHz). Estimates of millimeter-wave attenuation statistics due to gaseous and cloud components of the atmosphere show good agreement with current ITU-R models, but fail to predict link performance in the presence of moderate to heavy rain rates, due to the inherent limitations of passive radiometry. Herein, we discuss the preliminary results of these measurements and describe a design for a terrestrial link experiment to validate and refine existing rain attenuation models in the V/W bands.

  16. Overview of refinement procedures within REFMAC5: utilizing data from different sources.

    PubMed

    Kovalevskiy, Oleg; Nicholls, Robert A; Long, Fei; Carlon, Azzurra; Murshudov, Garib N

    2018-03-01

    Refinement is a process that involves bringing into agreement the structural model, available prior knowledge and experimental data. To achieve this, the refinement procedure optimizes a posterior conditional probability distribution of model parameters, including atomic coordinates, atomic displacement parameters (B factors), scale factors, parameters of the solvent model and twin fractions in the case of twinned crystals, given observed data such as observed amplitudes or intensities of structure factors. A library of chemical restraints is typically used to ensure consistency between the model and the prior knowledge of stereochemistry. If the observation-to-parameter ratio is small, for example when diffraction data only extend to low resolution, the Bayesian framework implemented in REFMAC5 uses external restraints to inject additional information extracted from structures of homologous proteins, prior knowledge about secondary-structure formation and even data obtained using different experimental methods, for example NMR. The refinement procedure also generates the 'best' weighted electron-density maps, which are useful for further model (re)building. Here, the refinement of macromolecular structures using REFMAC5 and related tools distributed as part of the CCP4 suite is discussed.

  17. Gyre and gimble: a maximum-likelihood replacement for Patterson correlation refinement.

    PubMed

    McCoy, Airlie J; Oeffner, Robert D; Millán, Claudia; Sammito, Massimo; Usón, Isabel; Read, Randy J

    2018-04-01

    Descriptions are given of the maximum-likelihood gyre method implemented in Phaser for optimizing the orientation and relative position of rigid-body fragments of a model after the orientation of the model has been identified, but before the model has been positioned in the unit cell, and also the related gimble method for the refinement of rigid-body fragments of the model after positioning. Gyre refinement helps to lower the root-mean-square atomic displacements between model and target molecular-replacement solutions for the test case of antibody Fab(26-10) and improves structure solution with ARCIMBOLDO_SHREDDER.

  18. Carbohydrate structure: the rocky road to automation.

    PubMed

    Agirre, Jon; Davies, Gideon J; Wilson, Keith S; Cowtan, Kevin D

    2017-06-01

    With the introduction of intuitive graphical software, structural biologists who are not experts in crystallography are now able to build complete protein or nucleic acid models rapidly. In contrast, carbohydrates are in a wholly different situation: scant automation exists, with manual building attempts being sometimes toppled by incorrect dictionaries or refinement problems. Sugars are the most stereochemically complex family of biomolecules and, as pyranose rings, have clear conformational preferences. Despite this, all refinement programs may produce high-energy conformations at medium to low resolution, without any support from the electron density. This problem renders the affected structures unusable in glyco-chemical terms. Bringing structural glycobiology up to 'protein standards' will require a total overhaul of the methodology. Time is of the essence, as the community is steadily increasing the production rate of glycoproteins, and electron cryo-microscopy has just started to image them in precisely that resolution range where crystallographic methods falter most.

  19. Three-dimensional local grid refinement for block-centered finite-difference groundwater models using iteratively coupled shared nodes: A new method of interpolation and analysis of errors

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2004-01-01

    This paper describes work that extends to three dimensions the two-dimensional local-grid refinement method for block-centered finite-difference groundwater models of Mehl and Hill [Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes. Adv Water Resour 2002;25(5):497-511]. In this approach, the (parent) finite-difference grid is discretized more finely within a (child) sub-region. The grid refinement method sequentially solves each grid and uses specified flux (parent) and specified head (child) boundary conditions to couple the grids. Iteration achieves convergence between heads and fluxes of both grids. Of most concern is how to interpolate heads onto the boundary of the child grid such that the physics of the parent-grid flow is retained in three dimensions. We develop a new two-step, "cage-shell" interpolation method based on the solution of the flow equation on the boundary of the child between nodes shared with the parent grid. Error analysis using a test case indicates that the shared-node local grid refinement method with cage-shell boundary head interpolation is accurate and robust, and the resulting code is used to investigate three-dimensional local grid refinement of stream-aquifer interactions. Results reveal that (1) the parent and child grids interact to shift the true head and flux solution to a different solution where the heads and fluxes of both grids are in equilibrium, (2) the locally refined model provided a solution for both heads and fluxes in the region of the refinement that was more accurate than a model without refinement only if iterations were performed so that both heads and fluxes were in equilibrium, and (3) the accuracy of the coupling is limited by the parent-grid size: a coarse parent grid limits correct representation of the hydraulics in the feedback from the child grid.
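
    The iterate-until-equilibrium coupling described above can be sketched with stand-in solvers. In the toy below, solve_parent and solve_child are invented linear responses (not finite-difference grids); the loop only illustrates exchanging specified-flux and specified-head boundary conditions until heads and fluxes stop changing.

      import numpy as np

      def solve_parent(interface_flux):
          # Stand-in regional solve: child flux in, interface heads out.
          return np.array([9.0, 8.0]) - 0.1 * interface_flux

      def solve_child(interface_heads):
          # Stand-in local solve: parent heads in, interface flux out.
          return 0.5 * (interface_heads - np.array([8.5, 7.4]))

      flux = np.zeros(2)
      for iteration in range(100):
          heads = solve_parent(flux)            # specified flux -> heads
          new_flux = solve_child(heads)         # specified heads -> flux
          if np.max(np.abs(new_flux - flux)) < 1e-8:
              break                             # heads and fluxes in equilibrium
          flux = 0.5 * flux + 0.5 * new_flux    # damped update aids convergence
      print(iteration, heads, flux)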

  20. Atomic modeling of cryo-electron microscopy reconstructions--joint refinement of model and imaging parameters.

    PubMed

    Chapman, Michael S; Trzynka, Andrew; Chapman, Brynmor K

    2013-04-01

    When refining the fit of component atomic structures into electron microscopic reconstructions, use of a resolution-dependent atomic density function makes it possible to jointly optimize the atomic model and imaging parameters of the microscope. Atomic density is calculated by one-dimensional Fourier transform of atomic form factors convoluted with a microscope envelope correction and a low-pass filter, allowing refinement of imaging parameters such as resolution, by optimizing the agreement of calculated and experimental maps. A similar approach allows refinement of atomic displacement parameters, providing indications of molecular flexibility even at low resolution. A modest improvement in atomic coordinates is possible following optimization of these additional parameters. Methods have been implemented in a Python program that can be used in stand-alone mode for rigid-group refinement, or embedded in other optimizers for flexible refinement with stereochemical restraints. The approach is demonstrated with refinements of virus and chaperonin structures at resolutions of 9 through 4.5 Å, representing regimes where rigid-group and fully flexible parameterizations are appropriate. Through comparisons to known crystal structures, flexible fitting by RSRef is shown to be an improvement relative to other methods and to generate models with all-atom rms accuracies of 1.5-2.5 Å at resolutions of 4.5-6 Å.
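
    The density calculation described above, an atomic form factor damped by a microscope envelope and truncated by a low-pass filter, then transformed to real space, can be sketched numerically. All parameter values below are invented, and the spherical Fourier integral is evaluated by a simple Riemann sum.

      import numpy as np

      s = np.linspace(1e-4, 2.0, 4000)           # spatial frequency (1/Angstrom)
      ds = s[1] - s[0]
      form_factor = 6.0 * np.exp(-4.0 * s**2)    # toy carbon-like form factor
      envelope = np.exp(-(s / 0.8) ** 2)         # microscope envelope correction
      lowpass = (s <= 1.0 / 4.5)                 # cutoff at 4.5 Angstrom resolution
      f = form_factor * envelope * lowpass

      def density(r):
          # Spherical Fourier transform:
          # rho(r) = 4*pi * integral f(s) * sin(2*pi*s*r)/(2*pi*s*r) * s**2 ds
          kernel = np.sinc(2.0 * s * r)          # np.sinc(x) = sin(pi*x)/(pi*x)
          return 4.0 * np.pi * np.sum(f * kernel * s**2) * ds

      for r in [0.0, 1.0, 2.0, 3.0]:             # density profile vs radius
          print(r, round(density(r), 4))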

  1. Segmenting the thoracic, abdominal and pelvic musculature on CT scans combining atlas-based model and active contour model

    NASA Astrophysics Data System (ADS)

    Zhang, Weidong; Liu, Jiamin; Yao, Jianhua; Summers, Ronald M.

    2013-03-01

    Segmentation of the musculature is very important for accurate organ segmentation, analysis of body composition, and localization of tumors in the muscle. In the research fields of computer-assisted surgery and computer-aided diagnosis (CAD), muscle segmentation in CT images is a necessary pre-processing step. This task is particularly challenging due to the large variability in muscle structure and the overlap in intensity between muscle and internal organs. This problem has not been solved completely, especially across all of the thoracic, abdominal and pelvic regions. We propose an automated system to segment the musculature on CT scans. The method combines an atlas-based model, an active contour model, and prior segmentation of fat and bones. First, the body contour, fat, and bones are segmented using existing methods. Second, atlas-based models are pre-defined using anatomic knowledge at multiple key positions in the body to handle the large variability in muscle shape. Third, the atlas model is refined using active contour models (ACM) that are constrained by the pre-segmented bone and fat. Before refinement with the ACM, the initialized atlas model of the next slice is updated using the previous atlas. The muscle is segmented using thresholding and smoothed in 3D volume space. Thoracic, abdominal and pelvic CT scans were used to evaluate our method, and five key-position slices for each case were selected and manually labeled as the reference. Compared with the reference ground truth, the overlap ratio of true positives is 91.1% ± 3.5%, and that of false positives is 5.5% ± 4.2%.

  2. Defining the requisite knowledge for providers of in-service professional development for K--12 teachers of science: Refining the construct

    NASA Astrophysics Data System (ADS)

    Tucker, Deborah L.

    Purpose. The purpose of this grounded theory study was to refine, using a Delphi study process, the four categories of the theoretical model of the comprehensive knowledge base required by providers of professional development for K-12 teachers of science generated from a review of the literature. Methodology. This grounded theory study used data collected through a modified Delphi technique and interviews to refine and validate the literature-based knowledge base required by providers of professional development for K-12 teachers of science. Twenty-three participants, experts in the fields of science education, how people learn, instructional and assessment strategies, and learning contexts, responded to the study's questions. Findings. By "densifying" the four categories of the knowledge base, this study determined the causal conditions (the science subject matter knowledge), the intervening conditions (how people learn), the strategies (the effective instructional and assessment strategies), and the context (the context and culture of formal learning environments) surrounding the science professional development process. Eight sections were added to the literature-based knowledge base; the final model comprised forty-nine sections. The average length of the operational definitions increased nearly threefold, and the number of citations per operational definition increased more than twofold. Conclusions. A four-category comprehensive model that can serve as the foundation for the knowledge base required by science professional developers now exists. Subject matter knowledge includes science concepts, inquiry, the nature of science, and scientific habits of mind; how people learn includes the principles of learning, active learning, andragogy, variations in learners, neuroscience and cognitive science, and change theory; effective instructional and assessment strategies include constructivist learning and inquiry-based teaching, differentiation of instruction, making knowledge and thinking accessible to learners, automatic and fluent retrieval of nonscience-specific skills, science assessment and assessment strategies, science-specific instructional strategies, and safety within a learning environment; and contextual knowledge includes curriculum selection and implementation strategies and knowledge of building program coherence. Recommendations. Further research is recommended to determine which specific instructional strategies identified in the refined knowledge base have positive, significant effect sizes for adult learners.

  3. The PDB_REDO server for macromolecular structure model optimization.

    PubMed

    Joosten, Robbie P; Long, Fei; Murshudov, Garib N; Perrakis, Anastassis

    2014-07-01

    The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395-1412]. The PDB_REDO procedure aims for 'constructive validation', aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB.

  4. The PDB_REDO server for macromolecular structure model optimization

    PubMed Central

    Joosten, Robbie P.; Long, Fei; Murshudov, Garib N.; Perrakis, Anastassis

    2014-01-01

    The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395–1412]. The PDB_REDO procedure aims for ‘constructive validation’, aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB. PMID:25075342

  5. Production of a national 1:1,000,000-scale hydrography dataset for the United States: feature selection, simplification, and refinement

    USGS Publications Warehouse

    Gary, Robin H.; Wilson, Zachary D.; Archuleta, Christy-Ann M.; Thompson, Florence E.; Vrabel, Joseph

    2009-01-01

    During 2006-09, the U.S. Geological Survey, in cooperation with the National Atlas of the United States, produced a 1:1,000,000-scale (1:1M) hydrography dataset comprising streams and waterbodies for the entire United States, including Puerto Rico and the U.S. Virgin Islands, for inclusion in the recompiled National Atlas. This report documents the methods used to select, simplify, and refine features in the 1:100,000-scale (1:100K) (1:63,360-scale in Alaska) National Hydrography Dataset to create the national 1:1M hydrography dataset. Custom tools and semi-automated processes were created to facilitate generalization of the 1:100K National Hydrography Dataset (1:63,360-scale in Alaska) to 1:1M on the basis of existing small-scale hydrography datasets. The first step in creating the new 1:1M dataset was to address feature selection and optimal data density in the stream network. Several existing methods were evaluated. The production method that was established for selecting features for inclusion in the 1:1M dataset uses a combination of the existing attributes and network in the National Hydrography Dataset and several of the concepts from the methods evaluated. The process for creating the 1:1M waterbodies dataset required a similar approach to that used for the streams dataset. Geometric simplification of features was the next step. Stream reaches and waterbodies indicated in the feature selection process were exported as new feature classes and then simplified using a geographic information system tool. The final step was refinement of the 1:1M streams and waterbodies. Refinement was done through the use of additional geographic information system tools.
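
    As an illustration of the geometric-simplification step (assuming a Douglas-Peucker-style tool; the coordinates and tolerance below are invented), Shapely's simplify method reduces the vertex count of a stream reach:

      from shapely.geometry import LineString

      reach = LineString([(0, 0), (1, 0.1), (2, -0.1), (3, 0.5), (4, 0.4), (5, 0)])
      simplified = reach.simplify(tolerance=0.3, preserve_topology=True)
      print(len(reach.coords), "->", len(simplified.coords))   # fewer vertices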

  6. MODFLOW-LGR: Practical application to a large regional dataset

    NASA Astrophysics Data System (ADS)

    Barnes, D.; Coulibaly, K. M.

    2011-12-01

    In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable-sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable-sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.

  7. Incorporating learning goals about modeling into an upper-division physics laboratory experiment

    NASA Astrophysics Data System (ADS)

    Zwickl, Benjamin M.; Finkelstein, Noah; Lewandowski, H. J.

    2014-09-01

    Implementing a laboratory activity involves a complex interplay among learning goals, available resources, feedback about the existing course, best practices for teaching, and an overall philosophy about teaching labs. Building on our previous work, which described a process of transforming an entire lab course, we now turn our attention to how an individual lab activity on the polarization of light was redesigned to include a renewed emphasis on one broad learning goal: modeling. By using this common optics lab as a concrete case study of a broadly applicable approach, we highlight many aspects of the activity development and show how modeling is used to integrate sophisticated conceptual and quantitative reasoning into the experimental process through the various aspects of modeling: constructing models, making predictions, interpreting data, comparing measurements with predictions, and refining models. One significant outcome is a natural way to integrate an analysis and discussion of systematic error into a lab activity.

  8. A Decision Model for Supporting Task Allocation Processes in Global Software Development

    NASA Astrophysics Data System (ADS)

    Lamersdorf, Ansgar; Münch, Jürgen; Rombach, Dieter

    Today, software-intensive systems are increasingly being developed in a globally distributed way. However, besides its benefits, global development also bears a set of risks and problems. One critical factor for successful project management of distributed software development is the allocation of tasks to sites, as this is assumed to have a major influence on the benefits and risks. We introduce a model that aims at improving management processes in globally distributed projects by giving decision support for task allocation that systematically considers multiple criteria. The criteria and causal relationships were identified in a literature study and refined in a qualitative interview study. The model uses existing approaches from distributed systems and statistical modeling. The article gives an overview of the problem and related work, introduces the empirical and theoretical foundations of the model, and shows the use of the model in an example scenario.
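
    A minimal sketch of weighted multi-criteria decision support for task allocation follows. The criteria, weights, and site scores are invented for illustration; the actual model derives its criteria and causal relationships from the literature and interview studies.

      criteria_weights = {"cost": 0.4, "expertise": 0.3,
                          "time_zone_overlap": 0.2, "communication_risk": 0.1}

      site_scores = {   # normalized 0-1 scores per site and criterion
          "site_A": {"cost": 0.9, "expertise": 0.6,
                     "time_zone_overlap": 0.3, "communication_risk": 0.5},
          "site_B": {"cost": 0.5, "expertise": 0.9,
                     "time_zone_overlap": 0.8, "communication_risk": 0.7},
      }

      def weighted_score(scores):
          # Aggregate a site's criterion scores with the global weights.
          return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

      best = max(site_scores, key=lambda site: weighted_score(site_scores[site]))
      print(best, {s: round(weighted_score(v), 2) for s, v in site_scores.items()})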

  9. Extending the timescale for using beryllium 7 measurements to document soil redistribution by erosion

    NASA Astrophysics Data System (ADS)

    Walling, D. E.; Schuller, P.; Zhang, Y.; Iroumé, A.

    2009-02-01

    The need for spatially distributed information on soil mobilization, transfer, and deposition within the landscape by erosion has focused attention on the potential for using fallout radionuclides (i.e., 137Cs, excess 210Pb, and 7Be) to document soil redistribution rates. Whereas 137Cs and excess 210Pb are used to estimate medium- and longer-term erosion rates (i.e., approximately 45 years and 100 years, respectively), 7Be, by virtue of its short half-life (53 days), provides potential for estimating short-term soil redistribution on bare soils. However, the approach commonly used with this radionuclide means that it can only be applied to individual events or short periods of heavy rain. In addition, it is also frequently difficult to ensure that the requirement for spatially uniform 7Be inventories across the study area immediately prior to the study period is met. If the existing approach is applied to longer periods with several rainfall events (e.g., several weeks or more), soil redistribution is likely to be substantially underestimated. These problems limit the potential for using the 7Be approach, particularly in investigations where there is a need to assemble representative information on soil redistribution occurring during the entire wet season. This paper reports the development of a new or refined model for converting radionuclide measurements to estimates of soil redistribution (conversion model) for use with 7Be measurements, which permits much longer periods to be studied. This refined model aims to retain much of the simplicity of the existing approach, but takes account of the temporal distribution of both 7Be fallout and erosion during the study period and of the evolution of the 7Be depth distribution during this period. The approach was successfully tested using 7Be measurements from a study of short-term soil redistribution undertaken within an area of recently harvested forest located near Valdivia in Southern Chile. The study period extended over about 3 months and included the main part of the winter wet season of 2006. The estimates of soil redistribution obtained using the new conversion model were consistent with those obtained from erosion pins deployed within the same study area and were two to three times greater than those obtained using the approach and conversion model employed in existing studies.
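
    The inventory-accounting logic behind such a conversion model can be sketched under two common assumptions: an exponential 7Be depth profile with relaxation mass depth h0, and daily time stepping of fallout input and radioactive decay so that multi-event periods are represented. All numbers below are invented; the published refined model is more detailed.

      import numpy as np

      HALF_LIFE = 53.3                         # days
      DECAY = np.log(2.0) / HALF_LIFE          # decay constant (1/day)
      h0 = 4.0                                 # relaxation mass depth (kg/m2)

      def reference_inventory(daily_fallout):
          # Evolve the undisturbed-site 7Be inventory (Bq/m2) day by day,
          # decaying yesterday's inventory and adding today's fallout.
          inventory = 0.0
          for fallout in daily_fallout:
              inventory = inventory * np.exp(-DECAY) + fallout
          return inventory

      def erosion_from_deficit(a_ref, a_point):
          # Exponential profile A(x) = A_ref * exp(-x/h0), inverted for the
          # eroded mass depth: x = h0 * ln(A_ref / A_point).
          return h0 * np.log(a_ref / a_point)

      fallout_series = np.full(90, 5.0)        # ~3-month wet season (invented)
      a_ref = reference_inventory(fallout_series)
      print(round(erosion_from_deficit(a_ref, 0.7 * a_ref), 2), "kg/m2")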

  10. A Variable Resolution Atmospheric General Circulation Model for a Megasite at the North Slope of Alaska

    NASA Astrophysics Data System (ADS)

    Dennis, L.; Roesler, E. L.; Guba, O.; Hillman, B. R.; McChesney, M.

    2016-12-01

    The Atmospheric Radiation Measurement (ARM) climate research facility has three sites located on the North Slope of Alaska (NSA): Barrow, Oliktok, and Atqasuk. These sites, in combination with one other at Toolik Lake, have the potential to become a "megasite" which would combine observational data and high-resolution modeling to produce high-resolution data products for the climate community. Such a data product requires high-resolution modeling over the area of the megasite. We present three variable-resolution atmospheric general circulation model (AGCM) configurations as potential alternatives to stand-alone high-resolution regional models. Each configuration is based on a global cubed-sphere grid with an effective resolution of 1 degree, with a refinement in resolution down to 1/8 degree over an area surrounding the ARM megasite. The three grids vary in the size of the refined area, with 13k, 9k, and 7k elements. SquadGen, NCL, and GIMP are used to create the grids. The grids vary based upon the selection of areas of refinement which capture climate and weather processes that may affect a proposed NSA megasite. A smaller area of high resolution may not fully resolve climate and weather processes before they reach the NSA; however, grids with smaller areas of refinement have a significantly reduced computational cost compared with grids with larger areas of refinement. The optimal size and shape of the area of refinement for a variable-resolution model at the NSA is investigated.

  11. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-05-29

    Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter could vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting “good” values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has superior performance: faster convergence and an enhanced signal-to-noise ratio.
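
    A minimal sketch of the selection-and-averaging step: keep only grid points whose ensemble spread is small (the "good" values) and average them into one global posterior parameter. The fields and the 25% spread threshold below are invented.

      import numpy as np

      rng = np.random.default_rng(0)
      posterior_param = rng.normal(1.0, 0.2, size=(64, 128))    # per grid point
      ensemble_spread = rng.uniform(0.0, 1.0, size=(64, 128))   # per grid point

      # Keep the quarter of grid points with the smallest ensemble spread,
      # then average those "good" values into one global posterior parameter.
      good = ensemble_spread < np.quantile(ensemble_spread, 0.25)
      global_param = posterior_param[good].mean()
      print(round(global_param, 4))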

  12. Implementation of local grid refinement (LGR) for the Lake Michigan Basin regional groundwater-flow model

    USGS Publications Warehouse

    Hoard, C.J.

    2010-01-01

    The U.S. Geological Survey is evaluating water availability and use within the Great Lakes Basin. This is a pilot effort to develop new techniques and methods to aid in the assessment of water availability. As part of the pilot program, a regional groundwater-flow model for the Lake Michigan Basin was developed using SEAWAT-2000. The regional model was used as a framework for assessing local-scale water availability through grid-refinement techniques. Two grid-refinement techniques, telescopic mesh refinement and local grid refinement, were used to illustrate the capability of the regional model to evaluate local-scale problems. An intermediate model was developed in central Michigan spanning an area of 454 square miles (mi2) using telescopic mesh refinement. Within the intermediate model, a smaller local model covering an area of 21.7 mi2 was developed and simulated using local grid refinement. Recharge was distributed in space and time using daily output from a modified Thornthwaite-Mather soil-water-balance method. The soil-water-balance method derived recharge estimates from temperature and precipitation data output from an atmosphere-ocean coupled general-circulation model. The particular atmosphere-ocean coupled general-circulation model used simulated climate change caused by high global greenhouse-gas emissions to the atmosphere. The surface-water network simulated in the regional model was refined and simulated using a streamflow-routing package for MODFLOW. The refined models were used to demonstrate streamflow depletion and potential climate change using five scenarios. The streamflow-depletion scenarios include (1) natural conditions (no pumping), (2) a pumping well near a stream, with the well screened in surficial glacial deposits, (3) a pumping well near a stream, with the well screened in deeper glacial deposits, and (4) a pumping well near a stream, with the well open to a deep bedrock aquifer. Results indicated that 59 and 50 percent of the water pumped originated from the stream for the shallow glacial and deep bedrock pumping scenarios, respectively. The difference in streamflow reduction between the shallow and deep pumping scenarios was compensated for in the deep well by deriving more water from regional sources. The climate-change scenario simulated only natural conditions from 1991 to 2044, so no pumping stress was simulated. Streamflows were calculated for the simulated period and indicated that recharge generally increased from the start of the simulation until approximately 2017, and decreased from then to the end of the simulation. Streamflow was highly correlated with recharge, so the lowest streamflows occurred in the later stress periods of the model, when recharge was lowest.

  13. Functional Linear Model with Zero-value Coefficient Function at Sub-regions.

    PubMed

    Zhou, Jianhui; Wang, Nae-Yuh; Wang, Naisyin

    2013-01-01

    We propose a shrinkage method to estimate the coefficient function in a functional linear regression model when the value of the coefficient function is zero within certain sub-regions. Besides identifying the null region in which the coefficient function is zero, we also aim to perform estimation and inference for the nonparametrically estimated coefficient function without over-shrinking the values. Our proposal consists of two stages. In stage one, the Dantzig selector is employed to provide an initial location of the null region. In stage two, we propose a group SCAD approach to refine the estimated location of the null region and to provide the estimation and inference procedures for the coefficient function. Our considerations have certain advantages in this functional setup. One goal is to reduce the number of parameters employed in the model. With a one-stage procedure, a large number of knots would be needed to precisely identify the zero-coefficient region; however, the variation and estimation difficulties increase with the number of parameters. Owing to the additional refinement stage, we avoid this necessity and our estimator achieves superior numerical performance in practice. We show that our estimator enjoys the Oracle property; it identifies the null region with probability tending to 1, and it achieves the same asymptotic normality for the estimated coefficient function on the non-null region as the functional linear model estimator when the non-null region is known. Numerically, our refined estimator overcomes the shortcomings of the initial Dantzig estimator, which tends to underestimate the absolute scale of non-zero coefficients. The performance of the proposed method is illustrated in simulation studies. We apply the method in an analysis of data collected by the Johns Hopkins Precursors Study, where the primary interests are in estimating the strength of association between body mass index in midlife and the quality of life in physical functioning at old age, and in identifying the effective age ranges where such associations exist.

  14. A Burst-Based “Hebbian” Learning Rule at Retinogeniculate Synapses Links Retinal Waves to Activity-Dependent Refinement

    PubMed Central

    Butts, Daniel A; Kanold, Patrick O; Shatz, Carla J

    2007-01-01

    Patterned spontaneous activity in the developing retina is necessary to drive synaptic refinement in the lateral geniculate nucleus (LGN). Using perforated patch recordings from neurons in LGN slices during the period of eye segregation, we examine how such burst-based activity can instruct this refinement. Retinogeniculate synapses have a novel learning rule that depends on the latencies between pre- and postsynaptic bursts on the order of one second: coincident bursts produce long-lasting synaptic enhancement, whereas non-overlapping bursts produce mild synaptic weakening. It is consistent with “Hebbian” development thought to exist at this synapse, and we demonstrate computationally that such a rule can robustly use retinal waves to drive eye segregation and retinotopic refinement. Thus, by measuring plasticity induced by natural activity patterns, synaptic learning rules can be linked directly to their larger role in instructing the patterning of neural connectivity. PMID:17341130
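
    A hypothetical functional form for such a burst-based rule, potentiation for overlapping bursts that decays with burst latency on a ~1 s timescale, and mild depression otherwise, might look like the following (amplitudes and time constant invented):

      def weight_change(latency_s, tau=1.0, a_plus=0.5, a_minus=0.05):
          # Coincident bursts (|latency| < tau) -> long-lasting enhancement,
          # scaled by overlap; non-overlapping bursts -> mild weakening.
          if abs(latency_s) < tau:
              return a_plus * (1.0 - abs(latency_s) / tau)
          return -a_minus

      for dt in [0.0, 0.5, 1.5, 3.0]:          # burst latencies in seconds
          print(dt, round(weight_change(dt), 3))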

  15. REFINEMENT OF A MODEL TO PREDICT THE PERMEATION OF PROTECTIVE CLOTHING MATERIALS

    EPA Science Inventory

    A prototype of a predictive model for estimating chemical permeation through protective clothing materials was refined and tested. The model applies Fickian diffusion theory and predicts permeation rates and cumulative permeation as a function of time for five materials: butyl rub...
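
    Fickian membrane permeation of this kind is classically described by Crank's series solution for cumulative permeation per unit area, with steady-state slope D*C1/L and lag time L^2/(6D). The sketch below evaluates that textbook formula; the diffusivity, thickness, and challenge concentration are invented, not the model's fitted values.

      import numpy as np

      def cumulative_permeation(t, D, L, C1, n_terms=50):
          # Crank's solution for a membrane of thickness L with constant
          # challenge concentration C1 on one face and a sink on the other.
          n = np.arange(1, n_terms + 1)
          series = np.sum(((-1.0) ** n / n**2)[None, :]
                          * np.exp(-D * (n**2)[None, :] * np.pi**2
                                   * t[:, None] / L**2), axis=1)
          return D * C1 * t / L - C1 * L / 6.0 - (2.0 * C1 * L / np.pi**2) * series

      t = np.linspace(60.0, 3600.0, 5)                   # seconds
      print(cumulative_permeation(t, D=1e-9, L=5e-4, C1=800.0))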

  16. A potential global soils data base

    NASA Technical Reports Server (NTRS)

    Stoner, E. R.; Joyce, A. T.; Hogg, H. C.

    1984-01-01

    A general procedure is outlined for refining the existing world soil maps from their current 1:1 million scale to 1:250,000 through the interpretation of Landsat MSS and TM images, and the use of a Geographic Information System to relate the soil maps to available information on climate, topography, geology, and vegetation.

  17. Macromolecular refinement by model morphing using non-atomic parameterizations.

    PubMed

    Cowtan, Kevin; Agirre, Jon

    2018-02-01

    Refinement is a critical step in the determination of a model which explains the crystallographic observations and thus best accounts for the missing phase components. The scattering density is usually described in terms of atomic parameters; however, in macromolecular crystallography the resolution of the data is generally insufficient to determine the values of these parameters for individual atoms. Stereochemical and geometric restraints are used to provide additional information, but produce interrelationships between parameters which slow convergence, resulting in longer refinement times. An alternative approach is proposed in which parameters are not attached to atoms, but to regions of the electron-density map. These parameters can move the density or change the local temperature factor to better explain the structure factors. Varying the size of the region which determines the parameters at a particular position in the map allows the method to be applied at different resolutions without the use of restraints. Potential applications include initial refinement of molecular-replacement models with domain motions, and potentially the use of electron density from other sources such as electron cryo-microscopy (cryo-EM) as the refinement model.

  18. Coloured Petri Net Refinement Specification and Correctness Proof with Coq

    NASA Technical Reports Server (NTRS)

    Choppy, Christine; Mayero, Micaela; Petrucci, Laure

    2009-01-01

    In this work, we address the formalisation in COQ of the refinement of symmetric nets, a subclass of coloured Petri nets. We first provide a formalisation of the net models and of their type refinement in COQ. Then the COQ proof assistant is used to prove the refinement correctness lemma. An example adapted from a protocol illustrates our work.

  19. Correcting pervasive errors in RNA crystallography through enumerative structure prediction.

    PubMed

    Chou, Fang-Chieh; Sripakdeevong, Parin; Dibrov, Sergey M; Hermann, Thomas; Das, Rhiju

    2013-01-01

    Three-dimensional RNA models fitted into crystallographic density maps exhibit pervasive conformational ambiguities, geometric errors and steric clashes. To address these problems, we present enumerative real-space refinement assisted by electron density under Rosetta (ERRASER), coupled to Python-based hierarchical environment for integrated 'xtallography' (PHENIX) diffraction-based refinement. On 24 data sets, ERRASER automatically corrects the majority of MolProbity-assessed errors, improves the average R(free) factor, resolves functionally important discrepancies in noncanonical structure and refines low-resolution models to better match higher-resolution models.

  20. Hirshfeld atom refinement for modelling strong hydrogen bonds.

    PubMed

    Woińska, Magdalena; Jayatilaka, Dylan; Spackman, Mark A; Edwards, Alison J; Dominiak, Paulina M; Woźniak, Krzysztof; Nishibori, Eiji; Sugimoto, Kunihisa; Grabowsky, Simon

    2014-09-01

    High-resolution low-temperature synchrotron X-ray diffraction data of the salt L-phenylalaninium hydrogen maleate are used to test the new automated iterative Hirshfeld atom refinement (HAR) procedure for the modelling of strong hydrogen bonds. The HAR models used present the first examples of Z' > 1 treatments in the framework of wavefunction-based refinement methods. L-Phenylalaninium hydrogen maleate exhibits several hydrogen bonds in its crystal structure, of which the shortest and the most challenging to model is the O-H...O intramolecular hydrogen bond present in the hydrogen maleate anion (O...O distance is about 2.41 Å). In particular, the reconstruction of the electron density in the hydrogen maleate moiety and the determination of hydrogen-atom properties [positions, bond distances and anisotropic displacement parameters (ADPs)] are the focus of the study. For comparison to the HAR results, different spherical (independent atom model, IAM) and aspherical (free multipole model, MM; transferable aspherical atom model, TAAM) X-ray refinement techniques as well as results from a low-temperature neutron-diffraction experiment are employed. Hydrogen-atom ADPs are furthermore compared to those derived from a TLS/rigid-body (SHADE) treatment of the X-ray structures. The reference neutron-diffraction experiment reveals a truly symmetric hydrogen bond in the hydrogen maleate anion. Only with HAR is it possible to freely refine hydrogen-atom positions and ADPs from the X-ray data, which leads to the best electron-density model and the closest agreement with the structural parameters derived from the neutron-diffraction experiment, e.g. the symmetric hydrogen position can be reproduced. The multipole-based refinement techniques (MM and TAAM) yield slightly asymmetric positions, whereas the IAM yields a significantly asymmetric position.

  1. The solvent component of macromolecular crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weichenberger, Christian X.; Afonine, Pavel V.; Kantardjieff, Katherine

    2015-04-30

    On average, the mother liquor or solvent and its constituents occupy about 50% of a macromolecular crystal. Ordered as well as disordered solvent components need to be accurately accounted for in modelling and refinement, often with considerable complexity. The mother liquor from which a biomolecular crystal is grown will contain water, buffer molecules, native ligands and cofactors, crystallization precipitants and additives, various metal ions, and often small-molecule ligands or inhibitors. On average, about half the volume of a biomolecular crystal consists of this mother liquor, whose components form the disordered bulk solvent. Its scattering contributions can be exploited in initial phasing and must be included in crystal structure refinement as a bulk-solvent model. Concomitantly, distinct electron density originating from ordered solvent components must be correctly identified and represented as part of the atomic crystal structure model. Herein are reviewed (i) probabilistic bulk-solvent content estimates, (ii) the use of bulk-solvent density modification in phase improvement, (iii) bulk-solvent models and refinement of bulk-solvent contributions and (iv) modelling and validation of ordered solvent constituents. A brief summary is provided of current tools for bulk-solvent analysis and refinement, as well as of modelling, refinement and analysis of ordered solvent components, including small-molecule ligands.

  2. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
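
    The GLA idea, expanding the refined-to-crude scaling factor linearly about a point x0 and applying it to the crude model, can be sketched in one dimension. The two analytic functions below are invented stand-ins for crude and refined FEM responses.

      def crude(x):                  # coarse model response (invented)
          return 1.0 + 0.5 * x

      def refined(x):                # refined model response (invented)
          return 1.0 + 0.48 * x + 0.05 * x**2

      x0, eps = 2.0, 1e-6
      beta0 = refined(x0) / crude(x0)                               # scale factor at x0
      dbeta = (refined(x0 + eps) / crude(x0 + eps) - beta0) / eps   # its slope

      def gla(x):
          # Crude model corrected by a linearly varying scaling factor.
          return (beta0 + dbeta * (x - x0)) * crude(x)

      for x in [1.0, 2.0, 3.0]:
          print(x, round(refined(x), 4), round(gla(x), 4))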

  3. Nutrient and suspended solids removal from petrochemical wastewater via microalgal biofilm cultivation.

    PubMed

    Hodges, Alan; Fica, Zachary; Wanlass, Jordan; VanDarlin, Jessica; Sims, Ronald

    2017-05-01

    Wastewater derived from petroleum refining currently accounts for 33.6 million barrels per day globally. Few wastewater treatment strategies exist to produce value-added products from petroleum refining wastewater. In this study, mixed culture microalgal biofilm-based treatment of petroleum refining wastewater using rotating algae biofilm reactors (RABRs) was compared with suspended-growth open pond lagoon reactors for removal of nutrients and suspended solids. Triplicate reactors were operated for 12 weeks and were continuously fed with petroleum refining wastewater. Effluent wastewater was monitored for nitrogen, phosphorus, total suspended solids (TSS), and chemical oxygen demand (COD). RABR treatment demonstrated a statistically significant increase in removal of nutrients and suspended solids, and increase in biomass productivity, compared to the open pond lagoon treatment. These trends translate to a greater potential for the production of biomass-based fuels, feed, and fertilizer as value-added products. This study is the first demonstration of the cultivation of mixed culture biofilm microalgae on petroleum refining wastewater for the dual purposes of treatment and biomass production.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madland, D. G.; Kahler, A. C.

    This paper presents a number of refinements to the original Los Alamos model of the prompt fission neutron spectrum and average prompt neutron multiplicity as derived in 1982. The four refinements are motivated by new measurements of the spectrum and related fission observables, many of which were not available in 1982, and by a number of detailed studies and comparisons of the model with previous and present experimental results, including not only the differential spectrum but also integral cross sections measured in the field of the differential spectrum. The four refinements are (a) separate neutron contributions in binary fission, (b) departure from statistical equilibrium at scission, (c) fission-fragment nuclear level-density models, and (d) center-of-mass anisotropy. With these refinements, for the first time, good agreement has been obtained for both differential and integral measurements using the same Los Alamos model spectrum.

  5. Refinement of regression models to estimate real-time concentrations of contaminants in the Menomonee River drainage basin, southeast Wisconsin, 2008-11

    USGS Publications Warehouse

    Baldwin, Austin K.; Robertson, Dale M.; Saad, David A.; Magruder, Christopher

    2013-01-01

    In 2008, the U.S. Geological Survey and the Milwaukee Metropolitan Sewerage District initiated a study to develop regression models to estimate real-time concentrations and loads of chloride, suspended solids, phosphorus, and bacteria in streams near Milwaukee, Wisconsin. To collect monitoring data for calibration of models, water-quality sensors and automated samplers were installed at six sites in the Menomonee River drainage basin. The sensors continuously measured four potential explanatory variables: water temperature, specific conductance, dissolved oxygen, and turbidity. Discrete water-quality samples were collected and analyzed for five response variables: chloride, total suspended solids, total phosphorus, Escherichia coli bacteria, and fecal coliform bacteria. Using the first year of data, regression models were developed to continuously estimate the response variables on the basis of the continuously measured explanatory variables. Those models were published in a previous report. In this report, those models are refined using 2 years of additional data, and the relative improvement in model predictability is discussed. In addition, a set of regression models is presented for a new site in the Menomonee River Basin, Underwood Creek at Wauwatosa. The refined models use the same explanatory variables as the original models. The chloride models all used specific conductance as the explanatory variable, except for the model for the Little Menomonee River near Freistadt, which used both specific conductance and turbidity. Total suspended solids and total phosphorus models used turbidity as the only explanatory variable, and bacteria models used water temperature and turbidity as explanatory variables. An analysis of covariance (ANCOVA), used to compare the coefficients in the original models to those in the refined models calibrated using all of the data, showed that only 3 of the 25 original models changed significantly. Root-mean-squared errors (RMSEs) calculated for both the original and refined models using the entire dataset showed a median improvement in RMSE of 2.1 percent, with a range of 0.0-13.9 percent. Therefore, most of the original models did almost as well at estimating concentrations during the validation period (October 2009-September 2011) as the refined models, which were calibrated using those data. Application of these refined models can produce continuously estimated concentrations of chloride, total suspended solids, total phosphorus, E. coli bacteria, and fecal coliform bacteria that may help managers quantify the effects of land-use changes and improvement projects, establish total maximum daily loads, and make better-informed decisions in the future.
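
    A surrogate-regression model of this kind is often a log-space least-squares fit of a sampled constituent against a continuously measured explanatory variable. The sketch below (invented data) calibrates total suspended solids against turbidity; note that operational models of this type also apply retransformation bias corrections, which are omitted here.

      import numpy as np

      turbidity = np.array([3.0, 8.0, 15.0, 40.0, 120.0])   # NTU, sampled
      tss = np.array([4.5, 11.0, 20.0, 55.0, 160.0])        # mg/L, lab values

      # Fit log10(TSS) = intercept + slope * log10(turbidity) by least squares.
      slope, intercept = np.polyfit(np.log10(turbidity), np.log10(tss), deg=1)

      def estimate_tss(turb_continuous):
          # Apply the calibrated surrogate model to continuous sensor readings.
          return 10.0 ** (intercept + slope * np.log10(turb_continuous))

      print(estimate_tss(np.array([10.0, 50.0, 100.0])))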

  6. Tying it all together--The PASS to Success: a comprehensive look at promoting job retention for workers with psychiatric disabilities in a supported employment program.

    PubMed

    Dorio, JoAnn

    2004-01-01

    Job initiation rates are steadily improving for people with severe and persistent mental illnesses. Yet, job retention rates, especially for those individuals who historically have had difficulty maintaining employment, continue to concern vocational rehabilitation professionals. In this paper, the author develops and refines her ideas that were presented in a previous research paper titled "Differences in Job Retention in a Supported Employment Program, Chinook Clubhouse." A more complete model, "The PASS to Success," is suggested by incorporating existing research with the author's revised work. Components of the model (Placement, Attitude, Support, Skills), can be used to predict vocational success and promote job retention.

  7. High Resolution Visualization Applied to Future Heavy Airlift Concept Development and Evaluation

    NASA Technical Reports Server (NTRS)

    FordCook, A. B.; King, T.

    2012-01-01

    This paper explores the use of high resolution 3D visualization tools for exploring the feasibility and advantages of future military cargo airlift concepts and evaluating compatibility with existing and future payload requirements. Realistic 3D graphic representations of future airlifters are immersed in rich, supporting environments to demonstrate concepts of operations to key personnel for evaluation, feedback, and development of critical joint support. Accurate concept visualizations are reviewed by commanders, platform developers, loadmasters, soldiers, scientists, engineers, and key principal decision makers at various stages of development. The insight gained through the review of these physically and operationally realistic visualizations is essential to refining design concepts to meet competing requirements in a fiscally conservative defense finance environment. In addition, highly accurate 3D geometric models of existing and evolving large military vehicles are loaded into existing and proposed aircraft cargo bays. In this virtual aircraft test-loading environment, materiel developers, engineers, managers, and soldiers can realistically evaluate the compatibility of current and next-generation airlifters with proposed cargo.

  8. Use of Bayesian Inference in Crystallographic Structure Refinement via Full Diffraction Profile Analysis

    PubMed Central

    Fancher, Chris M.; Han, Zhen; Levin, Igor; Page, Katharine; Reich, Brian J.; Smith, Ralph C.; Wilson, Alyson G.; Jones, Jacob L.

    2016-01-01

    A Bayesian inference method for refining crystallographic structures is presented. The distribution of model parameters is stochastically sampled using Markov chain Monte Carlo. Posterior probability distributions are constructed for all model parameters to properly quantify uncertainty by appropriately modeling the heteroskedasticity and correlation of the error structure. The proposed method is demonstrated by analyzing a National Institute of Standards and Technology silicon standard reference material. The results obtained by Bayesian inference are compared with those determined by Rietveld refinement. Posterior probability distributions of model parameters provide both estimates and uncertainties. The new method better estimates the true uncertainties in the model as compared to the Rietveld method. PMID:27550221
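
    The sampling step can be illustrated with a bare-bones Metropolis algorithm refining a single profile parameter (a peak position) against synthetic, diffraction-like data. A full implementation samples all structural and profile parameters and models the heteroskedastic, correlated error structure; everything below is a toy.

      import numpy as np

      rng = np.random.default_rng(1)
      two_theta = np.linspace(27.0, 30.0, 200)
      true_pos, noise = 28.44, 0.02
      data = np.exp(-0.5 * ((two_theta - true_pos) / 0.05) ** 2)
      data = data + rng.normal(0.0, noise, two_theta.size)   # synthetic profile

      def log_post(pos):
          # Gaussian-peak model, flat prior: log posterior up to a constant.
          model = np.exp(-0.5 * ((two_theta - pos) / 0.05) ** 2)
          return -0.5 * np.sum((data - model) ** 2) / noise**2

      samples, pos = [], 28.5
      lp = log_post(pos)
      for _ in range(5000):
          prop = pos + rng.normal(0.0, 0.005)                # random-walk proposal
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:           # Metropolis rule
              pos, lp = prop, lp_prop
          samples.append(pos)

      samples = np.array(samples[1000:])                     # discard burn-in
      print(samples.mean(), samples.std())                   # estimate + uncertainty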

  9. An Adaptive Mesh Algorithm: Mesh Structure and Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scannapieco, Anthony J.

    2016-06-21

    The purpose of Adaptive Mesh Refinement is to minimize spatial errors over the computational space, not to minimize the number of computational elements. An additional result of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years, three types of AMR schemes have evolved: block, patch and locally refined AMR. In block and patch AMR, logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time. The type of AMR described herein is a locally refined AMR. In the algorithm described, at any point in physical space only one zone exists at whatever level of mesh is appropriate for that physical location. The dynamic creation of a locally refined computational mesh is made practical by a judicious selection of mesh rules. With these rules the mesh is evolved via a mesh potential designed to concentrate the finest mesh in regions where the physics is modally dense, and to coarsen zones in regions where the physics is modally sparse.
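
    The selection step of locally refined AMR, split cells wherever an error indicator says the solution is modally dense, can be sketched in one dimension. The interval cells and the indicator below are invented stand-ins for the mesh-potential machinery described above.

      def indicator(xl, xr):
          # Toy error indicator: proportional to cell width near a steep
          # feature at x = 0.3, zero elsewhere.
          mid = 0.5 * (xl + xr)
          return (xr - xl) if abs(mid - 0.3) < 0.1 else 0.0

      def refine(cells, threshold, max_cycles=5):
          # Split every cell whose indicator exceeds the threshold; repeat.
          for _ in range(max_cycles):
              new_cells, changed = [], False
              for xl, xr in cells:
                  if indicator(xl, xr) > threshold:
                      mid = 0.5 * (xl + xr)
                      new_cells += [(xl, mid), (mid, xr)]
                      changed = True
                  else:
                      new_cells.append((xl, xr))
              cells = new_cells
              if not changed:
                  break
          return cells

      cells = [(i / 8.0, (i + 1) / 8.0) for i in range(8)]
      print(len(refine(cells, threshold=0.02)))     # zones concentrate near 0.3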

  10. Far-infrared laser magnetic resonance of vibrationally excited CD2

    NASA Technical Reports Server (NTRS)

    Evenson, K. M.; Sears, T. J.; Mckellar, A. R. W.

    1984-01-01

    The detection of 13 rotational transitions in the first excited bending state (010) of CD2 using the technique of far-infrared laser magnetic resonance spectroscopy is reported. Molecular parameters for this state are determined from these new data together with existing infrared observations of the ν2 band. Additional information on the ground vibrational state (000) is also provided by the observation of a new rotational transition, and this is combined with existing data to provide a refined set of molecular parameters for the CD2 ground state. One spectrum has been observed that is assigned as a rotational transition within the first excited symmetric stretching state (100) of CD2. These data will be of use in refining the structure and the potential function of the methylene radical.

  11. A Conceptual Model of Career Development to Enhance Academic Motivation

    ERIC Educational Resources Information Center

    Collins, Nancy Creighton

    2010-01-01

    The purpose of this study was to develop, refine, and validate a conceptual model of career development to enhance the academic motivation of community college students. To achieve this end, a straw model was built from the theoretical and empirical research literature. The model was then refined and validated through three rounds of a Delphi…

  12. Parallel three-dimensional magnetotelluric inversion using adaptive finite-element method. Part I: theory and synthetic study

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.

    2015-07-01

    This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for the forward and inverse problems were decoupled. For calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gain, EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems, based on the linearized model resolution matrix, was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to fill the gap between initial and true model complexities and to better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of points with the largest variations. A series of numerical tests was performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable to resolve features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit a dependency on the initial model guess. Additionally, it is demonstrated that the adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemisphere object with sufficient resolution, starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.
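
    The adaptive inverse-mesh step, compute the spatial variation of the imaged parameter and refine around the largest variations, can be sketched in one dimension (the paper works with 3-D finite-element meshes; the cells and conductivity values below are invented):

      import numpy as np

      edges = np.linspace(0.0, 10.0, 11)          # 1-D cell edges (invented)
      log_sigma = np.array([0.0, 0.0, 0.0, 1.5, 1.6, 0.2, 0.0, 0.0, 0.0, 0.0])

      variation = np.abs(np.diff(log_sigma))      # jumps across cell faces
      worst_faces = np.argsort(variation)[-2:]    # faces with largest variation

      new_edges = set(edges)
      for face in worst_faces:                    # split both neighbouring cells
          for cell in (face, face + 1):
              new_edges.add(0.5 * (edges[cell] + edges[cell + 1]))
      print(sorted(new_edges))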

  13. 3D magnetospheric parallel hybrid multi-grid method applied to planet–plasma interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leclercq, L., E-mail: ludivine.leclercq@latmos.ipsl.fr; Modolo, R., E-mail: ronan.modolo@latmos.ipsl.fr; Leblanc, F.

    2016-03-15

    We present a new method to exploit multiple refinement levels within a 3D parallel hybrid model, developed to study planet–plasma interactions. This model is based on the hybrid formalism: ions are treated kinetically whereas electrons are considered as an inertia-less fluid. Generally, ions are represented by numerical particles whose size equals the volume of the cells. Particles that leave a coarse grid and subsequently enter a refined region are split into particles whose volume corresponds to the volume of the refined cells. The number of refined particles created from a coarse particle depends on the grid refinement rate. In order to conserve velocity distribution functions and to avoid calculations of average velocities, particles are not coalesced. Moreover, to ensure the constancy of particles' shape-function sizes, the hybrid method is adapted to allow refined particles to move within a coarse region. Another innovation of this approach is the method developed to compute grid moments at interfaces between two refinement levels. Indeed, the hybrid method is adapted to accurately account for the special grid structure at the interfaces, avoiding any overlapping-grid considerations. Some fundamental test runs were performed to validate our approach (e.g. quiet plasma flow, Alfvén wave propagation). Lastly, we also show a planetary application of the model, simulating the interaction between Jupiter's moon Ganymede and the Jovian plasma.
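
    As a rough illustration of the particle-splitting step, the sketch below splits one coarse macro-particle into r**3 children whose statistical weights sum to the parent's while velocities are copied unchanged, so the velocity distribution function is conserved without coalescing. The function name and the uniform placement of children inside the parent cell are assumptions for illustration only.

```python
import numpy as np

def split_particle(pos, vel, weight, r, dx):
    """Split one coarse particle (cell size dx) into r**3 refined particles
    (cell size dx / r). Velocities are copied, never averaged or coalesced."""
    children = []
    w_child = weight / r**3                      # weights sum to the parent's
    for i in range(r):
        for j in range(r):
            for k in range(r):
                offset = (np.array([i, j, k]) + 0.5) / r - 0.5
                children.append((pos + offset * dx, vel.copy(), w_child))
    return children

kids = split_particle(np.zeros(3), np.array([400e3, 0.0, 0.0]), 1.0, r=2, dx=100e3)
assert abs(sum(w for _, _, w in kids) - 1.0) < 1e-12   # weight is conserved
```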

  14. NASA Integrated Network Monitor and Control Software Architecture

    NASA Technical Reports Server (NTRS)

    Shames, Peter; Anderson, Michael; Kowal, Steve; Levesque, Michael; Sindiy, Oleg; Donahue, Kenneth; Barnes, Patrick

    2012-01-01

    The National Aeronautics and Space Administration (NASA) Space Communications and Navigation office (SCaN) has commissioned a series of trade studies to define a new architecture intended to integrate the three existing networks that it operates, the Deep Space Network (DSN), Space Network (SN), and Near Earth Network (NEN), into one integrated network that offers users a set of common, standardized services and interfaces. The integrated monitor and control architecture utilizes common software and common operator interfaces that can be deployed at all three network elements. This software uses state-of-the-art concepts such as a pool of re-programmable equipment that acts like a configurable software radio, distributed hierarchical control, and centralized management of the whole SCaN integrated network. For this trade-space study, a model-based approach using SysML was adopted to describe and analyze several possible options for the integrated network monitor and control architecture. This model was used to refine the design and to drive the costing of the four different software options. The trade study modeled the three existing self-standing network elements at the point of departure, and then described how to integrate them using variations of new and existing monitor and control system components for the different proposed deployments under consideration. This paper describes the trade space explored, the selected system architecture, the modeling and trade study methods, and some observations on useful approaches to implementing such model-based trade-space representation and analysis.

  15. Template-based modeling and ab initio refinement of protein oligomer structures using GALAXY in CAPRI round 30.

    PubMed

    Lee, Hasup; Baek, Minkyung; Lee, Gyu Rie; Park, Sangwoo; Seok, Chaok

    2017-03-01

    Many proteins function as homo- or hetero-oligomers; therefore, attempts to understand and regulate protein functions require knowledge of protein oligomer structures. The number of available experimental protein structures is increasing, and oligomer structures can be predicted using the experimental structures of related proteins as templates. However, template-based models may have errors due to sequence differences between the target and template proteins, which can lead to functional differences. Such structural differences may be predicted by loop modeling of local regions or refinement of the overall structure. In CAPRI (Critical Assessment of PRotein Interactions) round 30, we used recently developed features of the GALAXY protein modeling package, including template-based structure prediction, loop modeling, model refinement, and protein-protein docking to predict protein complex structures from amino acid sequences. Out of the 25 CAPRI targets, medium and acceptable quality models were obtained for 14 and 1 target(s), respectively, for which proper oligomer or monomer templates could be detected. Symmetric interface loop modeling on oligomer model structures successfully improved model quality, while loop modeling on monomer model structures failed. Overall refinement of the predicted oligomer structures consistently improved the model quality, in particular in interface contacts. Proteins 2017; 85:399-407. © 2016 Wiley Periodicals, Inc.

  16. Continuity-based model interfacing for plant-wide simulation: a general approach.

    PubMed

    Volcke, Eveline I P; van Loosdrecht, Mark C M; Vanrolleghem, Peter A

    2006-08-01

    In plant-wide simulation studies of wastewater treatment facilities, existing models of different origins often need to be coupled. However, as these submodels are likely to contain different state variables, their coupling is not straightforward. The continuity-based interfacing method (CBIM) provides a general framework to construct model interfaces for models of wastewater systems, taking into account conservation principles. In this contribution, the CBIM approach is applied to study the effect of sludge digestion reject water treatment with a SHARON-Anammox process on a plant-wide scale. Separate models were available for the SHARON process and for the Anammox process. The Benchmark simulation model no. 2 (BSM2) is used to simulate the behaviour of the complete WWTP including sludge digestion. The CBIM approach is followed to develop three different model interfaces. At the same time, the generally applicable CBIM approach was further refined, and particular issues that arise when coupling models in which pH is considered as a state variable are pointed out.
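
    A minimal sketch of the continuity idea behind such interfaces, assuming a toy two-state example: the source model's state variables are mapped onto the sink model's states so that elemental totals (here only C and N) are conserved across the interface. The compositions and concentrations below are invented placeholders, not values from the SHARON, Anammox or BSM2 models.

```python
import numpy as np

# Rows: state variables; columns: elemental content (gC/g, gN/g).
src_comp = np.array([[0.5, 0.10],    # source state A
                     [0.3, 0.00]])   # source state B
snk_comp = np.array([[0.4, 0.05],    # sink state X
                     [0.2, 0.10]])   # sink state Y

src_conc = np.array([2.0, 1.0])          # g/m3 in the source model
elements = src_conc @ src_comp           # total C and N crossing the interface
# Solve for sink-state concentrations that carry the same elemental totals
# (square system here; real interfaces need extra closure rules).
snk_conc = np.linalg.solve(snk_comp.T, elements)
print(snk_conc, snk_conc @ snk_comp)     # elemental totals match: [1.3, 0.2]
```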

  17. Disruptive technologies and transportation : final report.

    DOT National Transportation Integrated Search

    2016-06-01

    Disruptive technologies refer to innovations that, at first, may be considered unproven, lacking refinement, relatively unknown, or even impractical, but ultimately they supplant existing technologies and/or applications. In general, disruptive techn...

  18. Micromechanical predictions of crack propagation and fracture energy in a single fiber boron/aluminum model composite

    NASA Technical Reports Server (NTRS)

    Adams, D. F.; Mahishi, J. M.

    1982-01-01

    The axisymmetric finite element model and associated computer program developed for the analysis of crack propagation in a composite consisting of a single broken fiber in an annular sheath of matrix material was extended to include a constant displacement boundary condition during an increment of crack propagation. The constant displacement condition permits the growth of a stable crack, as opposed to the catastrophic failure in an earlier version. The finite element model was refined to respond more accurately to the high stresses and steep stress gradients near the broken fiber end. The accuracy and effectiveness of the conventional constant strain axisymmetric element for crack problems was established by solving the classical problem of a penny-shaped crack in a thick cylindrical rod under axial tension. The stress intensity factors predicted by the present finite element model are compared with existing continuum results.

  19. Overcoming the sign problem at finite temperature: Quantum tensor network for the orbital eg model on an infinite square lattice

    NASA Astrophysics Data System (ADS)

    Czarnik, Piotr; Dziarmaga, Jacek; Oleś, Andrzej M.

    2017-07-01

    The variational tensor network renormalization approach to two-dimensional (2D) quantum systems at finite temperature is applied to a model suffering from the notorious quantum Monte Carlo sign problem: the orbital eg model with spatially highly anisotropic orbital interactions. Coarse graining of the tensor network along the inverse temperature β yields a numerically tractable 2D tensor network representing the Gibbs state. Its bond dimension D, which limits the amount of entanglement, is a natural refinement parameter. Increasing D, we obtain a converged order parameter and its linear susceptibility close to the critical point. They confirm the existence of a finite order parameter below the critical temperature Tc, provide a numerically exact estimate of Tc, and give the critical exponents within 1% of the 2D Ising universality class.

  20. Preliminary Development of an Informational Media Use Measure for Patients with Implanted Defibrillators: Toward a Model of Social-Ecological Assessment of Patient Education and Support.

    PubMed

    Knoepke, Christopher E; Matlock, Daniel D

    2017-11-01

    Social work interventions in health care, particularly those that involve working with people being treated for chronic and life-threatening conditions, frequently involve efforts to educate patients about their disease, treatment options, safety planning, medical adherence, and other associated issues. Despite an intuitive notion that patients access information about all of these issues through a variety of media (both inside and outside the clinical encounter, created by professionals and by others), there currently exists no validated means of assessing patients' use of these forms of information. To address this gap, the authors first created candidate item measures with input from both physicians and a small group of diverse patients who currently have an implantable cardioverter defibrillator (ICD), a sophisticated cardiac device for which a trajectory model of social work intervention was recently outlined. The authors then surveyed a group of 205 individuals who have these devices, assessing their use of various media to learn about ICDs. They then conducted factor and item analysis to refine and remove poorly performing items while delineating forms of media use by type. The resultant preliminary measure of informational media use can be further refined and adapted for use with any clinical population. © 2017 National Association of Social Workers.

  1. Automated Assume-Guarantee Reasoning by Abstraction Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Giannakopoulou, Dimitra

    2008-01-01

    Current automated approaches for compositional model checking in the assume-guarantee style are based on learning assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counterexamples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves performance similar to or better than a previous learning-based implementation.
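
    The control flow of such an abstraction-refinement loop might look like the sketch below; model_check, simulate and refine are placeholders for a real model checker, a trace-feasibility check on the concrete component, and an abstraction-splitting step, so only the iteration structure is illustrated, not the paper's implementation.

```python
def assume_guarantee_by_refinement(m1, m2, prop, abstract, model_check,
                                   simulate, refine):
    """Compute an assumption A such that M1 || A satisfies prop, refining a
    conservative abstraction of M2 from spurious counterexamples."""
    assumption = abstract(m2)                    # conservative abstraction of M2
    while True:
        cex = model_check(m1, assumption, prop)  # does M1 || A satisfy prop?
        if cex is None:
            return assumption                    # prop holds of M1 || M2
        if simulate(m2, cex):                    # can the concrete M2 do cex?
            raise ValueError("real counterexample found: " + str(cex))
        assumption = refine(assumption, cex)     # spurious: split abstract states
```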

  2. Symmetry breaking in tensor models

    NASA Astrophysics Data System (ADS)

    Benedetti, Dario; Gurau, Razvan

    2015-11-01

    In this paper we analyze a quartic tensor model with one interaction for a tensor of arbitrary rank. This model has a critical point where a continuous limit of infinitely refined random geometries is reached. We show that the critical point corresponds to a phase transition in the tensor model associated to a breaking of the unitary symmetry. We analyze the model in the two phases and prove that, in a double scaling limit, the symmetric phase corresponds to a theory of infinitely refined random surfaces, while the broken phase corresponds to a theory of infinitely refined random nodal surfaces. At leading order in the double scaling limit planar surfaces dominate in the symmetric phase, and planar nodal surfaces dominate in the broken phase.

  3. A method for evaluating the importance of system state observations to model predictions, with application to the Death Valley regional groundwater flow system

    USGS Publications Warehouse

    Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.; O'Brien, Grady M.

    2004-01-01

    We develop a new observation‐prediction (OPR) statistic for evaluating the importance of system state observations to model predictions. The OPR statistic measures the change in prediction uncertainty produced when an observation is added to or removed from an existing monitoring network, and it can be used to guide refinement and enhancement of the network. Prediction uncertainty is approximated using a first‐order second‐moment method. We apply the OPR statistic to a model of the Death Valley regional groundwater flow system (DVRFS) to evaluate the importance of existing and potential hydraulic head observations to predicted advective transport paths in the saturated zone underlying Yucca Mountain and underground testing areas on the Nevada Test Site. Important existing observations tend to be far from the predicted paths, and many unimportant observations are in areas of high observation density. These results can be used to select locations at which increased observation accuracy would be beneficial and locations that could be removed from the network. Important potential observations are mostly in areas of high hydraulic gradient far from the paths. Results for both existing and potential observations are related to the flow system dynamics and coarse parameter zonation in the DVRFS model. If system properties in different locations are as similar as the zonation assumes, then the OPR results illustrate a data collection opportunity whereby observations in distant, high‐gradient areas can provide information about properties in flatter‐gradient areas near the paths. If this similarity is suspect, then the analysis produces a different type of data collection opportunity involving testing of model assumptions critical to the OPR results.
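
    A hedged sketch of the first-order second-moment calculation behind such a statistic: the percent change in prediction standard deviation when one observation is dropped from a weighted, linearized regression. The sensitivity matrices and weights below are random stand-ins for model-generated sensitivities, and the paper's exact OPR definition may differ in detail.

```python
import numpy as np

def prediction_std(X, w, z):
    """First-order second-moment: sd(pred) = sqrt(z^T (X^T W X)^-1 z)."""
    cov = np.linalg.inv(X.T @ np.diag(w) @ X)   # linearized parameter covariance
    return float(np.sqrt(z @ cov @ z))

def opr_remove(X, w, z, i):
    """Percent change in prediction sd when observation i is removed."""
    base = prediction_std(X, w, z)
    keep = [j for j in range(len(w)) if j != i]
    return 100.0 * (prediction_std(X[keep], w[keep], z) - base) / base

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))      # sensitivities of 20 heads to 3 parameters
w = np.ones(20)                   # observation weights
z = np.array([1.0, 0.5, -0.2])    # sensitivity of the predicted transport metric
print([round(opr_remove(X, w, z, i), 2) for i in range(5)])
```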

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herrnstein, Aaron R.

    An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No dramatic or persistent signs of error growth in the passive tracer outgassing or the ocean circulation are observed to result from AMR.
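
    One ingredient of coupling grids with unequal time steps can be sketched as subcycling: the refined patch advances r small steps per coarse step, with its interface boundary value interpolated in time from the coarse solution. The 1-D upwind advection stand-in and the ratio r = 4 below are illustrative assumptions, not the scheme used in this work.

```python
import numpy as np

def coupled_step(u_c, u_f, c, dx_c, dt, r):
    """Advance coarse and fine 1-D upwind-advection patches by one coarse dt.
    The fine patch (spacing dx_c / r) subcycles r steps of size dt / r, with
    its left boundary interpolated in time from the adjacent coarse cell."""
    u_c_old = u_c.copy()
    u_c[1:] -= c * dt / dx_c * (u_c[1:] - u_c[:-1])           # coarse step
    for k in range(r):
        theta = (k + 1) / r                                   # time fraction
        ghost = (1 - theta) * u_c_old[-1] + theta * u_c[-1]   # interface BC
        ext = np.concatenate([[ghost], u_f])
        u_f = u_f - c * (dt / r) / (dx_c / r) * (ext[1:] - ext[:-1])
    return u_c, u_f

u_coarse, u_fine = np.ones(50), np.ones(200)
u_coarse[:10] = 2.0                     # a step feature advected to the right
for _ in range(40):
    u_coarse, u_fine = coupled_step(u_coarse, u_fine, c=1.0,
                                    dx_c=0.02, dt=0.008, r=4)
```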

  5. Predictive Software Cost Model Study. Volume I. Final Technical Report.

    DTIC Science & Technology

    1980-06-01

    development phase to identify computer resources necessary to support computer programs after transfer of program management responsibility and system... classical model development with refinements specifically applicable to avionics systems. The refinements are the result of the Phase I literature search

  6. AN OPTIMAL ADAPTIVE LOCAL GRID REFINEMENT APPROACH TO MODELING CONTAMINANT TRANSPORT

    EPA Science Inventory

    A Lagrangian-Eulerian method with an optimal adaptive local grid refinement is used to model contaminant transport equations. Application of this approach to two benchmark problems indicates that it completely resolves difficulties of peak clipping, numerical diffusion, and spuri...

  7. 10 CFR 626.4 - General acquisition strategy.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... current level of private inventories; (3) Days of net import protection; (4) Current price levels for...) Existing or potential disruptions in supply or refining capability; (7) The level of market volatility; (8...

  8. 10 CFR 626.4 - General acquisition strategy.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... current level of private inventories; (3) Days of net import protection; (4) Current price levels for...) Existing or potential disruptions in supply or refining capability; (7) The level of market volatility; (8...

  9. 10 CFR 626.4 - General acquisition strategy.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... current level of private inventories; (3) Days of net import protection; (4) Current price levels for...) Existing or potential disruptions in supply or refining capability; (7) The level of market volatility; (8...

  10. 10 CFR 626.4 - General acquisition strategy.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... current level of private inventories; (3) Days of net import protection; (4) Current price levels for...) Existing or potential disruptions in supply or refining capability; (7) The level of market volatility; (8...

  11. Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B

    NASA Technical Reports Server (NTRS)

    Yeganefard, Sanaz; Butler, Michael; Rezazadeh, Abdolbaghi

    2010-01-01

    Recently, a set of guidelines, or cookbook, has been developed for the modelling and refinement of control problems in Event-B. The Event-B formal method is used for system-level modelling by defining states of a system and events which act on these states. It also supports refinement of models. This cookbook is intended to systematize the process of modelling and refining a control system by distinguishing environment, controller and command phenomena. Our main objective in this paper is to investigate and evaluate the usefulness and effectiveness of this cookbook by following it throughout the formal modelling of a cruise control system found in cars. The outcomes identify the benefits of the cookbook and give guidance to its future users.

  12. Application of the Refined Zigzag Theory to the Modeling of Delaminations in Laminated Composites

    NASA Technical Reports Server (NTRS)

    Groh, Rainer M. J.; Weaver, Paul M.; Tessler, Alexander

    2015-01-01

    The Refined Zigzag Theory is applied to the modeling of delaminations in laminated composites. The commonly used cohesive zone approach is adapted for use within a continuum mechanics model, and then used to predict the onset and propagation of delamination in five cross-ply composite beams. The resin-rich area between individual composite plies is modeled explicitly using thin, discrete layers with isotropic material properties. A damage model is applied to these resin-rich layers to enable tracking of delamination propagation. The displacement jump across the damaged interfacial resin layer is captured using the zigzag function of the Refined Zigzag Theory. The overall model predicts the initiation of delamination to within 8% of experimental results, and the load drop after propagation is represented accurately.

  13. The Collaborative Seismic Earth Model: Generation 1

    NASA Astrophysics Data System (ADS)

    Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner

    2018-05-01

    We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.

  14. Accelerated discovery of metallic glasses through iteration of machine learning and high-throughput experiments

    PubMed Central

    Wolverton, Christopher; Hattrick-Simpers, Jason; Mehta, Apurva

    2018-01-01

    With more than a hundred elements in the periodic table, a large number of potential new materials exist to address the technological and societal challenges we face today; however, without some guidance, searching through this vast combinatorial space is frustratingly slow and expensive, especially for materials strongly influenced by processing. We train a machine learning (ML) model on previously reported observations, parameters from physiochemical theories, and make it synthesis method–dependent to guide high-throughput (HiTp) experiments to find a new system of metallic glasses in the Co-V-Zr ternary. Experimental observations are in good agreement with the predictions of the model, but there are quantitative discrepancies in the precise compositions predicted. We use these discrepancies to retrain the ML model. The refined model has significantly improved accuracy not only for the Co-V-Zr system but also across all other available validation data. We then use the refined model to guide the discovery of metallic glasses in two additional previously unreported ternaries. Although our approach of iterative use of ML and HiTp experiments has guided us to rapid discovery of three new glass-forming systems, it has also provided us with a quantitatively accurate, synthesis method–sensitive predictor for metallic glasses that improves performance with use and thus promises to greatly accelerate discovery of many new metallic glasses. We believe that this discovery paradigm is applicable to a wider range of materials and should prove equally powerful for other materials and properties that are synthesis path–dependent and that current physiochemical theories find challenging to predict. PMID:29662953

  15. Accelerated discovery of metallic glasses through iteration of machine learning and high-throughput experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Fang; Ward, Logan; Williams, Travis

    With more than a hundred elements in the periodic table, a large number of potential new materials exist to address the technological and societal challenges we face today; however, without some guidance, searching through this vast combinatorial space is frustratingly slow and expensive, especially for materials strongly influenced by processing. We train a machine learning (ML) model on previously reported observations, parameters from physiochemical theories, and make it synthesis method–dependent to guide high-throughput (HiTp) experiments to find a new system of metallic glasses in the Co-V-Zr ternary. Experimental observations are in good agreement with the predictions of the model, but there are quantitative discrepancies in the precise compositions predicted. We use these discrepancies to retrain the ML model. The refined model has significantly improved accuracy not only for the Co-V-Zr system but also across all other available validation data. We then use the refined model to guide the discovery of metallic glasses in two additional previously unreported ternaries. Although our approach of iterative use of ML and HiTp experiments has guided us to rapid discovery of three new glass-forming systems, it has also provided us with a quantitatively accurate, synthesis method–sensitive predictor for metallic glasses that improves performance with use and thus promises to greatly accelerate discovery of many new metallic glasses. We believe that this discovery paradigm is applicable to a wider range of materials and should prove equally powerful for other materials and properties that are synthesis path–dependent and that current physiochemical theories find challenging to predict.

  16. Accelerated discovery of metallic glasses through iteration of machine learning and high-throughput experiments

    DOE PAGES

    Ren, Fang; Ward, Logan; Williams, Travis; ...

    2018-04-01

    With more than a hundred elements in the periodic table, a large number of potential new materials exist to address the technological and societal challenges we face today; however, without some guidance, searching through this vast combinatorial space is frustratingly slow and expensive, especially for materials strongly influenced by processing. We train a machine learning (ML) model on previously reported observations, parameters from physiochemical theories, and make it synthesis method–dependent to guide high-throughput (HiTp) experiments to find a new system of metallic glasses in the Co-V-Zr ternary. Experimental observations are in good agreement with the predictions of the model, but there are quantitative discrepancies in the precise compositions predicted. We use these discrepancies to retrain the ML model. The refined model has significantly improved accuracy not only for the Co-V-Zr system but also across all other available validation data. We then use the refined model to guide the discovery of metallic glasses in two additional previously unreported ternaries. Although our approach of iterative use of ML and HiTp experiments has guided us to rapid discovery of three new glass-forming systems, it has also provided us with a quantitatively accurate, synthesis method–sensitive predictor for metallic glasses that improves performance with use and thus promises to greatly accelerate discovery of many new metallic glasses. We believe that this discovery paradigm is applicable to a wider range of materials and should prove equally powerful for other materials and properties that are synthesis path–dependent and that current physiochemical theories find challenging to predict.
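
    The iterate-train-measure-retrain loop described in these records can be sketched as below: train a classifier, rank untested candidates, "measure" the most promising batch with an oracle standing in for high-throughput experiments, and retrain on the augmented data. The features, oracle and batch size are invented placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
oracle = lambda X: (X.sum(axis=1) > 1.5).astype(int)   # stand-in for experiments

X_train = rng.random((100, 3)); y_train = oracle(X_train)
candidates = rng.random((5000, 3))                     # untested compositions

for generation in range(3):
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    p = model.predict_proba(candidates)[:, 1]          # P(glass-forming)
    picked = np.argsort(p)[-20:]                       # most promising batch
    X_new = candidates[picked]; y_new = oracle(X_new)  # HiTp "measurement"
    X_train = np.vstack([X_train, X_new])
    y_train = np.concatenate([y_train, y_new])         # retrain on augmented data
    print(f"generation {generation}: trained on {len(y_train)} observations")
```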

  17. Leaf Segmentation and Tracking in Arabidopsis thaliana Combined to an Organ-Scale Plant Model for Genotypic Differentiation

    PubMed Central

    Viaud, Gautier; Loudet, Olivier; Cournède, Paul-Henry

    2017-01-01

    A promising method for characterizing the phenotype of a plant as an interaction between its genotype and its environment is to use refined organ-scale plant growth models that use the observation of architectural traits, such as leaf area, containing a lot of information on the whole history of the functioning of the plant. The Phenoscope, a high-throughput automated platform, allowed the acquisition of zenithal images of Arabidopsis thaliana over twenty-one days for 4 different genotypes. A novel image processing algorithm involving both segmentation and tracking of the plant leaves allows their areas to be extracted. First, all the images in the series are segmented independently using a watershed-based approach. A second step, based on ellipsoid-shaped leaves, is then applied to the segments found to refine the segmentation. Taking into account all segments at every time point, the whole history of each leaf is reconstructed by recursively choosing through time the most probable segment, namely the one achieving the best score, computed from characteristics of the segment such as its orientation, its distance to the plant mass center and its area. These results are compared to manually extracted segments, showing very good agreement in leaf rank and demonstrating that they provide low-biased data in large quantities for leaf areas. Such data can therefore be exploited to design an organ-scale plant model adapted from the existing GreenLab model for A. thaliana and subsequently parameterize it. This calibration of the model parameters should pave the way for differentiation between the Arabidopsis genotypes. PMID:28123392
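
    The tracking step can be sketched as a per-frame best-score assignment: each leaf history is extended to the candidate segment minimizing a combined dissimilarity in area, orientation and distance to the plant centre. The score form and weights below are illustrative assumptions, not the published values.

```python
import numpy as np

def score(prev, cand, w_area=1.0, w_theta=1.0, w_dist=0.05):
    """Dissimilarity between a leaf's last segment and a candidate segment.
    Segments are dicts: area (px), theta (rad), dist to plant centre (px)."""
    s_area = abs(cand["area"] - prev["area"]) / max(prev["area"], 1e-9)
    s_theta = abs(np.angle(np.exp(1j * (cand["theta"] - prev["theta"]))))
    s_dist = abs(cand["dist"] - prev["dist"])
    return w_area * s_area + w_theta * s_theta + w_dist * s_dist  # lower = better

def extend_track(track, candidates):
    """Append the most probable (lowest-score) segment to a leaf's history."""
    track.append(min(candidates, key=lambda c: score(track[-1], c)))
    return track

track = [{"area": 120.0, "theta": 0.30, "dist": 40.0}]
frame2 = [{"area": 132.0, "theta": 0.34, "dist": 43.0},   # same leaf, grown
          {"area": 60.0, "theta": 2.10, "dist": 12.0}]    # a different leaf
extend_track(track, frame2)   # picks the first candidate
```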

  18. 3Drefine: an interactive web server for efficient protein structure refinement

    PubMed Central

    Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin

    2016-01-01

    3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of the hydrogen-bonding network combined with atomic-level energy minimization of the optimized model, using composite physics- and knowledge-based force fields, for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets, and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification and a provided example submission, and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet, which is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. PMID:27131371

  19. Grid refinement in Cartesian coordinates for groundwater flow models using the divergence theorem and Taylor's series.

    PubMed

    Mansour, M M; Spink, A E F

    2013-01-01

    Grid refinement is introduced in a numerical groundwater model to increase the accuracy of the solution over local areas without compromising the run time of the model. Numerical methods developed for grid refinement have suffered from certain drawbacks, for example, deficiencies in the implemented interpolation technique; non-reciprocity in head or flow calculations; lack of accuracy resulting from high truncation errors; and numerical problems resulting from the construction of elongated meshes. A refinement scheme based on the divergence theorem and Taylor expansions is presented in this article. This scheme is based on the work of De Marsily (1986) but includes more terms of the Taylor series to improve the numerical solution. In this scheme, flow reciprocity is maintained and a high order of refinement is achievable. The new numerical method is applied to simulate groundwater flows in homogeneous and heterogeneous confined aquifers. It produced results with acceptable degrees of accuracy. This method shows the potential for its application to solving groundwater heads over nested meshes with irregular shapes. © 2012, British Geological Survey/NERC. Ground Water © 2012, National Ground Water Association.
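
    The flavour of the scheme, in a hedged one-dimensional sketch: the head at a refined interface node is interpolated from coarse-grid heads with a Taylor expansion that keeps the second-order term, which (unlike linear interpolation) is exact for quadratic head profiles. This illustrates the idea only, not the article's full multidimensional scheme.

```python
def head_at_half_node(h_m, h_0, h_p, dx):
    """Estimate h(x0 + dx/2) from coarse heads at x0-dx, x0, x0+dx via
    h(x0+s) ~ h(x0) + s*h'(x0) + (s**2/2)*h''(x0), with centred differences."""
    d1 = (h_p - h_m) / (2.0 * dx)           # h'(x0)
    d2 = (h_p - 2.0 * h_0 + h_m) / dx**2    # h''(x0)
    s = dx / 2.0
    return h_0 + s * d1 + 0.5 * s**2 * d2

# Exact for a quadratic head profile, where linear interpolation is not:
x, dx = 10.0, 1.0
h = lambda x: 3.0 - 0.2 * x + 0.05 * x**2
print(head_at_half_node(h(x - dx), h(x), h(x + dx), dx), h(x + dx / 2.0))
```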

  20. Applications of a thermal-based two-source energy balance model using Priestley-Taylor approach for surface temperature partitioning under advective conditions

    NASA Astrophysics Data System (ADS)

    Song, Lisheng; Kustas, William P.; Liu, Shaomin; Colaizzi, Paul D.; Nieto, Hector; Xu, Ziwei; Ma, Yanfei; Li, Mingsong; Xu, Tongren; Agam, Nurit; Tolk, Judy A.; Evett, Steven R.

    2016-09-01

    In this study, ground-measured soil and vegetation component temperatures and composite temperatures from a high spatial resolution thermal camera and a network of thermal-IR sensors, collected in an irrigated maize field and in an irrigated cotton field, are used to assess and refine the component temperature partitioning approach in the Two-Source Energy Balance (TSEB) model. A refinement to TSEB using a non-iterative approach, based on the application of the Priestley-Taylor formulation for surface temperature partitioning and on estimating soil evaporation from soil moisture observations under advective conditions (TSEB-A), was developed. This modified TSEB formulation improved the agreement between observed and modeled soil and vegetation temperatures. In addition, the TSEB-A model output of evapotranspiration (ET) and its components, evaporation (E) and transpiration (T), showed good agreement with ground observations from the stable isotopic method and the eddy covariance (EC) technique in the HiWATER experiment, and with microlysimeters and a large monolithic weighing lysimeter in the BEAREX08 experiment. Differences between modeled and measured ET were less than 10% and 20% on a daytime basis for the HiWATER and BEAREX08 data sets, respectively. The TSEB-A model was found to accurately reproduce the temporal dynamics of E, T and ET over a full growing season under the advective conditions existing for these irrigated crops located in arid/semi-arid climates. With satellite data, this TSEB-A modeling framework could potentially be used as a tool for improving water use efficiency and conservation practices in water-limited regions. However, TSEB-A requires soil moisture information, which is not currently available routinely from satellite at the field scale.
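
    For orientation, a worked sketch of the Priestley-Taylor step used in TSEB-type models to obtain canopy latent heat (transpiration) from canopy net radiation is shown below; the alpha_PT value of 1.26 and the psychrometric constant are standard textbook values, but treating this as the exact TSEB-A formulation is an assumption.

```python
import math

def canopy_latent_heat(rn_canopy, t_air_c, f_green=1.0, alpha_pt=1.26,
                       gamma=0.066):
    """LE_canopy = alpha_PT * f_g * Delta / (Delta + gamma) * Rn_canopy [W m-2],
    with Delta the slope of the saturation vapour pressure curve [kPa K-1]."""
    es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))   # kPa (Tetens)
    delta = 4098.0 * es / (t_air_c + 237.3) ** 2                  # kPa K-1
    return alpha_pt * f_green * delta / (delta + gamma) * rn_canopy

# Canopy net radiation of 400 W m-2 at 25 C air temperature:
print(round(canopy_latent_heat(400.0, 25.0)))   # ~373 W m-2 as transpiration
```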

  1. Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.; Van Meter, James R.

    2005-01-01

    A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in the finite-differencing error generated across low- and high-resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among the propagation modes typical of 3+1 relativity. Lastly, we resolve the problem by presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
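
    The phenomenon is easy to reproduce in one dimension: the hedged sketch below propagates a pulse across an abrupt 2:1 grid-spacing jump with a second-order leapfrog scheme and measures the small spurious pulse reflected from the interface. The setup is illustrative and unrelated to the paper's 3+1 relativity stencils.

```python
import numpy as np

dx_c, dx_f, c, dt = 0.02, 0.01, 1.0, 0.004          # coarse left, fine right
x = np.concatenate([np.arange(-2.0, 0.0, dx_c), np.arange(0.0, 2.0, dx_f)])
dxm, dxp = np.diff(x)[:-1], np.diff(x)[1:]          # nonuniform spacings
u = np.exp(-((x + 1.0) ** 2) / 0.04)                # rightward Gaussian pulse
u_prev = np.exp(-((x + 1.0 + c * dt) ** 2) / 0.04)  # same pulse one step earlier

for _ in range(400):   # leapfrog with a second-order nonuniform Laplacian
    lap = 2.0 * (dxp * u[:-2] - (dxm + dxp) * u[1:-1] + dxm * u[2:]) \
          / (dxm * dxp * (dxm + dxp))
    u_next = u.copy()
    u_next[1:-1] = 2.0 * u[1:-1] - u_prev[1:-1] + (c * dt) ** 2 * lap
    u_prev, u = u, u_next

# After the pulse has crossed x = 0, leftward-travelling signal on the coarse
# side is dominated by the spurious reflection from the resolution jump.
print("max reflected amplitude:", np.abs(u[x < -0.1]).max())
```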

  2. Mineral resource of the month: soda ash

    USGS Publications Warehouse

    Kostic, Dennis S.

    2006-01-01

    Soda ash, also known as sodium carbonate, is an alkali chemical that can be refined from the mineral trona and from sodium carbonate-bearing brines. Several chemical processes exist for manufacturing synthetic soda ash.

  3. MAIN software for density averaging, model building, structure refinement and validation

    PubMed Central

    Turk, Dušan

    2013-01-01

    MAIN is software that has been designed to interactively perform the complex tasks of macromolecular crystal structure determination and validation. Using MAIN, it is possible to perform density modification, manual and semi-automated or automated model building and rebuilding, real- and reciprocal-space structure optimization and refinement, map calculations and various types of molecular structure validation. The prompt availability of various analytical tools and the immediate visualization of molecular and map objects allow a user to efficiently progress towards the completed refined structure. The extraordinary depth perception of molecular objects in three dimensions that is provided by MAIN is achieved by the clarity and contrast of colours and the smooth rotation of the displayed objects. MAIN allows simultaneous work on several molecular models and various crystal forms. The strength of MAIN lies in its manipulation of averaged density maps and molecular models when noncrystallographic symmetry (NCS) is present. Using MAIN, it is possible to optimize NCS parameters and envelopes and to refine the structure in single or multiple crystal forms. PMID:23897458

  4. Improved Impact Hazard Assessment with Existing Radar Sites and a New 70-m Southern Hemisphere Radar Installation

    NASA Technical Reports Server (NTRS)

    Giorgini, J. D.; Slade, M. A.; Silva, A.; Preston, R. A.; Brozovic, M.; Taylor, P. A.; Magri, C.

    2009-01-01

    Add radar capability to the existing southern hemisphere 70-m Deep Space Network (DSN) site at Canberra, Australia, thereby increasing the observing time available for high-precision NEO trajectory refinement and characterization by 1.5-2x. Estimated cost: approx. $16 million over 3 years, $2.5 million/year for operations (FY09).

  5. Accurate macromolecular crystallographic refinement: incorporation of the linear scaling, semiempirical quantum-mechanics program DivCon into the PHENIX refinement package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borbulevych, Oleg Y.; Plumley, Joshua A.; Martin, Roger I.

    2014-05-01

    Semiempirical quantum-chemical X-ray macromolecular refinement using the program DivCon integrated with PHENIX is described. Macromolecular crystallographic refinement relies on sometimes dubious stereochemical restraints and rudimentary energy functionals to ensure the correct geometry of the model of the macromolecule and any covalently bound ligand(s). The ligand stereochemical restraint file (CIF) requires a priori understanding of the ligand geometry within the active site, and creation of the CIF is often an error-prone process owing to the great variety of potential ligand chemistry and structure. Stereochemical restraints have been replaced with more robust functionals through the integration of the linear-scaling, semiempirical quantum-mechanics (SE-QM) program DivCon with the PHENIX X-ray refinement engine. The PHENIX/DivCon package has been thoroughly validated on a population of 50 protein–ligand Protein Data Bank (PDB) structures with a range of resolutions and chemistry. The PDB structures used for the validation were originally refined utilizing various refinement packages and were published within the past five years. PHENIX/DivCon does not utilize CIF(s), link restraints and other parameters for refinement and hence it does not make as many a priori assumptions about the model. Across the entire population, the method results in reasonable ligand geometries and low ligand strains, even when the original refinement exhibited difficulties, indicating that PHENIX/DivCon is applicable to both single-structure and high-throughput crystallography.

  6. The Bi2O3–Fe2O3–Sb2O5 system phase diagram refinement, Bi3FeSb2O11 structure peculiarities and magnetic properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egorysheva, A.V., E-mail: anna_egorysheva@rambler.ru; Ellert, O.G.; Gajtko, O.M.

    2015-05-15

    The refinement of the Bi2O3–Fe2O3–Sb2O5 system phase diagram has been performed and the existence of the two ternary compounds has been confirmed. The first one, with a pyrochlore-type structure (sp. gr. Fd-3m), exists in the wide solid-solution region (Bi2-xFex)Fe1+ySb1-yO7±δ, where x = 0.1–0.4 and y = -0.13–0.11. The second one, Bi3FeSb2O11, corresponds to the cubic KSbO3-type structure (sp. gr. Pn-3) with unit-cell parameter a = 9.51521(2) Å. The Rietveld structure refinement showed that this compound is characterized by a disordered structure. A factor-group analysis of Bi3FeSb2O11 has been carried out and its Raman spectrum has been investigated. According to magnetization measurements performed over the temperature range 2–300 K, the Bi3FeSb2O11 magnetic properties can be substantially described as a superposition of strong short-range antiferromagnetic exchange interactions realized inside the [(FeSb2)O9] 3D framework via different pathways.

  7. Simulation of the shallow groundwater-flow system near the Hayward Airport, Sawyer County, Wisconsin

    USGS Publications Warehouse

    Hunt, Randall J.; Juckem, Paul F.; Dunning, Charles P.

    2010-01-01

    There are concerns that removal and trimming of vegetation during expansion of the Hayward Airport in Sawyer County, Wisconsin, could appreciably change the character of a nearby cold-water stream and its adjacent environs. In cooperation with the Wisconsin Department of Transportation, a two-dimensional, steady-state groundwater-flow model of the shallow groundwater-flow system near the Hayward Airport was refined from a regional model of the area. The parameter-estimation code PEST was used to obtain a best fit of the model to additional field data collected in February 2007 as part of this study. The additional data were collected during an extended period of low runoff and consisted of water levels and streamflows near the Hayward Airport. Refinements to the regional model included one additional hydraulic-conductivity zone for the airport area and three additional parameters for streambed resistance in a northern tributary to the Namekagon River and in the main stem of the Namekagon River. In the refined Hayward Airport area model, the calibrated hydraulic conductivity was 11.2 feet per day, which is within the 7.9 to 58.2 feet per day range reported for the regional glacial and sandstone aquifer and is consistent with a silty soil texture for the area. The calibrated refined model had a best fit of 8.6 days for the streambed resistance of the Namekagon River and between 0.6 and 1.6 days for the northern tributary stream. The previously reported regional groundwater-recharge rate of 10.1 inches per year was adjusted during calibration of the refined model in order to match streamflows measured during the period of extended low runoff; this resulted in an optimal groundwater-recharge rate of 7.1 inches per year for this period. The refined model was then used to simulate the capture zone of the northern tributary to the Namekagon River.

  8. Refined open intersection numbers and the Kontsevich-Penner matrix model

    NASA Astrophysics Data System (ADS)

    Alexandrov, Alexander; Buryak, Alexandr; Tessler, Ran J.

    2017-03-01

    A study of the intersection theory on the moduli space of Riemann surfaces with boundary was recently initiated in a work of R. Pandharipande, J.P. Solomon and the third author, where they introduced open intersection numbers in genus 0. Their construction was later generalized to all genera by J.P. Solomon and the third author. In this paper we consider a refinement of the open intersection numbers by distinguishing contributions from surfaces with different numbers of boundary components, and we calculate all these numbers. We then construct a matrix model for the generating series of the refined open intersection numbers and conjecture that it is equivalent to the Kontsevich-Penner matrix model. Evidence for the conjecture is presented. Another refinement of the open intersection numbers, which describes the distribution of the boundary marked points on the boundary components, is also discussed.

  9. Influence of Trabecular Bone on Peri-Implant Stress and Strain Based on Micro-CT Finite Element Modeling of Beagle Dog

    PubMed Central

    Liao, Sheng-hui; Zhu, Xing-hao; Xie, Jing; Sohodeb, Vikesh Kumar; Ding, Xi

    2016-01-01

    The objective of this investigation is to analyze the influence of trabecular microstructure modeling on the biomechanical distribution at the implant-bone interface. Two three-dimensional finite element mandible models, one with trabecular microstructure (a refined model) and one with macrostructure (a simplified model), were built. The values of equivalent stress at the implant-bone interface in the refined model increased compared with those of the simplified model, while the strain values decreased. The distributions of stress and strain were more uniform in the refined model of trabecular microstructure, in which stress and strain were mainly concentrated in trabecular bone. It was concluded that simulation of trabecular bone microstructure had a significant effect on the distribution of stress and strain at the implant-bone interface. These results suggest that trabecular structures could disperse stress and strain and serve as load buffers. PMID:27403424

  10. Influence of Trabecular Bone on Peri-Implant Stress and Strain Based on Micro-CT Finite Element Modeling of Beagle Dog.

    PubMed

    Liao, Sheng-Hui; Zhu, Xing-Hao; Xie, Jing; Sohodeb, Vikesh Kumar; Ding, Xi

    2016-01-01

    The objective of this investigation is to analyze the influence of trabecular microstructure modeling on the biomechanical distribution at the implant-bone interface. Two three-dimensional finite element mandible models, one with trabecular microstructure (a refined model) and one with macrostructure (a simplified model), were built. The values of equivalent stress at the implant-bone interface in the refined model increased compared with those of the simplified model, while the strain values decreased. The distributions of stress and strain were more uniform in the refined model of trabecular microstructure, in which stress and strain were mainly concentrated in trabecular bone. It was concluded that simulation of trabecular bone microstructure had a significant effect on the distribution of stress and strain at the implant-bone interface. These results suggest that trabecular structures could disperse stress and strain and serve as load buffers.

  11. Hydrogen ADPs with Cu Kα data? Invariom and Hirshfeld atom modelling of fluconazole.

    PubMed

    Orben, Claudia M; Dittrich, Birger

    2014-06-01

    For the structure of fluconazole [systematic name: 2-(2,4-difluorophenyl)-1,3-bis(1H-1,2,4-triazol-1-yl)propan-2-ol] monohydrate, C13H12F2N6O·H2O, a case study on different model refinements is reported, based on single-crystal X-ray diffraction data measured at 100 K with Cu Kα radiation to a resolution of sin θ/λ of 0.6 Å⁻¹. The structure, anisotropic displacement parameters (ADPs) and figures of merit from the independent atom model are compared to 'invariom' and 'Hirshfeld atom' refinements. Changing from a spherical to an aspherical atom model lowers the figures of merit and improves both the accuracy and the precision of the geometrical parameters. Differences between results from the two aspherical-atom refinements are small. However, a refinement of ADPs for H atoms is only possible with the Hirshfeld atom density model. It gives meaningful results even at a resolution of 0.6 Å⁻¹, but requires good low-order data.

  12. Automating Guidelines for Clinical Decision Support: Knowledge Engineering and Implementation.

    PubMed

    Tso, Geoffrey J; Tu, Samson W; Oshiro, Connie; Martins, Susana; Ashcraft, Michael; Yuen, Kaeli W; Wang, Dan; Robinson, Amy; Heidenreich, Paul A; Goldstein, Mary K

    2016-01-01

    As utilization of clinical decision support (CDS) increases, it is important to continue the development and refinement of methods to accurately translate the intention of clinical practice guidelines (CPG) into a computable form. In this study, we validate and extend the 13 steps that Shiffman et al. identified for translating CPG knowledge for use in CDS. During an implementation project of ATHENA-CDS, we encoded complex CPG recommendations for five common chronic conditions for integration into an existing clinical dashboard. Major decisions made during the implementation process were recorded and categorized according to the 13 steps. During the implementation period, we categorized 119 decisions and identified 8 new categories required to complete the project. We provide details on an updated model that outlines all of the steps used to translate CPG knowledge into a CDS integrated with existing health information technology.

  13. The evolution of integration: innovations in clinical skills and ethics in first year medicine.

    PubMed

    Brunger, Fern; Duke, Pauline S

    2012-01-01

    Critical self-reflection, medical ethics and clinical skills are each important components of medical education but are seldom linked in curriculum development. We developed a curriculum that builds on the existing integration of ethics education into the clinical skills course to more explicitly link these three skills. The curriculum builds on the existing integration of clinical skills and ethics in first year medicine. It refines the integration through scheduling changes; adds case studies that emphasise the social, economic and political context of our province's patient population; and introduces reflection on the "culture of medicine" as a way to have students articulate and understand their own values and moral decision making frameworks. This structured Clinical Skills course is a model for successfully integrating critical self-reflection, reflection on the political, economic and cultural contexts shaping health and healthcare, and moral decision making into clinical skills training.

  14. Geohydrology of, and simulation of ground-water flow in, the Milford-Souhegan glacial-drift aquifer, Milford, New Hampshire

    USGS Publications Warehouse

    Harte, P.T.; Mack, Thomas J.

    1992-01-01

    Hydrogeologic data collected since 1990 were assessed and a ground-water-flow model was refined in this study of the Milford-Souhegan glacial-drift aquifer in Milford, New Hampshire. The hydrogeologic data collected were used to refine estimates of hydraulic conductivity and saturated thickness of the aquifer, which were previously calculated during 1988-90. In October 1990, water levels were measured at 124 wells and piezometers and at 45 stream-seepage sites on the main stem of the Souhegan River and on small tributary streams overlying the aquifer, to improve understanding of ground-water-flow patterns and stream-seepage gains and losses. Refinement of the ground-water-flow model included a reduction in the number of active cells in layer 2 in the central part of the aquifer, a revision of simulated hydraulic conductivity in the model layers representing the aquifer, incorporation of a new block-centered finite-difference ground-water-flow model, and incorporation of a new solution algorithm and solver (a preconditioned conjugate-gradient algorithm). Refinements to the model resulted in decreases in the difference between calculated and measured heads at 22 wells. The distribution of gains and losses of stream seepage calculated in simulation with the refined model is similar to that calculated in the previous model simulation. The contributing area to the Savage well, under average pumping conditions, decreased by 0.021 square miles from the area calculated in the previous model simulation. The small difference in the contributing recharge area indicates that the additional data did not substantially change the model simulation and that the conceptual framework for the previous model is accurate.

  15. Dynamic Model of Basic Oxygen Steelmaking Process Based on Multi-zone Reaction Kinetics: Model Derivation and Validation

    NASA Astrophysics Data System (ADS)

    Rout, Bapin Kumar; Brooks, Geoff; Rhamdhani, M. Akbar; Li, Zushu; Schrama, Frank N. H.; Sun, Jianjun

    2018-04-01

    A multi-zone kinetic model coupled with a dynamic slag generation model was developed for the simulation of hot metal and slag composition during basic oxygen furnace (BOF) operation. Three reaction zones, (i) the jet impact zone, (ii) the slag-bulk metal zone, and (iii) the slag-metal-gas emulsion zone, were considered for the calculation of the overall refining kinetics. In the rate equations, the transient rate parameters were described mathematically as functions of the process variables. A microscopic and macroscopic rate calculation methodology (micro-kinetics and macro-kinetics) was developed to estimate the total refining contributed by the recirculating metal droplets passing through the slag-metal emulsion zone. The micro-kinetics involves developing the rate equation for individual droplets in the emulsion. Mathematical models for the size distribution of initial droplets, the kinetics of simultaneous refining of elements, the residence time in the emulsion, and the dynamic change of interfacial area were established in the micro-kinetic model. In the macro-kinetics calculation, a droplet generation model was employed and the total amount of refining by the emulsion was calculated by summing the refining from the entire population of returning droplets. A dynamic FetO generation model based on an oxygen mass balance was developed and coupled with the multi-zone kinetic model. The effect of post-combustion on the evolution of slag and metal composition was investigated. The model was applied to a 200-ton top-blowing converter and the simulated metal and slag compositions were found to be in good agreement with the measured data. The post-combustion ratio was found to be an important factor in controlling the FetO content of the slag and the kinetics of Mn and P refining in the BOF process.
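
    The macro-kinetic summation can be sketched as below, assuming first-order refining per droplet and an invented droplet-size and residence-time distribution: total refining by the emulsion is the mass-weighted sum of per-droplet refining over the generated population. None of the numbers are the paper's fitted values.

```python
import numpy as np

def droplet_refining(c0, k, t_res):
    """Mass fraction of solute removed from one droplet: first-order decay
    over its residence time t_res in the emulsion."""
    return c0 * (1.0 - np.exp(-k * t_res))

rng = np.random.default_rng(2)
n = 10_000
d = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n)  # droplet diameters [m]
t_res = 5.0 * d / d.mean()                               # residence times [s]
k = 0.8                                                  # rate constant [1/s]
mass = d ** 3 / (d ** 3).sum()                           # mass-weighted shares
total = (mass * droplet_refining(c0=0.04, k=k, t_res=t_res)).sum()
print(f"solute removed via emulsion this step: {total:.4f} (mass fraction)")
```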

  16. Assessment of Energy Efficiency Improvement in the United States Petroleum Refining Industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrow, William R.; Marano, John; Sathaye, Jayant

    2013-02-01

    Adoption of efficient process technologies is an important approach to reducing CO2 emissions, in particular those associated with combustion. In many cases, implementing energy efficiency measures is among the most cost-effective approaches a refiner can take, improving productivity while reducing emissions. Careful analysis of the options and costs associated with efficiency measures is therefore required to establish sound carbon policies addressing global climate change, and is the primary focus of LBNL's current petroleum refining sector analysis for the U.S. Environmental Protection Agency. The analysis is aimed at identifying energy efficiency-related measures and developing energy abatement supply curves and CO2 emissions reduction potential for the U.S. refining industry. A refinery model has been developed for this purpose that is a notional aggregation of the U.S. petroleum refining sector. It consists of twelve processing units and accounts for the additional energy requirements from steam generation, hydrogen production, and water utilities required by each of the twelve processing units. The model is carbon and energy balanced such that crude oil inputs and major refinery sector outputs (fuels) are benchmarked to 2010 data. Estimates of the current penetration of the identified energy efficiency measures benchmark the energy requirements to those reported in U.S. DOE 2010 data. The remaining energy efficiency potential for each of the measures is estimated and compared to U.S. DOE fuel prices, resulting in estimates of cost-effective energy efficiency opportunities for each of the twelve major processes. A combined cost-of-conserved-energy supply curve is also presented, along with the CO2 emissions abatement opportunities that exist in the U.S. petroleum refinery sector. Roughly 1,200 PJ per year of primary fuels savings and close to 500 GWh per year of electricity savings are potentially cost-effective given U.S. DOE fuel price forecasts. This represents roughly 70 million metric tonnes of CO2 emission reductions assuming the 2010 emissions factor for grid electricity. Energy efficiency measures resulting in an additional 400 PJ per year of primary fuels savings and close to 1,700 GWh per year of electricity savings, with an associated 24 million metric tonnes of CO2 emission reductions, are not cost-effective given the same assumptions with respect to fuel prices and electricity emissions factors. Compared to the modeled energy requirements for the U.S. petroleum refining sector, the cost-effective potential represents a 40% reduction in fuel consumption and a 2% reduction in electricity consumption. The non-cost-effective potential represents an additional 13% reduction in fuel consumption and an additional 7% reduction in electricity consumption. The relative energy reduction potentials are much higher for fuel consumption than for electricity consumption, largely because fuel is the primary type of energy consumed in refineries; moreover, many cost-effective fuel savings measures would increase electricity consumption. The model also has the potential to be used to examine the costs and benefits of other CO2 mitigation options, such as combined heat and power (CHP), carbon capture, and the potential introduction of biomass feedstocks. However, these options are not addressed in this report, which is focused on developing the modeling methodology and assessing fuels savings measures. These options offer opportunities to further reduce refinery sector CO2 emissions and are recommended for further research and analysis.
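
    The cost-of-conserved-energy (CCE) supply curve named above is conceptually simple: rank measures by their CCE, accumulate their savings, and compare each CCE against a fuel-price threshold. The sketch below uses entirely invented measure names and numbers purely to illustrate the construction.

```python
# Each measure: (name, savings in PJ/yr, cost of conserved energy in $/GJ).
# All figures are illustrative, not from the LBNL refinery model.
measures = [
    ("heat integration",         300, 2.0),
    ("advanced process control", 250, 3.5),
    ("fired-heater upgrades",    400, 4.5),
    ("motor/pump efficiency",    150, 6.0),
    ("waste-heat recovery",      500, 9.0),
]
fuel_price = 5.0  # $/GJ threshold standing in for a DOE fuel-price forecast

cumulative = 0.0
for name, savings_pj, cce in sorted(measures, key=lambda m: m[2]):
    cumulative += savings_pj                       # supply curve: cumulative savings
    verdict = "cost-effective" if cce <= fuel_price else "not cost-effective"
    print(f"{name:25s} CCE={cce:4.1f} $/GJ  cumulative={cumulative:6.0f} PJ/yr  ({verdict})")
```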

  17. Accurate macromolecular crystallographic refinement: incorporation of the linear scaling, semiempirical quantum-mechanics program DivCon into the PHENIX refinement package.

    PubMed

    Borbulevych, Oleg Y; Plumley, Joshua A; Martin, Roger I; Merz, Kenneth M; Westerhoff, Lance M

    2014-05-01

    Macromolecular crystallographic refinement relies on sometimes dubious stereochemical restraints and rudimentary energy functionals to ensure the correct geometry of the model of the macromolecule and any covalently bound ligand(s). The ligand stereochemical restraint file (CIF) requires a priori understanding of the ligand geometry within the active site, and creation of the CIF is often an error-prone process owing to the great variety of potential ligand chemistry and structure. Stereochemical restraints have been replaced with more robust functionals through the integration of the linear-scaling, semiempirical quantum-mechanics (SE-QM) program DivCon with the PHENIX X-ray refinement engine. The PHENIX/DivCon package has been thoroughly validated on a population of 50 protein-ligand Protein Data Bank (PDB) structures with a range of resolutions and chemistry. The PDB structures used for the validation were originally refined using various refinement packages and were published within the past five years. Because PHENIX/DivCon does not utilize CIFs, link restraints or other such parameters for refinement, it makes fewer a priori assumptions about the model. Across the entire population, the method results in reasonable ligand geometries and low ligand strains, even when the original refinement exhibited difficulties, indicating that PHENIX/DivCon is applicable to both single-structure and high-throughput crystallography.

  18. GWM-2005 - A Groundwater-Management Process for MODFLOW-2005 with Local Grid Refinement (LGR) Capability

    USGS Publications Warehouse

    Ahlfeld, David P.; Baker, Kristine M.; Barlow, Paul M.

    2009-01-01

    This report describes the Groundwater-Management (GWM) Process for MODFLOW-2005, the 2005 version of the U.S. Geological Survey modular three-dimensional groundwater model. GWM can solve a broad range of groundwater-management problems by combined use of simulation- and optimization-modeling techniques. These problems include limiting groundwater-level declines or streamflow depletions, managing groundwater withdrawals, and conjunctively using groundwater and surface-water resources. GWM was initially released for the 2000 version of MODFLOW. Several modifications and enhancements have been made to GWM since its initial release to increase the scope of the program's capabilities and to improve its operation and reporting of results. The new code, which is called GWM-2005, also was designed to support the local grid refinement capability of MODFLOW-2005. Local grid refinement allows for the simulation of one or more higher resolution local grids (referred to as child models) within a coarser grid parent model. Local grid refinement is often needed to improve simulation accuracy in regions where hydraulic gradients change substantially over short distances or in areas requiring detailed representation of aquifer heterogeneity. GWM-2005 can be used to formulate and solve groundwater-management problems that include components in both parent and child models. Although local grid refinement increases simulation accuracy, it can also substantially increase simulation run times.

  19. Empirical relations between large wood transport and catchment characteristics

    NASA Astrophysics Data System (ADS)

    Steeb, Nicolas; Rickenmann, Dieter; Rickli, Christian; Badoux, Alexandre

    2017-04-01

    The transport of vast amounts of large wood (LW) in water courses can considerably aggravate hazardous situations during flood events and often strongly affects the resulting flood damage. Large wood recruitment and transport are controlled by various factors that are difficult to assess, making the prediction of transported LW volumes difficult. Such information is, however, important for engineers and river managers to adequately dimension retention structures or to identify critical stream cross-sections. In this context, empirical formulas have been developed to estimate the volume of transported LW during a flood event (Rickenmann, 1997; Steeb et al., 2017). The data base underlying existing empirical wood-load equations is limited, however. The objective of the present study is to test and refine existing empirical equations and to derive new relationships that reveal trends in wood loading. Data have been collected for flood events with LW occurrence in Swiss catchments of various sizes. This extended data set allows us to derive statistically more significant results. LW volumes were found to be related to catchment and transport characteristics such as catchment size, forested area, forested stream length, water discharge, sediment load, and Melton ratio. Both the potential wood load and the fraction that is effectively mobilized during a flood event (the effective wood load) are estimated. The difference between potential and effective wood load allows us to derive typical reduction coefficients that can be used to refine spatially explicit GIS models of potential LW recruitment.
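
    Empirical wood-load relations of the kind tested here are often simple power laws fitted in log-log space. A minimal sketch of that fitting step, with invented catchment data standing in for the Swiss event records:

```python
import numpy as np

# Hypothetical (catchment area km^2, transported LW volume m^3) event pairs.
area = np.array([5.0, 12.0, 30.0, 55.0, 120.0, 250.0])
lw_volume = np.array([40.0, 95.0, 180.0, 260.0, 600.0, 900.0])

# Fit V = a * A^b by linear least squares in log-log space.
b, log_a = np.polyfit(np.log(area), np.log(lw_volume), deg=1)
a = np.exp(log_a)
print(f"V ~= {a:.1f} * A^{b:.2f}")

# Predict the wood load for an 80 km^2 catchment.
print(f"predicted LW volume for 80 km^2: {a * 80.0 ** b:.0f} m^3")
```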

  20. Adaptive Mesh Refinement for Microelectronic Device Design

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Lou, John; Norton, Charles

    1999-01-01

    Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient, since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the regions of the mesh that are estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement), or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution, measured against the correct fields, without undue computational expense. This is accomplished by the use of (a) reliable a posteriori error estimates, (b) hierarchical elements, and (c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smooth out and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of the spectrum, as well as active device simulations that model charge transport and Maxwell's equations, will be presented.
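
    The core loop of error-driven h-refinement is easy to convey in one dimension: keep elements in a priority queue keyed by an a posteriori error estimate, and repeatedly split the worst element. The sketch below is a toy analogue of that strategy (a crude midpoint-deviation estimate for a piecewise-linear fit, not the poster's electromagnetic error estimator).

```python
import heapq
import math

# Approximate f on [0, 1] with piecewise-linear elements; refine wherever the
# midpoint deviation (a crude a posteriori error estimate) is largest.
f = lambda x: math.tanh(50.0 * (x - 0.4))   # sharp internal layer at x = 0.4

def error(a, b):
    """Deviation of f at the midpoint from the linear interpolant on [a, b]."""
    return abs(f(0.5 * (a + b)) - 0.5 * (f(a) + f(b)))

heap = [(-error(0.0, 1.0), 0.0, 1.0)]       # max-heap via negated estimates
while -heap[0][0] > 1e-3:                   # stop when the worst element converges
    _, a, b = heapq.heappop(heap)
    m = 0.5 * (a + b)                       # h-refinement: split the worst element
    heapq.heappush(heap, (-error(a, m), a, m))
    heapq.heappush(heap, (-error(m, b), m, b))

print(f"{len(heap)} elements after adaptive refinement")
print(f"smallest element size: {min(b - a for _, a, b in heap):.2e}")
```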

  1. TLS from fundamentals to practice

    PubMed Central

    Urzhumtsev, Alexandre; Afonine, Pavel V.; Adams, Paul D.

    2014-01-01

    The Translation-Libration-Screw-rotation (TLS) model of rigid-body harmonic displacements, introduced in crystallography by Schomaker & Trueblood (1968), is now a routine tool in macromolecular studies and is a feature of most modern crystallographic structure refinement packages. In this review we consider a number of simple examples that illustrate important features of the TLS model. Based on these examples, simplified formulae are given for several special cases that may occur in structure modeling and refinement. The derivation of general TLS formulae from basic principles is also provided. This manuscript describes the principles of TLS modeling, as well as selected algorithmic details for practical application. An extensive list of application references is provided as examples of TLS in macromolecular crystallography refinement. PMID:25249713
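
    For readers who want the mechanics: in the convention used by several refinement packages, the anisotropic ADP of an atom at position r relative to the TLS origin is U = T + A L A^T + A S + S^T A^T, where A is the skew-symmetric matrix built from r. A minimal NumPy sketch of that evaluation follows; the T, L, S values are invented, and the sign convention should be checked against the package in use.

```python
import numpy as np

def tls_adp(T, L, S, r):
    """ADP of an atom at position r (relative to the TLS origin):
    U = T + A L A^T + A S + S^T A^T, with A the skew matrix of r."""
    x, y, z = r
    A = np.array([[0.0,   z,  -y],
                  [ -z, 0.0,   x],
                  [  y,  -x, 0.0]])
    return T + A @ L @ A.T + A @ S + S.T @ A.T

T = np.diag([0.020, 0.030, 0.025])    # translation, A^2 (invented values)
L = np.diag([0.001, 0.002, 0.0015])   # libration, rad^2 (invented values)
S = np.zeros((3, 3))                  # no screw coupling in this toy case
print(tls_adp(T, L, S, r=np.array([5.0, -2.0, 1.0])))
```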

  2. The integration of palaeogeography and tectonics in refining plate tectonic models: an example from SE Asia

    NASA Astrophysics Data System (ADS)

    Masterton, S. M.; Markwick, P.; Bailiff, R.; Campanile, D.; Edgecombe, E.; Eue, D.; Galsworthy, A.; Wilson, K.

    2012-04-01

    Our understanding of lithospheric evolution and global plate motions throughout the Earth's history is based largely upon detailed knowledge of plate boundary structures, inferences about tectonic regimes, ocean isochrons and palaeomagnetic data. Most currently available plate models are either regionally restricted or do not consider palaeogeographies in their construction. Here, we present an integrated methodology in which derived hypotheses have been further refined using global and regional palaeogeographic, palaeotopological and palaeobathymetric maps. Iteration between our self-consistent and structurally constrained global plate model and the palaeogeographic interpretations built on these reconstructions allows for greater testing and refinement of results. Our initial structural and tectonic interpretations are based largely on analysis of our extensive global database of gravity and magnetic potential field data, and are further constrained by seismic, SRTM and Landsat data. This has been used as the basis for detailed interpretations that have allowed us to compile a new global map and database of structures, crustal types, plate boundaries and basin definitions. Our structural database is used in the identification of major tectonic terranes and their relative motions, from which we have developed our global plate model. It is subject to an ongoing process of regional evaluation and revision in an effort to incorporate and reflect new tectonic and geologic interpretations. A major element of this programme is the extension of our existing plate model (GETECH Global Plate Model V1) back to the Neoproterozoic. Our plate model forms the critical framework upon which palaeogeographic and palaeotopographic reconstructions have been made for every time stage in the Cretaceous and Cenozoic. Generating palaeogeographies involves integrating a variety of data, such as regional geology, palaeoclimate analyses, lithology, sea-level estimates, thermo-mechanical events and regional tectonics. These data are interpreted to constrain depositional systems and tectonophysiographic terranes. Palaeotopography and palaeobathymetry are derived from these tectonophysiographic terranes and depositional systems, and are further constrained using geological relationships, thermochronometric data, palaeoaltimetry indicators and modern analogues. Throughout this process, our plate model is iteratively tested against our palaeogeographies and their environmental consequences. Both the plate model and the palaeogeographies are refined until we have obtained a consistent and scientifically robust result. In this presentation we show an example from Southeast Asia, where the complexity of the plate model and the wide variation in hypotheses have huge implications for the palaeogeographic interpretation, which can then be tested using geological observations from well and seismic data. For example, the Khorat Plateau Basin, northeastern Thailand, comprises a succession of fluvial clastics deposited during the Cretaceous, which include the evaporites of the Maha Sarakham Formation. These have been variously interpreted as indicative of saline lake or marine incursion depositional environments. We show how the feasibility of these different hypotheses depends on the regional palaeogeography (whether a marine link is possible), which in turn depends on the underlying plate model. We show two models with widely different environmental consequences. A more robust model that takes all these consequences, as well as the data, into account can be defined by iterating through the consequences of the plate model and the geological observations.

  3. Application and histology-driven refinement of active contour models to functional region and nerve delineation: towards a digital brainstem atlas

    NASA Astrophysics Data System (ADS)

    Patel, Nirmal; Sultana, Sharmin; Rashid, Tanweer; Krusienski, Dean; Audette, Michel A.

    2015-03-01

    This paper presents a methodology for the digital formatting of a printed atlas of the brainstem and the delineation of cranial nerves from this digital atlas. It also describes ongoing work on the 3D resampling and refinement of the 2D functional regions and nerve contours. In MRI-based anatomical modeling for neurosurgery planning and simulation, the complexity of the functional anatomy calls for a digital atlas approach, rather than less descriptive voxel- or surface-based approaches. However, there is a shortage of descriptive digital atlases, in particular of the brainstem. Our approach proceeds from a series of numbered, contour-based sketches coinciding with slices of the brainstem, featuring both closed and open contours. The closed contours coincide with functionally relevant regions, and our objective is to fill in each corresponding label, which is analogous to painting numbered regions in a paint-by-numbers kit. Open contours typically coincide with cranial nerves. This 2D phase is needed in order to produce densely labeled regions that can be stacked to produce 3D regions, as well as to identify the embedded paths and outer attachment points of cranial nerves. Cranial nerves are modeled using an explicit contour-based technique called 1-Simplex. The relevance of cranial nerve modeling in this project is twofold: (i) this atlas will fill a void left by the brain segmentation community, as no suitable digital atlas of the brainstem exists, and (ii) this atlas is necessary to make explicit the attachment points of the major nerves (except I and II) having a cranial origin. Keywords: digital atlas, contour models, surface models
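
    The "paint-by-numbers" step, filling each closed contour with its region label, is essentially a flood fill on the rasterized sketch. A toy sketch of that operation follows (4-connected fill bounded by contour pixels; the grid is invented):

```python
import numpy as np
from collections import deque

def flood_fill(grid, seed, label):
    """Fill the 4-connected region of zeros containing `seed` with `label`,
    stopping at nonzero cells (the rasterized closed contour)."""
    h, w = grid.shape
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        if 0 <= i < h and 0 <= j < w and grid[i, j] == 0:
            grid[i, j] = label
            queue.extend([(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)])
    return grid

# 1s mark a closed contour; the interior zero receives region label 2.
grid = np.array([[0, 1, 1, 1, 0],
                 [0, 1, 0, 1, 0],
                 [0, 1, 1, 1, 0]])
print(flood_fill(grid, seed=(1, 2), label=2))
```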

  4. Optimizing Biorefinery Design and Operations via Linear Programming Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talmadge, Michael; Batan, Liaw; Lamers, Patrick

    The ability to assess and optimize the economics of biomass resource utilization for the production of fuels, chemicals and power is essential for the ultimate success of a bioenergy industry. The team of authors, consisting of members from the National Renewable Energy Laboratory (NREL) and the Idaho National Laboratory (INL), has developed simple biorefinery linear programming (LP) models to enable the optimization of theoretical or existing biorefineries. The goal of this analysis is to demonstrate how such models can benefit the developing biorefining industry. It focuses on a theoretical multi-pathway, thermochemical biorefinery configuration and demonstrates how a biorefinery can use LP models for operations planning and optimization in ways comparable to the petroleum refining industry. Using LP modeling tools developed under U.S. Department of Energy's Bioenergy Technologies Office (DOE-BETO) funded efforts, the authors investigate optimization challenges for theoretical biorefineries such as (1) the optimal feedstock slate based on available biomass and prices, (2) breakeven price analysis for available feedstocks, (3) impact analysis for changes in feedstock costs and product prices, (4) optimal biorefinery operations during unit shutdowns/turnarounds, and (5) incentives for increased processing capacity. These biorefinery examples are comparable to the crude oil purchasing and operational optimization studies that petroleum refiners perform routinely using LPs and other optimization models. It is important to note that the analyses presented in this article are strictly theoretical and are not based on current energy market prices. The pricing structure assigned for this demonstrative analysis is consistent with $4 per gallon gasoline, which clearly assumes an economic environment that would favor the construction and operation of biorefineries. The analysis approach and examples provide valuable insights into the usefulness of analysis tools for maximizing the potential benefits of biomass utilization for the production of fuels, chemicals and power.
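
    To make the feedstock-slate idea concrete, the sketch below poses a two-feedstock profit-maximization LP with SciPy. The yields, costs, capacity, and availability numbers are invented for illustration and are not the NREL/INL model's data; only the $4-per-gallon pricing assumption mirrors the article.

```python
from scipy.optimize import linprog

# Two hypothetical feedstocks: fuel yields (GGE/dry ton) and costs ($/dry ton).
yields = [55.0, 48.0]
costs = [60.0, 45.0]
price = 4.0  # $/GGE, consistent with the article's $4/gal gasoline assumption

# linprog minimizes, so negate the per-ton profit to maximize it.
c = [-(price * y - k) for y, k in zip(yields, costs)]

# Constraints: shared conversion capacity and per-feedstock availability (tons/day).
A_ub = [[1.0, 1.0]]
b_ub = [2000.0]
bounds = [(0.0, 1500.0), (0.0, 1200.0)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("optimal slate (tons/day):", res.x)
print(f"daily gross profit: ${-res.fun:,.0f}")
```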

  5. SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows

    NASA Astrophysics Data System (ADS)

    Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu

    2017-12-01

    A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives an LES (Large Eddy Simulation) on SOMAR's finest grids, forced with large-scale forcing from the coarser grids. Three-dimensional simulations of internal tide generation, propagation and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine grid resolution. This relative cost is significantly reduced in situations with intermittent turbulence or where the location of the turbulence is not known a priori, because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at an isolated two-dimensional or three-dimensional topography, and we compare the results with the numerical experiments of Legg (2014). We find good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational cost is expected relative to traditional non-adaptive solvers.

  6. Building Excellence in Project Execution: Integrated Project Management

    DTIC Science & Technology

    2015-04-30

    ...Systems Center Pacific (SSC Pacific) is addressing this challenge by adopting and refining the CMMI Model and building the tenets of integrated project management (IPM) into project planning and execution... successfully managing stakeholder expectations and meeting requirements. Under the Capability Maturity Model Integration (CMMI), IPM is defined as...

  7. The Second Prototype of the Development of a Technological Pedagogical Content Knowledge Based Instructional Design Model: An Implementation Study in a Technology Integration Course

    ERIC Educational Resources Information Center

    Lee, Chia-Jung; Kim, ChanMin

    2014-01-01

    This study presents a refined technological pedagogical content knowledge (also known as TPACK) based instructional design model, which was revised using findings from the implementation study of a prior model. The refined model was applied in a technology integration course with 38 preservice teachers. A case study approach was used in this…

  8. Dynamic particle refinement in SPH: application to free surface flow and non-cohesive soil simulations

    NASA Astrophysics Data System (ADS)

    Reyes López, Yaidel; Roose, Dirk; Recarey Morfa, Carlos

    2013-05-01

    In this paper, we present a dynamic refinement algorithm for the smoothed particle hydrodynamics (SPH) method. An SPH particle is refined by replacing it with smaller daughter particles, whose positions are calculated using a square pattern centered at the position of the refined particle. We determine both the optimal separation and the smoothing distance of the new particles such that the error produced by the refinement in the gradient of the kernel is small and possible numerical instabilities are reduced. We implemented the dynamic refinement procedure in two different models: one for free-surface flows and one for post-failure flow of non-cohesive soil. The results obtained for the test problems indicate that the dynamic refinement procedure provides a good trade-off between the accuracy and the cost of the simulations.
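
    A minimal version of the splitting step is easy to sketch: replace one particle by four daughters on a square pattern, share the mass equally, and shrink the smoothing length. The separation and smoothing factors below are placeholder values, not the optimized ones derived in the paper.

```python
import numpy as np

def refine_particle(pos, mass, h, alpha=0.6, eps=0.35):
    """Replace one 2D SPH particle by four daughters on a square pattern.
    eps*h sets the daughter offsets and alpha*h the new smoothing length;
    the mass is shared equally so that total mass is conserved."""
    offsets = eps * h * np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], float)
    daughters = pos + offsets
    return daughters, np.full(4, mass / 4.0), alpha * h

daughters, masses, h_new = refine_particle(np.array([0.0, 0.0]), mass=1.0, h=0.1)
print(daughters)
print(masses.sum(), h_new)   # total mass preserved; smaller smoothing length
```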

  9. Microstructural Evolution of Al-1Fe (Weight Percent) Alloy During Accumulative Continuous Extrusion Forming

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Guan, Ren-Guo; Tie, Di; Shang, Ying-Qiu; Jin, Hong-Mei; Li, Hong-Chao

    2018-04-01

    As a new microstructure refining method, accumulative continuous extrusion forming (ACEF) can refine not only the metal matrix but also the phases that exist within it. To characterize the refinement of the grains and the second phase during the process, Al-1Fe (wt pct) alloy was processed by ACEF, and the microstructural evolution was analyzed by electron backscatter diffraction (EBSD) and transmission electron microscopy (TEM). The results revealed that the average grain size of the Al-1Fe (wt pct) alloy decreased from 13 to 1.2 μm, and that blocky Al3Fe phase with an average length of 300 nm was granulated into Al3Fe particles with an average diameter of 200 nm after one pass of ACEF. Grain refinement was attributed to continuous dynamic recrystallization (CDRX), while the granulation of the Al3Fe phase included spheroidization resulting from deformation heat and fragmentation caused by the coupled effects of strain and heat. Spheroidization operated throughout almost the entire deformation process, whereas fragmentation required strain accumulation; nevertheless, fragmentation contributed more than spheroidization. Al3Fe particles stimulated the formation of substructure and retarded the migration of recrystallized grain boundaries, but the effect of the Al3Fe phase on grain refinement could only be determined by a comparative investigation of the Al-1Fe (wt pct) alloy and pure Al.

  10. A refined methodology for modeling volume quantification performance in CT

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Wilson, Joshua; Samei, Ehsan

    2014-03-01

    The utility of the CT lung nodule volume quantification technique depends on the precision of the quantification. To enable the evaluation of quantification precision, we previously developed a mathematical model that related precision to image resolution and noise properties in uniform backgrounds in terms of an estimability index (e'). The e' was shown to predict empirical precision across 54 imaging and reconstruction protocols, but with different correlation qualities for FBP and iterative reconstruction (IR) due to the non-linearity of IR, which is affected by anatomical structure. To better account for the non-linearity of IR, this study aimed to refine the noise characterization of the model in the presence of textured backgrounds. Repeated scans of an anthropomorphic lung phantom were acquired. Subtracted images were used to measure the image quantum noise, which was then used to adjust the noise component of the e' calculation measured from a uniform region. In addition to the model refinement, validation of the model was extended to two nodule sizes (5 and 10 mm) and two segmentation algorithms. Results showed that the magnitude of IR's quantum noise was significantly higher in structured backgrounds than in uniform backgrounds (ASiR, 30-50%; MBIR, 100-200%). With the refined model, the correlation between e' values and empirical precision no longer depended on the reconstruction algorithm. In conclusion, the model with refined noise characterization reflected the non-linearity of iterative reconstruction in structured backgrounds, and further showed successful prediction of quantification precision across a variety of nodule sizes, dose levels, slice thicknesses, reconstruction algorithms, and segmentation software.

  11. Increasing the Cryogenic Toughness of Steels

    NASA Technical Reports Server (NTRS)

    Rush, H. F.

    1986-01-01

    Grain-refining heat treatments increase toughness without substantial strength loss. Five alloys, all at or near their technological limit, were selected for study. The results showed clearly that the grain sizes of these alloys are refined by such heat treatments and that grain refinement results in a large improvement in toughness without substantial loss in strength. The best improvements were seen in HP-9-4-20 steel, at the low-strength end of the technological limit, and in Maraging 200, at the high-strength end. These alloys, in the grain-refined condition, are considered for model applications in high-Reynolds-number cryogenic wind tunnels.

  12. 3Drefine: an interactive web server for efficient protein structure refinement.

    PubMed

    Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin

    2016-07-08

    3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of the hydrogen-bonding network combined with atomic-level energy minimization of the optimized model using composite physics- and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated in blind CASP experiments as well as on large-scale and diverse benchmark datasets, and it exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through text or file input submission, provides email notification and an example submission, and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet, which is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  13. GSRP/David Marshall: Fully Automated Cartesian Grid CFD Application for MDO in High Speed Flows

    NASA Technical Reports Server (NTRS)

    2003-01-01

    With the renewed interest in Cartesian gridding methodologies for the ease and speed of gridding complex geometries in addition to the simplicity of the control volumes used in the computations, it has become important to investigate ways of extending the existing Cartesian grid solver functionalities. This includes developing methods of modeling the viscous effects in order to utilize Cartesian grids solvers for accurate drag predictions and addressing the issues related to the distributed memory parallelization of Cartesian solvers. This research presents advances in two areas of interest in Cartesian grid solvers, viscous effects modeling and MPI parallelization. The development of viscous effects modeling using solely Cartesian grids has been hampered by the widely varying control volume sizes associated with the mesh refinement and the cut cells associated with the solid surface. This problem is being addressed by using physically based modeling techniques to update the state vectors of the cut cells and removing them from the finite volume integration scheme. This work is performed on a new Cartesian grid solver, NASCART-GT, with modifications to its cut cell functionality. The development of MPI parallelization addresses issues associated with utilizing Cartesian solvers on distributed memory parallel environments. This work is performed on an existing Cartesian grid solver, CART3D, with modifications to its parallelization methodology.

  14. Acute, subchronic, and developmental toxicological properties of lubricating oil base stocks.

    PubMed

    Dalbey, Walden E; McKee, Richard H; Goyak, Katy Olsavsky; Biles, Robert W; Murray, Jay; White, Russell

    2014-01-01

    Lubricating oil base stocks (LOBs) are substances used in the manufacture of finished lubricants and greases. They are produced from residue remaining after atmospheric distillation of crude oil that is subsequently fractionated by vacuum distillation and additional refining steps. Initial LOB streams that have been produced by vacuum distillation but not further refined may contain polycyclic aromatic compounds (PACs) and may present carcinogenic hazards. In modern refineries, LOBs are further refined by multistep processes including solvent extraction and/or hydrogen treatment to reduce the levels of PACs and other undesirable constituents. Thus, mildly (insufficiently) refined LOBs are potentially more hazardous than more severely (sufficiently) refined LOBs. This article discusses the evaluation of LOBs using statistical models based on content of PACs; these models indicate that insufficiently refined LOBs (potentially carcinogenic LOBs) can also produce systemic and developmental effects with repeated dermal exposure. Experimental data were also obtained in ten 13-week dermal studies in rats, eight 4-week dermal studies in rabbits, and seven dermal developmental toxicity studies with sufficiently refined LOBs (noncarcinogenic and commonly marketed) in which no observed adverse effect levels for systemic toxicity and developmental toxicity were 1000 to 2000 mg/kg/d with dermal exposures, typically the highest dose tested. Results in both oral and inhalation developmental toxicity studies were similar. This absence of toxicologically relevant findings was consistent with lower PAC content of sufficiently refined LOBs. Based on data on reproductive organs with repeated dosing and parameters in developmental toxicity studies, sufficiently refined LOBs are likely to have little, if any, effect on reproductive parameters.

  15. A hydrological model for interprovincial water resource planning and management: A case study in the Long Xuyen Quadrangle, Mekong Delta, Vietnam

    NASA Astrophysics Data System (ADS)

    Hanington, Peter; To, Quang Toan; Van, Pham Dang Tri; Doan, Ngoc Anh Vu; Kiem, Anthony S.

    2017-04-01

    In this paper we present the results of the development and calibration of a fine-scaled quasi-2D hydrodynamic model (IWRM-LXQ) for the Long Xuyen Quadrangle - an important interprovincial agricultural region in the Vietnamese Mekong Delta. We use the Long Xuyen Quadrangle as a case study to highlight the need for further investment in hydrodynamic modelling at scales relevant to the decisions facing water resource managers and planners in the Vietnamese Mekong Delta. The IWRM-LXQ was calibrated using existing data from a low flood year (2010) and high flood year (2011), including dry season and wet season flows. The model performed well in simulating low flood and high flood events in both dry and wet seasons where good spatial and temporal data exists. However, our study shows that there are data quality issues and key data gaps that need to be addressed before the model can be further refined, validated and then used for decision making. The development of the IWRM-LXQ is timely, as significant investments in land and water resource development and infrastructure are in planning for the Vietnamese Mekong Delta. In order to define the scope of such investments and their feasibility, models such as the IWRM-LXQ are an essential tool to provide objective assessment of investment options and build stakeholder consensus around potentially contentious development decisions.

  16. Mathematical modelling and quantitative methods.

    PubMed

    Edler, L; Poirier, K; Dourson, M; Kleiner, J; Mileson, B; Nordmann, H; Renwick, A; Slob, W; Walton, K; Würtzen, G

    2002-01-01

    The present review reports on the mathematical methods and statistical techniques currently available for hazard characterisation. The state of the art of mathematical modelling and quantitative methods used for regulatory decision-making in Europe, and additional potential methods for risk assessment of chemicals in food and diet, are described. Existing practices of JECFA, FDA, EPA, etc., are examined for their similarities and differences. A framework is established for the development of new and improved quantitative methodologies. Areas for refinement, improvement and increased efficiency of each method are identified in a gap analysis. Based on this critical evaluation, needs for future research are defined. It is concluded from our work that mathematical modelling of the dose-response relationship would improve the risk assessment process. An adequate characterisation of the dose-response relationship by mathematical modelling clearly requires the use of a sufficient number of dose groups to achieve a range of different response levels. This need not necessarily lead to an increase in the total number of animals in the study if an appropriate design is used. Chemical-specific data relating to the mode or mechanism of action and/or the toxicokinetics of the chemical should be used for dose-response characterisation whenever possible. It is concluded that a single method of hazard characterisation would not be suitable for all kinds of risk assessments, and that a range of different approaches is necessary so that the method used is the most appropriate for the data available and for the risk characterisation issue. Future refinements to dose-response characterisation should incorporate more clearly the extent of uncertainty and variability in the resulting output.
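
    As a concrete instance of dose-response modelling with enough dose groups to pin down the curve, the sketch below fits a Hill (log-logistic) model and inverts it for a benchmark dose. The dose groups and responses are invented, and the Hill form is just one of the candidate models such a review considers.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative dose groups and observed response fractions (invented data).
dose = np.array([0.0, 1.0, 3.0, 10.0, 30.0, 100.0])
response = np.array([0.02, 0.05, 0.12, 0.35, 0.70, 0.93])

def hill(d, top, ed50, n):
    """Hill (log-logistic) dose-response model."""
    return top * d**n / (ed50**n + d**n)

(top, ed50, n), _ = curve_fit(hill, dose, response, p0=[1.0, 10.0, 1.0],
                              bounds=([0.1, 0.1, 0.1], [2.0, 1000.0, 5.0]))
print(f"top={top:.2f}, ED50={ed50:.1f}, slope n={n:.2f}")

# Benchmark dose for a 10% response level, from inverting the fitted model.
bmr = 0.10
bmd = ed50 * (bmr / (top - bmr)) ** (1.0 / n)
print(f"BMD(10%) ~= {bmd:.2f}")
```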

  17. Variable strategy model of the human operator

    NASA Astrophysics Data System (ADS)

    Phillips, John Michael

    Human operators often employ discontinuous or "bang-bang" control strategies when performing large-amplitude acquisition tasks. The current study applies Variable Structure Control (VSC) techniques to model human operator behavior during acquisition tasks. The result is a coupled, multi-input model replicating the discontinuous control strategy. In the VSC formulation, a switching surface is the mathematical representation of the operator's control strategy. The performance of the Variable Strategy Model (VSM) is evaluated by considering several examples, including the longitudinal control of an aircraft during the visual landing task. The aircraft landing task becomes an acquisition maneuver whenever large initial offsets occur. Several different strategies are explored in the VSM formulation of the aircraft landing task. First, a switching surface is constructed from literal interpretations of pilot training literature. This approach yields a mathematical representation of how a pilot is trained to fly a generic aircraft. This switching surface is shown to bound the trajectory response of a group of pilots performing an offset landing task in an aircraft simulator study. Next, front-side and back-side landing strategies are compared. A back-side landing strategy is found to be capable of landing an aircraft flying on either the front side or the back side of the power curve. However, the front-side landing strategy is found to be insufficient for landing an aircraft flying on the back side. Finally, a more refined landing strategy is developed that takes into account the specific aircraft's dynamic characteristics. The refined strategy is translated back into terminology similar to the existing pilot training literature.
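
    The classic textbook analogue of such a switching surface is time-optimal control of a double integrator, where the surface s(x, v) = x + v|v|/(2 u_max) partitions the state space and the control is u = -u_max sign(s). The sketch below simulates that bang-bang acquisition from a large offset; it illustrates the VSC idea generically rather than reproducing the dissertation's aircraft model.

```python
import numpy as np

# Bang-bang acquisition of the origin by a double integrator x'' = u, |u| <= u_max,
# using the parabolic switching surface s(x, v) = x + v|v| / (2 * u_max).
u_max, dt = 1.0, 0.01
x, v = 5.0, 0.0                          # large initial offset, as in acquisition

steps = 0
for steps in range(1, 5001):
    s = x + v * abs(v) / (2.0 * u_max)   # signed distance from the switching surface
    u = -u_max * np.sign(s if abs(s) > 1e-9 else v)
    x += v * dt                          # explicit Euler integration
    v += u * dt
    if abs(x) < 1e-3 and abs(v) < 1e-3:
        break

print(f"t = {steps * dt:.2f} s, final state x = {x:.4f}, v = {v:.4f}")
```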

  18. Improving virtual screening of G protein-coupled receptors via ligand-directed modeling

    PubMed Central

    Simms, John; Christopoulos, Arthur; Wootten, Denise

    2017-01-01

    G protein-coupled receptors (GPCRs) play crucial roles in cell physiology and pathophysiology. There is increasing interest in using structural information for virtual screening (VS) of libraries and for structure-based drug design to identify novel agonist or antagonist leads. However, the sparse availability of experimentally determined GPCR/ligand complex structures with diverse ligands impedes the application of structure-based drug design (SBDD) programs directed at identifying new molecules with a select pharmacology. In this study, we apply ligand-directed modeling (LDM) to available GPCR X-ray structures to improve VS performance and selectivity towards molecules of a specific pharmacological profile. The described method refines a GPCR binding pocket conformation using a single known ligand for that GPCR. The LDM method is a computationally efficient, iterative workflow consisting of protein sampling and ligand docking. We developed an extensive benchmark comparing LDM-refined binding pockets to GPCR X-ray crystal structures across seven different GPCRs bound to a range of ligands of different chemotypes and pharmacological profiles. LDM-refined models showed improvement in VS performance over the original X-ray crystal structures in 21 out of 24 cases. In all cases, the LDM-refined models had superior performance in enriching for the chemotype of the refinement ligand. This likely contributes to the LDM success in all cases of inhibitor-bound to agonist-bound binding pocket refinement, a key task for GPCR SBDD programs. Indeed, agonist ligands are required for a plethora of GPCRs for therapeutic intervention; however, GPCR X-ray structures are mostly restricted to their inactive, inhibitor-bound state. PMID:29131821

  19. MCore: A High-Order Finite-Volume Dynamical Core for Atmospheric General Circulation Models

    NASA Astrophysics Data System (ADS)

    Ullrich, P.; Jablonowski, C.

    2011-12-01

    The desire for increasingly accurate predictions of the atmosphere has driven numerical models to finer and finer resolutions, while simultaneously driving up the cost of existing numerical models exponentially. Even with the modern rapid advancement of computational performance, it is estimated that it will take more than twenty years before existing models approach the scales needed to resolve atmospheric convection. However, smarter numerical methods may allow us to glimpse the types of results we would expect from these fine-scale simulations while requiring only a fraction of the computational cost. The next generation of atmospheric models will likely need to rely on both high-order accuracy and adaptive mesh refinement in order to properly capture features of interest. We present our ongoing research on developing a set of "smart" numerical methods for simulating the global non-hydrostatic fluid equations which govern atmospheric motions. We have harnessed a high-order finite-volume based approach in developing an atmospheric dynamical core on the cubed-sphere. This type of method is desirable for applications involving adaptive grids, since it has been shown that spuriously reflected wave modes are intrinsically damped out under this approach. The model further makes use of an implicit-explicit Runge-Kutta-Rosenbrock (IMEX-RKR) time integrator for accurate and efficient coupling of the horizontal and vertical model components. We survey the algorithmic development of the model and present results from idealized dynamical core test cases, as well as give a glimpse at future work with our model.

  20. A methodology for quadrilateral finite element mesh coarsening

    DOE PAGES

    Staten, Matthew L.; Benzley, Steven; Scott, Michael

    2008-03-27

    High fidelity finite element modeling of continuum mechanics problems often requires using all-quadrilateral or all-hexahedral meshes. The efficiency of such models is often dependent upon the ability to adapt a mesh to the physics of the phenomena. Adapting a mesh requires the ability to both refine and/or coarsen the mesh. The algorithms available to refine and coarsen triangular and tetrahedral meshes are very robust and efficient. However, the ability to locally and conformally refine or coarsen all-quadrilateral and all-hexahedral meshes presents many difficulties. Some research has been done on localized conformal refinement of quadrilateral and hexahedral meshes. However, little work has been done on localized conformal coarsening of quadrilateral and hexahedral meshes. A general method which provides both localized conformal coarsening and refinement for quadrilateral meshes is presented in this paper. This method is based on restructuring the mesh with simplex manipulations to the dual of the mesh. Finally, this method appears to be extensible to hexahedral meshes in three dimensions.

  1. Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Li, Y.

    2016-12-01

    We present a 2D inversion algorithm for frequency-domain marine controlled-source electromagnetic (CSEM) data, which is based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting that biases refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver MUMPS, we can efficiently compute the electromagnetic fields for multiple sources along with the parametric sensitivities. We also implement the parallel data-domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out using the Gauss-Newton algorithm, and the model perturbations at each iteration step are obtained using an inexact conjugate gradient iteration. Synthetic test inversions are presented.
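
    The inversion's outer loop, a regularized Gauss-Newton iteration, can be sketched compactly. The toy forward operator and Jacobian below are invented stand-ins for the CSEM response, and the update solves the damped normal equations directly rather than with the inexact conjugate gradient iteration used in the actual algorithm.

```python
import numpy as np

def gauss_newton(forward, jacobian, d_obs, m0, lam=1e-3, n_iter=10):
    """Regularized Gauss-Newton: minimize ||F(m) - d||^2 + lam * ||m - m0||^2."""
    m = m0.copy()
    for _ in range(n_iter):
        r = forward(m) - d_obs
        J = jacobian(m)
        H = J.T @ J + lam * np.eye(len(m))              # damped normal matrix
        dm = np.linalg.solve(H, -(J.T @ r) - lam * (m - m0))
        m = m + dm
    return m

# Toy nonlinear forward operator standing in for the CSEM response.
forward = lambda m: np.array([m[0]**2 + m[1], np.exp(m[1]) - m[0]])
jacobian = lambda m: np.array([[2.0 * m[0], 1.0], [-1.0, np.exp(m[1])]])

m_true = np.array([1.5, 0.3])
m_est = gauss_newton(forward, jacobian, forward(m_true), np.array([1.0, 0.0]))
print(m_est)   # recovers m_true up to the (small) regularization bias
```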

  2. Boomerang: A method for recursive reclassification.

    PubMed

    Devlin, Sean M; Ostrovnaya, Irina; Gönen, Mithat

    2016-09-01

    While there are many validated prognostic classifiers used in practice, often their accuracy is modest and heterogeneity in clinical outcomes exists in one or more risk subgroups. Newly available markers, such as genomic mutations, may be used to improve the accuracy of an existing classifier by reclassifying patients from a heterogenous group into a higher or lower risk category. The statistical tools typically applied to develop the initial classifiers are not easily adapted toward this reclassification goal. In this article, we develop a new method designed to refine an existing prognostic classifier by incorporating new markers. The two-stage algorithm called Boomerang first searches for modifications of the existing classifier that increase the overall predictive accuracy and then merges to a prespecified number of risk groups. Resampling techniques are proposed to assess the improvement in predictive accuracy when an independent validation data set is not available. The performance of the algorithm is assessed under various simulation scenarios where the marker frequency, degree of censoring, and total sample size are varied. The results suggest that the method selects few false positive markers and is able to improve the predictive accuracy of the classifier in many settings. Lastly, the method is illustrated on an acute myeloid leukemia data set where a new refined classifier incorporates four new mutations into the existing three category classifier and is validated on an independent data set. © 2016, The International Biometric Society.
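
    A toy version of the first-stage search helps fix ideas: given an existing three-level risk grouping and one new binary marker, try single reclassification moves and keep the one that improves a simple accuracy score. This sketch substitutes a binary outcome for the paper's censored survival data and a greedy scan for its full search, so it is only loosely Boomerang-flavored.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in data: existing risk groups (0=low, 1=intermediate, 2=high), one new
# binary marker, and a binary adverse outcome.
n = 400
risk = rng.integers(0, 3, size=n)
marker = rng.integers(0, 2, size=n)
p_event = 0.15 + 0.25 * risk + 0.25 * marker * (risk == 1)  # marker matters mid-group
outcome = rng.random(n) < p_event

def accuracy(groups):
    """Score a grouping by predicting 'event' in groups with above-average rates."""
    pred = np.empty(n, dtype=bool)
    for g in np.unique(groups):
        idx = groups == g
        pred[idx] = outcome[idx].mean() > outcome.mean()
    return np.mean(pred == outcome)

# Greedy first stage: promote marker-positive patients out of each group in turn
# and keep the single move that most improves the score.
best_groups, best_acc = risk, accuracy(risk)
for g in range(3):
    trial = risk.copy()
    trial[(risk == g) & (marker == 1)] = min(g + 1, 2)
    if accuracy(trial) > best_acc:
        best_groups, best_acc = trial, accuracy(trial)

print(f"baseline accuracy {accuracy(risk):.3f}, after best reclassification {best_acc:.3f}")
```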

  3. Boomerang: A Method for Recursive Reclassification

    PubMed Central

    Devlin, Sean M.; Ostrovnaya, Irina; Gönen, Mithat

    2016-01-01

    Summary While there are many validated prognostic classifiers used in practice, often their accuracy is modest and heterogeneity in clinical outcomes exists in one or more risk subgroups. Newly available markers, such as genomic mutations, may be used to improve the accuracy of an existing classifier by reclassifying patients from a heterogenous group into a higher or lower risk category. The statistical tools typically applied to develop the initial classifiers are not easily adapted towards this reclassification goal. In this paper, we develop a new method designed to refine an existing prognostic classifier by incorporating new markers. The two-stage algorithm called Boomerang first searches for modifications of the existing classifier that increase the overall predictive accuracy and then merges to a pre-specified number of risk groups. Resampling techniques are proposed to assess the improvement in predictive accuracy when an independent validation data set is not available. The performance of the algorithm is assessed under various simulation scenarios where the marker frequency, degree of censoring, and total sample size are varied. The results suggest that the method selects few false positive markers and is able to improve the predictive accuracy of the classifier in many settings. Lastly, the method is illustrated on an acute myeloid leukemia dataset where a new refined classifier incorporates four new mutations into the existing three category classifier and is validated on an independent dataset. PMID:26754051

  4. Partial unfolding and refolding for structure refinement: A unified approach of geometric simulations and molecular dynamics.

    PubMed

    Kumar, Avishek; Campitelli, Paul; Thorpe, M F; Ozkan, S Banu

    2015-12-01

    The most successful protein structure prediction methods to date have been template-based modeling (TBM) or homology modeling, which predict protein structure based on experimental structures. These high-accuracy predictions sometimes retain structural errors due to incorrect templates, or due to a lack of accurate templates in the case of low sequence similarity, making these structures inadequate for drug-design studies or molecular dynamics simulations. We have developed a new physics-based approach to the protein refinement problem that mimics the mechanism of chaperones that rehabilitate misfolded proteins. The template structure is unfolded by selectively (targeted) pulling on different portions of the protein using the geometry-based technique FRODA, and then refolded using hierarchically restrained replica exchange molecular dynamics simulations (hr-REMD). FRODA unfolding is used to create a diverse set of topologies for surveying near-native structures from a template and to provide a set of persistent contacts to be employed during refolding. We have tested our approach on 13 previous CASP targets and observed that this method of folding an ensemble of partially unfolded structures, through the hierarchical addition of contact restraints (that is, first local and then nonlocal interactions), leads to a refolding of the structure along with refinement in most cases (12/13). Although this approach yields refined models through improved sampling, the task of blind selection of the best refined models still needs to be solved. Overall, the method can be useful for improving sampling for low-resolution models where certain portions of the structure are incorrectly modeled. © 2015 Wiley Periodicals, Inc.

  5. Incremental triangulation by way of edge swapping and local optimization

    NASA Technical Reports Server (NTRS)

    Wiltberger, N. Lyn

    1994-01-01

    This document is intended to serve as an installation, usage, and basic theory guide for the two-dimensional triangulation software 'HARLEY', written for the Silicon Graphics IRIS workstation. This code consists of an incremental triangulation algorithm based on point insertion and local edge swapping. Using this basic strategy, several types of triangulations can be produced depending on user-selected options. For example, local edge-swapping criteria can be chosen which minimize the maximum interior angle (a MinMax triangulation) or which maximize the minimum interior angle (a MaxMin or Delaunay triangulation). It should be noted that the MinMax triangulation is generally only locally optimal (not globally optimal) in this measure. The MaxMin triangulation, however, is both locally and globally optimal. In addition, Steiner triangulations can be constructed by inserting new sites at triangle circumcenters followed by edge swapping based on the MaxMin criterion. Incremental insertion of sites also provides flexibility in choosing cell refinement criteria. A dynamic heap structure has been implemented in the code so that once a refinement measure is specified (i.e., maximum aspect ratio or some measure of a solution gradient for solution-adaptive grid generation), the cell with the largest value of this measure is continually removed from the top of the heap and refined. The heap refinement strategy allows the user either to specify the number of cells desired or to refine the mesh until all cell refinement measures satisfy a user-specified tolerance level. Since the dynamic heap structure is constantly updated, the algorithm always refines the particular cell in the mesh with the largest value of the refinement criterion. The code allows the user to: triangulate a cloud of prespecified points (sites); triangulate a set of prespecified interior points constrained by prespecified boundary curve(s); Steiner-triangulate the interior/exterior of prespecified boundary curve(s); refine existing triangulations based on solution error measures; and partition meshes based on the Cuthill-McKee, spectral, and coordinate bisection strategies.
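
    The MaxMin (Delaunay) swap decision for a shared edge reduces to the classic incircle predicate. A small sketch of that test follows, with invented coordinates; production codes typically use exact arithmetic for robustness rather than a plain floating-point determinant.

```python
import numpy as np

def should_swap(p, q, a, b):
    """MaxMin (Delaunay) edge-swap test for edge (p, q) shared by triangles
    (p, q, a) and (q, p, b): swap to edge (a, b) when b lies inside the
    circumcircle of (p, q, a), assuming (p, q, a) is counter-clockwise."""
    rows = []
    for v in (p, q, a):
        dx, dy = v[0] - b[0], v[1] - b[1]
        rows.append([dx, dy, dx * dx + dy * dy])
    return np.linalg.det(np.array(rows)) > 0.0

p, q = (0.0, 0.0), (1.0, 0.0)
a, b = (0.5, 0.2), (0.5, -0.2)   # sliver pair: swapping improves the minimum angle
print(should_swap(p, q, a, b))   # True -> replace edge pq with edge ab
```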

  6. Thermal modeling of grinding for process optimization and durability improvements

    NASA Astrophysics Data System (ADS)

    Hanna, Ihab M.

    Both the thermal and the mechanical aspects of the grinding process are investigated in detail in an effort to predict grinding-induced residual stresses. An existing thermal model is used as a foundation for computing heat partitions and temperatures in surface grinding. By numerically processing data from IR temperature measurements of the grinding zone, characterizations are made of the grinding zone heat flux. It is concluded that the typical heat flux profile in the grinding zone is triangular in shape, supporting this often-used assumption found in the literature. Further analysis of the computed heat flux profiles reveals that actual grinding zone contact lengths exceed geometric contact lengths by an average of 57% for the cases considered. By integrating the resulting heat flux profiles, workpiece energy partitions are computed for several cases of dry conventional grinding of hardened steel. The average workpiece energy partition for the cases considered was 37%. In an effort to more accurately predict grinding zone temperatures and heat fluxes, refinements are made to the existing thermal model. These include consideration of contact length extensions due to local elastic deformations, variation of the assumed contact area ratio as a function of grinding process parameters, consideration of the coolant's latent heat of vaporization and its effect on heat transfer beyond the coolant boiling point, and incorporation of coolant-workpiece convective heat flux effects outside the grinding zone. The result of the model refinements accounting for contact length extensions and process-dependent contact area ratios is excellent agreement with IR temperature measurements over a wide range of grinding conditions. By accounting for latent heat of vaporization effects, grinding zone temperature profiles are shown to be capable of reproducing measured profiles found in the literature for cases on the verge of thermal surge conditions. Computed peak grinding zone temperatures for the aggressive grinding examples given are 30-50% lower than those computed using the existing thermal model formulation. By accounting for convective heat transfer effects outside the grinding zone, it is shown that while surface temperatures in the wake of the grinding zone may be significantly affected under highly convective conditions, computed residual stresses are less sensitive to such conditions. Numerical models are used to evaluate both thermally and mechanically induced stress fields in an elastic workpiece, while finite element modeling is used to evaluate residual stresses for workpieces with elastic-plastic material properties. Modeling of mechanical interactions at the local grit-workpiece length scale is used to recreate the often-measured effect of a compressive surface residual stress followed by a subsurface tensile peak. The model is shown to be capable of reproducing trends found in the literature of surface residual stresses which are compressive for low-temperature grinding conditions, with surface stresses increasing linearly and becoming tensile with increasing temperatures. Further modifications to the finite element model are made to allow for transiently varying inputs for more complicated grinding processes of industrial components such as automotive cam lobes.

  7. From deep TLS validation to ensembles of atomic models built from elemental motions. II. Analysis of TLS refinement results by explicit interpretation

    DOE PAGES

    Afonine, Pavel V.; Adams, Paul D.; Urzhumtsev, Alexandre

    2018-06-08

    TLS modelling was developed by Schomaker and Trueblood to describe atomic displacement parameters through concerted (rigid-body) harmonic motions of an atomic group [Schomaker & Trueblood (1968), Acta Cryst. B24, 63–76]. The results of a TLS refinement are T, L and S matrices that provide individual anisotropic atomic displacement parameters (ADPs) for all atoms belonging to the group. These ADPs can be calculated analytically using a formula that relates the elements of the TLS matrices to atomic parameters. Alternatively, ADPs can be obtained numerically from the parameters of concerted atomic motions corresponding to the TLS matrices. Both procedures are expected to produce the same ADP values and therefore can be used to assess the results of TLS refinement. Here, the implementation of this approach in PHENIX is described and several illustrations, including the use of all models from the PDB that have been subjected to TLS refinement, are provided.
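
    The analytical route from group TLS matrices to per-atom ADPs admits a compact sketch. Assuming Cartesian coordinates, consistent units, and the Schomaker-Trueblood relation U = T + A L Aᵀ + A S + Sᵀ Aᵀ, with A the skew matrix built from the atom position relative to the libration origin, a minimal numpy version (an illustration, not PHENIX code) is:

      import numpy as np

      def adp_from_tls(T, L, S, r, origin=(0.0, 0.0, 0.0)):
          """ADP of one atom from group T, L, S matrices (Schomaker-Trueblood):
          U = T + A L A^T + A S + S^T A^T, A the skew matrix of r - origin."""
          x, y, z = np.asarray(r, float) - np.asarray(origin, float)
          A = np.array([[0.0,   z,  -y],
                        [ -z, 0.0,   x],
                        [  y,  -x, 0.0]])
          return T + A @ L @ A.T + A @ S + S.T @ A.T

      # toy check: pure libration about z for an atom on the x axis produces
      # displacement along y only (U_yy is the sole non-zero element)
      T0, S0 = np.zeros((3, 3)), np.zeros((3, 3))
      L0 = np.diag([0.0, 0.0, 0.01])   # rad^2, assumed
      print(adp_from_tls(T0, L0, S0, r=[1.0, 0.0, 0.0]))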

  8. Molecular dynamics-based refinement and validation for sub-5 Å cryo-electron microscopy maps.

    PubMed

    Singharoy, Abhishek; Teo, Ivan; McGreevy, Ryan; Stone, John E; Zhao, Jianhua; Schulten, Klaus

    2016-07-07

    Two structure determination methods, based on the molecular dynamics flexible fitting (MDFF) paradigm, are presented that resolve sub-5 Å cryo-electron microscopy (EM) maps with either single structures or ensembles of such structures. The methods, denoted cascade MDFF and resolution exchange MDFF, sequentially re-refine a search model against a series of maps of progressively higher resolutions, which ends with the original experimental resolution. Application of sequential re-refinement enables MDFF to achieve a radius of convergence of ~25 Å demonstrated with the accurate modeling of β-galactosidase and TRPV1 proteins at 3.2 Å and 3.4 Å resolution, respectively. The MDFF refinements uniquely offer map-model validation and B-factor determination criteria based on the inherent dynamics of the macromolecules studied, captured by means of local root mean square fluctuations. The MDFF tools described are available to researchers through an easy-to-use and cost-effective cloud computing resource on Amazon Web Services.

  9. From deep TLS validation to ensembles of atomic models built from elemental motions. II. Analysis of TLS refinement results by explicit interpretation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afonine, Pavel V.; Adams, Paul D.; Urzhumtsev, Alexandre

    TLS modelling was developed by Schomaker and Trueblood to describe atomic displacement parameters through concerted (rigid-body) harmonic motions of an atomic group [Schomaker & Trueblood (1968), Acta Cryst. B24, 63–76]. The results of a TLS refinement are T, L and S matrices that provide individual anisotropic atomic displacement parameters (ADPs) for all atoms belonging to the group. These ADPs can be calculated analytically using a formula that relates the elements of the TLS matrices to atomic parameters. Alternatively, ADPs can be obtained numerically from the parameters of concerted atomic motions corresponding to the TLS matrices. Both procedures are expected to produce the same ADP values and therefore can be used to assess the results of TLS refinement. Here, the implementation of this approach in PHENIX is described and several illustrations, including the use of all models from the PDB that have been subjected to TLS refinement, are provided.

  10. An international road map to improve pain assessment in people with impaired cognition: the development of the Pain Assessment in Impaired Cognition (PAIC) meta-tool.

    PubMed

    Corbett, Anne; Achterberg, Wilco; Husebo, Bettina; Lobbezoo, Frank; de Vet, Henrica; Kunz, Miriam; Strand, Liv; Constantinou, Marios; Tudose, Catalina; Kappesser, Judith; de Waal, Margot; Lautenbacher, Stefan

    2014-12-10

    Pain is common in people with dementia, yet identification is challenging. A number of pain assessment tools exist, utilizing observation of pain-related behaviours, vocalizations and facial expressions. Whilst they have been developed robustly, these tools often lack sufficient evidence of psychometric properties, such as reliability, face and construct validity, responsiveness and usability, and are not internationally implemented. The EU-COST initiative "Pain in impaired cognition, especially dementia" aims to combine the expertise of clinicians and researchers to address this important issue by building on previous research in the area, identifying existing pain assessment tools for dementia, and developing consensus on items for a new universal meta-tool for use in research and clinical settings. This paper reports on the initial phase of this collaboration. All existing observational pain behaviour tools were identified and their elements categorised using a three-step reduction process. Selection and refinement of items for the draft Pain Assessment in Impaired Cognition (PAIC) meta-tool was achieved through scrutiny of the evidence, consensus of expert opinion, frequency of use and alignment with the American Geriatrics Society (AGS) guidelines. The main aim of this process was to identify key items with potential empirical, rather than theoretical, value to take forward for testing. Twelve eligible assessment tools were identified, and pain items were categorised into behaviour, facial expression and vocalisation according to the AGS guidelines (Domains 1–3). These were refined to create the PAIC meta-tool for validation and further refinement. A decision was made to create a comprehensive toolkit to support the core assessment tool, providing additional resources for the assessment of overlapping symptoms in dementia, including AGS Domains 4–6, identification of specific types of pain, and assessment of the duration and location of pain. This multidisciplinary, cross-cultural initiative has created a draft meta-tool for capturing pain behaviour to be used across languages and cultures, based on the most promising items used in existing tools. The draft PAIC meta-tool will now be taken forward for evaluation according to COSMIN guidelines and the EU-COST protocol in order to exclude invalid items, refine included items and optimise the meta-tool.

  11. Model-based high-throughput design of ion exchange protein chromatography.

    PubMed

    Khalaf, Rushd; Heymann, Julia; LeSaout, Xavier; Monard, Florence; Costioli, Matteo; Morbidelli, Massimo

    2016-08-12

    This work describes the development of a model-based high-throughput design (MHD) tool for determining the operating space of a chromatographic cation-exchange protein purification process. Based on a previously developed thermodynamic mechanistic model, the MHD tool generates a large amount of system knowledge and thereby permits minimizing the required experimental workload. In particular, each new experiment is designed to generate information needed to help refine and improve the model; unnecessary experiments that do not increase system knowledge are avoided. Instead of aspiring to a perfectly parameterized model, the goal of this design tool is to use early model parameter estimates to find interesting experimental spaces, and to refine the parameter estimates with each new experiment until a satisfactory set of process parameters is found. The MHD tool is split into four sections: (1) prediction, high-throughput experimentation using experiments in (2) diluted conditions and (3) on robotic automated liquid-handling workstations, and (4) operating space determination and validation. (1) Protein and resin information, in conjunction with the thermodynamic model, is used to predict protein resin capacity. (2) The predicted model parameters are refined based on gradient experiments in diluted conditions. (3) Experiments on the robotic workstation are used to further refine the model parameters. (4) The refined model is used to determine an operating parameter space that allows for satisfactory purification of the protein of interest on the HPLC scale. Each section of the MHD tool defines the adequate experimental procedures for the next section, thus avoiding unnecessary experimental work. We used the MHD tool to design a polishing step for two proteins, a monoclonal antibody and a fusion protein, on two chromatographic resins, demonstrating its ability to strongly accelerate the early phases of process development. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Using Induction to Refine Information Retrieval Strategies

    NASA Technical Reports Server (NTRS)

    Baudin, Catherine; Pell, Barney; Kedar, Smadar

    1994-01-01

    Conceptual information retrieval systems use structured document indices, domain knowledge and a set of heuristic retrieval strategies to match user queries with a set of indices describing the document's content. Such retrieval strategies increase the set of relevant documents retrieved (increase recall), but at the expense of returning additional irrelevant documents (decrease precision). Usually in conceptual information retrieval systems this tradeoff is managed by hand and with difficulty. This paper discusses ways of managing this tradeoff by the application of standard induction algorithms to refine the retrieval strategies in an engineering design domain. We gathered examples of query/retrieval pairs during the system's operation using feedback from a user on the retrieved information. We then fed these examples to the induction algorithm and generated decision trees that refine the existing set of retrieval strategies. We found that (1) induction improved the precision on a set of queries generated by another user, without a significant loss in recall, and (2) in an interactive mode, the decision trees pointed out flaws in the retrieval and indexing knowledge and suggested ways to refine the retrieval strategies.
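
    A minimal sketch of the induction step, assuming scikit-learn and invented feature names (the paper predates that library; this only illustrates learning a relevance filter from query/retrieval feedback pairs).

      from sklearn.tree import DecisionTreeClassifier

      # each row pairs features of a query/retrieval event with user feedback;
      # feature names are hypothetical: [strategy_id, n_index_terms_matched,
      # domain_distance, document_age]
      X = [[0, 3, 1, 2], [0, 1, 4, 7], [1, 2, 2, 1], [1, 0, 5, 9],
           [2, 4, 0, 3], [2, 1, 3, 8], [0, 2, 2, 2], [1, 3, 1, 1]]
      y = [1, 0, 1, 0, 1, 0, 1, 1]   # 1 = user judged relevant, 0 = irrelevant

      # the learned tree refines the retrieval strategies: it filters what a
      # strategy returns, trading a little recall for improved precision
      tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
      print(tree.predict([[0, 2, 3, 5]]))   # keep or drop a newly retrieved document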

  13. Refining As-cast β-Ti Grains Through ZrN Inoculation

    NASA Astrophysics Data System (ADS)

    Qiu, Dong; Zhang, Duyao; Easton, Mark A.; St John, David H.; Gibson, Mark A.

    2018-03-01

    The columnar-to-equiaxed transition and remarkable refinement of β-Ti grains occur in an as-cast Ti-13Mo alloy when a new grain refiner, ZrN, is inoculated at a nitrogen level as low as 0.4 wt pct. The grain-refining effect is attributed to in situ-formed TiN particles that provide active nucleation sites and to solute Zr that promotes constitutional supercooling. Reproducible orientation relationships were identified between the TiN nucleants and the β-Ti matrix, and are well explained by the edge-to-edge matching model.

  14. On the temperature dependence of H-U{sub iso} in the riding hydrogen model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lübben, Jens; Volkmann, Christian; Grabowsky, Simon

    The temperature dependence of hydrogen U{sub iso} and parent U{sub eq} in the riding hydrogen model is investigated by neutron diffraction, aspherical-atom refinements and QM/MM and MO/MO cluster calculations. Fixed multipliers of 1.2 or 1.5 appear to be underestimates, especially at temperatures below 100 K. The temperature dependence of H-U{sub iso} in N-acetyl-l-4-hydroxyproline monohydrate is investigated. Imposing a constant, temperature-independent multiplier of 1.2 or 1.5 in the riding hydrogen model is found to be inaccurate, and severely underestimates H-U{sub iso} below 100 K. Neutron diffraction data at temperatures of 9, 150, 200 and 250 K provide benchmark results for this study. X-ray diffraction data to high resolution, collected at temperatures of 9, 30, 50, 75, 100, 150, 200 and 250 K (synchrotron and home source), reproduce neutron results only when evaluated by aspherical-atom refinement models, since these take into account bonding and lone-pair electron density; both invariom and Hirshfeld-atom refinement models enable a more precise determination of the magnitude of H-atom displacements than independent-atom model refinements. Experimental efforts are complemented by computing displacement parameters following the TLS+ONIOM approach. A satisfactory agreement between all approaches is found.
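
    A toy decomposition shows why a fixed multiplier fails on cooling. If the hydrogen displacement is modelled as a riding part proportional to the parent's U_eq plus a roughly temperature-independent internal (zero-point) contribution, the effective multiplier U_iso(H)/U_eq(parent) grows as the parent stiffens at low temperature. All numbers below are illustrative assumptions, not the paper's data.

      # toy model: U_iso(H) = k_ride * U_eq(parent) + U_internal, with the
      # internal (zero-point) part nearly independent of temperature
      k_ride = 1.2          # riding contribution multiplier (assumed)
      U_internal = 0.010    # A^2, assumed zero-point contribution of X-H modes

      for T, U_eq_parent in (("250 K", 0.030), ("100 K", 0.015), ("9 K", 0.006)):
          U_H = k_ride * U_eq_parent + U_internal
          print(T, "effective multiplier = %.2f" % (U_H / U_eq_parent))
      # prints ~1.5 at 250 K but ~2.9 at 9 K: a constant 1.2 or 1.5 underestimates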

  15. High Performance, Robust Control of Flexible Space Structures: MSFC Center Director's Discretionary Fund

    NASA Technical Reports Server (NTRS)

    Whorton, M. S.

    1998-01-01

    Many spacecraft systems have ambitious objectives that place stringent requirements on control systems. Achievable performance is often limited because of the difficulty of obtaining accurate models for flexible space structures. Achieving sufficiently high performance to accomplish mission objectives may require the ability to refine the control design model based on closed-loop test data and to tune the controller based on the refined model. A control system design procedure is developed based on mixed H2/H(infinity) optimization to synthesize a set of controllers explicitly trading off nominal performance against robust stability. A homotopy algorithm is presented which generates a trajectory of gains that may be implemented to determine the maximum achievable performance for a given model error bound. Examples show that a better balance between robustness and performance is obtained using the mixed H2/H(infinity) design method than with either H2 or mu-synthesis control design. A second contribution is a new procedure for closed-loop system identification which refines the parameters of a control design model in a canonical realization. Examples demonstrate convergence of the parameter estimation and the improved performance realized by using the refined model for controller redesign. These developments result in an effective mechanism for achieving high-performance control of flexible space structures.

  16. Initiating technical refinements in high-level golfers: Evidence for contradictory procedures.

    PubMed

    Carson, Howie J; Collins, Dave; Richards, Jim

    2016-01-01

    When developing motor skills there are several outcomes available to an athlete depending on their skill status and needs. Whereas the skill acquisition and performance literature is abundant, an under-researched outcome relates to the refinement of already acquired and well-established skills. Contrary to current recommendations for athletes to employ an external focus of attention and a representative practice design, Carson and Collins' (2011) [Refining and regaining skills in fixation/diversification stage performers: The Five-A Model. International Review of Sport and Exercise Psychology, 4, 146-167. doi: 10.1080/1750984x.2011.613682] Five-A Model requires an initial narrowed internal focus on the technical aspect needing refinement: the implication being that environments which limit external sources of information would be beneficial to achieving this task. Therefore, the purpose of this paper was to (1) provide a literature-based explanation for why techniques counter to current recommendations may be (temporarily) appropriate within the skill refinement process and (2) provide empirical evidence for such efficacy. Kinematic data and self-perception reports are provided from high-level golfers attempting to consciously initiate technical refinements while executing shots onto a driving range and into a close-proximity net (i.e. with limited knowledge of results). It was hypothesised that greater control over intended refinements would occur when environmental stimuli were reduced, in the most unrepresentative practice condition (i.e. hitting into a net). Results confirmed this, as evidenced by reduced intra-individual movement variability for all participants' individual refinements, despite little or no difference in reported mental effort. This research offers coaches guidance when working with performers who may find conscious recall difficult during the skill refinement process.

  17. KoBaMIN: a knowledge-based minimization web server for protein structure refinement.

    PubMed

    Rodrigues, João P G L M; Levitt, Michael; Chopra, Gaurav

    2012-07-01

    The KoBaMIN web server provides an online interface to a simple, consistent and computationally efficient protein structure refinement protocol based on minimization of a knowledge-based potential of mean force. The server can be used to refine either a single protein structure or an ensemble of proteins starting from their unrefined coordinates in PDB format. The refinement method is particularly fast and accurate due to the underlying knowledge-based potential derived from structures deposited in the PDB; as such, the energy function implicitly includes the effects of solvent and the crystal environment. Our server allows for an optional but recommended step that optimizes stereochemistry using the MESHI software. The KoBaMIN server also allows comparison of the refined structures with a provided reference structure to assess the changes brought about by the refinement protocol. The performance of KoBaMIN has been benchmarked widely on a large set of decoys, including all models generated at the seventh worldwide experiment on the critical assessment of techniques for protein structure prediction (CASP7), and it was also shown to produce top-ranking predictions in the refinement category at both CASP8 and CASP9, yielding consistently good results across a broad range of model quality values. The web server is fully functional and freely available at http://csb.stanford.edu/kobamin.

  18. Separation of Cs and Sr from LiCl-KCl eutectic salt via a zone-refining process for pyroprocessing waste salt minimization

    NASA Astrophysics Data System (ADS)

    Shim, Moonsoo; Choi, Ho-Gil; Choi, Jeong-Hun; Yi, Kyung-Woo; Lee, Jong-Hyeon

    2017-08-01

    The purification of a LiCl-KCl salt mixture was carried out by a zone-refining process. To improve the throughput of zone refining, three heaters were installed in the zone refiner. The zone-refining method was used to grow pure LiCl-KCl salt ingots from a LiCl-KCl-CsCl-SrCl2 salt mixture. The main parameters investigated were the heater speed and the number of passes. From each zone-refined salt ingot, samples were collected axially along the ingot and the concentrations of Sr and Cs were determined. Experimental results show that the Sr and Cs concentrations were low at the initial region of the ingot and increased to a maximum at the final freezing region. Concentration results for the zone-refined salt were compared with theoretical results furnished by the proposed model to validate its predictions. The keff values for Sr and Cs were 0.55 and 0.47, respectively. The correlation between salt composition and separation behavior was also investigated. The keff values of Sr in LiCl-KCl-SrCl2 and of Cs in LiCl-KCl-CsCl were found to be 0.53 and 0.44, respectively, by fitting the experimental data to the proposed model.
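
    The axial profiles described here follow the classical single-pass zone-refining relation C(x)/C0 = 1 - (1 - keff)·exp(-keff·x/l), which holds ahead of the final freezing region where the rejected solute piles up. A short numpy sketch using the keff values reported above; the initial concentration and zone length are assumptions.

      import numpy as np

      def single_pass_profile(x, keff, C0=1.0, zone_len=1.0):
          """Solute concentration after one molten-zone pass (Pfann relation);
          x is the axial position, valid before the final zone length of the ingot."""
          return C0 * (1.0 - (1.0 - keff) * np.exp(-keff * x / zone_len))

      x = np.linspace(0.0, 8.0, 5)   # axial position in units of the zone length
      for impurity, keff in (("Sr", 0.55), ("Cs", 0.47)):
          print(impurity, single_pass_profile(x, keff).round(3))
      # keff < 1 keeps the ingot's initial region depleted of Sr and Cs,
      # matching the axial trend reported above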

  19. Improved ligand geometries in crystallographic refinement using AFITT in PHENIX

    DOE PAGES

    Janowski, Pawel A.; Moriarty, Nigel W.; Kelley, Brian P.; ...

    2016-08-31

    Modern crystal structure refinement programs rely on geometry restraints to overcome the challenge of a low data-to-parameter ratio. While the classical Engh and Huber restraints work well for standard amino-acid residues, the chemical complexity of small-molecule ligands presents a particular challenge. Most current approaches either limit ligand restraints to those that can be readily described in the Crystallographic Information File (CIF) format, thus sacrificing chemical flexibility and energetic accuracy, or they employ protocols that substantially lengthen the refinement time, potentially hindering rapid automated refinement workflows. PHENIX–AFITT refinement uses a full molecular-mechanics force field for user-selected small-molecule ligands during refinement, eliminating the potentially difficult problem of finding or generating high-quality geometry restraints. It is fully integrated with a standard refinement protocol and requires practically no additional steps from the user, making it ideal for high-throughput workflows. PHENIX–AFITT refinements also handle multiple ligands in a single model, alternate conformations and covalently bound ligands. Here, the results of combining AFITT and the PHENIX software suite on a data set of 189 protein–ligand PDB structures are presented. Refinements using PHENIX–AFITT significantly reduce ligand conformational energy and lead to improved geometries without detriment to the fit to the experimental data. Finally, for the data presented, PHENIX–AFITT refinements result in more chemically accurate models for small-molecule ligands.

  20. Use of Molecular Dynamics for the Refinement of an Electrostatic Model for the In Silico Design of a Polymer Antidote for the Anticoagulant Fondaparinux

    PubMed Central

    Kwok, Ezra; Gopaluni, Bhushan; Kizhakkedathu, Jayachandran N.

    2013-01-01

    Molecular dynamics (MD) simulation results are herein incorporated into an electrostatic model used to determine the structure of an effective polymer-based antidote to the anticoagulant fondaparinux. In silico data for the polymer or its cationic binding groups have not, up to now, been available, and experimental data on the structure of the polymer-fondaparinux complex are extremely limited. Consequently, the task of optimizing the polymer structure is a daunting challenge. MD simulations provided a means to gain microscopic information on the interactions of the binding groups and fondaparinux that would have otherwise been inaccessible. This was used to refine the electrostatic model and improve the quantitative model predictions of binding affinity. Once refined, the model provided guidelines to improve electrostatic forces between candidate polymers and fondaparinux in order to increase association rate constants. PMID:27006916

  1. Experimental and analytical research on the aerodynamics of wind driven turbines. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohrbach, C.; Wainauski, H.; Worobel, R.

    1977-12-01

    This aerodynamic research program was aimed at providing a reliable, comprehensive data base on a series of wind turbine models covering a broad range of the prime aerodynamic and geometric variables. Such data obtained under controlled laboratory conditions on turbines designed by the same method, of the same size, and tested in the same wind tunnel had not been available in the literature. Moreover, this research program was further aimed at providing a basis for evaluating the adequacy of existing wind turbine aerodynamic design and performance methodology, for assessing the potential of recent advanced theories and for providing a basis for further method development and refinement.

  2. Absolute frequency atlas from 915 nm to 985 nm based on laser absorption spectroscopy of iodine

    NASA Astrophysics Data System (ADS)

    Nölleke, Christian; Raab, Christoph; Neuhaus, Rudolf; Falke, Stephan

    2018-04-01

    This article reports on laser absorption spectroscopy of iodine gas between 915 nm and 985 nm. This wavelength range is scanned utilizing a narrow-linewidth, mode-hop-free tunable diode laser whose frequency is actively controlled using a calibrated wavelength meter. This allows us to provide an iodine atlas that contains almost 10,000 experimentally observed reference lines with an uncertainty of 50 MHz. For common lines, good agreement is found with a publication by Gerstenkorn and Luc (1978). The new rich dataset allows existing models of the iodine molecule to be refined and can serve as a reference for laser frequency calibration and stabilization.

  3. Bellows flow-induced vibrations

    NASA Technical Reports Server (NTRS)

    Tygielski, P. J.; Smyly, H. M.; Gerlach, C. R.

    1983-01-01

    The bellows flow excitation mechanism and the results of a comprehensive test program are summarized. The analytical model for predicting bellows flow-induced stress is refined. The model includes the effects of an upstream elbow, arbitrary geometry, and multiple plies. A refined computer code for predicting flow-induced stress is described which allows life prediction if a material S-N diagram is available.

  4. Evaluation of model predictions of the ecological effects of 4-nonylphenol -- before and after model refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanratty, M.P.; Liber, K.

    1994-12-31

    The Littoral Ecosystem Risk Assessment Model (LERAM) is a bioenergetic ecosystem effects model. It links single-species toxicity data to a bioenergetic model of the trophic structure of an ecosystem in order to simulate community- and ecosystem-level effects of chemical stressors. LERAM was used in 1992 to simulate the ecological effects of diflubenzuron. When compared to the results from a littoral enclosure study, the model exaggerated the cascading of effects through the trophic levels of the littoral ecosystem. It was hypothesized that this could be corrected by making minor changes in the representation of the littoral food web. Two refinements of the model were therefore performed: (1) the plankton and macroinvertebrate model populations [e.g., predatory Copepoda, herbivorous Insecta, green phytoplankton, etc.] were changed to better represent the habitat and feeding preferences of the endemic taxa; and (2) the method for modeling the microbial degradation of detritus (and the resulting nutrient remineralization) was changed from simulating bacterial populations to simulating bacterial function. Model predictions of the ecological effects of 4-nonylphenol were made before and after these refinements. Both sets of predictions were then compared to the results from a littoral enclosure study of the ecological effects of 4-nonylphenol. The changes in the LERAM predictions were then used to determine the success of the refinements, to guide future research, and to further define LERAM's domain of application.

  5. Simplified and refined structural modeling for economical flutter analysis and design

    NASA Technical Reports Server (NTRS)

    Ricketts, R. H.; Sobieszczanski, J.

    1977-01-01

    A coordinated use of two finite-element models of different levels of refinement is presented to reduce the computer cost of the repetitive flutter analysis commonly encountered in structural resizing to meet flutter requirements. One model, termed the refined model (RM), represents the high degree of detail needed for strength-sizing and flutter analysis of an airframe. The other model, called the simplified model (SM), has a much smaller number of elements and degrees of freedom. A systematic method of deriving an SM from a given RM is described. The method consists of judgmental and numerical operations that make the stiffness and mass of the SM elements equivalent to those of the corresponding substructures of the RM. The structural data are automatically transferred between the two models. The bulk of the analysis is performed on the SM, with periodic verification carried out by analysis of the RM. In a numerical example of a supersonic cruise aircraft with an arrow wing, this approach permitted substantial savings in computer costs and accelerated job turn-around.

  6. One technique for refining the global Earth gravity models

    NASA Astrophysics Data System (ADS)

    Koneshov, V. N.; Nepoklonov, V. B.; Polovnev, O. V.

    2017-01-01

    The results of theoretical and experimental research on a technique for refining global Earth geopotential models such as EGM2008 in continental regions are presented. The technique is based on high-resolution satellite data for the Earth's surface topography, which make it possible to account for the fine structure of the Earth's gravitational field without additional gravimetry data. The experimental studies are conducted using the example of the new GGMplus global gravity model of the Earth, with a resolution of about 0.5 km, which is obtained by expanding the EGM2008 model to degree 2190 with corrections for the topography calculated from the SRTM data. The GGMplus and EGM2008 models are compared with regional geoid models in 21 regions of North America, Australia, Africa, and Europe. The obtained estimates largely support the possibility of refining global geopotential models such as EGM2008 by the procedure implemented in GGMplus, particularly in regions with relatively high elevation differences.

  7. MODFLOW-LGR-Modifications to the streamflow-routing package (SFR2) to route streamflow through locally refined grids

    USGS Publications Warehouse

    Mehl, Steffen W.; Hill, Mary C.

    2011-01-01

    This report documents modifications to the Streamflow-Routing Package (SFR2) to route streamflow through grids constructed using the multiple-refined-areas capability of shared node Local Grid Refinement (LGR) of MODFLOW-2005. MODFLOW-2005 is the U.S. Geological Survey modular, three-dimensional, finite-difference groundwater-flow model. LGR provides the capability to simulate groundwater flow by using one or more block-shaped, higher resolution local grids (child model) within a coarser grid (parent model). LGR accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundaries. Compatibility with SFR2 allows for streamflow routing across grids. LGR can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined groundwater systems.

  8. 78 FR 69820 - Seamless Refined Copper Pipe and Tube From the People's Republic of China: Preliminary Results...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-21

    ... exist:

        Exporter                                                                                    Weighted-average dumping margin (percent)
        Golden Dragon Precise Copper Tube Group, Inc., Hong Kong                                    3.55
        GD Trading Co., Ltd., and Golden Dragon Holding (Hong Kong) International, Ltd, Hong Kong   ...

  9. Aviation Safety: Opportunities Exist for FAA to Refine the Controller Staffing Process

    DOT National Transportation Integrated Search

    1997-04-09

    The Federal Aviation Administration (FAA) is responsible for managing the : nation's air transportation system so more than 18,000 aircraft can annually : carry 500 million passengers safely and on schedule. Because of significant : hiring in the ear...

  10. Solar-System Tests of Gravitational Theories

    NASA Technical Reports Server (NTRS)

    Shapiro, Irwin I.

    2001-01-01

    We are engaged in testing gravitational theory, primarily using observations of objects in the solar system and primarily on that scale. Our goal is either to detect departures from the standard model (general relativity) - if any exist within the level of sensitivity of our data - or to place tighter bounds on such departures. For this project, we have analyzed a combination of observational data with our model of the solar system, including mostly planetary radar ranging, lunar laser ranging, and spacecraft tracking, but also including both pulsar timing and pulsar very long base interferometry (VLBI) measurements. This year, we have extended our model of Earth nutation with adjustable correction terms at the principal frequencies. We also refined our model of tidal drag on the Moon's orbit. We believe these changes will make no substantial changes in the results, but we are now repeating the analysis of the whole set of data to verify that belief. Additional information is contained in the original extended abstract.

  11. A Chimpanzee (Pan troglodytes) Model of Triarchic Psychopathy Constructs: Development and Initial Validation

    PubMed Central

    Latzman, Robert D.; Drislane, Laura E.; Hecht, Lisa K.; Brislin, Sarah J.; Patrick, Christopher J.; Lilienfeld, Scott O.; Freeman, Hani J.; Schapiro, Steven J.; Hopkins, William D.

    2015-01-01

    The current work sought to operationalize constructs of the triarchic model of psychopathy in chimpanzees (Pan troglodytes), a species well-suited for investigations of basic biobehavioral dispositions relevant to psychopathology. Across three studies, we generated validity evidence for scale measures of the triarchic model constructs in a large sample (N=238) of socially-housed chimpanzees. Using a consensus-based rating approach, we first identified candidate items for the chimpanzee triarchic (CHMP-Tri) scales from an existing primate personality instrument and refined these into scales. In Study 2, we collected data for these scales from human informants (N=301), and examined their convergent and divergent relations with scales from another triarchic inventory developed for human use. In Study 3, we undertook validation work examining associations between CHMP-Tri scales and task measures of approach-avoidance behavior (N=73) and ability to delay gratification (N=55). Current findings provide support for a chimpanzee model of core dispositions relevant to psychopathy and other forms of psychopathology. PMID:26779396

  12. Diet optimization methods can help translate dietary guidelines into a cancer prevention food plan.

    PubMed

    Masset, Gabriel; Monsivais, Pablo; Maillot, Matthieu; Darmon, Nicole; Drewnowski, Adam

    2009-08-01

    Mathematical diet optimization models are used to create food plans that best resemble current eating habits while meeting prespecified nutrition and cost constraints. This study used linear programming to generate food plans meeting the key 2007 dietary recommendations issued by the World Cancer Research Fund/American Institute for Cancer Research (WCRF/AICR). The models were constructed to minimize deviations in food intake between the observed and the WCRF/AICR-recommended diets. Consumption constraints were imposed to prevent food plans from including unreasonable amounts of food from a single group. Consumption norms for nutrients and food groups were taken from dietary intake data for a sample of adult men and women (n = 161) in the Pacific Northwest. Food plans meeting WCRF/AICR dietary guidelines 3-5 and 7 were lower in refined grains and higher in vegetables and fruits than the existing diets. For this group, achieving cancer prevention goals required little modification of existing diets and had minimal impact on diet quality and cost. By contrast, the need to meet all nutritional needs through diet alone (guideline 8) required a large increase in food volume and dramatic shifts from the observed food intake patterns. Putting dietary guidelines into practice may require the creation of detailed food plans that are sensitive to existing consumption patterns and food costs. Optimization models provide an elegant mathematical solution that can help determine whether sets of dietary guidelines are achievable by diverse U.S. population subgroups.
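
    Minimizing deviation from observed intake subject to nutrient constraints is a standard linear program once each absolute deviation is split into positive and negative parts. Below is a small scipy sketch with invented foods, nutrient values, and a single fiber constraint; every number is a placeholder, not the study's data.

      import numpy as np
      from scipy.optimize import linprog

      foods = ["refined grains", "vegetables", "fruit", "fish"]
      observed = np.array([3.0, 1.5, 1.0, 0.3])    # servings/day (assumed)
      fiber = np.array([1.0, 2.5, 2.0, 0.0])       # g fiber per serving (assumed)
      n = len(foods)

      # variables: intake x, positive/negative deviations dp, dm with
      # x - observed = dp - dm; minimize sum(dp + dm)
      c = np.concatenate([np.zeros(n), np.ones(n), np.ones(n)])
      A_eq = np.hstack([np.eye(n), -np.eye(n), np.eye(n)])          # x - dp + dm = observed
      b_eq = observed
      A_ub = np.hstack([-fiber, np.zeros(2 * n)]).reshape(1, -1)    # fiber >= 10 g/day
      b_ub = np.array([-10.0])
      bounds = [(0, 2 * o) for o in observed] + [(0, None)] * (2 * n)  # consumption limits

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
      print(dict(zip(foods, np.round(res.x[:n], 2))))   # closest diet meeting the constraint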

  13. Simulation of the shallow groundwater-flow system near Mole Lake, Forest County, Wisconsin

    USGS Publications Warehouse

    Fienen, Michael N.; Juckem, Paul F.; Hunt, Randall J.

    2011-01-01

    The shallow groundwater system near Mole Lake, Forest County, Wis., was simulated using a previously calibrated regional model. The previous model was updated using newly collected water-level measurements and refinements to surface-water features. The updated model was then used to calculate the area contributing recharge for one existing and two proposed pumping locations on lands of the Sokaogon Chippewa Community. Delineated 1-, 5-, and 10-year areas contributing recharge for existing and proposed wells extend from the areas of pumping to the northeast of the pumping locations. Steady-state pumping was simulated for two scenarios: a base pumping scenario using pumping rates that reflect what the Tribe expects to pump, and a high pumping scenario in which the rate was set to the maximum expected from wells installed in this area. In the base pumping scenario, pumping rates of 32 gallons per minute (gal/min; 46,000 gallons per day (gal/d)) from the existing well and 30 gal/min (43,000 gal/d) at each of the two proposed wells were simulated. The high pumping scenario simulated a rate of 70 gal/min (101,000 gal/d) from each of the three pumping wells to estimate the largest areas contributing recharge that might be expected given what is currently known about the shallow groundwater system. The areas contributing recharge for both the base and high pumping scenarios did not intersect any modeled surface-water bodies; however, the high pumping scenario had a larger areal extent than the base pumping scenario and intersected a septic separator.

  14. A Mathematical Model for Railway Control Systems

    NASA Technical Reports Server (NTRS)

    Hoover, D. N.

    1996-01-01

    We present a general method for modeling safety aspects of railway control systems. Using our modeling method, one can progressively refine an abstract railway safety model, successively adding layers of detail about how a real system actually operates, while maintaining a safety property that refines the original abstract safety property. This method supports a top-down approach to the specification of railway control systems and to proof of a variety of safety-related properties. We demonstrate our method by proving safety of the classical block control system.

  15. Modeling of the Coupling of Microstructure and Macrosegregation in a Direct Chill Cast Al-Cu Billet

    NASA Astrophysics Data System (ADS)

    Heyvaert, Laurent; Bedel, Marie; Založnik, Miha; Combeau, Hervé

    2017-10-01

    The macroscopic multiphase flow and the growth of the solidification microstructures in the mushy zone of a direct chill (DC) casting are closely coupled. These couplings are the key to the understanding of the formation of the macrosegregation and of the non-uniform microstructure of the casting. In the present paper we use a multiphase and multiscale model to provide a fully coupled picture of the links between macrosegregation and microstructure in a DC cast billet. The model describes nucleation from inoculant particles and growth of dendritic and globular equiaxed crystal grains, fully coupled with macroscopic transport phenomena: fluid flow induced by natural convection and solidification shrinkage, heat, mass, and solute mass transport, motion of free-floating equiaxed grains, and of grain refiner particles. We compare our simulations to experiments on grain-refined and non-grain-refined industrial size billets from literature. We show that a transition between dendritic and globular grain morphology triggered by the grain refinement is the key to the explanation of the differences between the macrosegregation patterns in the two billets. We further show that the grain size and morphology are strongly affected by the macroscopic transport of free-floating equiaxed grains and of grain refiner particles.

  16. Modelling of upper ocean mixing by wave-induced turbulence

    NASA Astrophysics Data System (ADS)

    Ghantous, Malek; Babanin, Alexander

    2013-04-01

    Mixing of the upper ocean affects the sea surface temperature by bringing deeper, colder water to the surface. Because even small changes in the surface temperature can have a large impact on weather and climate, accurately determining the rate of mixing is of central importance for forecasting. Although there are several mixing mechanisms, one that has until recently been overlooked is the effect of turbulence generated by non-breaking, wind-generated surface waves. Lately there has been considerable interest in introducing this mechanism into models, and real gains have been made in terms of increased fidelity to observational data. However, our knowledge of the mechanism is still incomplete. We indicate areas where we believe the existing models need refinement and propose an alternative model. We use two of the models to demonstrate the effect of wave-induced turbulence on the mixed layer by applying them to a one-dimensional mixing model and a stable temperature profile. Our modelling experiment suggests a strong effect on sea surface temperature due to non-breaking wave-induced turbulent mixing.

  17. Transmission fidelity is the key to the build-up of cumulative culture

    PubMed Central

    Lewis, Hannah M.; Laland, Kevin N.

    2012-01-01

    Many animals have socially transmitted behavioural traditions, but human culture appears unique in that it is cumulative, i.e. human cultural traits increase in diversity and complexity over time. It is often suggested that high-fidelity cultural transmission is necessary for cumulative culture to occur through refinement, a process known as ‘ratcheting’, but this hypothesis has never been formally evaluated. We discuss processes of information transmission and loss of traits from a cognitive viewpoint alongside other cultural processes of novel invention (generation of entirely new traits), modification (refinement of existing traits) and combination (bringing together two established traits to generate a new trait). We develop a simple cultural transmission model that does not assume major evolutionary changes (e.g. in brain architecture) and show that small changes in the fidelity with which information is passed between individuals can lead to cumulative culture. In comparison, modification and combination have a lesser influence on, and novel invention appears unimportant to, the ratcheting process. Our findings support the idea that high-fidelity transmission is the key driver of human cumulative culture, and that progress in cumulative culture depends more on trait combination than novel invention or trait modification. PMID:22734060
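
    A toy simulation conveys the fidelity argument. In the sketch below (our illustrative assumptions, not the authors' model), each trait survives a generation of transmission with probability equal to the fidelity, and new traits arise by invention, modification of an existing trait, or combination of two existing traits; small changes in fidelity produce large differences in the accumulated repertoire.

      import random

      def simulate(fidelity, generations=500, p_invent=0.01, p_modify=0.05, p_combine=0.05):
          traits = 1
          for _ in range(generations):
              # lossy transmission: each trait independently survives with prob = fidelity
              traits = sum(random.random() < fidelity for _ in range(traits))
              if random.random() < p_invent:                  # novel invention
                  traits += 1
              if traits >= 1 and random.random() < p_modify:  # refine an existing trait
                  traits += 1
              if traits >= 2 and random.random() < p_combine: # combine two traits
                  traits += 1
          return traits

      for f in (0.90, 0.95, 0.99):
          mean = sum(simulate(f) for _ in range(200)) / 200
          print("fidelity %.2f -> mean traits %.1f" % (f, mean))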

  18. Verification and Validation of a Coordinate Transformation Method in Axisymmetric Transient Magnetics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ashcraft, C. Chace; Niederhaus, John Henry; Robinson, Allen C.

    We present a verification and validation analysis of a coordinate-transformation-based numerical solution method for the two-dimensional axisymmetric magnetic diffusion equation, implemented in the finite-element simulation code ALEGRA. The transformation, suggested by Melissen and Simkin, yields an equation set perfectly suited for linear finite elements and for problems with large jumps in material conductivity near the axis. The verification analysis examines transient magnetic diffusion in a rod or wire in a very low conductivity background by first deriving an approximate analytic solution using perturbation theory. This approach for generating a reference solution is shown to be not fully satisfactory. A specialized approach for manufacturing an exact solution is then used to demonstrate second-order convergence under spatial and temporal refinement. For this new implementation, a significant improvement relative to previously available formulations is observed. Benefits in accuracy for computed current density and Joule heating are also demonstrated. The validation analysis examines the circuit-driven explosion of a copper wire using resistive magnetohydrodynamics modeling, in comparison to experimental tests. The new implementation matches the accuracy of the existing formulation, with both formulations capturing the experimental burst time and action to within approximately 2%.

  19. Defining the role of a forensic hospital registered nurse using the Delphi method.

    PubMed

    Newman, Claire; Patterson, Karen; Eason, Michelle; Short, Ben

    2016-11-01

    A Delphi survey was undertaken to refine the position description of a registered nurse working in a forensic hospital, in New South Wales, Australia. Prior to commencing operation in 2008, position descriptions were developed from a review of legislation, as well as policies and procedures used by existing forensic mental health services in Australia. With an established workforce and an evolving model of care, a review of the initial registered nurse position description was required. An online Delphi survey was undertaken. Eight executive (88.9%) and 12 (58.3%) senior nursing staff participated in the first survey round. A total of four survey rounds were completed. At the final round, there was consensus (70%) that the revised position description was either very or somewhat suitable. There were a total of nine statements, from 31 originally produced in round 1, that did not reach consensus. The Delphi survey enabled a process for refining the Forensic Hospital registered nurse position description. Methods that facilitate executive and senior nursing staff consensus in the development and review of position descriptions should be considered in nursing management. © 2016 John Wiley & Sons Ltd.

  20. Transmission fidelity is the key to the build-up of cumulative culture.

    PubMed

    Lewis, Hannah M; Laland, Kevin N

    2012-08-05

    Many animals have socially transmitted behavioural traditions, but human culture appears unique in that it is cumulative, i.e. human cultural traits increase in diversity and complexity over time. It is often suggested that high-fidelity cultural transmission is necessary for cumulative culture to occur through refinement, a process known as 'ratcheting', but this hypothesis has never been formally evaluated. We discuss processes of information transmission and loss of traits from a cognitive viewpoint alongside other cultural processes of novel invention (generation of entirely new traits), modification (refinement of existing traits) and combination (bringing together two established traits to generate a new trait). We develop a simple cultural transmission model that does not assume major evolutionary changes (e.g. in brain architecture) and show that small changes in the fidelity with which information is passed between individuals can lead to cumulative culture. In comparison, modification and combination have a lesser influence on, and novel invention appears unimportant to, the ratcheting process. Our findings support the idea that high-fidelity transmission is the key driver of human cumulative culture, and that progress in cumulative culture depends more on trait combination than novel invention or trait modification.

  1. FDD Massive MIMO Channel Estimation With Arbitrary 2D-Array Geometry

    NASA Astrophysics Data System (ADS)

    Dai, Jisheng; Liu, An; Lau, Vincent K. N.

    2018-05-01

    This paper addresses the problem of downlink channel estimation in frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems. The existing methods usually exploit hidden sparsity under a discrete Fourier transform (DFT) basis to estimate the downlink channel. However, there are at least two shortcomings of these DFT-based methods: 1) they are applicable to uniform linear arrays (ULAs) only, since the DFT basis requires the special structure of ULAs, and 2) they always suffer from a performance loss due to the leakage of energy over some DFT bins. To deal with the above shortcomings, we introduce an off-grid model for downlink channel sparse representation with arbitrary 2D-array antenna geometry, and propose an efficient sparse Bayesian learning (SBL) approach for the sparse channel recovery and off-grid refinement. The main idea of the proposed off-grid method is to consider the sampled grid points as adjustable parameters. Utilizing an inexact block majorization-minimization (MM) algorithm, the grid points are refined iteratively to minimize the off-grid gap. Finally, we further extend the solution to uplink-aided channel estimation by exploiting the angular reciprocity between downlink and uplink channels, which brings enhanced recovery performance.

  2. The Environmental Protection Agency's Community-Focused Exposure and Risk Screening Tool (C-FERST) and its potential use for environmental justice efforts.

    PubMed

    Zartarian, Valerie G; Schultz, Bradley D; Barzyk, Timothy M; Smuts, Marybeth; Hammond, Davyda M; Medina-Vera, Myriam; Geller, Andrew M

    2011-12-01

    Our primary objective was to provide higher quality, more accessible science to address challenges of characterizing local-scale exposures and risks for enhanced community-based assessments and environmental decision-making. After identifying community needs, priority environmental issues, and current tools, we designed and populated the Community-Focused Exposure and Risk Screening Tool (C-FERST) in collaboration with stakeholders, following a set of defined principles, and considered it in the context of environmental justice. C-FERST is a geographic information system and resource access Web tool under development for supporting multimedia community assessments. Community-level exposure and risk research is being conducted to address specific local issues through case studies. C-FERST can be applied to support environmental justice efforts. It incorporates research to develop community-level data and modeled estimates for priority environmental issues, and other relevant information identified by communities. Initial case studies are under way to refine and test the tool to expand its applicability and transferability. Opportunities exist for scientists to address the many research needs in characterizing local cumulative exposures and risks and for community partners to apply and refine C-FERST.

  3. Filament capturing with the multimaterial moment-of-fluid method*

    DOE PAGES

    Jemison, Matthew; Sussman, Mark; Shashkov, Mikhail

    2015-01-15

    A novel method for capturing two-dimensional, thin, under-resolved material configurations, known as “filaments,” is presented in the context of interface reconstruction. This technique uses a partitioning procedure to detect disconnected regions of material in the advective preimage of a cell (indicative of a filament) and makes use of the existing functionality of the Multimaterial Moment-of-Fluid interface reconstruction method to accurately capture the under-resolved feature, while exactly conserving volume. An algorithm for Adaptive Mesh Refinement in the presence of filaments is developed so that refinement is introduced only near the tips of filaments and where the Moment-of-Fluid reconstruction error is still large. Comparison to the standard Moment-of-Fluid method is made. As a result, it is demonstrated that using filament capturing at a given resolution yields gains in accuracy comparable to introducing an additional level of mesh refinement at significantly lower cost.

  4. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.

    PubMed

    Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.
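
    A toy version of the rank-based selection conveys the idea. Below, each candidate fact carries an extraction confidence and a count of other candidates linked to it through semantic constraints; the expected benefit of a crowd question is scored as uncertainty times influence, and only the top candidates within the labelling budget are asked (the facts, scores, and scoring rule are illustrative assumptions, not the paper's algorithm).

      candidates = {
          "bornIn(Turing, London)":   {"p": 0.55, "linked": 4},
          "bornIn(Turing, Paris)":    {"p": 0.50, "linked": 4},
          "capitalOf(Paris, France)": {"p": 0.98, "linked": 1},
          "diedIn(Turing, Wilmslow)": {"p": 0.70, "linked": 2},
      }

      def benefit(fact):
          info = candidates[fact]
          uncertainty = 1.0 - 2.0 * abs(info["p"] - 0.5)   # largest at p = 0.5
          return uncertainty * (1 + info["linked"])        # weight by constraint links

      budget = 2   # limited human resources: only two crowd questions
      to_ask = sorted(candidates, key=benefit, reverse=True)[:budget]
      print(to_ask)   # the two mutually exclusive bornIn facts are asked first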

  5. Iterative feature refinement for accurate undersampled MR image reconstruction

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2016-05-01

    Accelerating MR scanning is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, the existing CSMRI approaches still have limitations such as fine structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. Integrating IFR with CSMRI equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are originally discarded, without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.
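
    The three-step iteration can be sketched compactly. The numpy fragment below is a simplified illustration under our own assumptions (Gaussian smoothing standing in for the sparsity-promoting denoiser, a fixed feedback weight for the feature refinement, and a Tikhonov-weighted data-consistency step), not the authors' implementation.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def ifr_cs(kspace, mask, iters=30, weight=0.8, lam=10.0):
          """Toy IFR-style reconstruction from undersampled k-space.
          kspace: measured k-space (zeros where unsampled); mask: binary sampling mask."""
          x = np.fft.ifft2(kspace)
          for _ in range(iters):
              # 1) denoising (stand-in for the sparsity-promoting step)
              d = gaussian_filter(x.real, 1.0) + 1j * gaussian_filter(x.imag, 1.0)
              # 2) feature refinement: add back part of the detail the denoiser removed
              x = d + weight * (x - d)
              # 3) Tikhonov-weighted data consistency at the sampled k-space locations
              k = np.fft.fft2(x)
              k = (1 - mask) * k + mask * (k + lam * kspace) / (1 + lam)
              x = np.fft.ifft2(k)
          return np.abs(x)

      # usage sketch: img = some 2-D test image; mask = np.random.rand(*img.shape) < 0.4
      # recon = ifr_cs(np.fft.fft2(img) * mask, mask)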

  6. Experimental psychiatric illness and drug abuse models: from human to animal, an overview.

    PubMed

    Edwards, Scott; Koob, George F

    2012-01-01

    Preclinical animal models have supported much of the recent rapid expansion of neuroscience research and have facilitated critical discoveries that undoubtedly benefit patients suffering from psychiatric disorders. This overview serves as an introduction for the following chapters describing both in vivo and in vitro preclinical models of psychiatric disease components and briefly describes models related to drug dependence and affective disorders. Although there are no perfect animal models of any psychiatric disorder, models do exist for many elements of each disease state or stage. In many cases, the development of certain models is essentially restricted to the human clinical laboratory domain for the purpose of maximizing validity, whereas the use of in vitro models may best represent an adjunctive, well-controlled means to model specific signaling mechanisms associated with psychiatric disease states. The data generated by preclinical models are only as valid as the model itself, and the development and refinement of animal models for human psychiatric disorders continues to be an important challenge. Collaborative relationships between basic neuroscience and clinical modeling could greatly benefit the development of new and better models, in addition to facilitating medications development.

  7. Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Leng, W.; Zhong, S.

    2008-12-01

    In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. Adaptive mesh refinement (AMR) techniques allow local mesh refinement wherever high resolution is needed, while leaving other regions with relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement AMR techniques in 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method in the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results of van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to precisely trace their evolution. Our AMR code is thus well suited to thermal-chemical convection problems that require high resolution to resolve the evolution of chemical boundaries, such as entrainment problems [Sleep, 1988].

  8. Diffraction-geometry refinement in the DIALS framework

    DOE PAGES

    Waterman, David G.; Winter, Graeme; Gildea, Richard J.; ...

    2016-03-30

    Rapid data collection and modern computing resources provide the opportunity to revisit the task of optimizing the model of diffraction geometry prior to integration. A comprehensive description is given of new software that builds upon established methods by performing a single global refinement procedure, utilizing a smoothly varying model of the crystal lattice where appropriate. This global refinement technique extends to multiple data sets, providing useful constraints to handle the problem of correlated parameters, particularly for small wedges of data. Examples of advanced uses of the software are given and the design is explained in detail, with particular emphasis on the flexibility and extensibility it entails.

  9. Numerical modelling of surface waves generated by low frequency electromagnetic field for silicon refinement process

    NASA Astrophysics Data System (ADS)

    Geža, V.; Venčels, J.; Zāģeris, Ģ.; Pavlovs, S.

    2018-05-01

    One of the most promising methods to produce SoG-Si is refinement via the metallurgical route. The most critical part of this route is refinement from boron and phosphorus, and the approach under development addresses this problem. An approach of creating surface waves on the silicon melt's surface is proposed in order to enlarge its area, accelerating the removal of boron via chemical reactions and the evaporation of phosphorus. A two-dimensional numerical model is created which couples electromagnetic and fluid-dynamic simulations with free-surface dynamics. First results show behaviour similar to experimental results from the literature.

  10. Research-Based Program Development: Refining the Service Model for a Geographic Alliance

    ERIC Educational Resources Information Center

    Rutherford, David J.; Lovorn, Carley

    2018-01-01

    Research conducted in 2013 identified the perceptions that K-12 teachers and administrators hold with respect to: (1) the perceived needs in education, (2) the professional audiences that are most important to reach, and (3) the service models that are most effective. The specific purpose of the research was to refine and improve the services that…

  11. A method to estimate statistical errors of properties derived from charge-density modelling

    PubMed Central

    Lecomte, Claude

    2018-01-01

    Estimating uncertainties of property values derived from a charge-density model is not straightforward. A methodology, based on the calculation of sample standard deviations (SSD) of properties using randomly deviating charge-density models, is proposed with the MoPro software. The parameter shifts applied in the deviating models are generated so as to respect the variance–covariance matrix obtained from the least-squares refinement. This ‘SSD methodology’ can be applied to estimate uncertainties of any property related to a charge-density model obtained by least-squares fitting. This includes topological properties such as critical-point coordinates, electron density, Laplacian and ellipticity at critical points, and charges integrated over atomic basins. Errors on electrostatic potentials and interaction energies are now also available through this procedure. The method is exemplified with the charge density of the compound (E)-5-phenylpent-1-enylboronic acid, refined at 0.45 Å resolution. The procedure is implemented in the freely available MoPro program dedicated to charge-density refinement and modelling. PMID:29724964
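
    The heart of the SSD procedure, drawing parameter sets that respect the least-squares variance-covariance matrix and measuring the spread of a derived property, can be sketched in a few lines. This is an illustration of the statistical idea, not MoPro's implementation; prop_fn is a placeholder for any property calculation.

        import numpy as np

        def ssd_uncertainty(p_hat, cov, prop_fn, n_samples=1000, seed=0):
            """Sample standard deviation of a derived property.

            Draws parameter vectors from N(p_hat, cov), i.e. random shifts
            consistent with the refinement's variance-covariance matrix,
            evaluates the property for each deviating model, and returns
            the mean and the SSD of the property values."""
            rng = np.random.default_rng(seed)
            samples = rng.multivariate_normal(p_hat, cov, size=n_samples)
            values = np.array([prop_fn(p) for p in samples])
            return values.mean(), values.std(ddof=1)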

  12. Refinement of Out of Circularity and Thickness Measurements of a Cylinder for Finite Element Analysis

    DTIC Science & Technology

    2016-09-01

    Out-of-circularity and thickness variations have a significant effect on the collapse strength of a cylinder and must be accurately represented in finite element analysis to obtain accurate results. Often it is necessary to interpolate measurements from a relatively coarse grid to a refined finite element model, and methods that have wide general acceptance are…

  13. Experimental determination of spin-dependent electron density by joint refinement of X-ray and polarized neutron diffraction data.

    PubMed

    Deutsch, Maxime; Claiser, Nicolas; Pillet, Sébastien; Chumakov, Yurii; Becker, Pierre; Gillet, Jean Michel; Gillon, Béatrice; Lecomte, Claude; Souhassou, Mohamed

    2012-11-01

    New crystallographic tools were developed to access a more precise description of the spin-dependent electron density of magnetic crystals. The method combines experimental information coming from high-resolution X-ray diffraction (XRD) and polarized neutron diffraction (PND) in a unified model. A new algorithm that allows for a simultaneous refinement of the charge- and spin-density parameters against XRD and PND data is described. The resulting software MOLLYNX is based on the well-known Hansen-Coppens multipolar model, and makes it possible to differentiate the electron spins. This algorithm is validated and demonstrated with a molecular crystal formed by a bimetallic chain, MnCu(pba)(H₂O)₃·2H₂O, for which XRD and PND data are available. The joint refinement provides a more detailed description of the spin density than refinement from PND data alone.

  14. The Story of Closely and Loosely Coupled Organisations.

    ERIC Educational Resources Information Center

    Plowman, Travis S.

    1998-01-01

    Examines five types of collegiate organizations (collegial, bureaucratic, political, anarchical, cybernetic) in terms of their interactiveness within closely and loosely coupled organizations. The terminology of closely and loosely coupled organizations is examined and existing definitions are refined. Examples are drawn from contemporary…

  15. Wilderness campsite monitoring methods: a sourcebook

    Treesearch

    David N. Cole

    1989-01-01

    Summarizes information on techniques available for monitoring the condition of campsites, particularly those in wilderness. A variety of techniques are described and evaluated; sources of information are also listed. Problems with existing monitoring systems and places where refinement of technique is required are highlighted.

  16. Ames Stereo Pipeline for Operation IceBridge

    NASA Astrophysics Data System (ADS)

    Beyer, R. A.; Alexandrov, O.; McMichael, S.; Fong, T.

    2017-12-01

    We are using the NASA Ames Stereo Pipeline to process Operation IceBridge Digital Mapping System (DMS) images into terrain models and to align them with the simultaneously acquired LIDAR data (ATM and LVIS). The expected outcome is a contiguous, high-resolution terrain model for each flight that Operation IceBridge has flown during its eight-year history of Arctic and Antarctic flights. Some existing terrain models in the NSIDC repository cover 2011 and 2012 (out of the total period of 2009 to 2017); these were made with the Agisoft Photoscan commercial software, and our open-source stereo suite has been verified to create terrains of similar quality. The total number of images we expect to process is around 5 million. There are numerous challenges with these data: accurate determination and refinement of the camera pose at acquisition time, based on data logged during the flights and/or information from existing orthoimages; aligning terrains with little or no features; images containing clouds; JPEG artifacts in the input imagery; inconsistencies in how data were acquired and archived over the entire period; not fully reliable camera calibration files; and the sheer amount of data. We will create the majority of terrain models at 40 cm/pixel with a vertical precision of 10 to 20 cm. In some circumstances, when the aircraft was flying higher than usual, those values will be coarser. We will create orthoimages at 10 cm/pixel (with the same caveat for higher-altitude flights). These will differ from existing orthoimages by using the underlying terrain we generate rather than a pre-existing very low-resolution terrain model that may differ significantly from what was on the ground at the time of IceBridge acquisition. The results of this massive processing will be submitted to the NSIDC so that cryosphere researchers will be able to use these data for their investigations.

  17. Refined views of multi-protein complexes in the erythrocyte membrane

    PubMed Central

    Mankelow, TJ; Satchwell, TJ; Burton, NM

    2015-01-01

    The erythrocyte membrane has been extensively studied, both as a model membrane system and to investigate its role in gas exchange and transport. Much is now known about the protein components of the membrane, how they are organised into large multi-protein complexes and how they interact with each other within these complexes. Many links between the membrane and the cytoskeleton have also been delineated and have been demonstrated to be crucial for maintaining the deformability and integrity of the erythrocyte. In this study we have refined previous, highly speculative molecular models of these complexes by including the available data pertaining to known protein-protein interactions. While the refined models remain highly speculative, they provide an evolving framework for visualisation of these important cellular structures at the atomic level. PMID:22465511

  18. Range pattern matching with layer operations and continuous refinements

    NASA Astrophysics Data System (ADS)

    Tseng, I.-Lun; Lee, Zhao Chuan; Li, Yongfu; Perez, Valerio; Tripathi, Vikas; Ong, Jonathan Yoong Seang

    2018-03-01

    At advanced and mainstream process nodes (e.g., the 7 nm, 14 nm, 22 nm and 55 nm nodes), lithography hotspots can exist in integrated-circuit layouts even if the layouts pass design rule checking (DRC). The existence of lithography hotspots in a layout can cause manufacturability issues, which can result in yield losses of manufactured integrated circuits. In order to detect lithography hotspots in physical layouts, pattern matching (PM) algorithms and commercial PM tools have been developed. However, there is still a need to use DRC tools to perform PM operations. In this paper, we propose a PM synthesis methodology, based on a continuous refinement technique, for automatically synthesizing a given lithography hotspot pattern into a DRC deck consisting of layer operation commands, so that an equivalent PM operation can be performed by executing the synthesized deck with a DRC tool. The proposed methodology can deal not only with exact patterns but also with range patterns, and lithography hotspot patterns containing multiple layers can be processed. Experimental results show that the proposed methodology can accurately and efficiently detect lithography hotspots in physical layouts.

  19. Uncertainty Reduction using Bayesian Inference and Sensitivity Analysis: A Sequential Approach to the NASA Langley Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar

    2016-01-01

    This paper presents a computational framework for uncertainty characterization, propagation and sensitivity analysis in the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where the epistemic variables for refinement are not identified all at once. Instead, only one variable is first identified; then Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all four variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once refinement approach are explained and then demonstrated on the NASA Langley Uncertainty Quantification Challenge problem.
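
    A self-contained toy of the sequential loop is sketched below: estimate crude first-order sensitivity indices by Monte Carlo binning, refine only the most influential epistemic interval, then recompute and repeat. The system response and the interval-shrinking update are hypothetical placeholders for the GTM model and the actual Bayesian inference step.

        import numpy as np

        rng = np.random.default_rng(1)

        def system(x):                      # hypothetical system response
            return x[0] ** 2 + 3 * x[1] + x[2] * x[3]

        def first_order_indices(bounds, n=20000):
            """Crude first-order sensitivity indices via conditional-mean
            binning of Monte Carlo samples from the epistemic intervals."""
            lo, hi = bounds[:, 0], bounds[:, 1]
            X = lo + (hi - lo) * rng.random((n, len(lo)))
            Y = system(X.T)
            total = Y.var()
            idx = []
            for j in range(len(lo)):
                bins = np.digitize(X[:, j], np.linspace(lo[j], hi[j], 20))
                cond = [Y[bins == b].mean() for b in np.unique(bins)]
                idx.append(np.var(cond) / total)
            return np.array(idx)

        bounds = np.array([[0.0, 1.0]] * 4)   # epistemic intervals, 4 variables
        for step in range(4):                 # refine one variable at a time
            S = first_order_indices(bounds)
            widths = bounds[:, 1] - bounds[:, 0]
            j = int(np.argmax(S * (widths > 0.3)))   # skip already-refined ones
            mid = bounds[j].mean()
            bounds[j] = [mid - 0.1, mid + 0.1]       # stand-in for Bayesian updating
            print(f"step {step}: refined variable {j}, indices {np.round(S, 3)}")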

  20. SFESA: a web server for pairwise alignment refinement by secondary structure shifts.

    PubMed

    Tong, Jing; Pei, Jimin; Grishin, Nick V

    2015-09-03

    Protein sequence alignment is essential for a variety of tasks such as homology modeling and active site prediction. Alignment errors remain the main cause of low-quality structure models. A bioinformatics tool to refine alignments is needed to make protein alignments more accurate. We developed the SFESA web server to refine pairwise protein sequence alignments. Compared to the previous version of SFESA, which required a set of 3D coordinates for a protein, the new server will search a sequence database for the closest homolog with an available 3D structure to be used as a template. For each alignment block defined by secondary structure elements in the template, SFESA evaluates alignment variants generated by local shifts and selects the best-scoring alignment variant. A scoring function that combines the sequence score of profile-profile comparison and the structure score of template-derived contact energy is used for evaluation of alignments. PROMALS pairwise alignments refined by SFESA are more accurate than those produced by current advanced alignment methods such as HHpred and CNFpred. In addition, SFESA also improves alignments generated by other software. SFESA is a web-based tool for alignment refinement, designed for researchers to compute, refine, and evaluate pairwise alignments with a combined sequence and structure scoring of alignment blocks. To our knowledge, the SFESA web server is the only tool that refines alignments by evaluating local shifts of secondary structure elements. The SFESA web server is available at http://prodata.swmed.edu/sfesa.
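
    The block-shift search at the core of this approach can be illustrated with a toy: slide one aligned block of the query over the template by a few residues in each direction and keep the best-scoring placement. The plain match/mismatch score below is a stand-in for SFESA's combined profile-profile and contact-energy score.

        # Toy block-shift refinement with a naive match/mismatch score.
        def best_block_shift(query_block, template_seq, start, max_shift=3):
            def score(q, t):
                return sum(2 if a == b else -1 for a, b in zip(q, t))
            best = (0, float("-inf"))        # (shift, score)
            for s in range(-max_shift, max_shift + 1):
                i = start + s
                if 0 <= i and i + len(query_block) <= len(template_seq):
                    sc = score(query_block, template_seq[i:i + len(query_block)])
                    if sc > best[1]:
                        best = (s, sc)
            return best

        # The block aligned at position 4 actually fits best shifted by +3.
        print(best_block_shift("HEAGAWGHE", "PAWHEAEHEAGAWGHEQ", 4))   # (3, 18)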

  1. Simulation of the Shallow Ground-Water-Flow System near Grindstone Creek and the Community of New Post, Sawyer County, Wisconsin

    USGS Publications Warehouse

    Juckem, Paul F.; Hunt, Randall J.

    2007-01-01

    A two-dimensional, steady-state ground-water-flow model of Grindstone Creek, the New Post community, and the surrounding areas was developed using the analytic element computer code GFLOW. The parameter estimation code UCODE was used to obtain a best fit of the model to measured water levels and streamflows. The calibrated model was then used to simulate the effect of ground-water pumping on base flow in Grindstone Creek. Local refinements to the regional model were subsequently added in the New Post area, and local water-level data were used to evaluate the regional model calibration. The locally refined New Post model was also used to simulate the areal extent of capture for two existing water-supply wells and two possible replacement wells. Calibration of the regional Grindstone Creek simulation resulted in horizontal hydraulic conductivity values of 58.2 feet per day (ft/d) for the regional glacial and sandstone aquifer and 7.9 ft/d for glacial thrust-mass areas. Ground-water recharge in the calibrated regional model was 10.1 inches per year. Simulation of a golf-course irrigation well, located roughly 4,000 feet away from the creek, and pumping at 46 gallons per minute (0.10 cubic feet per second, ft3/s), reduced base flow in Grindstone Creek by 0.05 ft3/s, or 0.6 percent of the median base flow during water year 2003, compared to the calibrated model simulation without pumping. A simulation of peak pumping periods (347 gallons per minute or 0.77 ft3/s) reduced base flow in Grindstone Creek by 0.4 ft3/s (4.9 percent of the median base flow). Capture zones for existing and possible replacement wells delineated by the local New Post simulation extend from the well locations to an area south of the pumping well locations. Shallow crystalline bedrock, generally located south of the community, limits the extent of the aquifer and thus the southerly extent of the capture zones. Simulated steady-state pumping at a rate of 9,600 gallons per day (gal/d) from a possible replacement well near the Chippewa Flowage induced 70 gal/d of water from the flowage to enter the aquifer. Although no water-quality samples were collected from the Chippewa Flowage or the ground-water system, surface-water leakage into the ground-water system could potentially change the local water quality in the aquifer.

  2. The Development and Refinement of an e-Health Screening, Brief Intervention, and Referral to Treatment for Parents to Prevent Childhood Obesity in Primary Care.

    PubMed

    Avis, Jillian L S; Holt, Nicholas L; Maximova, Katerina; van Mierlo, Trevor; Fournier, Rachel; Padwal, Raj; Cave, Andrew L; Martz, Patricia; Ball, Geoff D C

    2016-05-01

    Nearly one-third of Canadian children can be categorized as overweight or obese. There is a growing interest in applying e-health approaches to prevent unhealthy weight gain in children, especially in settings that families access regularly. Our objective was to develop and refine an e-health screening, brief intervention, and referral to treatment (SBIRT) for parents to help prevent childhood obesity in primary care. Our SBIRT, titled the Resource Information Program for Parents on Lifestyle and Education (RIPPLE), was developed by our research team and an e-health intervention development company. RIPPLE was based on existing SBIRT models and contemporary literature on children's lifestyle behaviors. Refinements to RIPPLE were guided by feedback from five focus groups (6-10 participants per group) that documented perceptions of the SBIRT by participants (healthcare professionals [n = 20], parents [n = 10], and researchers and graduate trainees [n = 8]). Focus group commentaries were transcribed in real time using a court reporter. Data were analyzed thematically. Participants viewed RIPPLE as a practical, well-designed, and novel tool to facilitate the prevention of childhood obesity in primary care. However, they also perceived that RIPPLE may elicit negative reactions from some parents and suggested improvements to specific elements (e.g., weight-related terms). RIPPLE may enhance parents' awareness of children's weight status and motivation to change their children's lifestyle behaviors but should be improved prior to implementation. Findings from this research directly informed revisions to our SBIRT, which will undergo preliminary testing in a randomized controlled trial.

  3. Refinement of the Interprofessional Socialization and Valuing Scale (ISVS-21) and Development of 9-Item Equivalent Versions.

    PubMed

    King, Gillian; Orchard, Carole; Khalili, Hossein; Avery, Lisa

    2016-01-01

    Measures of interprofessional (IP) socialization are needed to capture the role of interprofessional education in preparing students and health practitioners to function as part of IP health care teams. The aims of this study were to refine a previously published version of the Interprofessional Socialization and Valuing Scale (the ISVS-24) and create two shorter equivalent forms to be used in pre-post studies. A graded response model was used to identify ISVS items in a practitioner data set (n = 345), with validation (measure invariance) conducted using a separate student sample (n = 341). Analyses indicated a unidimensional 21-item version with excellent measurement properties, Cronbach alpha of 0.988, 95% confidence interval (CI) 0.985-0.991. There was evidence of measure invariance, as there was excellent agreement of the factor scores for the practitioner and student data, intraclass correlation coefficient = 0.993, 95% CI 0.991-0.994. This indicates that the ISVS-21 measures IP socialization consistently across groups. Two 9-item equivalent versions for pre-post use were developed, with excellent agreement between the two forms. The student score agreement for the two item sets was excellent: intraclass correlation coefficient = 0.970, 95% CI 0.963-0.976. The ISVS-21 is a refined measure to assess existing levels of IP socialization in practitioners and students, and relate IP socialization to other important constructs such as IP collaboration and the development of an IP identity. The equivalent versions can be used to assess change in IP socialization as a result of interprofessional education.

  4. A Lung Segmental Model of Chronic Pseudomonas Infection in Sheep

    PubMed Central

    Collie, David; Govan, John; Wright, Steven; Thornton, Elisabeth; Tennant, Peter; Smith, Sionagh; Doherty, Catherine; McLachlan, Gerry

    2013-01-01

    Background: Chronic lung infection with Pseudomonas aeruginosa is a major contributor to morbidity, mortality and premature death in cystic fibrosis. A new paradigm for managing such infections is needed, as are relevant and translatable animal models to identify and test concepts. We sought to improve on limitations associated with existing models of infection in small animals by developing a lung segmental model of chronic Pseudomonas infection in sheep. Methodology/Principal Findings: Using local lung instillation of P. aeruginosa suspended in agar beads, we were able to demonstrate that such infection led to the development of a suppurative, necrotising and pyogranulomatous pneumonia centred on the instilled beads. No overt evidence of organ or systemic compromise was apparent in any animal during the course of infection. Infection persisted in the lungs of individual animals for as long as 66 days after initial instillation. Quantitative microbiology applied to bronchoalveolar lavage fluid derived from infected segments proved an insensitive index of the presence of significant infection in lung tissue (>10^4 cfu/g). Conclusions/Significance: The agar bead model of chronic P. aeruginosa lung infection in sheep is a relevant platform to investigate both the pathobiology of such infections and novel approaches to their diagnosis and therapy. Particular ethical benefits relate to the model in terms of refining existing approaches, by compromising a smaller proportion of the lung with infection and facilitating longitudinal assessment by bronchoscopy, and also potentially reducing animal numbers by facilitating within-animal comparisons of differential therapeutic approaches. PMID:23874438

  5. Plasma Vehicle Charging Analysis for Orion Flight Test 1

    NASA Technical Reports Server (NTRS)

    Lallement, L.; McDonald, T.; Norgard, J.; Scully, B.

    2014-01-01

    In preparation for the upcoming experimental test flight for the Orion crew module, considerable interest was raised over the possibility of exposure to elevated levels of plasma activity and vehicle charging both externally on surfaces and internally on dielectrics during the flight test orbital operations. Initial analysis using NASCAP-2K indicated very high levels of exposure, and this generated additional interest in refining/defining the plasma and spacecraft models used in the analysis. This refinement was pursued, resulting in the use of specific AE8 and AP8 models, rather than SCATHA models, as well as consideration of flight trajectory, time duration, and other parameters possibly affecting the levels of exposure and the magnitude of charge deposition. Analysis using these refined models strongly indicated that, for flight test operations, no special surface coatings were necessary for the thermal protection system, but would definitely be required for future GEO, trans-lunar, and extra-lunar missions...

  6. Plasma Vehicle Charging Analysis for Orion Flight Test 1

    NASA Technical Reports Server (NTRS)

    Scully, B.; Norgard, J.

    2015-01-01

    In preparation for the upcoming experimental test flight for the Orion crew module, considerable interest was raised over the possibility of exposure to elevated levels of plasma activity and vehicle charging both externally on surfaces and internally on dielectrics during the flight test orbital operations. Initial analysis using NASCAP-2K indicated very high levels of exposure, and this generated additional interest in refining/defining the plasma and spacecraft models used in the analysis. This refinement was pursued, resulting in the use of specific AE8 and AP8 models, rather than SCATHA models, as well as consideration of flight trajectory, time duration, and other parameters possibly affecting the levels of exposure and the magnitude of charge deposition. Analysis using these refined models strongly indicated that, for flight test operations, no special surface coatings were necessary for the Thermal Protection System (TPS), but would definitely be required for future GEO, trans-lunar, and extra-lunar missions.

  7. Construction of a 3D model of nattokinase, a novel fibrinolytic enzyme from Bacillus natto. A novel nucleophilic catalytic mechanism for nattokinase.

    PubMed

    Zheng, Zhong-liang; Zuo, Zhen-yu; Liu, Zhi-gang; Tsai, Keng-chang; Liu, Ai-fu; Zou, Guo-lin

    2005-01-01

    A three-dimensional structural model of nattokinase (NK) from Bacillus natto was constructed by homology modeling. High-resolution X-ray structures of Subtilisin BPN' (SB), Subtilisin Carlsberg (SC), Subtilisin E (SE) and Subtilisin Savinase (SS), four proteins with sequence, structural and functional homology, were used as templates. Initial models of NK were built by MODELLER and analyzed by the PROCHECK programs. The best-quality model was chosen for further refinement by constrained molecular dynamics simulations. The overall quality of the refined model was evaluated: the refined model NKC1 was analyzed by different protein analysis programs, including PROCHECK for the evaluation of Ramachandran plot quality, PROSA for testing interaction energies and WHATIF for the calculation of packing quality. This structure was found to be satisfactory and also stable at room temperature, as demonstrated by a 300 ps unconstrained molecular dynamics (MD) simulation. Further docking analysis led to the proposal of a novel nucleophilic catalytic mechanism for NK, induced by attack from the hydroxyl groups abundant in the catalytic environment and by the position of S221.

  8. Refining the treatment of membrane proteins by coarse-grained models.

    PubMed

    Vorobyov, Igor; Kim, Ilsoo; Chu, Zhen T; Warshel, Arieh

    2016-01-01

    Obtaining a quantitative description of membrane protein stability is crucial for understanding many biological processes. However, progress in this direction has remained a major challenge for both experimental studies and molecular modeling. One possible direction is the use of coarse-grained models, but such models must be carefully calibrated and validated. Here we use recent progress in benchmark studies on the energetics of amino acid residue and peptide membrane insertion and on membrane protein stability to refine our previously developed coarse-grained model (Vicatos et al., Proteins 2014;82:1168). Our refined model parameters were fitted and/or tested to reproduce the water/membrane partitioning energetics of amino acid side chains and a couple of model peptides. The new model provides reasonable agreement with experiment for the absolute folding free energies of several β-barrel membrane proteins, as well as for the effects of point mutations on the relative stability of one of those proteins, OmpLA. The consideration and ranking of different rotameric states for a mutated residue was found to be essential to achieve satisfactory agreement with the reference data. © 2015 Wiley Periodicals, Inc.

  9. PyXRD v0.6.7: a free and open-source program to quantify disordered phyllosilicates using multi-specimen X-ray diffraction profile fitting

    NASA Astrophysics Data System (ADS)

    Dumon, M.; Van Ranst, E.

    2016-01-01

    This paper presents a free and open-source program called PyXRD (short for Python X-ray diffraction) to improve the quantification of complex, poly-phasic mixed-layer phyllosilicate assemblages. The validity of the program was checked by comparing its output with Sybilla v2.2.2, which shares the same mathematical formalism. The novelty of this program is the ab initio incorporation of the multi-specimen method, making it possible to share phases and (a selection of) their parameters across multiple specimens. PyXRD thus allows for modelling multiple specimens side by side, and this approach speeds up the manual refinement process significantly. To check the hypothesis that this multi-specimen set-up - as it effectively reduces the number of parameters and increases the number of observations - can also improve automatic parameter refinements, we calculated X-ray diffraction patterns for four theoretical mineral assemblages. These patterns were then used as input for one refinement employing the multi-specimen set-up and one employing the single-pattern set-ups. For all of the assemblages, PyXRD was able to reproduce or approximate the input parameters with the multi-specimen approach. Diverging solutions only occurred in single-pattern set-ups, which do not contain enough information to discern all minerals present (e.g. patterns of heated samples). Assuming a correct qualitative interpretation was made and a single pattern exists in which all phases are sufficiently discernible, the obtained results indicate a good quantification can often be obtained with just that pattern. However, these results from theoretical experiments cannot automatically be extrapolated to all real-life experiments. In any case, PyXRD has proven to be useful when X-ray diffraction patterns are modelled for complex mineral assemblages containing mixed-layer phyllosilicates with a multi-specimen approach.
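
    The multi-specimen idea, shared phase parameters fitted against several patterns at once while each specimen keeps its own nuisance parameters, can be sketched with a generic least-squares fit. The Gaussian "pattern" model and all values below are synthetic illustrations, not PyXRD's phase model.

        import numpy as np
        from scipy.optimize import least_squares

        x = np.linspace(0.0, 10.0, 200)

        def peak(x, pos, width, scale):
            return scale * np.exp(-0.5 * ((x - pos) / width) ** 2)

        rng = np.random.default_rng(0)
        y1 = peak(x, 4.0, 0.8, 1.0) + 0.01 * rng.standard_normal(x.size)
        y2 = peak(x, 4.0, 0.8, 2.5) + 0.01 * rng.standard_normal(x.size)

        def residuals(p):
            pos, width, s1, s2 = p      # pos/width shared, one scale per specimen
            return np.concatenate([peak(x, pos, width, s1) - y1,
                                   peak(x, pos, width, s2) - y2])

        fit = least_squares(residuals, x0=[3.0, 1.0, 1.0, 1.0])
        print(fit.x)                    # approaches [4.0, 0.8, 1.0, 2.5]

    Concatenating the residuals of both specimens is what reduces the number of free parameters relative to two independent fits while increasing the number of observations, which is the effect the multi-specimen set-up exploits.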

  10. On the impact of a refined stochastic model for airborne LiDAR measurements

    NASA Astrophysics Data System (ADS)

    Bolkas, Dimitrios; Fotopoulos, Georgia; Glennie, Craig

    2016-09-01

    Accurate topographic information is critical for a number of applications in science and engineering. In recent years, airborne light detection and ranging (LiDAR) has become a standard tool for acquiring high quality topographic information. The assessment of airborne LiDAR derived DEMs is typically based on (i) independent ground control points and (ii) forward error propagation utilizing the LiDAR geo-referencing equation. The latter approach is dependent on the stochastic model information of the LiDAR observation components. In this paper, the well-known statistical tool of variance component estimation (VCE) is implemented for a dataset in Houston, Texas, in order to refine the initial stochastic information. Simulations demonstrate the impact of stochastic-model refinement for two practical applications, namely coastal inundation mapping and surface displacement estimation. Results highlight scenarios where erroneous stochastic information is detrimental. Furthermore, the refined stochastic information provides insights on the effect of each LiDAR measurement in the airborne LiDAR error budget. The latter is important for targeting future advancements in order to improve point cloud accuracy.
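
    Variance component estimation can be sketched for the simplest case of two observation groups sharing a linear model. The simplified Helmert-type iteration below illustrates the statistical tool named above, not the exact scheme applied to the Houston dataset, where the groups would correspond to the individual LiDAR measurement components.

        import numpy as np

        def vce_two_groups(A1, y1, A2, y2, n_iter=25):
            """Iteratively estimate the variance factors (sigma1^2, sigma2^2)
            of two observation groups in the linear model y = A x + e."""
            s1 = s2 = 1.0
            for _ in range(n_iter):
                W1, W2 = 1.0 / s1, 1.0 / s2
                N = W1 * A1.T @ A1 + W2 * A2.T @ A2
                x = np.linalg.solve(N, W1 * A1.T @ y1 + W2 * A2.T @ y2)
                Ninv = np.linalg.inv(N)
                v1, v2 = A1 @ x - y1, A2 @ x - y2
                # partial redundancies of the two groups (they sum to n - p)
                r1 = len(y1) - W1 * np.trace(A1 @ Ninv @ A1.T)
                r2 = len(y2) - W2 * np.trace(A2 @ Ninv @ A2.T)
                s1 *= (v1 @ v1) * W1 / r1   # rescale each component by its
                s2 *= (v2 @ v2) * W2 / r2   # estimated unit-weight factor
            return s1, s2

        rng = np.random.default_rng(2)
        A1, A2 = rng.standard_normal((60, 3)), rng.standard_normal((40, 3))
        x_true = np.array([1.0, -2.0, 0.5])
        y1 = A1 @ x_true + rng.normal(0.0, 0.1, 60)   # true sigma1^2 = 0.01
        y2 = A2 @ x_true + rng.normal(0.0, 0.5, 40)   # true sigma2^2 = 0.25
        print(vce_two_groups(A1, y1, A2, y2))         # approaches (0.01, 0.25)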

  11. Refining Students' Explanations of an Unfamiliar Physical Phenomenon-Microscopic Friction

    NASA Astrophysics Data System (ADS)

    Corpuz, Edgar De Guzman; Rebello, N. Sanjay

    2017-08-01

    The first phase of this multiphase study involves modeling of college students' thinking of friction at the microscopic level. Diagnostic interviews were conducted with 11 students with different levels of physics backgrounds. A phenomenographic approach of data analysis was used to generate categories of responses which subsequently were used to generate a model of explanation. Most of the students interviewed consistently used mechanical interactions in explaining microscopic friction. According to these students, friction is due to the interlocking or rubbing of atoms. Our data suggest that students' explanations of microscopic friction are predominantly influenced by their macroscopic experiences. In the second phase of the research, teaching experiment was conducted with 18 college students to investigate how students' explanations of microscopic friction can be refined by a series of model-building activities. Data were analyzed using Redish's two-level transfer framework. Our results show that through sequences of hands-on and minds-on activities, including cognitive dissonance and resolution, it is possible to facilitate the refinement of students' explanations of microscopic friction. The activities seemed to be productive in helping students activate associations that refine their ideas about microscopic friction.

  12. Molecular dynamics-based refinement and validation for sub-5 Å cryo-electron microscopy maps

    PubMed Central

    Singharoy, Abhishek; Teo, Ivan; McGreevy, Ryan; Stone, John E; Zhao, Jianhua; Schulten, Klaus

    2016-01-01

    Two structure determination methods, based on the molecular dynamics flexible fitting (MDFF) paradigm, are presented that resolve sub-5 Å cryo-electron microscopy (EM) maps with either single structures or ensembles of such structures. The methods, denoted cascade MDFF and resolution exchange MDFF, sequentially re-refine a search model against a series of maps of progressively higher resolution, ending with the original experimental resolution. Application of sequential re-refinement enables MDFF to achieve a radius of convergence of ~25 Å, demonstrated with the accurate modeling of β-galactosidase and TRPV1 proteins at 3.2 Å and 3.4 Å resolution, respectively. The MDFF refinements uniquely offer map-model validation and B-factor determination criteria based on the inherent dynamics of the macromolecules studied, captured by means of local root mean square fluctuations. The MDFF tools described are available to researchers through an easy-to-use and cost-effective cloud computing resource on Amazon Web Services. DOI: http://dx.doi.org/10.7554/eLife.16105.001 PMID:27383269
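
    The cascade idea, re-refining a model against progressively sharper versions of the experimental map, can be mimicked with a simple point-model toy. Gradient ascent on interpolated density below is a stand-in for the MD-based fitting; the blur levels and step size are arbitrary choices, not the published protocol.

        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates

        def cascade_fit(coords, exp_map, sigmas=(6, 4, 2, 0), steps=100):
            """Move point positions uphill in a 3-D density map, starting
            from a strongly blurred copy and ending at full resolution.
            coords: (N, 3) array in voxel units; exp_map: 3-D array."""
            for s in sigmas:
                m = gaussian_filter(exp_map, s) if s > 0 else exp_map
                grads = np.gradient(m)                 # three 3-D arrays
                for _ in range(steps):
                    g = np.array([map_coordinates(grads[d], coords.T, order=1)
                                  for d in range(3)]).T
                    coords = coords + 0.5 * g          # step uphill in density
            return coords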

  13. Exploiting distant homologues for phasing through the generation of compact fragments, local fold refinement and partial solution combination.

    PubMed

    Millán, Claudia; Sammito, Massimo Domenico; McCoy, Airlie J; Nascimento, Andrey F Ziem; Petrillo, Giovanna; Oeffner, Robert D; Domínguez-Gil, Teresa; Hermoso, Juan A; Read, Randy J; Usón, Isabel

    2018-04-01

    Macromolecular structures can be solved by molecular replacement provided that suitable search models are available. Models from distant homologues may deviate too much from the target structure to succeed, notwithstanding an overall similar fold or even their featuring areas of very close geometry. Successful methods to make the most of such templates usually rely on the degree of conservation to select and improve search models. ARCIMBOLDO_SHREDDER uses fragments derived from distant homologues in a brute-force approach driven by the experimental data, instead of by sequence similarity. The new algorithms implemented in ARCIMBOLDO_SHREDDER are described in detail, illustrating its characteristic aspects in the solution of new and test structures. In an advance from the previously published algorithm, which was based on omitting or extracting contiguous polypeptide spans, model generation now uses three-dimensional volumes respecting structural units. The optimal fragment size is estimated from the expected log-likelihood gain (LLG) values computed assuming that a substructure can be found with a level of accuracy near that required for successful extension of the structure, typically below 0.6 Å root-mean-square deviation (r.m.s.d.) from the target. Better sampling is attempted through model trimming or decomposition into rigid groups and optimization through Phaser's gyre refinement. Also, after model translation, packing filtering and refinement, models are either disassembled into predetermined rigid groups and refined (gimble refinement) or Phaser's LLG-guided pruning is used to trim the model of residues that are not contributing signal to the LLG at the target r.m.s.d. value. Phase combination among consistent partial solutions is performed in reciprocal space with ALIXE. Finally, density modification and main-chain autotracing in SHELXE serve to expand to the full structure and identify successful solutions. The performance on test data and the solution of new structures are described.

  14. Expanded modeling of temperature-dependent dielectric properties for microwave thermal ablation

    PubMed Central

    Ji, Zhen; Brace, Christopher L

    2011-01-01

    Microwaves are a promising source for thermal tumor ablation due to their ability to rapidly heat dispersive biological tissues, often to temperatures in excess of 100 °C. At these high temperatures, tissue dielectric properties change rapidly and, thus, so do the characteristics of energy delivery. Precise knowledge of how tissue dielectric properties change during microwave heating promises to facilitate more accurate simulation of device performance and helps optimize device geometry and energy delivery parameters. In this study, we measured the dielectric properties of liver tissue during high-temperature microwave heating. The resulting data were compiled into either a sigmoidal function of temperature or an integration of the time–temperature curve for both relative permittivity and effective conductivity. Coupled electromagnetic–thermal simulations of heating produced by a single monopole antenna using the new models were then compared to simulations with existing linear and static models, and experimental temperatures in liver tissue. The new sigmoidal temperature-dependent model more accurately predicted experimental temperatures when compared to temperature–time integrated or existing models. The mean percent differences between simulated and experimental temperatures over all times were 4.2% for sigmoidal, 10.1% for temperature–time integration, 27.0% for linear and 32.8% for static models at the antenna input power of 50 W. Correcting for tissue contraction improved agreement for powers up to 75 W. The sigmoidal model also predicted substantial changes in heating pattern due to dehydration. We can conclude from these studies that a sigmoidal model of tissue dielectric properties improves prediction of experimental results. More work is needed to refine and generalize this model. PMID:21791728
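
    A sigmoidal temperature dependence of this kind is straightforward to write down; the parameter names and values below are illustrative only, not the fitted liver-tissue coefficients.

        import numpy as np

        def sigmoid_property(T, p_native, p_desiccated, T_mid, width):
            """Sigmoidal transition of a dielectric property from its
            native-tissue value toward its desiccated value around the
            transition temperature T_mid (width in the same units as T)."""
            return p_desiccated + (p_native - p_desiccated) / (
                1.0 + np.exp((T - T_mid) / width))

        # Example: relative permittivity dropping steeply near 100 degrees C
        T = np.array([37.0, 80.0, 100.0, 120.0])
        eps_r = sigmoid_property(T, p_native=48.0, p_desiccated=5.0,
                                 T_mid=100.0, width=4.0)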

  15. Culturally Diverse Undergraduate Researchers' Academic Outcomes and Perceptions of Their Research Mentoring Relationships

    NASA Astrophysics Data System (ADS)

    Byars-Winston, Angela M.; Branchaw, Janet; Pfund, Christine; Leverett, Patrice; Newton, Joseph

    2015-10-01

    Few studies have empirically investigated the specific factors in mentoring relationships between undergraduate researchers (mentees) and their mentors in the biological and life sciences that account for mentees' positive academic and career outcomes. Using archival evaluation data from more than 400 mentees gathered over a multi-year period (2005-2011) from several undergraduate biology research programs at a large, Midwestern research university, we validated existing evaluation measures of the mentored research experience and the mentor-mentee relationship. We used a subset of data from mentees (77% underrepresented racial/ethnic minorities) to test a hypothesized social cognitive career theory model of associations between mentees' academic outcomes and perceptions of their research mentoring relationships. Results from path analysis indicate that perceived mentor effectiveness indirectly predicted post-baccalaureate outcomes via research self-efficacy beliefs. Findings are discussed with implications for developing new and refining existing tools to measure this impact, programmatic interventions to increase the success of culturally diverse research mentees and future directions for research.

  16. Empowering Nurses to Lead Interprofessional Collaborative Practice Environments Through a Nurse Leadership Institute.

    PubMed

    Embree, Jennifer L; Wagnes, Lisa; Hendricks, Susan; LaMothe, Julie; Halstead, Judith; Wright, Lauren

    2018-02-01

    A year-long Nurse Leadership Institute (NLI) for emerging leaders in primary care clinics and acute care environments was developed, implemented, and evaluated. The NLI's goal was to foster empowerment in interprofessional collaborative practice environments for nurses in the three cohorts of NLIs. The NLI was framed around the Five Leadership Practices of modeling the way, inspiring a shared vision, challenging the process, enabling others to act, and encouraging the heart. To create a professional learning environment, foster community, and enhance leadership skills, the Lean In Circle materials developed by Sandberg were adapted for content reorganization and discussion. Minimal literature exists specifically addressing nursing leadership professionals' development based on Sandberg's Circle materials. The findings of the three NLI cohorts reported in this article begin to fill this existing knowledge gap. Participants reported a significant increase in leadership skills. Recommendations for refinement of future NLI offerings are provided. J Contin Educ Nurs. 2018;49(2):61-71. Copyright 2018, SLACK Incorporated.

  17. Refined carbohydrate intake in relation to non-verbal intelligence among Tehrani schoolchildren.

    PubMed

    Abargouei, Amin Salehi; Kalantari, Naser; Omidvar, Nasrin; Rashidkhani, Bahram; Rad, Anahita Houshiar; Ebrahimi, Azizeh Afkham; Khosravi-Boroujeni, Hossein; Esmaillzadeh, Ahmad

    2012-10-01

    Nutrition has long been considered one of the most important environmental factors affecting human intelligence. Although carbohydrates are the most widely studied nutrient for their possible effects on cognition, limited data are available linking habitual refined carbohydrate intake and intelligence. The present study was conducted to examine the relationship between long-term refined carbohydrate intake and non-verbal intelligence among schoolchildren. Design: Cross-sectional study. Setting: Tehran, Iran. In this cross-sectional study, 245 students aged 6-7 years were selected from 129 elementary schools in two western regions of Tehran. Anthropometric measurements were carried out. Non-verbal intelligence and refined carbohydrate consumption were determined using Raven's Standard Progressive Matrices test and a modified sixty-seven-item FFQ, respectively. Data on potential confounding variables were collected. Linear regression analysis was applied to examine the relationship between non-verbal intelligence scores and refined carbohydrate consumption. Individuals in the top tertile of refined carbohydrate intake had lower mean non-verbal intelligence scores in the crude model (P < 0.038). This association remained significant after controlling for age, gender, birth date, birth order and breast-feeding pattern (P = 0.045). However, further adjustment for mother's age, mother's education, father's education, parental occupation and BMI made the association statistically non-significant. We found a significant inverse association between refined carbohydrate consumption and non-verbal intelligence scores in regression models (β = -11.359, P < 0.001). This relationship remained significant in multivariate analysis after controlling for potential confounders (β = -8.495, P = 0.038). The study provides evidence of an inverse relationship between refined carbohydrate consumption and non-verbal intelligence among Tehrani children aged 6-7 years. Prospective studies are needed to confirm our findings.
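
    The analysis described, a linear regression of intelligence scores on intake with stepwise confounder adjustment, follows a standard pattern; the sketch below runs on simulated data with hypothetical variable names, not the Tehran dataset.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 245
        df = pd.DataFrame({
            "iq": rng.normal(100, 15, n),              # non-verbal score
            "refined_carb": rng.normal(150, 40, n),    # g/day, hypothetical
            "age": rng.uniform(6, 7, n),
            "bmi": rng.normal(16, 2, n),
            "mother_edu": rng.integers(0, 4, n),       # categorical confounder
        })
        # Crude model first, then a confounder-adjusted model
        crude = smf.ols("iq ~ refined_carb", data=df).fit()
        adjusted = smf.ols("iq ~ refined_carb + age + bmi + C(mother_edu)",
                           data=df).fit()
        print(crude.params["refined_carb"], adjusted.params["refined_carb"])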

  18. Numerical analysis of impurity separation from waste salt by investigating the change of concentration at the interface during zone refining process

    NASA Astrophysics Data System (ADS)

    Choi, Ho-Gil; Shim, Moonsoo; Lee, Jong-Hyeon; Yi, Kyung-Woo

    2017-09-01

    The waste salt treatment process is required for the reuse of purified salts and for the disposal of the fission products contained in waste salt during pyroprocessing. As an alternative to existing fission-product separation methods, the horizontal zone refining process is used in this study for the purification of waste salt. In order to evaluate the purification ability of the process, a three-dimensional simulation is conducted, considering heat transfer, melt flow and mass transfer. Impurity distributions and decontamination factors are calculated as a function of the heater traverse rate, by applying a subroutine and the equilibrium segregation coefficient derived from the effective segregation coefficients. For multipass cases, 1-D solutions and the effective segregation coefficient obtained from the three-dimensional simulation are used. Although the present study does not deal with crystal growth, the numerical technique is nearly the same, since zone refining was only recently introduced into the treatment of waste salt from the nuclear power industry because of its simplicity and refining ability. This study therefore demonstrates a new application of single-crystal growth techniques to other fields, taking advantage of the multipass capability of zone refining. The final goal is to achieve the same high degree of decontamination in the waste salt as in the zone freezing (or reverse Bridgman) method.
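
    For a single heater pass, the textbook Pfann relation gives the impurity profile left behind in terms of the effective segregation coefficient k and the molten-zone length l, C(x) = C0 (1 - (1 - k) exp(-k x / l)). A minimal implementation of this standard relation, with illustrative values, follows.

        import numpy as np

        def single_pass_profile(x, c0, k, zone_len):
            """Pfann single-pass zone-refining profile, valid ahead of the
            final zone length; k < 1 sweeps impurity toward the bar's end."""
            return c0 * (1.0 - (1.0 - k) * np.exp(-k * x / zone_len))

        x = np.linspace(0.0, 0.8, 9)    # positions along a unit-length bar
        print(single_pass_profile(x, c0=1.0, k=0.3, zone_len=0.1))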

  19. Final report to the Florida Department of Transportation Systems Planning Office on project "Improvements and enhancements to LOSPLAN 2007".

    DOT National Transportation Integrated Search

    2011-03-01

    This project addressed several aspects of the LOSPLAN software, primarily with respect to incorporating new FDOT and NCHRP research project results. In addition, some existing computational methodology aspects were refined to provide more accurat...

  20. Comparison of Two Grid Refinement Approaches for High Resolution Regional Climate Modeling: MPAS vs WRF

    NASA Astrophysics Data System (ADS)

    Leung, L.; Hagos, S. M.; Rauscher, S.; Ringler, T.

    2012-12-01

    This study compares two grid-refinement approaches for high-resolution regional climate modeling: a global variable-resolution model and nesting. The global variable-resolution model, Model for Prediction Across Scales (MPAS), and the limited-area model, Weather Research and Forecasting (WRF) model, are compared in an idealized aqua-planet context with a focus on the spatial and temporal characteristics of tropical precipitation simulated by the models using the same physics package from the Community Atmosphere Model (CAM4). For MPAS, simulations have been performed with a quasi-uniform-resolution global domain at coarse (1 degree) and high (0.25 degree) resolution, and with a variable-resolution domain in which a high-resolution region at 0.25 degree is configured inside a coarse-resolution global domain at 1 degree. Similarly, WRF has been configured to run on coarse (1 degree) and high (0.25 degree) resolution tropical channel domains, as well as on a nested domain with a high-resolution region at 0.25 degree nested two-way inside the coarse-resolution (1 degree) tropical channel. The variable-resolution and nested simulations are compared against the high-resolution simulations, which serve as the virtual reality. Both MPAS and WRF simulate 20-day Kelvin waves propagating through the high-resolution domains fairly unaffected by the change in resolution. In addition, both models respond to increased resolution with enhanced precipitation. Grid refinement induces zonal asymmetry in precipitation (heating), accompanied by anomalous zonal Walker-like circulations and standing Rossby wave signals. However, there are important differences between the anomalous patterns in MPAS and WRF due to differences in the grid-refinement approaches and the sensitivity of model physics to grid resolution. This study highlights the need for "scale-aware" parameterizations in variable-resolution and nested regional models.

  1. Active Exploration of Large 3D Model Repositories.

    PubMed

    Gao, Lin; Cao, Yan-Pei; Lai, Yu-Kun; Huang, Hao-Zhi; Kobbelt, Leif; Hu, Shi-Min

    2015-12-01

    With broader availability of large-scale 3D model repositories, the need for efficient and effective exploration becomes more and more urgent. Existing model retrieval techniques do not scale well with the size of the database since often a large number of very similar objects are returned for a query, and the possibilities to refine the search are quite limited. We propose an interactive approach where the user feeds an active learning procedure by labeling either entire models or parts of them as "like" or "dislike" such that the system can automatically update an active set of recommended models. To provide an intuitive user interface, candidate models are presented based on their estimated relevance for the current query. From the methodological point of view, our main contribution is to exploit not only the similarity between a query and the database models but also the similarities among the database models themselves. We achieve this by an offline pre-processing stage, where global and local shape descriptors are computed for each model and a sparse distance metric is derived that can be evaluated efficiently even for very large databases. We demonstrate the effectiveness of our method by interactively exploring a repository containing over 100 K models.
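
    One way to turn a precomputed distance matrix and like/dislike labels into an updated recommendation list is sketched below; the exponential similarity kernel and the additive scoring rule are illustrative stand-ins, not the paper's learning procedure.

        import numpy as np

        def update_relevance(D, liked, disliked, gamma=0.5):
            """Rank models by similarity to liked examples minus similarity
            to disliked ones. D: (n, n) model-to-model distance matrix;
            liked/disliked: lists of labeled model indices."""
            sim = np.exp(-gamma * D)              # distances -> similarities
            score = sim[:, liked].sum(axis=1)
            if disliked:
                score -= sim[:, disliked].sum(axis=1)
            score[liked + disliked] = -np.inf     # hide already-labeled models
            return np.argsort(-score)             # recommended ordering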

  2. Thermal model of attic systems with radiant barriers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkes, K.E.

    This report summarizes the first phase of a project to model the thermal performance of radiant barriers. The objective of this phase of the project was to develop a refined model for the thermal performance of residential house attics, with and without radiant barriers, and to verify the model by comparing its predictions against selected existing experimental thermal performance data. Models for the thermal performance of attics with and without radiant barriers have been developed and implemented on an IBM PC/AT computer. The validity of the models has been tested by comparing their predictions with ceiling heat fluxes measured in a number of laboratory and field experiments on attics with and without radiant barriers. Cumulative heat flows predicted by the models were usually within about 5 to 10 percent of measured values. In future phases of the project, the models for attic/radiant barrier performance will be coupled with a whole-house model and further comparisons with experimental data will be made. Following this, the models will be utilized to provide an initial assessment of the energy savings potential of radiant barriers in various configurations and under various climatic conditions. 38 refs., 14 figs., 22 tabs.

  3. Ventilation tube insertion simulation: a literature review and validity assessment of five training models.

    PubMed

    Mahalingam, S; Awad, Z; Tolley, N S; Khemani, S

    2016-08-01

    The objective of this study was to identify and investigate the face and content validity of ventilation tube insertion (VTI) training models described in the literature. A review of the literature was carried out to identify articles describing VTI simulators. Feasible models were replicated and assessed by a group of experts in a postgraduate simulation centre. Experts were defined as surgeons who had performed at least 100 VTIs on patients. Seventeen experts participated, ensuring sufficient statistical power for the analysis. A standardised 18-item Likert-scale questionnaire was used, addressing face validity (realism), global and task-specific content (suitability of the model for teaching) and curriculum recommendation. The search revealed eleven models, of which only five had associated validity data. Five models were found to be feasible to replicate. None of the tested models achieved face or global content validity. Only one model achieved task-specific validity, and hence there was no agreement on curriculum recommendation. The quality of existing simulation models is moderate and there is room for improvement. There is a need for new models to be developed, or existing ones refined, in order to construct a more realistic training platform for VTI simulation. © 2015 John Wiley & Sons Ltd.

  4. Hybrid Multiscale Finite Volume method for multiresolution simulations of flow and reactive transport in porous media

    NASA Astrophysics Data System (ADS)

    Barajas-Solano, D. A.; Tartakovsky, A. M.

    2017-12-01

    We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine-scale) flow and transport models with lower-resolution (coarse) models to locally refine both the spatial resolution and the transport models. The fine-scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and its capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.

  5. Tsunami modelling with adaptively refined finite volume methods

    USGS Publications Warehouse

    LeVeque, R.J.; George, D.L.; Berger, M.J.

    2011-01-01

    Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
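
    The well-balancing concern shapes even the refinement criterion: cells should be flagged on deviations of the sea surface from its rest state rather than on water depth itself, so that the undisturbed ocean never triggers refinement. A minimal sketch of such a criterion, not GeoClaw's actual flagging code, follows.

        import numpy as np

        def flag_cells(h, b, sea_level=0.0, tol=1e-3, dry_tol=1e-6):
            """Flag wet cells where the sea surface eta = h + b departs from
            the rest state. h: water depth; b: bathymetry (negative offshore)."""
            eta = h + b
            wet = h > dry_tol
            return wet & (np.abs(eta - sea_level) > tol)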

  6. Counternarcotic Efforts in the Southern Cone: Chile

    DTIC Science & Technology

    1990-06-30

    deportation is simply not practical. Statistics on cocaine coming into Chile by "ant smuggling" do not exist. Carabineros mentions that according to… existing evidence is contradictory. A recent report ordered by the Ministry of Foreign Affairs does not support the allegation that Chile is being used… Gugliotta and Jeff Leen, Kings of Cocaine (New York: Harper and Row, 1989), p. 23. …industry based in Chile and controlled by a few refiners who bought

  7. DiffPy-CMI-Python libraries for Complex Modeling Initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Billinge, Simon; Juhas, Pavol; Farrow, Christopher

    2014-02-01

    Software to manipulate and describe crystal and molecular structures and to set up structural refinements from multiple experimental inputs; calculation and simulation of structure-derived physical quantities. A library for creating customized refinements of atomic structures from available experimental and theoretical inputs.

  8. Refining Trait Resilience: Identifying Engineering, Ecological, and Adaptive Facets from Extant Measures of Resilience

    PubMed Central

    Maltby, John; Day, Liz; Hall, Sophie

    2015-01-01

    The current paper presents a new measure of trait resilience derived from three common mechanisms identified in ecological theory: Engineering, Ecological and Adaptive (EEA) resilience. Exploratory and confirmatory factor analyses of five existing resilience scales suggest that the three trait resilience facets emerge, and can be reduced to a 12-item scale. The conceptualization and value of EEA resilience within the wider trait and well-being psychology is illustrated in terms of differing relationships with adaptive expressions of the traits of the five-factor personality model and the contribution to well-being after controlling for personality and coping, or over time. The current findings suggest that EEA resilience is a useful and parsimonious model and measure of trait resilience that can readily be placed within wider trait psychology and that is found to contribute to individual well-being. PMID:26132197

  9. Design and development of a cross-cultural disposition inventory

    NASA Astrophysics Data System (ADS)

    Davies, Randall; Zaugg, Holt; Tateishi, Isaku

    2015-01-01

    Advances in technology have increased the likelihood that engineers will have to work in a global, culturally diverse setting. Many schools of engineering are currently revising their curricula to help students to develop cultural competence. However, measuring cultural dispositions remains a challenge. The purpose of this project was to develop and test an instrument that measures the various aspects of cultural disposition. The results of the validation process verified that the hypothesised model adequately represented the data. The refined instrument produced a four-factor model for the overall construct. The validation process for the instrument verified the existence of specific subcomponents that form the overall cultural disposition construct. There also seems to be a hierarchical relationship within the subcomponents of cultural disposition. Additional research is needed to explore which aspects of cultural disposition affect an individual's ability to work effectively in a culturally diverse engineering team.

  10. Analysis of rotor vibratory loads using higher harmonic pitch control

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Bliss, Donald B.; Boschitsch, Alexander H.; Wachspress, Daniel A.

    1992-01-01

    Experimental studies of isolated rotors in forward flight have indicated that higher harmonic pitch control can reduce rotor noise. These tests also show that such pitch inputs can generate substantial vibratory loads. This report summarizes the modification of the RotorCRAFT (Computation of Rotor Aerodynamics in Forward flighT) analysis of isolated rotors to study the vibratory loading generated by high-frequency pitch inputs. The original RotorCRAFT code was developed for use in the computation of such loading, and uses a highly refined rotor wake model to facilitate this task. The extended version of RotorCRAFT incorporates a variety of new features including: arbitrary periodic root pitch control; computation of blade stresses and hub loads; improved modeling of near wake unsteady effects; and preliminary implementation of a coupled prediction of rotor airloads and noise. Correlation studies are carried out with existing blade stress and vibratory hub load data to assess the performance of the extended code.
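
    One common way to write the arbitrary periodic root pitch mentioned above (generic notation; RotorCRAFT's actual input conventions may differ) is as a Fourier series in blade azimuth:

    \[
    \theta(\psi) = \theta_0 + \sum_{n=1}^{N} \bigl( \theta_{nc}\cos n\psi + \theta_{ns}\sin n\psi \bigr),
    \]

    where \psi is the blade azimuth angle, the n = 1 terms are conventional cyclic pitch, and the n \geq 2 terms are the higher harmonic control inputs whose vibratory loading the extended code is designed to compute.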

  11. Classical Molecular Dynamics Simulation of Nuclear Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devanathan, Ram; Krack, Matthias; Bertolus, Marjorie

    2015-10-10

    Molecular dynamics simulation is well suited to study primary damage production by irradiation, defect interactions with fission gas atoms, gas bubble nucleation, grain boundary effects on defect and gas bubble evolution in nuclear fuel, and the resulting changes in thermo-mechanical properties. In these simulations, the forces on the ions are dictated by interaction potentials generated by fitting properties of interest to experimental data. The results obtained from the present generation of potentials are qualitatively similar, but quantitatively different. There is a need to refine existing potentials to provide a better representation of the performance of polycrystalline fuel under a variety of operating conditions, and to develop models that are equipped to handle deviations from stoichiometry. In addition to providing insights into fundamental mechanisms governing the behaviour of nuclear fuel, MD simulations can also provide parameters that can be used as inputs for mesoscale models.

  12. A novel approach for individual tree crown delineation using lidar data

    NASA Astrophysics Data System (ADS)

    Liu, Tao

    Individual tree crown delineation (ITCD) is an important technique to support precision forestry. ITCD is particularly difficult for deciduous forests where the existence of multiple branches can lead to false tree top detection. This thesis focused on developing a new ITCD model, which consists of two components: (1) boundary refinement using a novel algorithm called Fishing Net Dragging (FiND), and (2) segment merging using boundary classification. The proposed ITCD model was tested in both deciduous and mixed forests, attaining an overall accuracy of 74% and 78%, respectively. This compared favorably to an ITCD method commonly cited in the literature, which attained 41% and 51% on the same plots. To facilitate comparison of research in the ITCD community, this thesis also developed a new accuracy assessment scheme for ITCD. This new accuracy assessment is easy to interpret and convenient to implement while comprehensively evaluating ITCD accuracy.

  13. PREFMD: a web server for protein structure refinement via molecular dynamics simulations.

    PubMed

    Heo, Lim; Feig, Michael

    2018-03-15

    Refinement of protein structure models is a long-standing problem in structural bioinformatics. Molecular dynamics-based methods have emerged as an avenue to achieve consistent refinement. The PREFMD web server implements an optimized protocol based on the method successfully tested in CASP11. Validation with recent CASP refinement targets shows consistent and more significant improvement in global structure accuracy over other state-of-the-art servers. PREFMD is freely available as a web server at http://feiglab.org/prefmd. Scripts for running PREFMD as a stand-alone package are available at https://github.com/feiglab/prefmd.git. feig@msu.edu. Supplementary data are available at Bioinformatics online.

  14. A refined model of sedimentary rock cover in the southeastern part of the Congo basin from GOCE gravity and vertical gravity gradient observations

    NASA Astrophysics Data System (ADS)

    Martinec, Zdeněk; Fullea, Javier

    2015-03-01

    We aim to interpret the vertical gravity and vertical gravity gradient of the GOCE-GRACE combined gravity model over the southeastern part of the Congo basin to refine the published model of sedimentary rock cover. We use the GOCO03S gravity model and evaluate its spherical harmonic representation at or near the Earth's surface. In this case, the gradiometry signals are enhanced as compared to the original measured GOCE gradients at satellite height and better emphasize the spatial pattern of sedimentary geology. To avoid aliasing, the omission error of the modelled gravity induced by the sedimentary rocks is adjusted to that of the GOCO03S gravity model. The mass-density Green's functions derived for the a priori structure of the sediments show a slightly greater sensitivity to the GOCO03S vertical gravity gradient than to the vertical gravity. Hence, the refinement of the sedimentary model is carried out for the vertical gravity gradient over the basin, such that a few anomalous values of the GOCO03S-derived vertical gravity gradient are adjusted by refining the model. We apply the 5-parameter Helmert's transformation, defined by 2 translations, 1 rotation and 2 scale parameters that are searched for by the steepest descent method. The refined sedimentary model is only slightly changed with respect to the original map, but it significantly improves the fit of the vertical gravity and vertical gravity gradient over the basin. However, there are still spatial features in the gravity and gradiometric data that remain unfitted by the refined model. These may be due to lateral density variation that is not contained in the model, a density contrast at the Moho discontinuity, lithospheric density stratifications or mantle convection. In a second step, the refined sedimentary model is used to find the vertical density stratification of sedimentary rocks. Although the gravity data can be interpreted by a constant sedimentary density, such a model does not correspond to the gravitational compaction of sedimentary rocks. Therefore, the density model is extended by including a linear increase in density with depth. Subsequent L2 and L∞ norm minimization procedures are applied to find the density parameters by adjusting both the vertical gravity and the vertical gravity gradient. We found that including the vertical gravity gradient in the interpretation of the GOCO03S-derived data reduces the non-uniqueness of the inverse gradiometric problem for density determination. The density structure of the sedimentary formations that provide the optimum predictions of the GOCO03S-derived gravity and vertical gradient of gravity consists of a surface density contrast with respect to surrounding rocks of 0.24-0.28 g/cm3 and its decrease with depth of 0.05-0.25 g/cm3 per 10 km. Moreover, the case where the sedimentary rocks are gravitationally completely compacted in the deepest parts of the basin is supported by L∞ norm minimization. However, this minimization also allows a remaining density contrast at the deepest parts of the sedimentary basin of about 0.1 g/cm3.
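
    One plausible way to write the 5-parameter planar Helmert transformation described here (2 translations t_x, t_y, 1 rotation \alpha, 2 scales s_x, s_y; the paper may order or define the parameters differently) is

    \[
    \begin{pmatrix} x' \\ y' \end{pmatrix}
    =
    \begin{pmatrix} s_x\cos\alpha & -s_x\sin\alpha \\ s_y\sin\alpha & s_y\cos\alpha \end{pmatrix}
    \begin{pmatrix} x \\ y \end{pmatrix}
    +
    \begin{pmatrix} t_x \\ t_y \end{pmatrix},
    \]

    with the steepest descent search adjusting these five numbers to minimise the misfit between the modelled and GOCO03S-derived vertical gravity gradients.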

  15. Damage assessment of composite plate structures with material and measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Chandrashekhar, M.; Ganguli, Ranjan

    2016-06-01

    Composite materials are very useful in structural engineering particularly in weight sensitive applications. Two different test models of the same structure made from composite materials can display very different dynamic behavior due to large uncertainties associated with composite material properties. Also, composite structures can suffer from pre-existing imperfections like delaminations, voids or cracks during fabrication. In this paper, we show that modeling and material uncertainties in composite structures can cause considerable problem in damage assessment. A recently developed C0 shear deformable locking free refined composite plate element is employed in the numerical simulations to alleviate modeling uncertainty. A qualitative estimate of the impact of modeling uncertainty on the damage detection problem is made. A robust Fuzzy Logic System (FLS) with sliding window defuzzifier is used for delamination damage detection in composite plate type structures. The FLS is designed using variations in modal frequencies due to randomness in material properties. Probabilistic analysis is performed using Monte Carlo Simulation (MCS) on a composite plate finite element model. It is demonstrated that the FLS shows excellent robustness in delamination detection at very high levels of randomness in input data.
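
    As a rough illustration of the Monte Carlo step, the sketch below propagates material randomness into the first natural frequency of a simply supported thin plate using the classical Kirchhoff plate formula; all dimensions and scatter levels are invented for the example, and the paper itself uses a refined shear-deformable finite element model rather than a closed-form plate.

```python
import numpy as np

# MCS sketch (invented values, not the paper's FE model): randomness in
# modulus and density -> scatter in the first plate natural frequency.
rng = np.random.default_rng(1)
n = 10_000
a = b = 0.5                                 # plate side lengths (m), assumed
h, nu = 0.002, 0.3                          # thickness (m), Poisson ratio
E = rng.normal(70e9, 0.05 * 70e9, n)        # 5% scatter in Young's modulus
rho = rng.normal(2700, 0.05 * 2700, n)      # 5% scatter in density
D = E * h**3 / (12 * (1 - nu**2))           # flexural rigidity
w11 = np.pi**2 * (1 / a**2 + 1 / b**2) * np.sqrt(D / (rho * h))  # rad/s
f11 = w11 / (2 * np.pi)
print(f"f11 = {f11.mean():.1f} +/- {f11.std():.1f} Hz")
```

    Statistics of this kind over the modal frequencies are the raw material from which the fuzzy sets of the FLS classifier are constructed.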

  16. The opportunistic transmission of wireless worms between mobile devices

    NASA Astrophysics Data System (ADS)

    Rhodes, C. J.; Nekovee, M.

    2008-12-01

    The ubiquity of portable wireless-enabled computing and communications devices has stimulated the emergence of malicious codes (wireless worms) that are capable of spreading between spatially proximal devices. The potential exists for worms to be opportunistically transmitted between devices as they move around, so human mobility patterns will have an impact on epidemic spread. The scenario we address in this paper is proximity attacks from fleetingly in-contact wireless devices with short communication range, such as Bluetooth-enabled smart phones. An individual-based model of mobile devices is introduced and the effect of population characteristics and device behaviour on the outbreak dynamics is investigated. The model uses straight-line motion to achieve population mixing, though it is recognised that this is a highly simplified representation of human mobility patterns. We show that the contact rate can be derived from the underlying mobility model and, through extensive simulation, that mass-action epidemic models remain applicable to worm spreading in the low density regime studied here. The model gives useful analytical expressions against which more refined simulations of worm spread can be developed and tested.
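
    The mass-action claim can be reproduced qualitatively with a very small agent-based sketch, written from scratch for this summary rather than taken from the paper; every parameter value is invented.

```python
import numpy as np

# Toy mobile-worm spread: N devices move in straight lines in a periodic
# unit square; an infected device infects any susceptible within range r.
# Under good mixing, prevalence follows the mass-action SI curve
# dI/dt = beta * I * (N - I) / N.
rng = np.random.default_rng(0)
N, r, v, dt, steps = 200, 0.01, 0.05, 0.1, 1000
pos = rng.random((N, 2))
ang = rng.uniform(0.0, 2.0 * np.pi, N)
vel = v * np.column_stack([np.cos(ang), np.sin(ang)])
infected = np.zeros(N, dtype=bool)
infected[0] = True
for _ in range(steps):
    pos = (pos + vel * dt) % 1.0
    src = pos[infected]
    d2 = ((pos[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    infected |= (d2 < r * r).any(axis=1)    # wrap-around distance neglected
print("final prevalence:", infected.mean())
```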

  17. Effective behavioral modeling and prediction even when few exemplars are available

    NASA Astrophysics Data System (ADS)

    Goan, Terrance; Kartha, Neelakantan; Kaneshiro, Ryan

    2006-05-01

    While great progress has been made in the lowest levels of data fusion, practical advances in behavior modeling and prediction remain elusive. The most critical limitation of existing approaches is their inability to support the required knowledge modeling and continuing refinement under realistic constraints (e.g., few historic exemplars, the lack of knowledge engineering support, and the need for rapid system deployment). This paper reports on our ongoing efforts to develop Propheteer, a system which will address these shortcomings through two primary techniques. First, with Propheteer we abandon the typical consensus-driven modeling approaches that involve infrequent group decision making sessions in favor of an approach that solicits asynchronous knowledge contributions (in the form of alternative future scenarios and indicators) without burdening the user with endless certainty or probability estimates. Second, we enable knowledge contributions by personnel beyond the typical core decision making group, thereby casting light on blind spots, mitigating human biases, and helping maintain the currency of the developed behavior models. We conclude with a discussion of the many lessons learned in the development of our prototype Propheteer system.

  18. An Applied Ecological Framework for Evaluating Infrastructure to Promote Walking and Cycling: The iConnect Study

    PubMed Central

    Bull, Fiona; Powell, Jane; Cooper, Ashley R.; Brand, Christian; Mutrie, Nanette; Preston, John; Rutter, Harry

    2011-01-01

    Improving infrastructure for walking and cycling is increasingly recommended as a means to promote physical activity, prevent obesity, and reduce traffic congestion and carbon emissions. However, limited evidence from intervention studies exists to support this approach. Drawing on classic epidemiological methods, psychological and ecological models of behavior change, and the principles of realistic evaluation, we have developed an applied ecological framework by which current theories about the behavioral effects of environmental change may be tested in heterogeneous and complex intervention settings. Our framework guides study design and analysis by specifying the most important data to be collected and relations to be tested to confirm or refute specific hypotheses and thereby refine the underlying theories. PMID:21233429

  19. Measurement of the electron structure function F2e at LEP energies

    NASA Astrophysics Data System (ADS)

    Abdallah, J.; Abreu, P.; Adam, W.; Adzic, P.; Albrecht, T.; Alemany-Fernandez, R.; Allmendinger, T.; Allport, P. P.; Amaldi, U.; Amapane, N.; Amato, S.; Anashkin, E.; Andreazza, A.; Andringa, S.; Anjos, N.; Antilogus, P.; Apel, W.-D.; Arnoud, Y.; Ask, S.; Asman, B.; Augustin, J. E.; Augustinus, A.; Baillon, P.; Ballestrero, A.; Bambade, P.; Barbier, R.; Bardin, D.; Barker, G. J.; Baroncelli, A.; Battaglia, M.; Baubillier, M.; Becks, K.-H.; Begalli, M.; Behrmann, A.; Belous, K.; Ben-Haim, E.; Benekos, N.; Benvenuti, A.; Berat, C.; Berggren, M.; Bertrand, D.; Besancon, M.; Besson, N.; Bloch, D.; Blom, M.; Bluj, M.; Bonesini, M.; Boonekamp, M.; Booth, P. S. L.; Borisov, G.; Botner, O.; Bouquet, B.; Bowcock, T. J. V.; Boyko, I.; Bracko, M.; Brenner, R.; Brodet, E.; Bruckman, P.; Brunet, J. M.; Buschbeck, B.; Buschmann, P.; Calvi, M.; Camporesi, T.; Canale, V.; Carena, F.; Castro, N.; Cavallo, F.; Chapkin, M.; Charpentier, Ph.; Checchia, P.; Chierici, R.; Chliapnikov, P.; Chudoba, J.; Chung, S. U.; Cieslik, K.; Collins, P.; Contri, R.; Cosme, G.; Cossutti, F.; Costa, M. J.; Crennell, D.; Cuevas, J.; D'Hondt, J.; da Silva, T.; da Silva, W.; Della Ricca, G.; de Angelis, A.; de Boer, W.; de Clercq, C.; de Lotto, B.; de Maria, N.; de Min, A.; de Paula, L.; di Ciaccio, L.; di Simone, A.; Doroba, K.; Drees, J.; Eigen, G.; Ekelof, T.; Ellert, M.; Elsing, M.; Espirito Santo, M. C.; Fanourakis, G.; Fassouliotis, D.; Feindt, M.; Fernandez, J.; Ferrer, A.; Ferro, F.; Flagmeyer, U.; Foeth, H.; Fokitis, E.; Fulda-Quenzer, F.; Fuster, J.; Gandelman, M.; Garcia, C.; Gavillet, Ph.; Gazis, E.; Gokieli, R.; Golob, B.; Gomez-Ceballos, G.; Gonçalves, P.; Graziani, E.; Grosdidier, G.; Grzelak, K.; Guy, J.; Haag, C.; Hallgren, A.; Hamacher, K.; Hamilton, K.; Haug, S.; Hauler, F.; Hedberg, V.; Hennecke, M.; Hoffman, J.; Holmgren, S.-O.; Holt, P. J.; Houlden, M. A.; Jackson, J. N.; Jarlskog, G.; Jarry, P.; Jeans, D.; Johansson, E. K.; Jonsson, P.; Joram, C.; Jungermann, L.; Kapusta, F.; Katsanevas, S.; Katsoufis, E.; Kernel, G.; Kersevan, B. P.; Kerzel, U.; King, B. T.; Kjaer, N. J.; Kluit, P.; Kokkinias, P.; Kourkoumelis, C.; Kouznetsov, O.; Krumstein, Z.; Kucharczyk, M.; Lamsa, J.; Leder, G.; Ledroit, F.; Leinonen, L.; Leitner, R.; Lemonne, J.; Lepeltier, V.; Lesiak, T.; Liebig, W.; Liko, D.; Lipniacka, A.; Lopes, J. H.; Lopez, J. M.; Loukas, D.; Lutz, P.; Lyons, L.; MacNaughton, J.; Malek, A.; Maltezos, S.; Mandl, F.; Marco, J.; Marco, R.; Marechal, B.; Margoni, M.; Marin, J.-C.; Mariotti, C.; Markou, A.; Martinez-Rivero, C.; Masik, J.; Mastroyiannopoulos, N.; Matorras, F.; Matteuzzi, C.; Mazzucato, F.; Mazzucato, M.; Mc Nulty, R.; Meroni, C.; Migliore, E.; Mitaroff, W.; Mjoernmark, U.; Moa, T.; Moch, M.; Moenig, K.; Monge, R.; Montenegro, J.; Moraes, D.; Moreno, S.; Morettini, P.; Mueller, U.; Muenich, K.; Mulders, M.; Mundim, L.; Murray, W.; Muryn, B.; Myatt, G.; Myklebust, T.; Nassiakou, M.; Navarria, F.; Nawrocki, K.; Nemecek, S.; Nicolaidou, R.; Nikolenko, M.; Oblakowska-Mucha, A.; Obraztsov, V.; Olshevski, A.; Onofre, A.; Orava, R.; Osterberg, K.; Ouraou, A.; Oyanguren, A.; Paganoni, M.; Paiano, S.; Palacios, J. P.; Palka, H.; Papadopoulou, Th. D.; Pape, L.; Parkes, C.; Parodi, F.; Parzefall, U.; Passeri, A.; Passon, O.; Peralta, L.; Perepelitsa, V.; Perrotta, A.; Petrolini, A.; Piedra, J.; Pieri, L.; Pierre, F.; Pimenta, M.; Piotto, E.; Podobnik, T.; Poireau, V.; Pol, M. 
E.; Polok, G.; Pozdniakov, V.; Pukhaeva, N.; Pullia, A.; Radojicic, D.; Rebecchi, P.; Rehn, J.; Reid, D.; Reinhardt, R.; Renton, P.; Richard, F.; Ridky, J.; Rivero, M.; Rodriguez, D.; Romero, A.; Ronchese, P.; Roudeau, P.; Rovelli, T.; Ruhlmann-Kleider, V.; Ryabtchikov, D.; Sadovsky, A.; Salmi, L.; Salt, J.; Sander, C.; Savoy-Navarro, A.; Schwickerath, U.; Sekulin, R.; Siebel, M.; Sisakian, A.; Slominski, W.; Smadja, G.; Smirnova, O.; Sokolov, A.; Sopczak, A.; Sosnowski, R.; Spassov, T.; Stanitzki, M.; Stocchi, A.; Strauss, J.; Stugu, B.; Szczekowski, M.; Szeptycka, M.; Szumlak, T.; Szwed, J.; Tabarelli, T.; Tegenfeldt, F.; Timmermans, J.; Tkatchev, L.; Tobin, M.; Todorovova, S.; Tomé, B.; Tonazzo, A.; Tortosa, P.; Travnicek, P.; Treille, D.; Tristram, G.; Trochimczuk, M.; Troncon, C.; Turluer, M.-L.; Tyapkin, I. A.; Tyapkin, P.; Tzamarias, S.; Uvarov, V.; Valenti, G.; van Dam, P.; van Eldik, J.; van Remortel, N.; van Vulpen, I.; Vegni, G.; Veloso, F.; Venus, W.; Verdier, P.; Verzi, V.; Vilanova, D.; Vitale, L.; Vrba, V.; Wahlen, H.; Washbrook, A. J.; Weiser, C.; Wicke, D.; Wickens, J.; Wilkinson, G.; Winter, M.; Witek, M.; Yushchenko, O.; Zalewska, A.; Zalewski, P.; Zavrtanik, D.; Zhuravlov, V.; Zimin, N. I.; Zintchenko, A.; Zupan, M.; Delphi Collaboration

    2014-10-01

    The hadronic part of the electron structure function F2e has been measured for the first time, using e+e- data collected by the DELPHI experiment at LEP, at centre-of-mass energies of √{ s} = 91.2- 209.5 GeV. The data analysis is simpler than that of the measurement of the photon structure function. The electron structure function F2e data are compared to predictions of phenomenological models based on the photon structure function. It is shown that the contribution of large target photon virtualities is significant. The data presented can serve as a cross-check of the photon structure function F2γ analyses and help in refining existing parameterisations.

  20. Measurement of the spatial dependence of temperature and gas and soot concentrations within large open hydrocarbon fuel fires

    NASA Technical Reports Server (NTRS)

    Johnson, H. T.; Linley, L. J.; Mansfield, J. A.

    1982-01-01

    A series of large-scale JP-4 fuel pool fire tests was conducted to refine existing mathematical models of large fires. Seven tests were conducted to make chemical concentration and temperature measurements in 7.5 and 15 meter-diameter pool fires. Measurements were made at heights of 0.7, 1.4, 2.9, 5.7, 11.4, and 21.3 meters above the fires. Temperatures were measured at up to 50 locations each second during the fires. Chemistry samples were taken at up to 23 locations within the fires and analyzed for combustion chemistry and soot concentration. Temperature and combustion chemistry profiles obtained during two 7.5 meter-diameter and two 15 meter-diameter fires are included.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun

    This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes only developed near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through large-scale circulation but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.

  2. Refined structure of dimeric diphtheria toxin at 2.0 A resolution.

    PubMed Central

    Bennett, M. J.; Choe, S.; Eisenberg, D.

    1994-01-01

    The refined structure of dimeric diphtheria toxin (DT) at 2.0 A resolution, based on 37,727 unique reflections (F > 1 sigma (F)), yields a final R factor of 19.5% with a model obeying standard geometry. The refined model consists of 523 amino acid residues, 1 molecule of the bound dinucleotide inhibitor adenylyl 3'-5' uridine 3' monophosphate (ApUp), and 405 well-ordered water molecules. The 2.0-A refined model reveals that the binding motif for ApUp includes residues in the catalytic and receptor-binding domains and is different from the Rossmann dinucleotide-binding fold. ApUp is bound in part by a long loop (residues 34-52) that crosses the active site. Several residues in the active site were previously identified as NAD-binding residues. Glu 148, previously identified as playing a catalytic role in ADP-ribosylation of elongation factor 2 by DT, is about 5 A from uracil in ApUp. The trigger for insertion of the transmembrane domain of DT into the endosomal membrane at low pH may involve 3 intradomain and 4 interdomain salt bridges that will be weakened at low pH by protonation of their acidic residues. The refined model also reveals that each molecule in dimeric DT has an "open" structure unlike most globular proteins, which we call an open monomer. Two open monomers interact by "domain swapping" to form a compact, globular dimeric DT structure. The possibility that the open monomer resembles a membrane insertion intermediate is discussed. PMID:7833807
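
    For readers outside crystallography, the R factor quoted here measures the average relative disagreement between observed and model structure-factor amplitudes,

    \[
    R = \frac{\sum_{hkl} \bigl|\, |F_{\mathrm{obs}}| - |F_{\mathrm{calc}}| \,\bigr|}{\sum_{hkl} |F_{\mathrm{obs}}|},
    \]

    so R = 19.5% over 37,727 reflections means the refined model reproduces the diffraction amplitudes to within about a fifth of their mean value.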

  3. Water versus DNA: New insights into proton track-structure modeling in radiobiology and radiotherapy

    DOE PAGES

    Champion, Christophe; Quinto, Michele A.; Monti, Juan M.; ...

    2015-09-25

    Water is a common surrogate of DNA for modelling the charged particle-induced ionizing processes in living tissue exposed to radiations. The present study aims at scrutinizing the validity of this approximation and then revealing new insights into proton-induced energy transfers by a comparative analysis between water and realistic biological medium. In this context, a self-consistent quantum mechanical modelling of the ionization and electron capture processes is reported within the continuum distorted wave-eikonal initial state framework for both isolated water molecules and DNA components impacted by proton beams. Their respective probability of occurrence-expressed in terms of total cross sections-as well as their energetic signature (potential and kinetic) are assessed in order to clearly emphasize the differences existing between realistic building blocks of living matter and the controverted water-medium surrogate. Thus the consequences in radiobiology and radiotherapy will be discussed in particular in view of treatment planning refinement aiming at better radiotherapy strategies.

  4. Water versus DNA: new insights into proton track-structure modelling in radiobiology and radiotherapy.

    PubMed

    Champion, C; Quinto, M A; Monti, J M; Galassi, M E; Weck, P F; Fojón, O A; Hanssen, J; Rivarola, R D

    2015-10-21

    Water is a common surrogate of DNA for modelling the charged particle-induced ionizing processes in living tissue exposed to radiations. The present study aims at scrutinizing the validity of this approximation and then revealing new insights into proton-induced energy transfers by a comparative analysis between water and realistic biological medium. In this context, a self-consistent quantum mechanical modelling of the ionization and electron capture processes is reported within the continuum distorted wave-eikonal initial state framework for both isolated water molecules and DNA components impacted by proton beams. Their respective probability of occurrence-expressed in terms of total cross sections-as well as their energetic signature (potential and kinetic) are assessed in order to clearly emphasize the differences existing between realistic building blocks of living matter and the controverted water-medium surrogate. Consequences in radiobiology and radiotherapy will be discussed in particular in view of treatment planning refinement aiming at better radiotherapy strategies.

  5. On the global dynamics of a chronic myelogenous leukemia model

    NASA Astrophysics Data System (ADS)

    Krishchenko, Alexander P.; Starkov, Konstantin E.

    2016-04-01

    In this paper we analyze some features of the global dynamics of a three-dimensional chronic myelogenous leukemia (CML) model with the help of stability analysis and the localization method of compact invariant sets. The behavior of the CML model is defined by the concentrations of three cell populations circulating in the blood: naive T cells, effector T cells specific to CML and CML cancer cells. We prove that the dynamics of the CML system around the tumor-free equilibrium point is unstable. Further, we compute ultimate upper bounds for all three cell populations and provide the existence conditions of the positively invariant polytope. One ultimate lower bound is obtained as well. Moreover, we describe the iterative localization procedure for refining localization bounds; this procedure is based on the cyclic use of localizing functions. Applying this procedure we obtain conditions under which the internal tumor equilibrium point is globally asymptotically stable. Our theoretical analyses are supported by results of the numerical simulation.

  6. Nursing Education Transformation: Promising Practices in Academic Progression.

    PubMed

    Gorski, Mary Sue; Farmer, Patricia D; Sroczynski, Maureen; Close, Liz; Wortock, Jean M

    2015-09-01

    Health care has changed over the past decade; yet, nursing education has not kept pace with social and scientific advances. The Institute of Medicine report, The Future of Nursing: Leading Change, Advancing Health, called for a more highly educated nursing workforce and an improved nursing education system. Since the release of that report, the Future of Nursing: Campaign for Action, supported by the Robert Wood Johnson Foundation, AARP, and the AARP Foundation, has worked with nursing education leaders to better understand existing and evolving nursing education structures. Through a consensus-building process, four overarching promising practice models, with an emphasis on seamless academic progression, emerged to advance the goals of education transformation. Key nurse educators and other stakeholders refined those models through a series of meetings, collaborative partnerships, and focused projects that were held across the United States. This article summarizes that process and provides a description of the models, challenges, common themes, recommendations, and progress to date. Copyright 2015, SLACK Incorporated.

  7. An application of miniscale experiments on Earth to refine microgravity analysis of adiabatic multiphase flow in space

    NASA Technical Reports Server (NTRS)

    Rothe, Paul H.; Martin, Christine; Downing, Julie

    1994-01-01

    Adiabatic two-phase flow is of interest to the design of multiphase fluid and thermal management systems for spacecraft. This paper presents original data and unifies existing data for capillary tubes as a step toward assessing existing multiphase flow analysis and engineering software. Comparisons of theory with these data once again confirm the broad accuracy of the theory. Due to the simplicity and low cost of the capillary tube experiments, which were performed on Earth, we were able to closely examine for the first time a flow situation that had not previously been examined appreciably by aircraft tests. This is the situation of a slug flow at high quality, near transition to annular flow. Our comparison of software calculations with these data revealed overprediction of pipeline pressure drop by up to a factor of three. In turn, this finding motivated a reexamination of the existing theory, and then development of a new analytical model that is in far better agreement with the data. This sequence of discovery illustrates the role of inexpensive miniscale modeling on Earth to anticipate microgravity behavior in space and to complement and help define needs for aircraft tests.

  8. Knowledge Management Systems: Linking Contribution, Refinement and Use

    ERIC Educational Resources Information Center

    Chung, Ting-ting

    2009-01-01

    Electronic knowledge repositories represent one of the fundamental tools for knowledge management (KM) initiatives. Existing research, however, has largely focused on supply-side driven research questions, such as employee motivation to contribute knowledge to a repository. This research turns attention to the dynamic relationship between the…

  9. NATIONAL PREPAREDNESS: Integrating New and Existing Technology and Information Sharing into an Effective Homeland Security Strategy

    DTIC Science & Technology

    2002-06-07

    Continue to Develop and Refine Emerging Technology • Some of the emerging biometric devices, such as iris scans, facial recognition systems, and speaker verification systems.

  10. 40 CFR 63.9882 - What parts of my plant does this subpart cover?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... CATEGORIES (CONTINUED) National Emissions Standards for Hazardous Air Pollutants for Primary Magnesium... affected sources are each new and existing primary magnesium refining facility. (b) This subpart covers emissions from each spray dryer stack, magnesium chloride storage bins scrubber stack, melt/reactor system...

  11. 40 CFR 63.9882 - What parts of my plant does this subpart cover?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... CATEGORIES (CONTINUED) National Emissions Standards for Hazardous Air Pollutants for Primary Magnesium... affected sources are each new and existing primary magnesium refining facility. (b) This subpart covers emissions from each spray dryer stack, magnesium chloride storage bins scrubber stack, melt/reactor system...

  12. 40 CFR 63.9882 - What parts of my plant does this subpart cover?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... CATEGORIES (CONTINUED) National Emissions Standards for Hazardous Air Pollutants for Primary Magnesium... affected sources are each new and existing primary magnesium refining facility. (b) This subpart covers emissions from each spray dryer stack, magnesium chloride storage bins scrubber stack, melt/reactor system...

  13. 40 CFR 63.9882 - What parts of my plant does this subpart cover?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... CATEGORIES (CONTINUED) National Emissions Standards for Hazardous Air Pollutants for Primary Magnesium... affected sources are each new and existing primary magnesium refining facility. (b) This subpart covers emissions from each spray dryer stack, magnesium chloride storage bins scrubber stack, melt/reactor system...

  14. 40 CFR 63.9882 - What parts of my plant does this subpart cover?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... CATEGORIES (CONTINUED) National Emissions Standards for Hazardous Air Pollutants for Primary Magnesium... affected sources are each new and existing primary magnesium refining facility. (b) This subpart covers emissions from each spray dryer stack, magnesium chloride storage bins scrubber stack, melt/reactor system...

  15. Database of emission lines

    NASA Astrophysics Data System (ADS)

    Binette, L.; Ortiz, P.; Joguet, B.; Rola, C.

    1998-11-01

    A widely accessible data bank (available through Netscape), consisting of all (or most) of the emission lines reported in the literature, is being built. It will comprise objects as diverse as HII regions, PN, AGN, HHO. One of its uses will be to define/refine existing diagnostic emission line diagrams.

  16. Academic Provenance: Mapping Geoscience Students' Academic Pathways to their Career Trajectories

    NASA Astrophysics Data System (ADS)

    Houlton, H. R.; Gonzales, L. M.; Keane, C. M.

    2011-12-01

    Targeted recruitment and retention efforts for the geosciences have become increasingly important with the growing concerns about program visibility on campuses, and given that geoscience degree production remains low relative to the demand for new geoscience graduates. Furthermore, understanding the career trajectories of geoscience degree recipients is essential for proper occupational placement. A theoretical framework was developed by Houlton (2010) to focus recruitment and retention efforts. This "pathway model" explicitly maps undergraduate students' geoscience career trajectories, which can be used to refine existing methods for recruiting students into particular occupations. Houlton's (2010) framework identified three main student population groups: Natives, Immigrants or Refugees. Each student followed a unique pathway, which consisted of six pathway steps. Each pathway step comprised critical incidents that influenced students' overall career trajectories. An aggregate analysis of students' pathways (Academic Provenance Analysis) showed that different populations' pathways exhibited a deviation in career direction: Natives indicated intentions to pursue industry or government sectors, while Immigrants intended to pursue academic or research-based careers. We expanded on Houlton's (2010) research by conducting a follow-up study to determine if the original participants followed the career trajectories they initially indicated in the 2010 study. A voluntary, 5-question, short-answer survey was administered via email. We investigated students' current pathway steps, pathway deviations, students' goals for the near future and their ultimate career ambitions. This information may help refine Houlton's (2010) "pathway model" and may aid geoscience employers in recruiting the new generation of professionals for their respective sectors.

  17. Chiral pathways in DNA dinucleotides using gradient optimized refinement along metastable borders

    NASA Astrophysics Data System (ADS)

    Romano, Pablo; Guenza, Marina

    We present a study of DNA breathing fluctuations using Markov state models (MSM) with our novel refinement procedure. MSM have become a favored method of building kinetic models; however, their accuracy has always depended on using a significant number of microstates, making the method costly. We present a method which optimizes macrostates by refining borders with respect to the gradient along the free energy surface. As the separation between macrostates contains the highest discretization errors, this method corrects for errors produced by limited microstate sampling. Using our refined MSM methods, we investigate DNA breathing fluctuations, thermally induced conformational changes in native B-form DNA. We ran several microsecond MD simulations of DNA dinucleotides of varying sequences, to include sequence and polarity effects, and analyzed them with our refined MSM to investigate conformational pathways inherent in the unstacking of DNA bases. Our kinetic analysis has shown preferential chirality in unstacking pathways that may be critical in how proteins interact with single-stranded regions of DNA. These breathing dynamics can help elucidate the connection between conformational changes and key mechanisms within protein-DNA recognition. NSF Chemistry Division (Theoretical Chemistry), the Division of Physics (Condensed Matter: Material Theory), XSEDE.
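
    To make the MSM machinery concrete, the sketch below performs the generic estimation step, counting transitions at a lag time and extracting implied timescales; it is not the gradient-based border refinement itself, which would additionally reassign microstates along the free-energy gradient at macrostate borders and re-estimate until the timescales converge.

```python
import numpy as np

# Generic MSM estimation from a state-labelled trajectory (illustration
# only; not the border-refinement procedure described in the abstract).
def implied_timescales(labels, n_states, tau):
    C = np.zeros((n_states, n_states))
    for t in range(len(labels) - tau):
        C[labels[t], labels[t + tau]] += 1.0  # transition counts at lag tau
    C = 0.5 * (C + C.T)                       # crude detailed-balance estimate
    T = C / C.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
    lam = np.sort(np.linalg.eigvals(T).real)[::-1]
    return -tau / np.log(np.clip(lam[1:], 1e-12, 1.0 - 1e-12))

# Toy two-well trajectory with rare hops (p = 0.01 per step)
rng = np.random.default_rng(2)
traj = (np.cumsum(rng.random(50_000) < 0.01) % 2).astype(int)
print(implied_timescales(traj, n_states=2, tau=5))  # ~50 steps expected
```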

  18. Phase field models for heterogeneous nucleation: Application to inoculation in alpha-solidifying Ti-Al-B alloys

    NASA Astrophysics Data System (ADS)

    Apel, M.; Eiken, J.; Hecht, U.

    2014-02-01

    This paper aims at briefly reviewing phase field models applied to the simulation of heterogeneous nucleation and subsequent growth, with special emphasis on grain refinement by inoculation. The spherical cap and free growth model (e.g. A.L. Greer, et al., Acta Mater. 48, 2823 (2000)) has proven its applicability for different metallic systems, e.g. Al or Mg based alloys, by computing the grain refinement effect achieved by inoculation of the melt with inert seeding particles. However, recent experiments with peritectic Ti-Al-B alloys revealed that the grain refinement by TiB2 is less effective than predicted by the model. Phase field simulations can be applied to validate the approximations of the spherical cap and free growth model, e.g. by computing explicitly the latent heat release associated with different nucleation and growth scenarios. Here, simulation results for point-shaped nucleation, as well as for partially and completely wetted plate-like seed particles will be discussed with respect to recalescence and impact on grain refinement. It will be shown that particularly for large seeding particles (up to 30 μm), the free growth morphology clearly deviates from the assumed spherical cap and the initial growth - until the free growth barrier is reached - significantly contributes to the latent heat release and determines the recalescence temperature.
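
    The free growth criterion of the cited model (Greer et al., 2000) is usually stated as follows: a seed particle of diameter d initiates a grain only once the melt undercooling exceeds

    \[
    \Delta T_{\mathrm{fg}} = \frac{4\sigma}{\Delta S_V \, d},
    \]

    where \sigma is the solid-liquid interfacial energy and \Delta S_V the entropy of fusion per unit volume. Large particles thus have small free-growth barriers, which is exactly why the large seed particles simulated here (up to 30 μm), where the growth morphology visibly departs from a spherical cap, are the critical test of the model's assumptions.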

  19. Sociable Interfaces

    DTIC Science & Technology

    2005-01-01

    Interface Compatibility); the tool is written in Ocaml [10], and the symbolic algorithms for interface compatibility and refinement are built on top...automata for a fire detection and reporting system. be encoded in the input language of the tool TIC. The refinement of sociable interfaces is discussed...are closely related to the I/O Automata Language (IOA) of [11]. Interface models are games between Input and Output, and in the models, it is es

  20. [Development of a program theory as a basis for the evaluation of a dementia special care unit].

    PubMed

    Adlbrecht, Laura; Bartholomeyczik, Sabine; Mayer, Hanna

    2018-06-01

    Background: An existing dementia special care unit is to be evaluated. In order to build a sound foundation for the evaluation, a deep theoretical understanding of the implemented intervention is needed, which has not yet been made explicit. One possibility to achieve this is the development of a program theory. Aim: The aim is to present a method to develop a program theory for the existing living and care concept of the dementia special care unit, which is used in a larger project for a theory-driven evaluation of the concept. Method: The evaluation is embedded in the framework of van Belle et al. (2010) and an action model and a change model (Chen, 2015) are created. For the specification of the change model, contribution analysis (Mayne, 2011) is applied. Data were collected in workshops with the developers and the nurses of the dementia special care unit, and a literature search concerning interventions and outcomes was carried out. The results were synthesized in a consensus workshop. Results: The action model describes the interventions of the dementia special care unit, the implementers, the organization and the context. The change model comprises the mechanisms through which the interventions achieve outcomes. Conclusions: The results of the program theory can be employed to choose data collection methods and instruments for the evaluation. On the basis of the results of the evaluation, the program theory can be refined and adapted.

  1. Homology Modeling of Dopamine D2 and D3 Receptors: Molecular Dynamics Refinement and Docking Evaluation

    PubMed Central

    Platania, Chiara Bianca Maria; Salomone, Salvatore; Leggio, Gian Marco; Drago, Filippo; Bucolo, Claudio

    2012-01-01

    Dopamine (DA) receptors, a class of G-protein coupled receptors (GPCRs), have been targeted for drug development for the treatment of neurological, psychiatric and ocular disorders. The lack of structural information about GPCRs and their ligand complexes has prompted the development of homology models of these proteins aimed at structure-based drug design. The crystal structure of the human dopamine D3 (hD3) receptor has recently been solved. Based on the hD3 receptor crystal structure, we generated dopamine D2 and D3 receptor models and refined them with a molecular dynamics (MD) protocol. Refined structures, obtained from the MD simulations in a membrane environment, were subsequently used in molecular docking studies in order to investigate potential sites of interaction. The structures of the hD3 and hD2L receptors were differentiated by means of MD simulations, and D3-selective ligands were discriminated, in terms of binding energy, by docking calculations. Robust correlation of computed and experimental Ki was obtained for hD3 and hD2L receptor ligands. In conclusion, the present computational approach seems suitable to build and refine structure models of homologous dopamine receptors that may be of value for structure-based drug discovery of selective dopaminergic ligands. PMID:22970199
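
    The reported correlation between computed and experimental Ki rests on the standard thermodynamic link \Delta G = RT \ln K_i between binding free energy and the inhibition constant. A hypothetical post-processing fragment (all numbers invented for illustration):

```python
import numpy as np

# Convert docking scores (kcal/mol) into predicted Ki and correlate with
# experiment on a log scale; the arrays below are invented placeholders.
RT = 0.593                                   # kcal/mol near 298 K
dG = np.array([-9.8, -11.2, -8.5, -10.1])    # computed binding energies
ki_exp = np.array([45.0, 4.2, 310.0, 21.0])  # experimental Ki (nM)
ki_pred = np.exp(dG / RT) * 1e9              # Ki in nM (1 M standard state)
r = np.corrcoef(np.log10(ki_pred), np.log10(ki_exp))[0, 1]
print(f"log-scale correlation r = {r:.2f}")
```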

  2. Simulation of the shallow groundwater-flow system in the Forest County Potawatomi Community, Forest County, Wisconsin

    USGS Publications Warehouse

    Fienen, Michael N.; Saad, David A.; Juckem, Paul F.

    2013-01-01

    The shallow groundwater system in the Forest County Potawatomi Community, Forest County, Wisconsin, was simulated by expanding and recalibrating a previously calibrated regional model. The existing model was updated using newly collected water-level measurements, inclusion of surface-water features beyond the previous near-field boundary, and refinements to surface-water features. The updated model then was used to calculate the area contributing recharge for seven existing and three proposed pumping locations on lands of the Forest County Potawatomi Community. The existing wells were the subject of a 2004 source-water evaluation in which areas contributing recharge were calculated using the fixed-radius method. The motivation for the present (2012) project was to improve the level of detail of areas contributing recharge for the existing wells and to provide similar analysis for the proposed wells. The delineated 5- and 10-year areas contributing recharge extend outward from the pumping locations over the land surface that supplies recharge to the wells. Steady-state pumping was simulated for two scenarios: a base-pumping scenario using pumping rates that reflect what the Community currently (2012) pumps (or plans to in the case of proposed wells), and a high-pumping scenario in which the rate was set to the maximum expected from wells installed in this area, according to the Forest County Potawatomi Community Natural Resources Department. In general, the 10-year areas contributing recharge did not intersect surface-water bodies. The 5- and 10-year areas contributing recharge simulated at the maximum pumping rate at Bug Lake Road may intersect Bug Lake. At the casino near the Town of Carter, Wisconsin, the 10-year areas contributing recharge intersect infiltration ponds. At the Devils Lake and Lois Crow Drive wells, areas contributing recharge are near cultural features, including residences.

  3. Enhancement of COPD biological networks using a web-based collaboration interface

    PubMed Central

    Boue, Stephanie; Fields, Brett; Hoeng, Julia; Park, Jennifer; Peitsch, Manuel C.; Schlage, Walter K.; Talikka, Marja; Binenbaum, Ilona; Bondarenko, Vladimir; Bulgakov, Oleg V.; Cherkasova, Vera; Diaz-Diaz, Norberto; Fedorova, Larisa; Guryanova, Svetlana; Guzova, Julia; Igorevna Koroleva, Galina; Kozhemyakina, Elena; Kumar, Rahul; Lavid, Noa; Lu, Qingxian; Menon, Swapna; Ouliel, Yael; Peterson, Samantha C.; Prokhorov, Alexander; Sanders, Edward; Schrier, Sarah; Schwaitzer Neta, Golan; Shvydchenko, Irina; Tallam, Aravind; Villa-Fombuena, Gema; Wu, John; Yudkevich, Ilya; Zelikman, Mariya

    2015-01-01

    The construction and application of biological network models is an approach that offers a holistic way to understand biological processes involved in disease. Chronic obstructive pulmonary disease (COPD) is a progressive inflammatory disease of the airways for which therapeutic options currently are limited after diagnosis, even in its earliest stage. COPD network models are important tools to better understand the biological components and processes underlying initial disease development. With the increasing amounts of literature that are now available, crowdsourcing approaches offer new forms of collaboration for researchers to review biological findings, which can be applied to the construction and verification of complex biological networks. We report the construction of 50 biological network models relevant to lung biology and early COPD using an integrative systems biology and collaborative crowd-verification approach. By combining traditional literature curation with a data-driven approach that predicts molecular activities from transcriptomics data, we constructed an initial COPD network model set based on a previously published non-diseased lung-relevant model set. The crowd was given the opportunity to enhance and refine the networks on a website ( https://bionet.sbvimprover.com/) and to add mechanistic detail, as well as critically review existing evidence and evidence added by other users, so as to enhance the accuracy of the biological representation of the processes captured in the networks. Finally, scientists and experts in the field discussed and refined the networks during an in-person jamboree meeting. Here, we describe examples of the changes made to three of these networks: Neutrophil Signaling, Macrophage Signaling, and Th1-Th2 Signaling. We describe an innovative approach to biological network construction that combines literature and data mining and a crowdsourcing approach to generate a comprehensive set of COPD-relevant models that can be used to help understand the mechanisms related to lung pathobiology. Registered users of the website can freely browse and download the networks. PMID:25767696

  4. Enhancement of COPD biological networks using a web-based collaboration interface.

    PubMed

    Boue, Stephanie; Fields, Brett; Hoeng, Julia; Park, Jennifer; Peitsch, Manuel C; Schlage, Walter K; Talikka, Marja; Binenbaum, Ilona; Bondarenko, Vladimir; Bulgakov, Oleg V; Cherkasova, Vera; Diaz-Diaz, Norberto; Fedorova, Larisa; Guryanova, Svetlana; Guzova, Julia; Igorevna Koroleva, Galina; Kozhemyakina, Elena; Kumar, Rahul; Lavid, Noa; Lu, Qingxian; Menon, Swapna; Ouliel, Yael; Peterson, Samantha C; Prokhorov, Alexander; Sanders, Edward; Schrier, Sarah; Schwaitzer Neta, Golan; Shvydchenko, Irina; Tallam, Aravind; Villa-Fombuena, Gema; Wu, John; Yudkevich, Ilya; Zelikman, Mariya

    2015-01-01

    The construction and application of biological network models is an approach that offers a holistic way to understand biological processes involved in disease. Chronic obstructive pulmonary disease (COPD) is a progressive inflammatory disease of the airways for which therapeutic options currently are limited after diagnosis, even in its earliest stage. COPD network models are important tools to better understand the biological components and processes underlying initial disease development. With the increasing amounts of literature that are now available, crowdsourcing approaches offer new forms of collaboration for researchers to review biological findings, which can be applied to the construction and verification of complex biological networks. We report the construction of 50 biological network models relevant to lung biology and early COPD using an integrative systems biology and collaborative crowd-verification approach. By combining traditional literature curation with a data-driven approach that predicts molecular activities from transcriptomics data, we constructed an initial COPD network model set based on a previously published non-diseased lung-relevant model set. The crowd was given the opportunity to enhance and refine the networks on a website ( https://bionet.sbvimprover.com/) and to add mechanistic detail, as well as critically review existing evidence and evidence added by other users, so as to enhance the accuracy of the biological representation of the processes captured in the networks. Finally, scientists and experts in the field discussed and refined the networks during an in-person jamboree meeting. Here, we describe examples of the changes made to three of these networks: Neutrophil Signaling, Macrophage Signaling, and Th1-Th2 Signaling. We describe an innovative approach to biological network construction that combines literature and data mining and a crowdsourcing approach to generate a comprehensive set of COPD-relevant models that can be used to help understand the mechanisms related to lung pathobiology. Registered users of the website can freely browse and download the networks.

  5. GRACE gravity field recovery using refined acceleration approach

    NASA Astrophysics Data System (ADS)

    Li, Zhao; van Dam, Tonie; Weigelt, Matthias

    2017-04-01

    Since 2002, the GRACE mission has yielded monthly gravity field solutions of such high quality that we have been able to observe many changes in the Earth's mass system. Based on GRACE L1B observations, a number of official monthly gravity field models have been developed and published using different methods; e.g. the CSR RL05, JPL RL05, and GFZ RL05 models are computed by a dynamic approach, the ITSG and Tongji GRACE models are generated using what is known as the short-arc approach, the AIUB models are computed using the celestial mechanics approach, and the DMT-1 model is calculated by means of an acceleration approach. Different from the DMT-1 model, which links the gravity field parameters directly to the bias-corrected range measurements at three adjacent epochs, in this work we present an alternative acceleration approach which connects range accelerations and velocity differences to the gradient of the gravitational potential. Because the GPS-derived velocity differences are provided at a lower precision, we must reduce this approach to residual quantities using an a priori gravity field, which allows us to subsequently neglect the residual velocity difference term. We find that this assumption causes a problem in the low-degree gravity field coefficients, particularly for degree 2 and also from degree 16 to 26. To solve this problem, we present a new way of handling the residual velocity difference term, namely to treat it as an unknown but estimable quantity, as it depends on the unknown residual gravity field parameters and initial conditions. In other words, we regard the kinematic orbit position vectors as pseudo-observations, and the corrections of the orbits are estimated together with both the geopotential coefficients and the accelerometer scale/bias by using a weighted least-squares adjustment. The new approach is therefore a refinement of the existing approach but offers a better approximation to reality. This result is especially important in view of the upcoming GRACE Follow-On mission, which will be equipped with a laser ranging instrument offering higher precision. Our validation results show that this refined acceleration approach can produce monthly GRACE gravity solutions at the same level of precision as the other approaches.
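
    The kinematic identity on which acceleration approaches of this kind rest (written here in generic notation, not copied from the paper) shows exactly where the velocity-difference term enters:

    \[
    \ddot{\rho} = \mathbf{e}_{12}\cdot\ddot{\mathbf{x}}_{12}
    + \frac{1}{\rho}\left( \dot{\mathbf{x}}_{12}\cdot\dot{\mathbf{x}}_{12} - \dot{\rho}^{2} \right),
    \]

    where \rho is the inter-satellite range, \mathbf{e}_{12} the line-of-sight unit vector, and \mathbf{x}_{12}, \dot{\mathbf{x}}_{12}, \ddot{\mathbf{x}}_{12} the relative position, velocity and acceleration of the pair. The first term carries the gravitational signal; the second is built from the GPS-derived velocity differences, and whether its residual part is neglected or co-estimated is precisely what separates the existing and refined approaches.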

  6. The development and refinement of models of less established and more established high school environmental service-learning programs in Florida

    NASA Astrophysics Data System (ADS)

    Malikova, Yuliya

    2005-07-01

    Environmental Service-Learning (Env. S-L) appears to show great promise and practitioners tout its benefits, although there have been fewer than ten studies in this emerging area of environmental education. The overall study purpose was to describe the nature, status, and effects of Grades 9-16 Env. S-L programs in Florida, and develop descriptive models of those programs. The purpose of Phase I was to describe these programs and associated partnerships. Based on Phase I results, the purpose of Phase II was to develop, compare, and refine models for less and more established high school programs. This study involved: (1) defining the population of Florida 9-16 Env. S-L programs (Phase I); (2) developing and administering program surveys (Phase I, quantitative); (3) analyzing Phase I survey data and identification of options for Phase II (Intermediate stage); (4) designing and implementing methodology for further data collection (Phase II, qualitative); (5) refining and finalizing program models (Phase II, descriptive); and (6) summarizing program data, changes, and comparisons. This study revealed that Env. S-L has been practiced in a variety of ways at the high school and college levels in Florida. There, the number of high school programs and of participating teachers and students has been growing. Among others, major program features include block scheduling, indirect S-L activities, external funding sources, and formal and ongoing community partnerships. Findings based on self-reported program assessment results indicate that S-L has had positive effects on students across Furco's S-L outcome domains (i.e., academic achievement/success, school participation/behavior, career development, personal development, interpersonal development, ethical/moral development, and development of civic responsibility). Differences existed between less established and more established Env. S-L programs. Less established programs had relatively few participating teachers, courses, projects, community partners, and service sites. Most S-L activities were offered as electives. Lead teachers used reflection to integrate academic learning with service experience to a moderate extent. More established programs had a larger number of participating teachers, courses, projects, community partners, partner representatives, and service sites. Students were consistently engaged in multiple forms of reflection. These teachers also practiced S-L before their exposure to the wider field of S-L.

  7. The active site of hen egg-white lysozyme: flexibility and chemical bonding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Held, Jeanette, E-mail: jeanette.netzel@uni-bayreuth.de; Smaalen, Sander van

    Chemical bonding at the active site of lysozyme is analyzed on the basis of a multipole model employing transferable multipole parameters from a database. Large B factors at low temperatures reflect frozen-in disorder and therefore prevent a meaningful free refinement of multipole parameters. Chemical bonding at the active site of hen egg-white lysozyme (HEWL) is analyzed on the basis of Bader’s quantum theory of atoms in molecules [QTAIM; Bader (1994), Atoms in Molecules: A Quantum Theory. Oxford University Press] applied to electron-density maps derived from a multipole model. The observation is made that the atomic displacement parameters (ADPs) of HEWL at a temperature of 100 K are larger than ADPs in crystals of small biological molecules at 298 K. This feature shows that the ADPs in the cold crystals of HEWL reflect frozen-in disorder rather than thermal vibrations of the atoms. Generalizing the results of multipole studies on small-molecule crystals, the important consequence for electron-density analysis of protein crystals is that multipole parameters cannot be independently varied in a meaningful way in structure refinements. Instead, a multipole model for HEWL has been developed by refinement of atomic coordinates and ADPs against the X-ray diffraction data of Wang and coworkers [Wang et al. (2007), Acta Cryst. D63, 1254–1268], while multipole parameters were fixed to the values of transferable multipole parameters from the ELMAM2 database [Domagala et al. (2012), Acta Cryst. A68, 337–351]. Static and dynamic electron densities based on this multipole model are presented. Analysis of their topological properties according to the QTAIM shows that the covalent bonds possess similar properties to the covalent bonds of small molecules. Hydrogen bonds of intermediate strength are identified for the Glu35 and Asp52 residues, which are considered to be essential parts of the active site of HEWL. Furthermore, a series of weak C—H⋯O hydrogen bonds are identified by means of the existence of bond critical points (BCPs) in the multipole electron density. It is proposed that these weak interactions might be important for defining the tertiary structure and activity of HEWL. The deprotonated state of Glu35 prevents a distinction between the Phillips and Koshland mechanisms.

  8. Protein homology model refinement by large-scale energy optimization.

    PubMed

    Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David

    2018-03-20

    Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.
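
    The search difficulty described here is easy to reproduce on a toy landscape. The sketch below (our illustration, not the Rosetta protocol) runs steepest descent on a one-dimensional "energy" with a narrow true minimum and a broad false one; starting points outside the narrow funnel converge to the false minimum, degrading rather than improving the model:

    def energy(x):
        # narrow true minimum at x=0, broad false minimum at x=2
        return min(25.0 * x * x - 1.0, (x - 2.0)**2)

    def grad(x, h=1e-5):
        return (energy(x + h) - energy(x - h)) / (2 * h)   # numerical derivative

    def descend(x, lr=0.01, steps=2000):
        for _ in range(steps):
            x -= lr * grad(x)
        return x

    print(descend(0.3))   # inside the funnel: lands near the true minimum at 0.0
    print(descend(0.8))   # outside the funnel: slides into the false minimum near 2.0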

  9. DPW-VI Results Using FUN3D with Focus on k-kL-MEAH2015 (k-kL) Turbulence Model

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, K. S.; Carlson, Jan-Renee; Rumsey, Christopher L.; Lee-Rausch, Elizabeth M.; Park, Michael A.

    2017-01-01

    The Common Research Model wing-body configuration is investigated with the k-kL-MEAH2015 turbulence model implemented in FUN3D. This includes results presented at the Sixth Drag Prediction Workshop and additional results generated after the workshop with a nonlinear Quadratic Constitutive Relation (QCR) variant of the same turbulence model. The workshop-provided grids are used, and a uniform grid refinement study is performed at the design condition. A large variation between results with and without a reconstruction limiter is exhibited on "medium" grid sizes, indicating that the medium grid size is too coarse for drawing conclusions in comparison with experiment. This variation is reduced with grid refinement. At a fixed angle of attack near design conditions, the QCR variant yielded decreased lift and drag compared with the linear eddy-viscosity model by an amount that was approximately constant with grid refinement. The k-kL-MEAH2015 turbulence model produced wing-root junction flow behavior consistent with wind-tunnel observations.

  10. Gradient-based optimization with B-splines on sparse grids for solving forward-dynamics simulations of three-dimensional, continuum-mechanical musculoskeletal system models.

    PubMed

    Valentin, J; Sprenger, M; Pflüger, D; Röhrle, O

    2018-05-01

    Investigating the interplay between muscular activity and motion is the basis for improving our understanding of healthy and diseased musculoskeletal systems. Computational models are used to analyze such systems. Despite some severe modeling assumptions, almost all existing musculoskeletal system simulations appeal to multibody simulation frameworks. Although continuum-mechanical musculoskeletal system models can compensate for some of these limitations, they are rarely considered because of their computational complexity and cost. The proposed framework is the first activation-driven musculoskeletal system model in which the exerted skeletal muscle forces are computed using 3-dimensional, continuum-mechanical skeletal muscle models and in which muscle activations are determined based on a constraint optimization problem. Numerical feasibility is achieved by computing sparse-grid surrogates with hierarchical B-splines, and adaptive sparse-grid refinement further reduces the computational effort. The choice of B-splines allows the use of all existing gradient-based optimization techniques without further numerical approximation. This paper demonstrates that the resulting surrogates have low relative errors (less than 0.76%) and can be used within forward simulations that are subject to constraint optimization. To demonstrate this, we set up several different test scenarios in which an upper limb model consisting of the elbow joint, the biceps and triceps brachii, and an external load is subjected to different optimization criteria. Even though this novel method has only been demonstrated for a 2-muscle system, it can easily be extended to musculoskeletal systems with 3 or more muscles. Copyright © 2018 John Wiley & Sons, Ltd.
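
    The surrogate idea lends itself to a compact sketch. Below, a hypothetical expensive muscle-force evaluation is replaced by a smooth spline fit on a one-dimensional dense grid (standing in for the paper's hierarchical sparse grids); the spline's analytic derivative then drives a gradient-based optimizer without further numerical approximation:

    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.optimize import minimize

    def expensive_force(a):                      # stand-in for a continuum-mechanical solve
        return 100.0 * a / (0.3 + a)

    grid = np.linspace(0.0, 1.0, 21)             # sampled activation levels
    surrogate = CubicSpline(grid, expensive_force(grid))

    target = 60.0                                # required joint force (illustrative)
    objective = lambda a: (surrogate(a[0]) - target)**2
    gradient = lambda a: np.array([2.0 * (surrogate(a[0]) - target) * surrogate(a[0], 1)])

    res = minimize(objective, x0=[0.5], jac=gradient, bounds=[(0.0, 1.0)])
    print(res.x, surrogate(res.x[0]))            # activation ~0.45 reproduces the target force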

  11. Understanding the Patient Perspective of Seizure Severity in Epilepsy: Development of a Conceptual Model.

    PubMed

    Borghs, Simon; Tomaszewski, Erin L; Halling, Katarina; de la Loge, Christine

    2016-10-01

    For patients with uncontrolled epilepsy, the severity and postictal sequelae of seizures might be more impactful than their frequency. Seizure severity is often assessed using patient-reported outcome (PRO) instruments; however, evidence of content validity for existing instruments is lacking. Our aim was to understand the real-life experiences of patients with uncontrolled epilepsy. A preliminary conceptual model was developed. The model was refined through (1) a targeted literature review of qualitative research on seizure severity; (2) interviews with four clinical epilepsy experts to evaluate identified concepts; and (3) qualitative interviews with patients with uncontrolled epilepsy, gathering descriptions of symptoms and impacts of epilepsy, focusing on how patients experience and describe "seizure severity." Findings were summarized in a final conceptual model of seizure severity in epilepsy. Twenty-five patients (12 who experienced primary generalized tonic-clonic seizures and 13 who experienced partial-onset seizures) expressed 42 different symptoms and 26 different impacts related to seizures. The final conceptual model contained a wide range of concepts related to seizure frequency, symptoms, and duration. Our model identified several new concepts that characterize the patient experience of seizure severity. A seizure severity PRO instrument should cover a wide range of seizure symptoms alongside frequency and duration of seizures. This qualitative work reinforces the notion that measuring seizure frequency is insufficient and that seizure severity is important in defining the patient's experience of epilepsy. This model could be used to assess the content validity of existing PRO instruments, or could support the development of a new one.

  12. Assimilating Remote Ammonia Observations with a Refined Aerosol Thermodynamics Adjoint

    EPA Science Inventory

    Ammonia emissions parameters in North America can be refined in order to improve the evaluation of modeled concentrations against observations. Here, we seek to do so by developing and applying the GEOS-Chem adjoint nested over North America to conduct assimilation of observations...

  13. Structure and atomic correlations in molecular systems probed by XAS reverse Monte Carlo refinement

    NASA Astrophysics Data System (ADS)

    Di Cicco, Andrea; Iesari, Fabio; Trapananti, Angela; D'Angelo, Paola; Filipponi, Adriano

    2018-03-01

    The Reverse Monte Carlo (RMC) algorithm for structure refinement has been applied to x-ray absorption spectroscopy (XAS) multiple-edge data sets for six gas phase molecular systems (SnI2, CdI2, BBr3, GaI3, GeBr4, GeI4). Sets of thousands of molecular replicas were involved in the refinement process, driven by the XAS data and constrained by available electron diffraction results. The equilibrated configurations were analysed to determine the average tridimensional structure and obtain reliable bond and bond-angle distributions. Detectable deviations from Gaussian models were found in some cases. This work shows that a RMC refinement of XAS data is able to provide geometrical models for molecular structures compatible with present experimental evidence. The validation of this approach on simple molecular systems is particularly important in view of its possible simple extension to more complex and extended systems including metal-organic complexes, biomolecules, or nanocrystalline systems.
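
    A single RMC move can be sketched in a few lines (a toy version under our own assumptions, with a scalar mean bond length standing in for the XAS spectrum): displace one atom of a replica, recompute the misfit against the data, and accept or reject the move with a Metropolis-like rule:

    import numpy as np

    rng = np.random.default_rng(2)
    model_signal = lambda cfg: np.linalg.norm(cfg - cfg[0], axis=1)[1:].mean()  # mean bond length

    def chi2(cfg, data):
        return np.sum((model_signal(cfg) - data)**2)         # misfit to the "measurement"

    def rmc_step(cfg, data, delta=0.05, sigma2=1e-4):
        trial = cfg.copy()
        trial[rng.integers(len(trial))] += rng.uniform(-delta, delta, size=3)  # move one atom
        d = chi2(trial, data) - chi2(cfg, data)
        if d < 0 or rng.random() < np.exp(-d / (2.0 * sigma2)):
            return trial                                     # accept the move
        return cfg                                           # reject the move

    config = rng.normal(scale=1.0, size=(5, 3))              # one replica of a 5-atom molecule
    for _ in range(2000):
        config = rmc_step(config, data=1.5)                  # drive mean bond length toward 1.5
    print(model_signal(config))

    In the actual refinement, the misfit is evaluated against multiple-edge XAS signals over thousands of replicas and constrained by the electron diffraction results.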

  14. Formation and mechanism of nanocrystalline AZ91 powders during HDDR processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yafen; Fan, Jianfeng, E-mail: fanjianfeng@tyu

    2017-03-15

    Grain sizes of AZ91 alloy powders were markedly refined from 100-160 μm to about 15 nm by an optimized hydrogenation-disproportionation-desorption-recombination (HDDR) process. The effects of temperature, hydrogen pressure and processing time on the phase and microstructure evolution of AZ91 alloy powders during the HDDR process were investigated systematically by X-ray diffraction, optical microscopy, scanning electron microscopy and transmission electron microscopy. The optimal HDDR process for preparing nanocrystalline Mg alloy powders is hydriding at 350 °C under 4 MPa hydrogen pressure for 12 h and dehydriding at 350 °C for 3 h in vacuum. A modified unreacted-core model was introduced to describe the mechanism of grain refinement during the HDDR process. - Highlights: • Grain size of the AZ91 alloy powders was significantly refined from 100 μm to 15 nm. • The optimal HDDR technology for nano Mg alloy powders is obtained. • A modified unreacted-core model of the grain-refinement mechanism was proposed.

  15. Disaggregation and Refinement of System Dynamics Models via Agent-based Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James J; Ozmen, Ozgur; Schryver, Jack C

    System dynamics models are usually used to investigate aggregate level behavior, but these models can be decomposed into agents that have more realistic individual behaviors. Here we develop a simple model of the STEM workforce to illuminate the impacts that arise from the disaggregation and refinement of system dynamics models via agent-based modeling. Particularly, alteration of Poisson assumptions, adding heterogeneity to decision-making processes of agents, and discrete-time formulation are investigated and their impacts are illustrated. The goal is to demonstrate both the promise and danger of agent-based modeling in the context of a relatively simple model and to delineate the importance of modeling decisions that are often overlooked.
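
    The aggregate-versus-agent contrast is simple to demonstrate. The toy model below (our construction, not the STEM workforce model itself) integrates an exponential outflow at the aggregate level and then disaggregates it into agents with heterogeneous per-step exit probabilities, the kind of refinement the record describes:

    import numpy as np

    rng = np.random.default_rng(3)
    N0, rate, T, dt = 1000, 0.1, 50, 1.0

    # System dynamics view: dN/dt = -rate * N, integrated with Euler steps
    sd = [float(N0)]
    for _ in range(int(T / dt)):
        sd.append(sd[-1] - rate * sd[-1] * dt)

    # Agent-based view: each agent carries its own exit probability per step
    p = rng.uniform(0.05, 0.15, size=N0)          # heterogeneous decision-making
    alive = np.ones(N0, dtype=bool)
    ab = [N0]
    for _ in range(int(T / dt)):
        alive &= rng.random(N0) >= p              # agents exit stochastically
        ab.append(int(alive.sum()))

    print(sd[-1], ab[-1])                          # aggregate vs disaggregated outcome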

  16. Refinement, Validation and Benchmarking of a Model for E-Government Service Quality

    NASA Astrophysics Data System (ADS)

    Magoutas, Babis; Mentzas, Gregoris

    This paper presents the refinement and validation of a model for the Quality of e-Government Services (QeGS). We built upon our previous work, in which a conceptual model was identified, and focused on the confirmatory phase of the model development process in order to arrive at a valid and reliable QeGS model. The validated model, which was benchmarked with very positive results against similar models found in the literature, can be used for measuring QeGS in a reliable and valid manner. This forms the basis for a continuous quality improvement process, unleashing the full potential of e-government services for both citizens and public administrations.

  17. Land Ecology Essay I: The siren song of the finish line

    USDA-ARS?s Scientific Manuscript database

    As the National Cooperative Soils Survey nears the completion of initial mapping and description activities, the options for next steps are being considered. One option is to deploy new and emerging mapping technologies for existing and refined concepts of soil behavior to create more precise maps ...

  18. About the Cancer Biomarkers Research Group | Division of Cancer Prevention

    Cancer.gov

    The Cancer Biomarkers Research Group promotes research to identify, develop, and validate biological markers for early cancer detection and cancer risk assessment. Activities include development and validation of promising cancer biomarkers, collaborative databases and informatics systems, and new technologies or the refinement of existing technologies.

  19. Optimizing Your K-5 Engineering Design Challenge

    ERIC Educational Resources Information Center

    Coppola, Matthew Perkins; Merz, Alice H.

    2017-01-01

    Today, elementary school teachers continue to revisit old lessons and seek out new ones, especially in engineering. Optimization is the process by which an existing product or procedure is revised and refined. Drawn from the authors' experiences working directly with students in grades K-5 and their teachers and preservice teachers, the…

  20. 40 CFR 35.380 - Purpose.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... government agencies under section 104(b)(3) of the Act. These sections do not govern wetlands development... Grants to assist in the development of new, or refinement of existing, wetlands protection and management... 40 Protection of Environment 1 2011-07-01 2011-07-01 false Purpose. 35.380 Section 35.380...

  1. 40 CFR 35.380 - Purpose.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... government agencies under section 104(b)(3) of the Act. These sections do not govern wetlands development... Grants to assist in the development of new, or refinement of existing, wetlands protection and management... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Purpose. 35.380 Section 35.380...

  2. 40 CFR 35.380 - Purpose.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... government agencies under section 104(b)(3) of the Act. These sections do not govern wetlands development... Grants to assist in the development of new, or refinement of existing, wetlands protection and management... 40 Protection of Environment 1 2012-07-01 2012-07-01 false Purpose. 35.380 Section 35.380...

  3. 40 CFR 35.380 - Purpose.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... government agencies under section 104(b)(3) of the Act. These sections do not govern wetlands development... Grants to assist in the development of new, or refinement of existing, wetlands protection and management... 40 Protection of Environment 1 2014-07-01 2014-07-01 false Purpose. 35.380 Section 35.380...

  4. 40 CFR 35.380 - Purpose.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... government agencies under section 104(b)(3) of the Act. These sections do not govern wetlands development... Grants to assist in the development of new, or refinement of existing, wetlands protection and management... 40 Protection of Environment 1 2013-07-01 2013-07-01 false Purpose. 35.380 Section 35.380...

  5. Strengthening and Refining the Federal-State-Institutional Partnership.

    ERIC Educational Resources Information Center

    Merisotis, Jamie P.

    1991-01-01

    A strengthened student aid partnership between the federal and state governments and colleges needs to use existing funds more efficiently, regulate how students receive aid more effectively, and delineate the rights and responsibilities of each of the major partners more adequately. Better cooperation would benefit taxpayers, institutions, and…

  6. Advancing Measurement of Work and Family Domain Boundary Characteristics

    ERIC Educational Resources Information Center

    Matthews, Russell A.; Barnes-Farrell, Janet L.; Bulger, Carrie A.

    2010-01-01

    Recent research offers promising theoretical frameworks for thinking about the work-family interface in terms of the boundaries individuals develop around work and family. However, measures for important constructs proposed by these theories are needed. Using two independent samples, we report on the refinement of existing "boundary flexibility"…

  7. 7 CFR 762.142 - Servicing related to collateral.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... restructuring; (iii) When the lender requesting the guarantee is refinancing the debt of another lender and the... insurance loss payments, condemnation awards, or similar proceeds are applied on debts in accordance with... refinance an existing prior lien, no additional debt is being incurred, and the lender's security position...

  8. 7 CFR 762.142 - Servicing related to collateral.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... restructuring; (iii) When the lender requesting the guarantee is refinancing the debt of another lender and the... insurance loss payments, condemnation awards, or similar proceeds are applied on debts in accordance with... refinance an existing prior lien, no additional debt is being incurred, and the lender's security position...

  9. 7 CFR 762.142 - Servicing related to collateral.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... restructuring; (iii) When the lender requesting the guarantee is refinancing the debt of another lender and the... insurance loss payments, condemnation awards, or similar proceeds are applied on debts in accordance with... refinance an existing prior lien, no additional debt is being incurred, and the lender's security position...

  10. 7 CFR 762.142 - Servicing related to collateral.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... restructuring; (iii) When the lender requesting the guarantee is refinancing the debt of another lender and the... insurance loss payments, condemnation awards, or similar proceeds are applied on debts in accordance with... refinance an existing prior lien, no additional debt is being incurred, and the lender's security position...

  11. Give Us This Day Our Daily Breadth

    ERIC Educational Resources Information Center

    Duncan, Greg J.

    2012-01-01

    As with any discipline, the field of child development progresses by both deepening and broadening its conceptual and empirical perspective. The rewards to refinement are impressive, but there is little need for encouragement in this area, since existing disciplines, universities, and funding agencies reward depth. The current study makes the case…

  12. Traditional Occupational Analysis and Contemporary CBVE Instruction.

    ERIC Educational Resources Information Center

    Duenk, Lester G.

    Trade and industrial educators were pioneers in the development and practice of occupational analysis as utilized in curriculum development and improvement. In the 1940s, Verne C. Fryklund refined the existing system of occupational analysis and introduced it into public school industrial arts educational programs. The basic concept of Fryklund's…

  13. Western juniper drying project summaries, 1993-96.

    Treesearch

    Scott Leavengood; Larry Swan

    1999-01-01

    Drying tests and trials for western juniper (Juniperus occidentalis Hook.) were conducted between 1993 and 1996 to (1) test and refine existing dry kiln schedules; (2) develop moisture meter correction factors; (3) test dry western juniper in different types of kilns, both by itself and with ponderosa pine (Pinus ponderosa...

  14. Joint Optimization of Vertical Component Gravity and Seismic P-wave First Arrivals by Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Louie, J. N.; Basler-Reeder, K.; Kent, G. M.; Pullammanappallil, S. K.

    2015-12-01

    Simultaneous joint seismic-gravity optimization improves P-wave velocity models in areas with sharp lateral velocity contrasts. Optimization is achieved using simulated annealing, a metaheuristic global optimization algorithm that does not require an accurate initial model. Balancing the seismic-gravity objective function is accomplished by a novel approach based on analysis of Pareto charts. Gravity modeling uses a newly developed convolution algorithm, while seismic modeling utilizes the highly efficient Vidale eikonal equation traveltime generation technique. Synthetic tests show that joint optimization improves velocity model accuracy and provides velocity control below the deepest headwave raypath. Detailed first arrival picking followed by trial velocity modeling remediates inconsistent data. We use a set of highly refined first arrival picks to compare results of a convergent joint seismic-gravity optimization to the Plotrefa™ and SeisOpt® Pro™ velocity modeling packages. Plotrefa™ uses a nonlinear least squares approach that is initial model dependent and produces shallow velocity artifacts. SeisOpt® Pro™ utilizes the simulated annealing algorithm and is limited to depths above the deepest raypath. Joint optimization increases the depth of constrained velocities, improving reflector coherency at depth. Kirchhoff prestack depth migrations reveal that joint optimization ameliorates shallow velocity artifacts caused by limitations in refraction ray coverage. Seismic and gravity data from the San Emidio Geothermal field of the northwest Basin and Range province demonstrate that joint optimization changes interpretation outcomes. The prior shallow-valley interpretation gives way to a deep valley model, while shallow antiformal reflectors that could have been interpreted as antiformal folds are flattened. Furthermore, joint optimization provides a clearer image of the rangefront fault. This technique can readily be applied to existing datasets and could replace the existing strategy of forward modeling to match gravity data.
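
    The joint-objective idea can be sketched compactly. In the snippet below (our toy stand-in, not the authors' codes), a weight w balances two hypothetical misfit functions, mirroring the Pareto-chart balancing described above, and simulated annealing searches the model space without an accurate initial model:

    import math, random

    def seismic_misfit(m):  return (m[0] - 2.0)**2 + 0.5 * (m[1] - 1.0)**2
    def gravity_misfit(m):  return (m[0] + m[1] - 3.2)**2

    def joint(m, w=0.5):
        # w is the balance parameter one would tune via Pareto analysis
        return (1.0 - w) * seismic_misfit(m) + w * gravity_misfit(m)

    def anneal(m, T0=1.0, cooling=0.999, steps=5000):
        e, T = joint(m), T0
        for _ in range(steps):
            trial = [mi + random.uniform(-0.1, 0.1) for mi in m]
            e_t = joint(trial)
            if e_t < e or random.random() < math.exp((e - e_t) / T):
                m, e = trial, e_t                  # accept downhill or lucky uphill moves
            T *= cooling                           # cool the temperature
        return m, e

    random.seed(4)
    print(anneal([0.0, 0.0]))                      # converges near the joint optimum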

  15. Musical emotions: Functions, origins, evolution

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid

    2010-03-01

    Theories of music origins and the role of musical emotions in the mind are reviewed. Most existing theories contradict each other, and cannot explain mechanisms or roles of musical emotions in workings of the mind, nor evolutionary reasons for music origins. Music seems to be an enigma. Nevertheless, a synthesis of cognitive science and mathematical models of the mind has been proposed describing a fundamental role of music in the functioning and evolution of the mind, consciousness, and cultures. The review considers ancient theories of music as well as contemporary theories advanced by leading authors in this field. It addresses one hypothesis that promises to unify the field and proposes a theory of musical origin based on a fundamental role of music in cognition and evolution of consciousness and culture. We consider a split in the vocalizations of proto-humans into two types: one less emotional and more concretely-semantic, evolving into language, and the other preserving emotional connections along with semantic ambiguity, evolving into music. The proposed hypothesis departs from other theories in considering specific mechanisms of the mind-brain, which required the evolution of music parallel with the evolution of cultures and languages. Arguments are reviewed that the evolution of language toward becoming the semantically powerful tool of today required emancipation from emotional encumbrances. The opposite, no less powerful mechanisms required a compensatory evolution of music toward more differentiated and refined emotionality. The need for refined music in the process of cultural evolution is grounded in fundamental mechanisms of the mind. This is why today's human mind and cultures cannot exist without today's music. The reviewed hypothesis gives a basis for future analysis of why different evolutionary paths of languages were paralleled by different evolutionary paths of music. Approaches toward experimental verification of this hypothesis in psychological and neuroimaging research are reviewed.

  16. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    PubMed Central

    Xian, Xuefeng; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost. PMID:28588611
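
    The selection step can be illustrated with a small sketch (hypothetical data and a hypothetical one-value-per-subject constraint; the paper's actual algorithms are rank- and graph-based): prune candidate facts already contradicted by accepted knowledge, then spend the crowdsourcing budget on the most uncertain survivors:

    def select_questions(candidates, accepted, budget):
        # candidates: list of ((subject, value), confidence); accepted: set of (subject, value)
        pruned = [(f, c) for f, c in candidates
                  if not any(f[0] == a[0] and f[1] != a[1] for a in accepted)]
        pruned.sort(key=lambda fc: abs(fc[1] - 0.5))   # most uncertain facts first
        return [f for f, _ in pruned[:budget]]

    accepted = {("capital_of:France", "Paris")}
    cands = [(("capital_of:France", "Berlin"), 0.40),  # violates the constraint: pruned
             (("capital_of:Peru", "Lima"), 0.55),      # uncertain: worth asking the crowd
             (("capital_of:Japan", "Tokyo"), 0.97)]    # near-certain: little benefit
    print(select_questions(cands, accepted, budget=1)) # -> [('capital_of:Peru', 'Lima')]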

  17. Formation of Polyphenol-Denatured Protein Flocs in Alcohol Beverages Sweetened with Refined Cane Sugars.

    PubMed

    Eggleston, Gillian; Triplett, Alexa

    2017-11-08

    The sporadic appearance of floc from refined, white cane sugars in alcohol beverages remains a technical problem for both beverage manufacturers and sugar refiners. Cane invert sugars mixed with 60% pure alcohol and water increased light scattering by up to ∼1000-fold. Insoluble and soluble starch, fat, inorganic ash, oligosaccharides, Brix, and pH were not involved in the prevailing floc-formation mechanism. Strong polynomial correlations existed between the haze floc and indicator values (IVs) (color at 420 nm pH 9.0/color at pH 4.0-an indirect measure of polyphenolic and flavonoid colorants) (R 2 = 0.815) and protein (R 2 = 0.819) content of the invert sugars. Ethanol-induced denaturation of the protein exposed hydrophobic polyphenol-binding sites that were further exposed when heated to 80 °C. A tentative mechanism for floc formation was advanced by molecular probing with a haze (floc) active protein and polyphenol as well as polar, nonpolar, and ionic solvents.

  18. Parallel, but Dissociable, Processing in Discrete Corticostriatal Inputs Encodes Skill Learning.

    PubMed

    Kupferschmidt, David A; Juczewski, Konrad; Cui, Guohong; Johnson, Kari A; Lovinger, David M

    2017-10-11

    Changes in cortical and striatal function underlie the transition from novel actions to refined motor skills. How discrete, anatomically defined corticostriatal projections function in vivo to encode skill learning remains unclear. Using novel fiber photometry approaches to assess real-time activity of associative inputs from medial prefrontal cortex to dorsomedial striatum and sensorimotor inputs from motor cortex to dorsolateral striatum, we show that associative and sensorimotor inputs co-engage early in action learning and disengage in a dissociable manner as actions are refined. Disengagement of associative, but not sensorimotor, inputs predicts individual differences in subsequent skill learning. Divergent somatic and presynaptic engagement in both projections during early action learning suggests potential learning-related in vivo modulation of presynaptic corticostriatal function. These findings reveal parallel processing within associative and sensorimotor circuits that challenges and refines existing views of corticostriatal function and expose neuronal projection- and compartment-specific activity dynamics that encode and predict action learning. Published by Elsevier Inc.

  19. Scheduler for monitoring objects orbiting earth using satellite-based telescopes

    DOEpatents

    Olivier, Scot S; Pertica, Alexander J; Riot, Vincent J; De Vries, Willem H; Bauman, Brian J; Nikolaev, Sergei; Henderson, John R; Phillion, Donald W

    2015-04-28

    An ephemeris refinement system includes satellites with imaging devices in earth orbit to make observations of space-based objects ("target objects") and a ground-based controller that controls the scheduling of the satellites to make the observations of the target objects and refines orbital models of the target objects. The ground-based controller determines when the target objects of interest will be near enough to a satellite for that satellite to collect an image of the target object based on an initial orbital model for the target objects. The ground-based controller directs the schedules to be uploaded to the satellites, and the satellites make observations as scheduled and download the observations to the ground-based controller. The ground-based controller then refines the initial orbital models of the target objects based on the locations of the target objects that are derived from the observations.

  20. Monitoring objects orbiting earth using satellite-based telescopes

    DOEpatents

    Olivier, Scot S.; Pertica, Alexander J.; Riot, Vincent J.; De Vries, Willem H.; Bauman, Brian J.; Nikolaev, Sergei; Henderson, John R.; Phillion, Donald W.

    2015-06-30

    An ephemeris refinement system includes satellites with imaging devices in earth orbit to make observations of space-based objects ("target objects") and a ground-based controller that controls the scheduling of the satellites to make the observations of the target objects and refines orbital models of the target objects. The ground-based controller determines when the target objects of interest will be near enough to a satellite for that satellite to collect an image of the target object based on an initial orbital model for the target objects. The ground-based controller directs the schedules to be uploaded to the satellites, and the satellites make observations as scheduled and download the observations to the ground-based controller. The ground-based controller then refines the initial orbital models of the target objects based on the locations of the target objects that are derived from the observations.
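
    The scheduling logic in these two records reduces, in caricature, to propagating the initial orbital models and flagging the epochs at which a target passes within a satellite's imaging range; the resulting observations would then feed a least-squares update of the orbital model (not shown here). The sketch below uses toy coplanar circular orbits and an assumed range threshold:

    import numpy as np

    def position(radius_km, period_s, t, phase=0.0):
        w = 2 * np.pi / period_s
        return radius_km * np.array([np.cos(w * t + phase), np.sin(w * t + phase)])

    def schedule(times, sat_orbit, tgt_orbit, max_range_km=2000.0):
        obs = []
        for t in times:
            sep = np.linalg.norm(position(*sat_orbit, t) - position(*tgt_orbit, t))
            if sep < max_range_km:
                obs.append(t)                    # target near enough: schedule an image
        return obs

    times = np.arange(0.0, 6000.0, 60.0)         # candidate epochs, 1 min apart
    obs = schedule(times, (7000.0, 5900.0), (7400.0, 6300.0))
    print(len(obs), "scheduled epochs; first:", obs[:3])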

  1. Recent Updates to the Arnold Mirror Modeler and Integration into the Evolving NASA Overall Design System for Large Space-Based Optical Systems

    NASA Technical Reports Server (NTRS)

    Arnold, William R.

    2015-01-01

    Since last year, a number of expanded capabilities have been added to the modeler. To support integration with thermal modeling, the program can now produce simplified thermal models with the same geometric parameters as the more detailed dynamic and even more refined stress models. The local mesh refinement and mesh improvement tools have been expanded and made more user-friendly. The goal is to provide a means of evaluating both monolithic and segmented mirrors to the same level of fidelity and loading conditions with reasonable man-power effort. The paper will demonstrate most of these new capabilities.

  2. Structural analysis of glycoproteins: building N-linked glycans with Coot.

    PubMed

    Emsley, Paul; Crispin, Max

    2018-04-01

    Coot is a graphics application that is used to build or manipulate macromolecular models; its particular forte is manipulation of the model at the residue level. The model-building tools of Coot have been combined and extended to assist or automate the building of N-linked glycans. The model is built by the addition of monosaccharides, placed by variation of internal coordinates. The subsequent model is refined by real-space refinement, which is stabilized with modified and additional restraints. It is hoped that these enhanced building tools will help to reduce building errors of N-linked glycans and improve our knowledge of the structures of glycoproteins.

  3. Recent Updates to the Arnold Mirror Modeler and Integration into the Evolving NASA Overall Design System for Large Space Based Optical Systems

    NASA Technical Reports Server (NTRS)

    Arnold, William R., Sr.

    2015-01-01

    Since last year, a number of expanded capabilities have been added to the modeler. To support integration with thermal modeling, the program can now produce simplified thermal models with the same geometric parameters as the more detailed dynamic and even more refined stress models. The local mesh refinement and mesh improvement tools have been expanded and made more user-friendly. The goal is to provide a means of evaluating both monolithic and segmented mirrors to the same level of fidelity and loading conditions with reasonable man-power effort. The paper will demonstrate most of these new capabilities.

  4. Transit Photometry of Recently Discovered Hot Jupiters

    NASA Astrophysics Data System (ADS)

    McCloat, Sean Peter

    The University of North Dakota Space Studies Internet Observatory was used to observe the transits of hot-Jupiter exoplanets. Targets for this research were selected from the list of currently confirmed exoplanets using the following criteria: radius > 0.5 Rjup, discovered since 2011, orbiting stars with apparent magnitude > 13. Eleven transits, distributed across nine targets, were observed, with the goal of performing differential photometry for parameter refinement and, if data quality allowed, transit-timing-variation analysis. Data quality was ultimately insufficient for robust parameter refinement, but tentative mid-transit times were calculated for three of the observed transits. The mid-transit times for WASP-103b and WASP-48b were consistent with predictions and the existing database.

  5. Meteorological regimes for the classification of aerospace air quality predictions for NASA-Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Stephens, J. B.; Sloan, J. C.

    1976-01-01

    A method is described for developing a statistical air quality assessment for the launch of an aerospace vehicle from the Kennedy Space Center in terms of existing climatological data sets. The procedure can be refined as developing meteorological conditions are identified for use with the NASA-Marshall Space Flight Center Rocket Exhaust Effluent Diffusion (REED) description. Classical climatological regimes for the long range analysis can be narrowed as the synoptic and mesoscale structure is identified. Only broad synoptic regimes are identified at this stage of analysis. As the statistical data matrix is developed, synoptic regimes will be refined in terms of the resulting eigenvectors as applicable to aerospace air quality predictions.

  6. An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.

    1993-01-01

    We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to quantify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than is possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
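
    A refinement indicator based on Richardson's extrapolation can be sketched directly (our construction; a tanh front plus a synthetic O(h^2) truncation-error term stands in for actual MacCormack solutions on the two grids):

    import numpy as np

    a = 50.0
    exact = lambda x: np.tanh(a * (x - 0.5))                 # steep front, shock-like
    exact_pp = lambda x: -2 * a**2 * exact(x) * (1 - exact(x)**2)

    def mock_solution(x):                                    # exact + mock O(h^2) truncation error
        h = x[1] - x[0]
        return exact(x) + 0.5 * h**2 * exact_pp(x)

    def refine_flags(u_coarse, u_fine, order=2, tol=1e-3):
        # Richardson estimate of the fine-grid error from the two-grid difference
        err = np.abs(u_fine[::2] - u_coarse) / (2**order - 1)
        return err > tol                                     # True where the cell needs refining

    x_c = np.linspace(0.0, 1.0, 51)
    x_f = np.linspace(0.0, 1.0, 101)                         # twice the resolution
    flags = refine_flags(mock_solution(x_c), mock_solution(x_f))
    print(np.flatnonzero(flags))                             # flagged cells cluster around x = 0.5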

  7. Assessing food allergy risks from residual peanut protein in highly refined vegetable oil.

    PubMed

    Blom, W Marty; Kruizinga, Astrid G; Rubingh, Carina M; Remington, Ben C; Crevel, René W R; Houben, Geert F

    2017-08-01

    Refined vegetable oils including refined peanut oil are widely used in foods. Due to shared production processes, refined non-peanut vegetable oils can contain residual peanut proteins. We estimated the predicted number of allergic reactions to residual peanut proteins using probabilistic risk assessment applied to several scenarios involving food products made with vegetable oils. Variables considered were: a) the estimated production scale of refined peanut oil, b) estimated cross-contact between refined vegetable oils during production, c) the proportion of fat in representative food products and d) the peanut protein concentration in refined peanut oil. For all products examined the predicted risk of objective allergic reactions in peanut-allergic users of the food products was extremely low. The number of predicted reactions ranged depending on the model from a high of 3 per 1000 eating occasions (Weibull) to no reactions (LogNormal). Significantly, all reactions were predicted for allergen intakes well below the amounts reported for the most sensitive individual described in the clinical literature. We conclude that the health risk from cross-contact between vegetable oils and refined peanut oil is negligible. None of the food products would warrant precautionary labelling for peanut according to the VITAL ® programme of the Allergen Bureau. Copyright © 2017 Elsevier Ltd. All rights reserved.
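
    A probabilistic assessment of this kind is naturally expressed as a Monte Carlo simulation over eating occasions. The sketch below is our schematic of the approach with illustrative placeholder parameters, not the study's figures; the Weibull dose-response form is one of the models the authors compare:

    import numpy as np

    rng = np.random.default_rng(5)
    n = 1_000_000                                            # simulated eating occasions

    serving_g = rng.normal(30.0, 8.0, n).clip(min=1.0)       # product per occasion (g), assumed
    fat_frac = 0.25                                          # oil fraction of the product, assumed
    cross_contact = rng.uniform(0.0, 0.02, n)                # oil-to-oil carryover fraction, assumed
    protein_ppm = rng.lognormal(0.0, 1.0, n)                 # residual protein in peanut oil (mg/kg)

    # protein intake per occasion: grams of oil times ppm, converted to mg
    dose_mg = serving_g * fat_frac * cross_contact * protein_ppm / 1000.0

    # illustrative Weibull dose-response for the allergic subpopulation
    p_react = 1.0 - np.exp(-(dose_mg / 50.0)**0.8)
    print(p_react.mean() * 1000, "predicted reactions per 1000 allergic-user occasions")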

  8. Theory of a refined earth model

    NASA Technical Reports Server (NTRS)

    Krause, H. G. L.

    1968-01-01

    Refined equations are derived relating the variations of the Earth's gravity and radius as functions of longitude and latitude. In particular, they relate the oblateness coefficients of the odd harmonics to the differences of the polar radii (respectively, ellipticities and polar gravity accelerations) in the Northern and Southern Hemispheres.

  9. Refining King and Baxter Magolda's Model of Intercultural Maturity

    ERIC Educational Resources Information Center

    Perez, Rosemary J.; Shim, Woojeong; King, Patricia M.; Baxter Magolda, Marcia B.

    2015-01-01

    This study examined 110 intercultural experiences from 82 students attending six colleges and universities to explore how students' interpretations of their intercultural experiences reflected their developmental capacities for intercultural maturity. Our analysis of students' experiences confirmed as well as refined and expanded King and Baxter…

  10. Overview: Application of heterogeneous nucleation in grain-refining of metals.

    PubMed

    Greer, A L

    2016-12-07

    In all of metallurgical processing, probably the most prominent example of nucleation control is the "inoculation" of melts to suppress columnar solidification and to obtain fine equiaxed grain structures in the as-cast solid. In inoculation, a master alloy is added to the melt to increase its solute content and to add stable particles that can act as nucleants for solid grains. This is important for alloys of many metals, and in other cases such as ice nucleation in living systems, but inoculation of aluminum alloys using Al-5Ti-1B (wt.%) master alloy is the exemplar. The key elements are (i) that the chemical interactions between nucleant TiB2 particles and the melt ensure that the solid phase (α-Al) exists on the surface of the particles even above the liquidus temperature of the melt, (ii) that these perfect nucleants can initiate grains only when the barrier for free growth of α-Al is surmounted, and (iii) that (depending on whether the melt is spatially isothermal or not) the release of latent heat, or the limited extent of constitutional supercooling, can act to limit the number of grains that is initiated and therefore the degree of grain refinement that can be achieved. We review recent studies that contribute to better understanding, and improvement, of grain refinement in general. We also identify priorities for future research. These include the study of the effects of nanophase dispersions in melts. Preliminary studies show that such dispersions may be especially effective in achieving grain refinement, and raise many questions about the underlying mechanisms. The stimulation of icosahedral short-range ordering in the liquid has been shown to lead to grain refinement, and is a further priority for study, especially as the refinement can be achieved with only minor additions of solute.

  11. Overview: Application of heterogeneous nucleation in grain-refining of metals

    NASA Astrophysics Data System (ADS)

    Greer, A. L.

    2016-12-01

    In all of metallurgical processing, probably the most prominent example of nucleation control is the "inoculation" of melts to suppress columnar solidification and to obtain fine equiaxed grain structures in the as-cast solid. In inoculation, a master alloy is added to the melt to increase its solute content and to add stable particles that can act as nucleants for solid grains. This is important for alloys of many metals, and in other cases such as ice nucleation in living systems, but inoculation of aluminum alloys using Al-5Ti-1B (wt.%) master alloy is the exemplar. The key elements are (i) that the chemical interactions between nucleant TiB2 particles and the melt ensure that the solid phase (α-Al) exists on the surface of the particles even above the liquidus temperature of the melt, (ii) that these perfect nucleants can initiate grains only when the barrier for free growth of α-Al is surmounted, and (iii) that (depending on whether the melt is spatially isothermal or not) the release of latent heat, or the limited extent of constitutional supercooling, can act to limit the number of grains that is initiated and therefore the degree of grain refinement that can be achieved. We review recent studies that contribute to better understanding, and improvement, of grain refinement in general. We also identify priorities for future research. These include the study of the effects of nanophase dispersions in melts. Preliminary studies show that such dispersions may be especially effective in achieving grain refinement, and raise many questions about the underlying mechanisms. The stimulation of icosahedral short-range ordering in the liquid has been shown to lead to grain refinement, and is a further priority for study, especially as the refinement can be achieved with only minor additions of solute.
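
    The free-growth barrier at point (ii) of the two records above has a simple quantitative form: in Greer's free-growth model, the undercooling needed for a nucleant particle of diameter d to initiate a grain is approximately 4*gamma/(dS_v*d). The short calculation below uses representative literature values for aluminum, quoted only for illustration, and shows why the largest particles in the inoculant are activated first:

    gamma = 0.158      # solid-liquid interfacial energy, J/m^2 (approximate literature value)
    dS_v = 1.0e6       # entropy of fusion per unit volume, J/(K m^3) (approximate)

    for d_um in (0.5, 1.0, 2.0, 5.0):
        d = d_um * 1e-6                          # particle diameter in meters
        dT_fg = 4.0 * gamma / (dS_v * d)         # free-growth undercooling barrier
        print(f"d = {d_um:4.1f} um  ->  free-growth undercooling ~ {dT_fg:.2f} K")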

  12. Animal Models of Post-Traumatic Stress Disorder and Recent Neurobiological Insights

    PubMed Central

    Whitaker, Annie M.; Gilpin, Nicholas W.; Edwards, Scott

    2014-01-01

    Post-traumatic stress disorder (PTSD) is a complex psychiatric disorder characterized by the intrusive re-experiencing of past trauma, avoidant behavior, enhanced fear, and hyperarousal following a traumatic event in vulnerable populations. Preclinical animal models do not replicate the human condition in its entirety, but seek to mimic symptoms or endophenotypes associated with PTSD. Although many models of traumatic stress exist, few adequately capture the complex nature of the disorder and the observed individual variability in susceptibility of humans to develop PTSD. In addition, various types of stressors may produce different molecular neuroadaptations that likely contribute to the various behavioral disruptions produced by each model, although certain consistent neurobiological themes related to PTSD have emerged. For example, animal models report traumatic stress- and trauma reminder-induced alterations in neuronal activity in the amygdala and prefrontal cortex, in agreement with the human PTSD literature. Models have also provided a conceptual framework for the often observed combination of PTSD and co-morbid conditions such as alcohol use disorder (AUD). Future studies will continue to refine preclinical PTSD models in hopes of capitalizing on their potential to deliver new and more efficacious treatments for PTSD and associated psychiatric disorders. PMID:25083568

  13. Asphalt and risk of cancer in man.

    PubMed Central

    Chiazze, L; Watkins, D K; Amsel, J

    1991-01-01

    Epidemiological publications regarding the carcinogenic potential of asphalt (bitumen) are reviewed. In 1984 the International Agency for Research on Cancer (IARC) stated that there is "inadequate evidence that bitumens alone are carcinogenic to humans." They did, however, conclude that animal data provided sufficient evidence for the carcinogenicity of certain extracts of steam refined and air refined bitumens. In the absence of data on man, IARC considered it reasonable to regard chemicals with sufficient evidence of carcinogenicity in animals as if they presented a carcinogenic risk to man. Epidemiological data for man accumulated since the IARC report do not fulfil the criteria for showing a causal association between exposure to asphalt and development of cancer. The studies cited all suffer from a lack of data on exposure or potential confounders, which are necessary to establish whether or not such an association may or may not exist. In view of the evidence (or lack thereof) regarding asphalt today, an appropriate public health attitude suggests at least that action be taken to protect those working with asphalt by monitoring the workplace, taking whatever steps are possible to minimise exposures and to inform workers of potential hazards. At the same time, a need exists for well designed analytical epidemiological studies to determine whether a risk of cancer in man exists from exposure to asphalt. PMID:1878310

  14. The Hunt for Pristine Cretaceous Astronomical Rhythms at Demerara Rise (Cenomanian-Coniacian)

    NASA Astrophysics Data System (ADS)

    Ma, C.; Meyers, S. R.

    2014-12-01

    Rhythmic Upper Cretaceous strata from Demerara Rise (ODP leg 207) preserve a strong astronomical signature, and this attribute has facilitated the development of continuous astrochronologies to refine the geologic time scale and calibrate Late Cretaceous biogeochemical events. While the mere identification of astronomical rhythms is a crucial first step in many deep-time paleoceanographic investigations, accurate evaluation of often subtle amplitude and frequency modulations are required to: (1) robustly constrain the linkage between climate and sedimentation, and (2) evaluate the plausibility of different theoretical astrodynamical models. The availability of a wide range of geophysical, lithologic and geochemical data from multiple sites drilled at Demerara Rise - when coupled with recent innovations in the statistical analysis of cyclostratigraphic data - provides an opportunity to hunt for the most pristine record of Cretaceous astronomical rhythms at a tropical Atlantic location. To do so, a statistical metric is developed to evaluate the "internal" consistency of hypothesized astronomical rhythms observed in each data set, particularly with regard to the expected astronomical amplitude modulations. In this presentation, we focus on how the new analysis yields refinements to the existing astrochronologies, provides constraints on the linkages between climate and sedimentation (including the deposition of organic carbon-rich sediments at Demerara Rise), and allows a quantitative evaluation of the continuity of deposition across sites at multiple temporal scales.
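
    One way to build such an internal-consistency metric, sketched below on synthetic data (our construction, not the authors' statistic), is to band-pass the record around a hypothesized precession rhythm, extract the Hilbert amplitude envelope, and check that the envelope is itself modulated at the expected eccentricity period:

    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    dt = 1.0                                     # kyr per sample (assumed sampling)
    t = np.arange(0.0, 2000.0, dt)
    precession = np.sin(2 * np.pi * t / 21.0)    # ~21 kyr rhythm
    ecc_mod = 1.0 + 0.5 * np.sin(2 * np.pi * t / 100.0)     # ~100 kyr modulation
    series = ecc_mod * precession + 0.3 * np.random.default_rng(6).normal(size=t.size)

    b, a = butter(4, [1/30.0, 1/15.0], btype="bandpass", fs=1.0/dt)  # isolate the ~21 kyr band
    band = filtfilt(b, a, series)
    envelope = np.abs(hilbert(band))             # amplitude modulation of the rhythm

    # the dominant envelope period should sit near 100 kyr if the rhythm is consistent
    spec = np.abs(np.fft.rfft(envelope - envelope.mean()))**2
    freqs = np.fft.rfftfreq(envelope.size, d=dt)
    print(1.0 / freqs[spec.argmax()], "kyr")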

  15. PaFlexPepDock: parallel ab-initio docking of peptides onto their receptors with full flexibility based on Rosetta.

    PubMed

    Li, Haiou; Lu, Liyao; Chen, Rong; Quan, Lijun; Xia, Xiaoyan; Lü, Qiang

    2014-01-01

    Structural information related to protein-peptide complexes can be very useful for novel drug discovery and design. The computational docking of protein and peptide can supplement the structural information on protein-peptide interactions explored by experimental means. The protein-peptide docking in this paper can be described as three processes that occur in parallel: ab initio peptide folding, docking of the peptide with its receptor, and refinement of flexible areas of the receptor as the peptide approaches. Several existing methods have been used to sample the degrees of freedom in the three processes, which are usually triggered in an organized sequential scheme. In this paper, we propose a parallel approach that combines all three processes during the docking of a folding peptide with a flexible receptor. This approach mimics the actual protein-peptide docking process in a parallel way, and is expected to deliver better performance than sequential approaches. We used 22 unbound protein-peptide docking examples to evaluate our method. Our analysis of the results showed that the explicit refinement of the flexible areas of the receptor facilitated more accurate modeling of the interfaces of the complexes, while combining all of the moves in parallel helped construct energy funnels for the predictions.

  16. The Stigma Resistance Scale: A multi-sample validation of a new instrument to assess mental illness stigma resistance.

    PubMed

    Firmin, Ruth L; Lysaker, Paul H; McGrew, John H; Minor, Kyle S; Luther, Lauren; Salyers, Michelle P

    2017-12-01

    Although associated with key recovery outcomes, stigma resistance remains under-studied largely due to limitations of existing measures. This study developed and validated a new measure of stigma resistance. Preliminary items, derived from qualitative interviews of people with lived experience, were pilot tested online with people self-reporting a mental illness diagnosis (n = 489). Best performing items were selected, and the refined measure was administered to an independent sample of people with mental illness at two state mental health consumer recovery conferences (n = 202). Confirmatory factor analyses (CFA) guided by theory were used to test item fit, correlations between the refined stigma resistance measure and theoretically relevant measures were examined for validity, and test-retest correlations of a subsample were examined for stability. CFA demonstrated strong fit for a 5-factor model. The final 20-item measure demonstrated good internal consistency for each of the 5 subscales, adequate test-retest reliability at 3 weeks, and strong construct validity (i.e., positive associations with quality of life, recovery, and self-efficacy, and negative associations with overall symptoms, defeatist beliefs, and self-stigma). The new measure offers a more reliable and nuanced assessment of stigma resistance. It may afford greater personalization of interventions targeting stigma resistance. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Diffuse interface simulation of bubble rising process: a comparison of adaptive mesh refinement and arbitrary lagrange-euler methods

    NASA Astrophysics Data System (ADS)

    Wang, Ye; Cai, Jiejin; Li, Qiong; Yin, Huaqiang; Yang, Xingtuan

    2018-06-01

    Gas-liquid two-phase flow exists in several industrial processes and in light-water reactors (LWRs). A diffuse-interface-based finite element method with two different mesh generation methods, namely Adaptive Mesh Refinement (AMR) and Arbitrary Lagrange-Euler (ALE), is used to model the shape and velocity changes of a rising bubble. The calculation speed and mesh generation strategies of AMR and ALE are also contrasted. The simulation results agree with Bhagat's experiments, indicating that both mesh generation methods can simulate the characteristics of the bubble accurately. We conclude that a small bubble rises as an oscillating ellipsoid, whereas a larger bubble (7 mm < d < 11 mm) rises with a morphology between the ellipsoidal and cap types and a larger oscillation. When the bubble is large (d > 11 mm), it rises as a cap type and the oscillation amplitude becomes smaller. Moreover, it takes longer to reach the stable shape, from ellipsoid to spherical cap, as the bubble diameter increases. The results also show that for smaller diameters the ALE method uses fewer grid cells and computes faster, whereas the AMR method handles cases with large geometric deformation efficiently.

  18. Evaluating existing access opportunities for disabled persons at remote shoreline recreation sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bley, M.R.; Kearns, M.T.

    1995-12-31

    Draft guidelines for providing outdoor recreation access opportunities for disabled persons have been recommended by the Recreation Access Advisory Committee and in the Universal Access to Outdoor Recreation: A Design Guide. The Federal Energy Regulatory Commission requires applicants for new hydropower licenses to consider access opportunities for disabled persons at existing hydropower projects. A process for evaluating existing access opportunities for disabled persons at remote shoreline recreation sites at hydropower projects is described. The process includes five steps: (1) preparing a preliminary map of existing recreation sites; (2) data collection in the field; (3) evaluating compliance of existing facilities; (4) feasibility of enhancing existing facilities; and (5) designing enhancements. The process will be refined when final standards and processes are approved by the appropriate agencies and organizations.

  19. A Refined Zigzag Beam Theory for Composite and Sandwich Beams

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Sciuva, Marco Di; Gherlone, Marco

    2009-01-01

    A new refined theory for laminated composite and sandwich beams that contains the kinematics of the Timoshenko Beam Theory as a proper baseline subset is presented. This variationally consistent theory is derived from the virtual work principle and employs a novel piecewise linear zigzag function that provides a more realistic representation of the deformation states of transverse-shear flexible beams than other similar theories. This new zigzag function is unique in that it vanishes at the top and bottom bounding surfaces of a beam. The formulation does not enforce continuity of the transverse shear stress across the beam's cross-section, yet is robust. Two major shortcomings that are inherent in the previous zigzag theories, shear-force inconsistency and difficulties in simulating clamped boundary conditions, and that have greatly limited the utility of these previous theories are discussed in detail. An approach that has successfully resolved these shortcomings is presented herein. Exact solutions for simply supported and cantilevered beams subjected to static loads are derived and the improved modelling capability of the new zigzag beam theory is demonstrated. In particular, extensive results for thick beams with highly heterogeneous material lay-ups are discussed and compared with corresponding results obtained from elasticity solutions, two other zigzag theories, and high-fidelity finite element analyses. Comparisons with the baseline Timoshenko Beam Theory are also presented. The comparisons clearly show the improved accuracy of the new, refined zigzag theory presented herein over similar existing theories. This new theory can be readily extended to plate and shell structures, and should be useful for obtaining relatively low-cost, accurate estimates of structural response needed to design an important class of high-performance aerospace structures.
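
    For reference, the kinematic assumption the abstract describes can be written compactly (our notation, a sketch rather than the paper's exact equations): the axial displacement augments the Timoshenko field with a piecewise-linear zigzag contribution that vanishes on the bounding surfaces,

    u_x^{(k)}(x,z) = u(x) + z\,\theta(x) + \phi^{(k)}(z)\,\psi(x), \qquad
    u_z(x,z) = w(x), \qquad \phi(-h) = \phi(+h) = 0,

    where k indexes the material layers, \theta is the average bending rotation, \psi is the zigzag amplitude, and 2h is the beam thickness. Setting \psi = 0 recovers the Timoshenko baseline.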

  20. Modeling as an Anchoring Scientific Practice for Explaining Friction Phenomena

    NASA Astrophysics Data System (ADS)

    Neilson, Drew; Campbell, Todd

    2017-12-01

    Through examining the day-to-day work of scientists, researchers in science studies have revealed how models are a central sense-making practice of scientists as they construct and critique explanations about how the universe works. Additionally, they allow predictions to be made using the tenets of the model. Given this, alongside research suggesting that engaging students in developing and using models can have a positive effect on learning in science classrooms, the recent national standards documents in science education have identified developing and using models as an important practice students should engage in as they apply and refine their ideas with peers and teachers in explaining phenomena or solving problems in classrooms. This article details how students can be engaged in developing and using models to help them make sense of friction phenomena in a high school conceptual physics classroom in ways that align with visions for teaching and learning outlined in the Next Generation Science Standards. This particular unit has been refined over several years to build on what was initially an inquiry-based unit we have described previously. In this latest iteration of the friction unit, students developed and refined models through engaging in small group and whole class discussions and investigations.

  1. Rosetta Structure Prediction as a Tool for Solving Difficult Molecular Replacement Problems.

    PubMed

    DiMaio, Frank

    2017-01-01

    Molecular replacement (MR), a method for solving the crystallographic phase problem using phases derived from a model of the target structure, has proven extremely valuable, accounting for the vast majority of structures solved by X-ray crystallography. However, when the resolution of data is low, or the starting model is very dissimilar to the target protein, solving structures via molecular replacement may be very challenging. In recent years, protein structure prediction methodology has emerged as a powerful tool in model building and model refinement for difficult molecular replacement problems. This chapter describes some of the tools available in Rosetta for model building and model refinement specifically geared toward difficult molecular replacement cases.

  2. One-dimensional modelling of upper ocean mixing by turbulence due to wave orbital motion

    NASA Astrophysics Data System (ADS)

    Ghantous, M.; Babanin, A. V.

    2014-02-01

    Mixing of the upper ocean affects the sea surface temperature by bringing deeper, colder water to the surface. Because even small changes in the surface temperature can have a large impact on weather and climate, accurately determining the rate of mixing is of central importance for forecasting. Although there are several mixing mechanisms, one that has until recently been overlooked is the effect of turbulence generated by non-breaking, wind-generated surface waves. Lately there has been considerable interest in introducing this mechanism into ocean mixing models, and real gains have been made in fidelity to observational data. However, our knowledge of the mechanism is still incomplete. We indicate areas where we believe the existing parameterisations need refinement and propose an alternative one. We use two of the parameterisations to demonstrate the effect of wave-induced turbulence on the mixed layer by applying them to a one-dimensional mixing model and a stable temperature profile. Our modelling experiment suggests a strong effect on sea surface temperature due to non-breaking wave-induced turbulent mixing.
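    As an illustration of how such a parameterisation enters a one-dimensional mixing model, the following sketch adds a depth-decaying wave-induced diffusivity to an explicit diffusion step; the diffusivity profile and all constants are placeholders, not the authors' parameterisation.

    ```python
    import numpy as np

    # Minimal 1-D mixing sketch: explicit diffusion of temperature T(z) with a
    # background diffusivity plus a wave-induced diffusivity that decays with
    # depth over the wave scale 1/(2k). All values are illustrative.
    nz, H = 100, 50.0                 # levels and depth (m), assumptions
    dz = H / nz
    z = (np.arange(nz) + 0.5) * dz    # cell-centre depths
    T = 20.0 - 0.1 * z                # stable initial profile (degC)

    k_wave = 2 * np.pi / 60.0         # wavenumber of a 60 m wave (assumption)
    K_bg = 1e-5                       # background diffusivity (m^2/s)
    K_wave = 1e-3 * np.exp(-2 * k_wave * z)   # wave-induced part (placeholder)
    K = K_bg + K_wave

    dt = 0.4 * dz ** 2 / K.max()      # explicit stability limit
    for _ in range(1000):
        q = -0.5 * (K[1:] + K[:-1]) * np.diff(T) / dz  # interface fluxes
        q = np.concatenate(([0.0], q, [0.0]))          # no-flux top and bottom
        T -= dt * np.diff(q) / dz                      # conservative update
    ```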

  3. The Galactic Isotropic γ-ray Background and Implications for Dark Matter

    NASA Astrophysics Data System (ADS)

    Campbell, Sheldon S.; Kwa, Anna; Kaplinghat, Manoj

    2018-06-01

    We present an analysis of the radial angular profile of the galacto-isotropic (GI) γ-ray flux, the statistically uniform flux in angular annuli centred on the Galactic centre. Two different approaches are used to measure the GI flux profile in 85 months of Fermi-LAT data: the BDS statistical method, which identifies spatial correlations, and a new Poisson ordered-pixel method, which identifies non-Poisson contributions. Both methods produce similar GI flux profiles. The GI flux profile is well described by an existing model of bremsstrahlung, π0 production, inverse Compton scattering, and the isotropic background. Discrepancies with data in our full-sky model are not present in the GI component, and are therefore due to mis-modelling of the non-GI emission. Dark matter annihilation constraints based solely on the observed GI profile are close to the thermal WIMP cross section below 100 GeV, for fixed models of the dark matter density profile and astrophysical γ-ray foregrounds. Refined measurements of the GI profile are expected to improve these constraints by a factor of a few.
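    A hedged sketch of the basic measurement, binning per-pixel counts into annuli around the Galactic centre; the paper's BDS and ordered-pixel statistics are more involved, and the inputs here are assumed arrays.

    ```python
    import numpy as np

    # Sketch: estimate a radial intensity profile with Poisson error bars by
    # binning per-pixel photon counts into angular annuli. Inputs are assumed.
    def radial_profile(psi_deg, counts, exposure, edges_deg):
        """psi_deg: angle of each pixel from the Galactic centre (degrees);
        counts: photon counts per pixel; exposure: exposure per pixel."""
        prof, err = [], []
        for lo, hi in zip(edges_deg[:-1], edges_deg[1:]):
            sel = (psi_deg >= lo) & (psi_deg < hi)
            n, e = counts[sel].sum(), exposure[sel].sum()
            prof.append(n / e)            # mean intensity in the annulus
            err.append(np.sqrt(n) / e)    # Poisson uncertainty
        return np.array(prof), np.array(err)
    ```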

  4. Preliminary constraints on variable w dark energy cosmologies from the SNLS

    NASA Astrophysics Data System (ADS)

    Carlberg, R. G.; Conley, A.; Howell, D. A.; Neill, J. D.; Perrett, K.; Pritchet, C. J.; Sullivan, M.

    2005-12-01

    The first 71 confirmed Type Ia supernovae from the Supernova Legacy Survey, conducted with CFHT imaging and Gemini, VLT, and Keck spectroscopy, set limits on variable dark energy cosmological models. For a generalized Chaplygin gas, in which the dark energy content is (1-Ω_M)/ρ^a, we find that a is statistically consistent with zero, with a best fit a = -0.2 ± 0.3 (68% confidence). Reducing the systematic errors requires further refinement of the photometric calibration and assessment of potential model biases. A variable dark energy equation of state with w = w_0 + w_1 z shows the expected degeneracy between increasingly positive w_0 and negative w_1. The existing data rule out the parameters of the Weller & Linder (2002) supergravity-inspired model cosmology, (w_0, w_1) = (-0.81, 0.31). The full 700 SNe Ia of the completed survey will provide a statistical error limit on w_1 of about 0.2 and significant constraints on variable-w models. The Canadian NSERC provided funding for the scientific analysis. These results are based on observations obtained at the CFHT, Gemini, VLT and Keck observatories.
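    The distance calculation underlying such constraints is standard; here is a minimal sketch for a flat cosmology with w(z) = w0 + w1 z (textbook formulas, not the survey's fitting code).

    ```python
    import numpy as np
    from scipy.integrate import quad

    # Textbook sketch: luminosity distance for a flat cosmology with a linear
    # equation of state w(z) = w0 + w1*z.
    def E(z, Om, w0, w1):
        # dark-energy density evolves as exp(3*int_0^z (1+w)/(1+z') dz'),
        # which integrates to (1+z)^(3*(1+w0-w1)) * exp(3*w1*z)
        rho_de = (1 + z) ** (3 * (1 + w0 - w1)) * np.exp(3 * w1 * z)
        return np.sqrt(Om * (1 + z) ** 3 + (1 - Om) * rho_de)

    def lum_dist(z, Om=0.3, w0=-1.0, w1=0.0, H0=70.0):
        c = 299792.458                                   # km/s
        I, _ = quad(lambda zp: 1.0 / E(zp, Om, w0, w1), 0.0, z)
        return (1 + z) * c / H0 * I                      # Mpc
    ```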

  5. THE ENVIRONMENT AND SUSCEPTIBILITY TO SCHIZOPHRENIA

    PubMed Central

    Brown, Alan S.

    2010-01-01

    In the present article the putative role of environmental factors in schizophrenia is reviewed and synthesized. Accumulating evidence from recent studies suggests that environmental exposures may play a more significant role in the etiopathogenesis of this disorder than previously thought. This expanding knowledge base is largely a consequence of refinements in the methodology of epidemiologic studies, including birth cohort investigations, and in preclinical research that has been inspired by the evolving literature on animal models of environmental exposures. The bulk of evidence supports a contribution of environmental factors acting during fetal and perinatal life; these include infections, nutritional deficiencies, paternal age, fetal/neonatal hypoxic insults, maternal stress and other exposures. A considerable amount of data supports cannabis use in adolescence, migration, unfavorable neighborhood environments, and possibly infections at different points in the lifespan as risk factors for schizophrenia. Animal models have yielded evidence suggesting that these exposures cause brain and behavioral phenotypes that are analogous to findings observed in patients with schizophrenia. It is suggested that future studies attempt to replicate these findings, identify new risk factors, explore the gestational specificity of environmental insults, elaborate developmental trajectories, and examine relationships between environmental exposures and structural and functional brain anomalies in schizophrenia patients. Future research on gene-environment interactions and epigenetic effects of environmental exposures should shed further light on genes and exposures that may not be identified in the absence of these integrated approaches. Moreover, translational studies should further facilitate the discovery of neurodevelopmental mechanisms that increase susceptibility to schizophrenia. The study of environmental factors in schizophrenia may have important implications for the prevention of this disorder, and offers the potential to complement, and refine, existing efforts on explanatory neurodevelopmental models. PMID:955757

  6. Extending data worth methods to select multiple observations targeting specific hydrological predictions of interest

    NASA Astrophysics Data System (ADS)

    Vilhelmsen, Troels N.; Ferré, Ty P. A.

    2016-04-01

    Hydrological models are often developed to forecast future behavior in response to natural or human-induced changes in the stresses affecting hydrologic systems. Commonly, these models are conceptualized and calibrated based on existing data and information about the hydrological conditions. However, most hydrologic systems lack sufficient data to constrain models with adequate certainty to support robust decision making. Therefore, a key element of a hydrologic study is the selection of additional data to improve model performance. Given the nature of hydrologic investigations, it is not practical to select data sequentially, i.e. to choose the next observation, collect it, refine the model, and then repeat the process. Rather, for timing and financial reasons, measurement campaigns include multiple wells or sampling points. There is a growing body of literature aimed at defining the expected data worth based on existing models. However, these studies are almost all limited to identifying single additional observations. In this study, we present a methodology for simultaneously selecting multiple potential new observations based on their expected ability to reduce the uncertainty of the forecasts of interest. This methodology is based on linear estimates of the predictive uncertainty, and it can be used to determine the optimal combinations of measurements (location and number) established to reduce the uncertainty of multiple predictions. The outcome of the analysis is an estimate of the optimal sampling locations, the optimal number of samples, and a probability map showing the locations within the investigated area that are most likely to provide useful information about the forecasts of interest.
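    A generic sketch of the linear data-worth idea: greedily choose the candidate observations that most reduce first-order predictive variance via a rank-one Bayesian update. The authors' implementation details differ; this is an illustration of the principle.

    ```python
    import numpy as np

    # Generic data-worth sketch: greedily pick candidate observations that most
    # reduce the first-order variance of a scalar prediction s^T p. J holds the
    # sensitivities of each candidate observation to the parameters p, P0 is
    # the prior parameter covariance, r the observation noise variance.
    def greedy_select(J, s, P0, r, n_pick):
        chosen, P = [], P0.copy()
        for _ in range(n_pick):
            best_i, best_var, best_P = None, np.inf, None
            for i in range(J.shape[0]):
                if i in chosen:
                    continue
                h = J[i:i + 1]                      # 1 x n_par sensitivity row
                gain = P @ h.T / (h @ P @ h.T + r)  # rank-one Bayesian update
                P_new = P - gain @ (h @ P)
                var = float(s @ P_new @ s)          # predictive variance
                if var < best_var:
                    best_i, best_var, best_P = i, var, P_new
            chosen.append(best_i)
            P = best_P
        return chosen
    ```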

  7. Evaluation of MODFLOW-LGR in connection with a synthetic regional-scale model

    USGS Publications Warehouse

    Vilhelmsen, T.N.; Christensen, S.; Mehl, S.W.

    2012-01-01

    This work studies costs and benefits of utilizing local-grid refinement (LGR) as implemented in MODFLOW-LGR to simulate groundwater flow in a buried tunnel valley interacting with a regional aquifer. Two alternative LGR methods were used: the shared-node (SN) method and the ghost-node (GN) method. To conserve flows the SN method requires correction of sources and sinks in cells at the refined/coarse-grid interface. We found that the optimal correction method is case dependent and difficult to identify in practice. However, the results showed little difference and suggest that identifying the optimal method was of minor importance in our case. The GN method does not require corrections at the models' interface, and it uses a simpler head interpolation scheme than the SN method. The simpler scheme is faster but less accurate so that more iterations may be necessary. However, the GN method solved our flow problem more efficiently than the SN method. The MODFLOW-LGR results were compared with the results obtained using a globally coarse (GC) grid. The LGR simulations required one to two orders of magnitude longer run times than the GC model. However, the improvements of the numerical resolution around the buried valley substantially increased the accuracy of simulated heads and flows compared with the GC simulation. Accuracy further increased locally around the valley flanks when improving the geological resolution using the refined grid. Finally, comparing MODFLOW-LGR simulation with a globally refined (GR) grid showed that the refinement proportion of the model should not exceed 10% to 15% in order to secure method efficiency. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.

  8. Modelling dynamics in protein crystal structures by ensemble refinement

    PubMed Central

    Burnley, B Tom; Afonine, Pavel V; Adams, Paul D; Gros, Piet

    2012-01-01

    Single-structure models derived from X-ray data do not adequately account for the inherent, functionally important dynamics of protein molecules. We generated ensembles of structures by time-averaged refinement, where local molecular vibrations were sampled by molecular-dynamics (MD) simulation whilst global disorder was partitioned into an underlying overall translation–libration–screw (TLS) model. Modeling of 20 protein datasets at 1.1–3.1 Å resolution reduced cross-validated Rfree values by 0.3–4.9%, indicating that ensemble models fit the X-ray data better than single structures. The ensembles revealed that, while most proteins display a well-ordered core, some proteins exhibit a ‘molten core’ likely supporting functionally important dynamics in ligand binding, enzyme activity and protomer assembly. Order–disorder changes in HIV protease indicate a mechanism of entropy compensation for ordering the catalytic residues upon ligand binding by disordering specific core residues. Thus, ensemble refinement extracts dynamical details from the X-ray data that allow a more comprehensive understanding of structure–dynamics–function relationships. DOI: http://dx.doi.org/10.7554/eLife.00311.001 PMID:23251785

  9. Exploring the safe and just operating space in an inhomogeneous world

    NASA Astrophysics Data System (ADS)

    Barfuss, Wolfram; Beronov, Boyan; Wiedermann, Marc; Donges, Jonathan

    2015-04-01

    The Anthropocene has become reality during the 20th century, implying that our species is pressuring the Earth's ecosystems on a global scale. In the meantime, the challenge of eradicating poverty has not ceased to exist. Effectively dealing with these issues requires us to better understand the driving forces, feedback loops, and tipping elements in the whole Earth system, constituted by natural and social components. To take a step forward in this direction, we refine an existing conceptual coevolutionary model of social and ecological domains (COPAN:EXPLOIT) by introducing inhomogeneities in the properties of local renewable resource stocks that are abstracted from real-world data. We then propose an analytical framework, the 'safe and just space' plot, which aligns with the current debate on how to stay within planetary boundaries (Rockström et al., 2009) while ensuring that social foundations are met (Raworth, 2012). This plot presents a practical tool for jointly studying global socio-ecological models as well as real-world observations. First results from comparing the model outputs with real-world data indicate that the current state of the world is neither particularly safe nor particularly just. References: Rockström, Johan, et al. "A safe operating space for humanity." Nature 461.7263 (2009): 472-475. Raworth, Kate. "A safe and just space for humanity: can we live within the doughnut?" Oxfam Discussion Papers (2012): 1-26.

  10. Animal Models in Forensic Science Research: Justified Use or Ethical Exploitation?

    PubMed

    Mole, Calvin Gerald; Heyns, Marise

    2018-05-01

    A moral dilemma exists in biomedical research relating to the use of animal or human tissue when conducting scientific research. In human ethics, researchers need to justify why the use of humans is necessary should suitable models exist. Conversely, in animal ethics, a researcher must justify why research cannot be carried out on suitable alternatives. In the case of medical procedures or therapeutics testing, the use of animal models is often justified. However, in forensic research, the justification may be less evident, particularly when research involves the infliction of trauma on living animals. To determine how the forensic science community is dealing with this dilemma, a review of literature within major forensic science journals was conducted. The frequency and trends of the use of animals in forensic science research were investigated for the period 1 January 2012-31 December 2016. The review revealed 204 original articles utilizing 5050 animals in various forms as analogues for human tissue. The most common specimens utilized were various species of rats (35.3%), pigs (29.3%), mice (17.7%), and rabbits (8.2%), although different specimens were favored in different study themes. The majority of studies (58%) were conducted on post-mortem specimens. It is, however, evident that more needs to be done to uphold the basic ethical principles of reduction, refinement and replacement in the use of animals for research purposes.

  11. Princeton_TIGRESS 2.0: High refinement consistency and net gains through support vector machines and molecular dynamics in double-blind predictions during the CASP11 experiment.

    PubMed

    Khoury, George A; Smadbeck, James; Kieslich, Chris A; Koskosidis, Alexandra J; Guzman, Yannis A; Tamamis, Phanourios; Floudas, Christodoulos A

    2017-06-01

    Protein structure refinement is the challenging problem of operating on any protein structure prediction to improve its accuracy with respect to the native structure in a blind fashion. Although many approaches have been developed and tested during the last four CASP experiments, a majority of the methods continue to degrade models rather than improve them. Princeton_TIGRESS (Khoury et al., Proteins 2014;82:794-814) was developed previously and utilizes separate sampling and selection stages involving Monte Carlo and molecular dynamics simulations and classification using an SVM predictor. The initial implementation was shown to consistently refine protein structures 76% of the time in our own internal benchmarking on CASP 7-10 targets. In this work, we improved the sampling and selection stages and tested the method in blind predictions during CASP11. We added a decomposition of physics-based and hybrid energy functions, as well as a coordinate-free representation of the protein structure through distance-binning Cα-Cα distances to capture fine-grained movements. We performed parameter estimation to optimize the adjustable SVM parameters to maximize precision while balancing sensitivity and specificity across all cross-validated data sets, finding enrichment in our ability to select models from the populations of similar decoys generated for targets in CASPs 7-10. The MD stage was enhanced such that larger structures could be further refined. Among refinement methods that are currently implemented as web-servers, Princeton_TIGRESS 2.0 demonstrated the most consistent and most substantial net refinement in blind predictions during CASP11. The enhanced refinement protocol Princeton_TIGRESS 2.0 is freely available as a web server at http://atlas.engr.tamu.edu/refinement/. Proteins 2017; 85:1078-1098. © 2017 Wiley Periodicals, Inc.
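    A generic sketch of an SVM selection stage of this kind, using scikit-learn; the feature files and labels below are hypothetical placeholders, not the actual Princeton_TIGRESS inputs.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Sketch of an SVM-based selection stage (hypothetical features and files):
    # classify refined decoys as "improved" vs "degraded" relative to the
    # starting model, then keep only decoys labelled as improved.
    X = np.load("decoy_features.npy")   # rows: decoys, cols: energy terms etc.
    y = np.load("decoy_labels.npy")     # 1 = improved over start, 0 = degraded

    clf = make_pipeline(StandardScaler(), SVC(C=10.0, gamma="scale"))
    clf.fit(X, y)

    X_new = np.load("new_decoy_features.npy")
    keep = clf.predict(X_new) == 1      # candidate refined models to return
    ```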

  12. A Dialogic Inquiry Approach to Working with Teachers in Developing Classroom Dialogue

    ERIC Educational Resources Information Center

    Hennessy, Sara; Mercer, Neil; Warwick, Paul

    2011-01-01

    Background/Context: This article describes how we refined an innovative methodology for equitable collaboration between university researchers and classroom practitioners building and refining theory together. The work builds on other coinquiry models in which complementary professional expertise is respected and deliberately exploited in order to…

  13. REFINING FIRE EMISSIONS FOR AIR QUALITY MODELING WITH REMOTELY-SENSED FIRE COUNTS: A WILDFIRE CASE STUDY

    EPA Science Inventory

    This paper examines the use of Moderate Resolution Imaging Spectroradiometer (MODIS) observed active fire data (pixel counts) to refine the National Emissions Inventory (NEI) fire emission estimates for major wildfire events. This study was motivated by the extremely limited info...

  14. Global and Local Existence for the Dissipative Critical SQG Equation with Small Oscillations

    NASA Astrophysics Data System (ADS)

    Lazar, Omar

    2015-09-01

    This article is devoted to the study of the critical dissipative surface quasi-geostrophic (SQG) equation in . For any initial data belonging to the space , we show that the critical SQG equation has at least one global weak solution in time for all 1/4 ≤ s ≤ 1/2 and at least one local weak solution in time for all 0 < s < 1/4. The proof for the global existence is based on a new energy inequality which improves the one obtained in Lazar (Commun Math Phys 322:73-93, 2013), whereas the local existence uses more refined energy estimates based on Besov space techniques.
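    For reference, the standard form of the critical dissipative SQG equation studied in such works (the specific function spaces are elided in this record and can be found in the original article):

    ```latex
    % Critical dissipative SQG equation in standard form: the active scalar
    % \theta is advected by a velocity obtained from \theta via the Riesz
    % transforms, with critical dissipation \Lambda = (-\Delta)^{1/2}.
    \partial_t \theta + u \cdot \nabla \theta + \Lambda \theta = 0, \qquad
    u = \nabla^{\perp} \Lambda^{-1} \theta = \mathcal{R}^{\perp} \theta .
    ```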

  15. Adaptive mesh refinement and front-tracking for shear bands in an antiplane shear model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garaizar, F.X.; Trangenstein, J.

    1998-09-01

    In this paper the authors describe a numerical algorithm for the study of shear-band formation and growth in a two-dimensional antiplane shear of granular materials. The algorithm combines front-tracking techniques and adaptive mesh refinement. Tracking provides a more careful evolution of the band when coupled with special techniques to advance the ends of the shear band in the presence of a loss of hyperbolicity. The adaptive mesh refinement allows the computational effort to be concentrated in important areas of the deformation, such as the shear band and the elastic relief wave. The main challenges are the problems related to shear bands that extend across several grid patches and the effects that a nonhyperbolic growth rate of the shear bands has on the refinement process. They give examples of the success of the algorithm for various levels of refinement.

  16. Meshfree truncated hierarchical refinement for isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Atri, H. R.; Shojaee, S.

    2018-05-01

    In this paper, the truncated hierarchical B-spline (THB-spline) is coupled with the reproducing kernel particle method (RKPM) to blend the advantages of isogeometric analysis and meshfree methods. Since, under certain conditions, the isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, the recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can be easily defined, which provides a genuinely meshfree approach to refining the model locally in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method can provide efficient approximation schemes for numerical simulations and shows promising performance in the adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach to adaptive local refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.

  17. Determination of the optimal level for combining area and yield estimates

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Hixson, M. M.; Jobusch, C. D.

    1981-01-01

    Several levels of obtaining both area and yield estimates of corn and soybeans in Iowa were considered: county, refined strata, refined/split strata, crop reporting district (CRD), and state. Using the CCEA model form and smoothed weather data, regression coefficients at each level were derived to compute yield and its variance. Variances were also computed at the stratum level. The variance of the yield estimates was largest at the state level and smallest at the county level for both crops. The refined strata had somewhat larger variances than those associated with the refined/split strata and the CRD. For production estimates, the difference in standard deviations among levels was not large for corn, but for soybeans the standard deviation at the state level was more than 50% greater than for the other levels. The refined strata had the smallest standard deviations. The county level was not considered in the evaluation of production estimates due to the lack of county area variances.
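    A minimal sketch of the level-wise regression computation, fitting yield coefficients and propagating their covariance into a prediction variance; this is illustrative, not the CCEA code.

    ```python
    import numpy as np

    # Sketch: fit yield = X @ beta at one aggregation level and propagate the
    # coefficient covariance into the variance of a predicted yield.
    def fit_level(X, y):
        """X: n_obs x n_pred weather/trend matrix; y: observed yields."""
        beta, res, rank, _ = np.linalg.lstsq(X, y, rcond=None)
        dof = len(y) - rank
        sigma2 = float(res[0]) / dof if (res.size and dof > 0) else 0.0
        cov = sigma2 * np.linalg.inv(X.T @ X)      # coefficient covariance
        return beta, cov, sigma2

    def predict(x_new, beta, cov, sigma2):
        yhat = x_new @ beta
        var = x_new @ cov @ x_new + sigma2         # prediction variance
        return yhat, var
    ```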

  18. i3Drefine software for protein 3D structure refinement and its assessment in CASP10.

    PubMed

    Bhattacharya, Debswapna; Cheng, Jianlin

    2013-01-01

    Protein structure refinement refers to the process of improving the quality of protein structures during structure modeling to bring them closer to their native states. Structure refinement has been drawing increasing attention in the community-wide Critical Assessment of techniques for Protein Structure prediction (CASP) experiments since its addition in the 8th CASP experiment. During the 9th and recently concluded 10th CASP experiments, a consistent growth in the number of refinement targets and participating groups has been witnessed. Yet protein structure refinement remains a largely unsolved problem, with the majority of participating groups in the CASP refinement category failing to consistently improve the quality of structures issued for refinement. To address this need, we developed a completely automated and computationally efficient protein 3D structure refinement method, i3Drefine, based on an iterative and highly convergent energy minimization algorithm with a powerful all-atom composite physics- and knowledge-based force field and a hydrogen-bonding (HB) network optimization technique. In the recent community-wide blind experiment, CASP10, i3Drefine (as 'MULTICOM-CONSTRUCT') was ranked the best method in the server section in the official assessment of the CASP10 experiment. Here we provide the community with free access to the i3Drefine software, systematically analyse the performance of i3Drefine in strict blind mode on the refinement targets issued in the CASP10 refinement category, and compare it with other state-of-the-art refinement methods participating in CASP10. Our analysis demonstrates that i3Drefine was the only fully automated server participating in CASP10 that exhibited consistent improvement over the initial structures in both global and local structural quality metrics. An executable version of i3Drefine is freely available at http://protein.rnet.missouri.edu/i3drefine/.

  19. Application of multivariate analysis and mass transfer principles for refinement of a 3-L bioreactor scale-down model--when shake flasks mimic 15,000-L bioreactors better.

    PubMed

    Ahuja, Sanjeev; Jain, Shilpa; Ram, Kripa

    2015-01-01

    Characterization of manufacturing processes is key to understanding the effects of process parameters on process performance and product quality. These studies are generally conducted using small-scale model systems. Because of the importance of the results derived from these studies, the small-scale model should be predictive of large scale. Typically, small-scale bioreactors, which are considered superior to shake flasks in simulating large-scale bioreactors, are used as the scale-down models for characterizing mammalian cell culture processes. In this article, we describe a case study in which a cell culture unit operation run in bioreactors with one-sided pH control, together with its satellites (small-scale runs conducted using the same post-inoculation cultures and nutrient feeds) in 3-L bioreactors and shake flasks, indicated that shake flasks mimicked the large-scale performance better than 3-L bioreactors did. We detail here how multivariate analysis was used to make the pertinent assessment and to generate the hypothesis for refining the existing 3-L scale-down model. Relevant statistical techniques such as principal component analysis, partial least squares, orthogonal partial least squares, and discriminant analysis were used to identify the outliers and to determine the discriminatory variables responsible for performance differences at different scales. The resulting analysis, in combination with mass transfer principles, led to the hypothesis that the observed similarities between 15,000-L and shake flask runs, and the differences between 15,000-L and 3-L runs, were due to pCO2 and pH values. This hypothesis was confirmed by changing the aeration strategy at the 3-L scale. By reducing the initial sparge rate in the 3-L bioreactor, process performance and product quality data moved closer to those of the large scale. © 2015 American Institute of Chemical Engineers.
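    A hedged sketch of the kind of multivariate comparison described, projecting runs from each scale into a common principal-component space; the data layout and file names are hypothetical.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Sketch (hypothetical data layout): project runs from each scale into a
    # shared PCA space to see which scales cluster together.
    X = np.load("process_data.npy")        # rows: runs; cols: process variables
    scale = np.load("run_scale.npy")       # e.g. "15000L", "3L", "flask" per run

    X_std = (X - X.mean(axis=0)) / X.std(axis=0)
    scores = PCA(n_components=2).fit_transform(X_std)

    for s in np.unique(scale):
        centre = scores[scale == s].mean(axis=0)
        print(s, centre)                   # nearby centres suggest similar scales
    ```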

  20. Lagrangian Modeling of Evaporating Sprays at Diesel Engine Conditions: Effects of Multi-Hole Injector Nozzles With JP-8 Surrogates

    DTIC Science & Technology

    2014-05-01

    solver to treat the spray process. An Adaptive Mesh Refinement (AMR) and fixed embedding technique is employed to capture the gas-liquid interface with high fidelity while keeping the cell...in single and multi-hole nozzle configurations. The models were added to the present CONVERGE liquid fuel database and validated extensively

  1. BPS States, Crystals, and Matrices

    DOE PAGES

    Sułkowski, Piotr

    2011-01-01

    We review free fermion, melting crystal, and matrix model representations of wall-crossing phenomena on local, toric Calabi-Yau manifolds. We consider both unrefined and refined BPS counting of closed BPS states involving D2- and D0-branes bound to a D6-brane, as well as open BPS states involving open D2-branes ending on an additional D4-brane. An appropriate limit of these constructions provides, among others, a matrix model representation of refined and unrefined topological string amplitudes.

  2. Sensitivity and Limitations of Structures from X-ray and Neutron-Based Diffraction Analyses of Transition Metal Oxide Lithium-Battery Electrodes

    DOE PAGES

    Liu, Hao; Liu, Haodong; Lapidus, Saul H.; ...

    2017-06-21

    Lithium transition metal oxides are an important class of electrode materials for lithium-ion batteries. Binary or ternary (transition) metal doping brings about new opportunities to improve the electrode's performance and often leads to more complex stoichiometries and atomic structures than the archetypal LiCoO2. Rietveld structural analysis of X-ray and neutron diffraction data is a widely used approach for structural characterization of crystalline materials. However, different structural models and refinement approaches can lead to differing results, and some parameters can be difficult to quantify due to the inherent limitations of the data. Here, through the example of LiNi0.8Co0.15Al0.05O2 (NCA), we demonstrate the sensitivity of various structural parameters in Rietveld structural analysis to different refinement approaches and structural models, and propose an approach to reduce refinement uncertainties due to the inexact X-ray scattering factors of the constituent atoms within the lattice. Furthermore, this refinement approach was implemented for electrochemically cycled NCA samples and yielded accurate structural parameters using only X-ray diffraction data. The present work provides best practices for performing structural refinement of lithium transition metal oxides.

  3. Analyzing the Adaptive Mesh Refinement (AMR) Characteristics of a High-Order 2D Cubed-Sphere Shallow-Water Model

    DOE PAGES

    Ferguson, Jared O.; Jablonowski, Christiane; Johansen, Hans; ...

    2016-11-09

    Adaptive mesh refinement (AMR) is a technique that has been featured only sporadically in atmospheric science literature. This study aims to demonstrate the utility of AMR for simulating atmospheric flows. Several test cases are implemented in a 2D shallow-water model on the sphere using the Chombo-AMR dynamical core. This high-order finite-volume model implements adaptive refinement in both space and time on a cubed-sphere grid using a mapped-multiblock mesh technique. The tests consist of the passive advection of a tracer around moving vortices, a steady-state geostrophic flow, an unsteady solid-body rotation, a gravity wave impinging on a mountain, and the interaction of binary vortices. Both static and dynamic refinements are analyzed to determine the strengths and weaknesses of AMR in both complex flows with small-scale features and large-scale smooth flows. The different test cases required different AMR criteria, such as vorticity- or height-gradient-based thresholds, in order to achieve the best accuracy for cost. The simulations show that the model can accurately resolve key local features without requiring global high-resolution grids. The adaptive grids are able to track features of interest reliably without inducing noise or visible distortions at the coarse–fine interfaces. Furthermore, the AMR grids keep any degradation of the large-scale smooth flows to a minimum.
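    A minimal sketch of a vorticity-based refinement criterion like those described above; this is illustrative Python, whereas Chombo applies such logic inside its own C++ AMR machinery.

    ```python
    import numpy as np

    # Sketch: flag cells where relative vorticity exceeds a threshold, then
    # grow the flagged region by one cell so refined patches buffer the feature.
    def flag_cells(u, v, dx, dy, threshold):
        # relative vorticity zeta = dv/dx - du/dy (axis 0 = x, axis 1 = y)
        zeta = np.gradient(v, dx, axis=0) - np.gradient(u, dy, axis=1)
        flags = np.abs(zeta) > threshold
        grown = flags.copy()
        grown[1:, :] |= flags[:-1, :]; grown[:-1, :] |= flags[1:, :]
        grown[:, 1:] |= flags[:, :-1]; grown[:, :-1] |= flags[:, 1:]
        return grown
    ```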

  4. Designing and Undertaking a Health Economics Study of Digital Health Interventions.

    PubMed

    McNamee, Paul; Murray, Elizabeth; Kelly, Michael P; Bojke, Laura; Chilcott, Jim; Fischer, Alastair; West, Robert; Yardley, Lucy

    2016-11-01

    This paper introduces and discusses key issues in the economic evaluation of digital health interventions. The purpose is to stimulate debate so that existing economic techniques may be refined or new methods developed. The paper does not seek to provide definitive guidance on appropriate methods of economic analysis for digital health interventions. This paper describes existing guides and analytic frameworks that have been suggested for the economic evaluation of healthcare interventions. Using selected examples of digital health interventions, it assesses how well existing guides and frameworks align to digital health interventions. It shows that digital health interventions may be best characterized as complex interventions in complex systems. Key features of complexity relate to intervention complexity, outcome complexity, and causal pathway complexity, with much of this driven by iterative intervention development over time and uncertainty regarding likely reach of the interventions among the relevant population. These characteristics imply that more-complex methods of economic evaluation are likely to be better able to capture fully the impact of the intervention on costs and benefits over the appropriate time horizon. This complexity includes wider measurement of costs and benefits, and a modeling framework that is able to capture dynamic interactions among the intervention, the population of interest, and the environment. The authors recommend that future research should develop and apply more-flexible modeling techniques to allow better prediction of the interdependency between interventions and important environmental influences. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  5. A quality-refinement process for medical imaging applications.

    PubMed

    Neuhaus, J; Maleike, D; Nolden, M; Kenngott, H-G; Meinzer, H-P; Wolf, I

    2009-01-01

    To introduce and evaluate a process for refining software quality that is suitable for research groups. In order not to constrain researchers too much, the quality improvement process has to be designed carefully. The scope of this paper is to present and evaluate a process to advance quality aspects of existing research prototypes in order to make them ready for initial clinical studies. The proposed process is tailored to research environments and is therefore more lightweight than traditional quality management processes. It focuses on quality criteria that are important at the given stage of the software life cycle and emphasizes tools that automate aspects of the process. To evaluate the additional effort that comes with the process, it was applied, as an example, to eight prototype software modules for medical image processing. The process was applied to improve the quality of all prototypes so that they could be successfully used in clinical studies. The quality refinement required an average of 13 person-days of additional effort per project. Overall, 107 bugs were found and resolved by applying the process. Careful selection of quality criteria and the use of automated process tools lead to a lightweight quality-refinement process suitable for scientific research groups that can be applied to ensure a successful transfer of technical software prototypes into clinical research workflows.

  6. Exploring newly qualified doctors' workplace stressors: an interview study from Australia

    PubMed Central

    Tallentire, Victoria R; Smith, Samantha E; Facey, Adam D; Rotstein, Laila

    2017-01-01

    Purpose: Postgraduate year 1 (PGY1) doctors suffer from high levels of psychological distress, yet the contributory factors are poorly understood. This study used an existing model of workplace stress to explore the elements most pertinent to PGY1 doctors. In turn, the data were used to amend and refine the conceptual model to better reflect the unique experiences of PGY1 doctors. Method: Focus groups were undertaken with PGY1 doctors working at four different health services in Victoria, Australia. Transcripts were coded using Michie's model of workplace stress as the initial coding template. Remaining text was coded inductively, and the supplementary codes were used to modify and amplify Michie's framework. Results: There were 37 participants in total. Key themes included stressors intrinsic to the job, such as work overload and long hours, as well as those related to the context of work, such as lack of role clarity and relationships with colleagues. The main modification to Michie's framework was the addition of the theme of uncertainty. This concept related to most of the pre-existing themes in complex ways, culminating in an overall sense of anxiety. Conclusions: Michie's model of workplace stress can be effectively used to explore the stressors experienced by PGY1 doctors. Pervasive uncertainty may help to explain the high levels of psychological morbidity in this group. While some uncertainty will always remain, the medical education community must seek ways to improve role clarity and promote mutual respect. PMID:28801411

  7. Nonlinear Multiobjective MPC-Based Optimal Operation of a High Consistency Refining System in Papermaking

    DOE PAGES

    Li, Mingjie; Zhou, Ping; Wang, Hong; ...

    2017-09-19

    As one of the most important units in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, saving energy, and reducing emissions in its operation. In this correspondence, optimal operation of the HC refining system is presented using nonlinear multiobjective model predictive control strategies that address a pulp-quality set-point tracking objective, an economic objective, and a specific energy (SE) consumption objective, respectively. First, a set of input and output data at different times is employed to construct the subprocess model of the state process model for the HC refining system; the Wiener-type model is then obtained by combining the mechanism model of Canadian Standard Freeness with the state process model, whose structure is determined using the Akaike information criterion. Second, a multiobjective optimization strategy that simultaneously optimizes the pulp-quality set-point tracking objective and SE consumption is proposed, using the NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting the pulp-quality set-point tracking objective, the economic objective, and the SE consumption objective, the sequential quadratic programming (SQP) method is utilized to produce the optimal predictive controllers. The simulation results demonstrate that the proposed methods give the HC refining system better pulp-quality set-point tracking performance when these predictive controllers are employed. In addition, when the optimal predictive controllers are oriented toward the comprehensive economic and SE consumption objectives, they significantly reduce energy consumption.
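    A hedged sketch of the SQP-based tracking step using scipy's SLSQP solver; the plant model below is a simple first-order placeholder, not the paper's Wiener model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Sketch: choose control moves u over a horizon to track a pulp-quality
    # set-point subject to input bounds; the plant model is a placeholder.
    def predict_quality(u, y0):
        y, ys = y0, []
        for uk in u:
            y = 0.8 * y + 0.2 * uk        # stand-in first-order response
            ys.append(y)
        return np.array(ys)

    def tracking_cost(u, y0, setpoint, rho=0.01):
        y = predict_quality(u, y0)
        du = np.diff(np.concatenate(([u[0]], u)))   # penalize move size
        return np.sum((y - setpoint) ** 2) + rho * np.sum(du ** 2)

    horizon, y0, sp = 10, 0.2, 1.0
    res = minimize(tracking_cost, x0=np.full(horizon, y0),
                   args=(y0, sp), method="SLSQP",
                   bounds=[(0.0, 2.0)] * horizon)
    u_opt = res.x                         # optimal control sequence
    ```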

  8. Revisiting a many-body model for water based on a single polarizable site: from gas phase clusters to liquid and air/liquid water systems.

    PubMed

    Réal, Florent; Vallet, Valérie; Flament, Jean-Pierre; Masella, Michel

    2013-09-21

    We present a revised version of the water many-body model TCPE [M. Masella and J.-P. Flament, J. Chem. Phys. 107, 9105 (1997)], which is based on three static charge sites and a single polarizable site to model the molecular electrostatic properties of water, and on an anisotropic short-range many-body energy term specially designed to accurately model hydrogen bonding in water. The parameters of the revised model, denoted TCPE/2013, are developed here to reproduce the ab initio energetic and geometrical properties of small water clusters (up to hexamers) and the repulsive water interactions occurring in cation first hydration shells. The model parameters have also been refined to reproduce two liquid water properties at ambient conditions: the density and the vaporization enthalpy. Thanks to its computational efficiency, the new model's range of applicability was validated by performing simulations of liquid water over a wide range of temperatures and pressures, as well as by investigating water liquid/vapor interfaces over a large range of temperatures. It is shown to reproduce several important water properties with sufficient accuracy, such as the existence of liquid water density maxima up to a pressure of 1000 atm, the water boiling temperature, the properties of the water critical point (temperature, pressure, and density), and the existence of a "singularity" temperature at about 225 K in the supercooled regime. This model thus appears to be particularly well suited for characterizing ion hydration properties under different temperature and pressure conditions, as well as in different phases and interfaces.
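    The vaporization-enthalpy target used in such parameterizations follows a standard relation; here is a minimal sketch with hypothetical energies, not the TCPE fitting code.

    ```python
    # Standard relation: for mean per-molecule potential energies from gas- and
    # liquid-phase simulations, dH_vap ~ <E_gas> - <E_liq> + RT.
    R = 8.314462618e-3        # kJ/(mol K)
    T = 298.15                # K
    E_gas = 0.0               # gas-phase potential energy per molecule (kJ/mol)
    E_liq = -41.5             # hypothetical liquid energy per molecule (kJ/mol)

    dH_vap = E_gas - E_liq + R * T
    print(f"dH_vap ~ {dH_vap:.1f} kJ/mol")   # experiment: ~44.0 kJ/mol at 298 K
    ```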

  9. Model Refinement and Simulation of Groundwater Flow in Clinton, Eaton, and Ingham Counties, Michigan

    USGS Publications Warehouse

    Luukkonen, Carol L.

    2010-01-01

    A groundwater-flow model that was constructed in 1996 of the Saginaw aquifer was refined to better represent the regional hydrologic system in the Tri-County region, which consists of Clinton, Eaton, and Ingham Counties, Michigan. With increasing demand for groundwater, the need to manage withdrawals from the Saginaw aquifer has become more important, and the 1996 model could not adequately address issues of water quality and quantity. An updated model was needed to better address potential effects of drought, locally high water demands, reduction of recharge by impervious surfaces, and issues affecting water quality, such as contaminant sources, on water resources and the selection of pumping rates and locations. The refinement of the groundwater-flow model allows simulations to address these issues of water quantity and quality and provides communities with a tool that will enable them to better plan for expansion and protection of their groundwater-supply systems. Model refinement included representation of the system under steady-state and transient conditions, adjustments to the estimated regional groundwater-recharge rates to account for both temporal and spatial differences, adjustments to the representation and hydraulic characteristics of the glacial deposits and Saginaw Formation, and updates to groundwater-withdrawal rates to reflect changes from the early 1900s to 2005. Simulations included steady-state conditions (in which stresses remained constant and changes in storage were not included) and transient conditions (in which stresses changed in annual and monthly time scales and changes in storage within the system were included). These simulations included investigation of the potential effects of reduced recharge due to impervious areas or to low-rainfall/drought conditions, delineation of contributing areas with recent pumping rates, and optimization of pumping subject to various quantity and quality constraints. Simulation results indicate potential declines in water levels in both the upper glacial aquifer and the upper sandstone bedrock aquifer under steady-state and transient conditions when recharge was reduced by 20 and 50 percent in urban areas. Transient simulations were done to investigate reduced recharge due to low rainfall and increased pumping to meet anticipated future demand with 24 months (2 years) of modified recharge or modified recharge and pumping rates. During these two simulation years, monthly recharge rates were reduced by about 30 percent, and monthly withdrawal rates for Lansing area production wells were increased by 15 percent. The reduction in the amount of water available to recharge the groundwater system affects the upper model layers representing the glacial aquifers more than the deeper bedrock layers. However, with a reduction in recharge and an increase in withdrawals from the bedrock aquifer, water levels in the bedrock layers are affected more than those in the glacial layers. Differences in water levels between simulations with reduced recharge and reduced recharge with increased pumping are greatest in the Lansing area and least away from pumping centers, as expected. Additionally, the increases in pumping rates had minimal effect on most simulated streamflows. Additional simulations included updating the estimated 10-year wellhead-contributing areas for selected Lansing-area wells under 2006-7 pumping conditions. 
Optimization of groundwater withdrawals with a water-resource management model was done to determine withdrawal rates while minimizing operational costs and to determine withdrawal locations to achieve additional capacity while meeting specified head constraints. In these optimization scenarios, the desired groundwater withdrawals are achieved by simulating managed wells (where pumping rates can be optimized) and unmanaged wells (where pumping rates are not optimized) and by using various combinations of existing and proposed well locations.
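    A generic sketch of the management-model idea as a linear program using a response-matrix formulation; the coefficients are hypothetical, not taken from the USGS model.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Sketch: maximize total withdrawal from managed wells subject to drawdown
    # limits at control points, assuming a linear response.
    # A[i, j] = drawdown at control point i per unit pumping at well j.
    A = np.array([[0.8, 0.2, 0.1],
                  [0.3, 0.7, 0.2],
                  [0.1, 0.3, 0.6]])       # hypothetical response coefficients
    s_max = np.array([5.0, 5.0, 4.0])     # allowable drawdown (m)
    q_max = 10.0                          # per-well capacity (hypothetical)

    # linprog minimizes, so negate rates to maximize total withdrawal
    res = linprog(c=-np.ones(3), A_ub=A, b_ub=s_max,
                  bounds=[(0.0, q_max)] * 3, method="highs")
    q_opt = res.x                         # optimal pumping rates
    ```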

  13. "All Abroad": Malaysians' Reasons for Seeking an Overseas-Based Doctorate

    ERIC Educational Resources Information Center

    Tagg, Brendon

    2014-01-01

    This article examines the process by which nine junior Malaysian academics came to complete doctoral degrees in non-Malaysian universities. It expands the scope and refines the focus of an existing study that considered international students' experiences in New Zealand. Part of the motivation for the current study was the researcher's recognition…

  11. 38 CFR 36.4340 - Underwriting standards, processing procedures, lender responsibility, and lender certification.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... streamlined refinance loan would not increase the principal balance outstanding on the prior existing... refinancing loans; (vii) Little or no increase in shelter expense; (viii) Military benefits; (ix) Satisfactory... continuing nature, such as tax credits for child care; and (xiii) Tax benefits of home ownership. (6) The...

  12. Elements, Principles, and Critical Inquiry for Identity-Centered Design of Online Environments

    ERIC Educational Resources Information Center

    Dudek, Jaclyn; Heiser, Rebecca

    2017-01-01

    Within higher education, a need exists for learning designs that facilitate education and support students in sharing, examining, and refining their critical identities as learners and professionals. In the past, technology-mediated identity work has focused on individual tool use or a learning setting. However, we as professional learning…

  13. Building the Wireless Campus

    ERIC Educational Resources Information Center

    Gerraughty, James F.; Shanafelt, Michael E.

    2005-01-01

    This prototype is a continuation of a series of wireless prototypes which began in August 2001 and was reported on again in August 2002. This is the final year of this prototype. This continuation allowed Saint Francis University's Center of Excellence for Remote and Medically Under-Served Areas (CERMUSA) to refine the existing WLAN for the Saint…

  14. What Does Research on Computer-Based Instruction Have to Say to the Reading Teacher?

    ERIC Educational Resources Information Center

    Balajthy, Ernest

    1987-01-01

    Examines questions typically asked about the effectiveness of computer-based reading instruction, suggesting that these questions must be refined to provide meaningful insight into the issues involved. Describes several critical problems with existing research and presents overviews of research on the effects of computer-based instruction on…

  15. The Race to Refinance Debt: Market Offers Opportunities to Reduce Interest Costs.

    ERIC Educational Resources Information Center

    DuPont, Lorrie A.

    1992-01-01

    In this interest market, colleges and universities could benefit from careful evaluation of debt portfolios. Refinancing debt is an opportunity to lower debt service costs, ease cash flow, change security pledges, eliminate debt service reserves, update bond documents. Timing is important. Existing and new bonds can also be combined…

  16. Engaging Participants without Leaving the Office: Planning and Conducting Effective Webinars

    ERIC Educational Resources Information Center

    Robinson, Julie; Poling, Mary

    2017-01-01

    The University of Arkansas System Division of Agriculture Cooperative Extension Service has been developing and refining webinar delivery practices since 2012. On the basis of a review of existing literature and our own experiences, we have established methods for necessary planning, organization of content and people, and effective delivery of…

  17. The Ethical Issues Rating Scale: An Instrument for Measuring Ethical Orientation of College Students toward Various Business Practices.

    ERIC Educational Resources Information Center

    Daniel, Larry G.; And Others

    1997-01-01

    Factor analysis of data from 213 college business students supported the existence of 5 constructs for the Ethical Issues Rating Scale, an instrument measuring respondents' assessment of the importance of various ethical issues. Suggestions about refining the instrument and using it are discussed. (SLD)
