Sample records for testing refinement simulation

  1. Testing MODFLOW-LGR for simulating flow around buried Quaternary valleys - synthetic test cases

    NASA Astrophysics Data System (ADS)

    Vilhelmsen, T. N.; Christensen, S.

    2009-12-01

In this study the Local Grid Refinement (LGR) method developed for MODFLOW-2005 (Mehl and Hill, 2005) is utilized to describe groundwater flow in areas containing buried Quaternary valley structures. The tests are conducted as a comparative analysis of simulations run with a globally refined model, a locally refined model, and a globally coarse model. The models vary from simple one-layer models to more complex ones with up to 25 model layers. The comparisons of accuracy are conducted within the locally refined area and focus on water budgets, simulated heads, and simulated particle traces. Simulations made with the globally refined model are used as reference (regarded as “true” values). As expected, for all test cases the application of local grid refinement produced more accurate results than the globally coarse model. A significant advantage of utilizing MODFLOW-LGR was that it allows an increased number of model layers to better resolve complex geology within local areas. This resulted in more accurate simulations than using either a globally coarse model grid or a locally refined model with lower geological resolution. Improved accuracy in the latter case could not be expected beforehand, because the difference in geological resolution between the coarse parent model and the refined child model contradicts the assumptions of the Darcy-weighted interpolation used in MODFLOW-LGR. With respect to model runtimes, the runtime for the locally refined model was sometimes much longer than for the globally refined model, even when the closure criteria were relaxed compared to the globally refined model. These results contradict those presented by Mehl and Hill (2005). Furthermore, in the complex cases it took some testing (model runs) to identify the closure criteria and the damping factor that ensured convergence, accurate solutions, and reasonable runtimes.
For our cases this is judged to be a serious disadvantage of applying MODFLOW-LGR. Another disadvantage in the studied cases was that the MODFLOW-LGR results proved to be somewhat dependent on the correction method used at the parent-child model interface. This indicates that when applying MODFLOW-LGR there is a need for thorough, case-specific consideration of the choice of correction method. Reference: Mehl, S., and Hill, M.C. (2005), MODFLOW-2005, the U.S. Geological Survey modular ground-water model - Documentation of shared node local grid refinement (LGR) and the boundary flow and head (BFH) package: U.S. Geological Survey Techniques and Methods 6-A12.

  2. Simulation optimization of the cathode deposit growth in a coaxial electrolyzer-refiner

    NASA Astrophysics Data System (ADS)

    Smirnov, G. B.; Fokin, A. A.; Markina, S. E.; Vakhitov, A. I.

    2015-08-01

    The results of simulation of the cathode deposit growth in a coaxial electrolyzer-refiner are presented. The sizes of the initial cathode matrix are optimized. The data obtained by simulation and full-scale tests of the precipitation of platinum from a salt melt are compared.

  3. Enhanced Representation of Turbulent Flow Phenomena in Large-Eddy Simulations of the Atmospheric Boundary Layer using Grid Refinement with Pseudo-Spectral Numerics

    NASA Astrophysics Data System (ADS)

    Torkelson, G. Q.; Stoll, R., II

    2017-12-01

    Large Eddy Simulation (LES) is a tool commonly used to study the turbulent transport of momentum, heat, and moisture in the Atmospheric Boundary Layer (ABL). For a wide range of ABL LES applications, representing the full range of turbulent length scales in the flow field is a challenge. This is an acute problem in regions of the ABL with strong velocity or scalar gradients, which are typically poorly resolved by standard computational grids (e.g., near the ground surface, in the entrainment zone). Most efforts to address this problem have focused on advanced sub-grid scale (SGS) turbulence model development, or on the use of massive computational resources. While some work exists using embedded meshes, very little has been done on the use of grid refinement. Here, we explore the benefits of grid refinement in a pseudo-spectral LES numerical code. The code utilizes both uniform refinement of the grid in horizontal directions, and stretching of the grid in the vertical direction. Combining the two techniques allows us to refine areas of the flow while maintaining an acceptable grid aspect ratio. In tests that used only refinement of the vertical grid spacing, large grid aspect ratios were found to cause a significant unphysical spike in the stream-wise velocity variance near the ground surface. This was especially problematic in simulations of stably-stratified ABL flows. The use of advanced SGS models was not sufficient to alleviate this issue. The new refinement technique is evaluated using a series of idealized simulation test cases of neutrally and stably stratified ABLs. These test cases illustrate the ability of grid refinement to increase computational efficiency without loss in the representation of statistical features of the flow field.

  4. Structure refinement of membrane proteins via molecular dynamics simulations.

    PubMed

    Dutagaci, Bercem; Heo, Lim; Feig, Michael

    2018-07-01

A refinement protocol based on physics-based techniques established for water-soluble proteins is tested on membrane protein structures. Initial structures were generated by homology modeling and sampled via molecular dynamics simulations in explicit lipid bilayer and aqueous solvent systems. Snapshots from the simulations were selected based on scoring with either knowledge-based or implicit membrane-based scoring functions and averaged to obtain refined models. The protocol resulted in consistent and significant refinement of the membrane protein structures, similar to the performance of refinement methods for soluble proteins. Refinement success was similar between sampling in the presence of lipid bilayers and in aqueous solvent, although sampling in lipid bilayers may benefit the improvement of lipid-facing residues. Scoring with knowledge-based functions (DFIRE and RWplus) was found to be as good as scoring with implicit membrane-based functions, suggesting that differences in internal packing are more important than orientation relative to the membrane during refinement of membrane protein homology models. © 2018 Wiley Periodicals, Inc.
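
The select-and-average step described above can be sketched as follows. This is a generic illustration, not the authors' protocol: the scoring convention (lower is better) and the `top_k` cutoff are assumptions, and real protocols superimpose structures before averaging.

```python
import numpy as np

def refine_by_averaging(snapshots, scores, top_k=5):
    """Average the best-scored MD snapshots into a single refined model.

    snapshots: (n_frames, n_atoms, 3) coordinate array
    scores:    (n_frames,) scores, lower assumed better (DFIRE-like)
    """
    order = np.argsort(scores)       # best (lowest) scores first
    best = snapshots[order[:top_k]]  # keep the top_k frames
    return best.mean(axis=0)         # coordinate-wise average model
```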

  5. Dynamic particle refinement in SPH: application to free surface flow and non-cohesive soil simulations

    NASA Astrophysics Data System (ADS)

    Reyes López, Yaidel; Roose, Dirk; Recarey Morfa, Carlos

    2013-05-01

In this paper, we present a dynamic refinement algorithm for the smoothed particle hydrodynamics (SPH) method. An SPH particle is refined by replacing it with smaller daughter particles, whose positions are calculated by using a square pattern centered at the position of the refined particle. We determine both the optimal separation and the smoothing distance of the new particles such that the error introduced by the refinement in the gradient of the kernel is small and possible numerical instabilities are reduced. We implemented the dynamic refinement procedure in two different models: one for free surface flows, and one for post-failure flow of non-cohesive soil. The results obtained for the test problems indicate that the dynamic refinement procedure provides a good trade-off between the accuracy and the cost of the simulations.
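
A minimal 2-D version of the square-pattern splitting step might look like the sketch below. The separation and smoothing-length ratios (`sep`, `alpha`) are illustrative placeholders, not the optimized values the authors derive; the sketch only shows the invariants (mass and center of mass are preserved).

```python
import numpy as np

def split_particle(pos, mass, h, sep=0.5, alpha=0.6):
    """Replace one SPH particle with four daughters on a square pattern.

    pos: (2,) parent position; mass, h: parent mass and smoothing distance.
    Returns daughter positions, masses, and smoothing distances.
    """
    # square of side sep*h*sqrt(2), centered at the parent position
    offsets = sep * h * np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]]) / np.sqrt(2)
    daughters = pos + offsets
    d_mass = np.full(4, mass / 4.0)  # conserve total mass
    d_h = np.full(4, alpha * h)      # smaller smoothing distance
    return daughters, d_mass, d_h
```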

  6. A Refined Computer Harassment Paradigm: Validation, and Test of Hypotheses about Target Characteristics

    ERIC Educational Resources Information Center

    Siebler, Frank; Sabelus, Saskia; Bohner, Gerd

    2008-01-01

    A refined computer paradigm for assessing sexual harassment is presented, validated, and used for testing substantive hypotheses. Male participants were given an opportunity to send sexist jokes to a computer-simulated female chat partner. In Study 1 (N = 44), the harassment measure (number of sexist jokes sent) correlated positively with…

  7. Some observations on mesh refinement schemes applied to shock wave phenomena

    NASA Technical Reports Server (NTRS)

    Quirk, James J.

    1995-01-01

This workshop's double-wedge test problem is taken from one of a sequence of experiments performed in order to classify the various canonical interactions between a planar shock wave and a double wedge. Therefore, to build up a reasonably broad picture of the performance of our mesh refinement algorithm, we have simulated three of these experiments and not just the workshop case. Here, using the results from these simulations together with their experimental counterparts, we make some general observations concerning the development of mesh refinement schemes for shock wave phenomena.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herrnstein, Aaron R.

An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model, with some components incorporated from other well-known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine-scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested, despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No dramatic or persistent signs of error growth in the passive tracer outgassing or the ocean circulation are observed to result from AMR.

  9. GENASIS: General Astrophysical Simulation System. I. Refinable Mesh and Nonrelativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.; Endeve, Eirik; Mezzacappa, Anthony

    2014-02-01

    GenASiS (General Astrophysical Simulation System) is a new code being developed initially and primarily, though by no means exclusively, for the simulation of core-collapse supernovae on the world's leading capability supercomputers. This paper—the first in a series—demonstrates a centrally refined coordinate patch suitable for gravitational collapse and documents methods for compressible nonrelativistic hydrodynamics. We benchmark the hydrodynamics capabilities of GenASiS against many standard test problems; the results illustrate the basic competence of our implementation, demonstrate the strengths and limitations of the HLLC relative to the HLL Riemann solver in a number of interesting cases, and provide preliminary indications of the code's ability to scale and to function with cell-by-cell fixed-mesh refinement.
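
The single-state HLL flux that GenASiS benchmarks against HLLC has a standard closed form, sketched generically below. The wave-speed estimates `sL` and `sR` are taken as inputs rather than computed, and the function is written for scalar states purely for illustration; the same formula applies component-wise to the Euler equations.

```python
def hll_flux(UL, UR, FL, FR, sL, sR):
    """Single-state HLL approximate Riemann flux.

    UL, UR: left/right conserved states; FL, FR: their physical fluxes;
    sL, sR: estimates of the slowest and fastest wave speeds.
    """
    if sL >= 0.0:  # all waves move right: upwind on the left state
        return FL
    if sR <= 0.0:  # all waves move left: upwind on the right state
        return FR
    # single intermediate state averaged across the Riemann fan
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)
```

HLLC extends this by restoring the contact wave that the single intermediate state smears out, which is where the abstract's HLL-vs-HLLC comparison comes from.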

  10. Toward Improved Description of DNA Backbone: Revisiting Epsilon and Zeta Torsion Force Field Parameters

    PubMed Central

    Zgarbová, Marie; Luque, F. Javier; Šponer, Jiří; Cheatham, Thomas E.; Otyepka, Michal; Jurečka, Petr

    2013-01-01

    We present a refinement of the backbone torsion parameters ε and ζ of the Cornell et al. AMBER force field for DNA simulations. The new parameters, denoted as εζOL1, were derived from quantum-mechanical calculations with inclusion of conformation-dependent solvation effects according to the recently reported methodology (J. Chem. Theory Comput. 2012, 7(9), 2886-2902). The performance of the refined parameters was analyzed by means of extended molecular dynamics (MD) simulations for several representative systems. The results showed that the εζOL1 refinement improves the backbone description of B-DNA double helices and G-DNA stem. In B-DNA simulations, we observed an average increase of the helical twist and narrowing of the major groove, thus achieving better agreement with X-ray and solution NMR data. The balance between populations of BI and BII backbone substates was shifted towards the BII state, in better agreement with ensemble-refined solution experimental results. Furthermore, the refined parameters decreased the backbone RMS deviations in B-DNA MD simulations. In the antiparallel guanine quadruplex (G-DNA) the εζOL1 modification improved the description of non-canonical α/γ backbone substates, which were shown to be coupled to the ε/ζ torsion potential. Thus, the refinement is suggested as a possible alternative to the current ε/ζ torsion potential, which may enable more accurate modeling of nucleic acids. However, long-term testing is recommended before its routine application in DNA simulations. PMID:24058302

  11. Advanced Computational Methods for High-accuracy Refinement of Protein Low-quality Models

    NASA Astrophysics Data System (ADS)

    Zang, Tianwu

Predicting the 3-dimensional structure of proteins has been a major interest in modern computational biology. While many successful methods can generate models within 3-5 Å root-mean-square deviation (RMSD) from the solution, progress in refining these models has been quite slow. It is therefore urgently necessary to develop effective methods to bring low-quality models into higher-accuracy ranges (e.g., less than 2 Å RMSD). In this thesis, I present several novel computational methods to address the high-accuracy refinement problem. First, an enhanced sampling method, named parallel continuous simulated tempering (PCST), is developed to accelerate the molecular dynamics (MD) simulation. Second, two energy biasing methods, the Structure-Based Model (SBM) and the Ensemble-Based Model (EBM), are introduced to perform targeted sampling around important conformations. Third, a three-step method is developed to blindly select high-quality models along the MD simulation. Together, these methods significantly refine low-quality models without any knowledge of the solution. Their effectiveness is examined in different applications. Using the PCST-SBM method, models with higher global distance test scores (GDT_TS) are generated and selected in MD simulations of 18 targets from the refinement category of the 10th Critical Assessment of Structure Prediction (CASP10). In addition, refinement tests of two CASP10 targets using the PCST-EBM method indicate that EBM may bring the initial model to even higher quality levels. Furthermore, a multi-round refinement protocol with PCST-SBM improves the model quality of a protein to a level sufficiently high for molecular replacement in X-ray crystallography. Our results confirm the crucial role of enhanced sampling in protein structure prediction and demonstrate that considerable improvement of low-accuracy structures is still achievable with current force fields.
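
The tempering idea behind PCST can be illustrated with a much-simplified discrete variant: a single walker hopping on a fixed temperature ladder. This is not the thesis's method (PCST treats temperature as a continuous variable across parallel replicas), and the ladder weight factors g_k of proper simulated tempering are omitted for brevity, so the temperature distribution here is biased toward low energies.

```python
import math
import random

def simulated_tempering(energy, temps, n_steps=2000, seed=1):
    """Toy discrete simulated tempering of one walker on a 1-D potential.

    Alternates Metropolis moves in x at the current temperature with
    Metropolis hops between neighboring temperature levels, letting the
    walker cross barriers at high T and refine at low T.
    """
    rng = random.Random(seed)
    x, k = 0.0, 0  # position and temperature-ladder index
    for _ in range(n_steps):
        # Metropolis move in x at temperature temps[k]
        xn = x + rng.gauss(0.0, 0.5)
        if rng.random() < math.exp(min(0.0, -(energy(xn) - energy(x)) / temps[k])):
            x = xn
        # propose a hop to a neighboring temperature level
        kn = k + rng.choice([-1, 1])
        if 0 <= kn < len(temps):
            dlog = -energy(x) * (1.0 / temps[kn] - 1.0 / temps[k])
            if rng.random() < math.exp(min(0.0, dlog)):
                k = kn
    return x, temps[k]
```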

  12. Grid-size dependence of Cauchy boundary conditions used to simulate stream-aquifer interactions

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2010-01-01

    This work examines the simulation of stream–aquifer interactions as grids are refined vertically and horizontally and suggests that traditional methods for calculating conductance can produce inappropriate values when the grid size is changed. Instead, different grid resolutions require different estimated values. Grid refinement strategies considered include global refinement of the entire model and local refinement of part of the stream. Three methods of calculating the conductance of the Cauchy boundary conditions are investigated. Single- and multi-layer models with narrow and wide streams produced stream leakages that differ by as much as 122% as the grid is refined. Similar results occur for globally and locally refined grids, but the latter required as little as one-quarter the computer execution time and memory and thus are useful for addressing some scale issues of stream–aquifer interactions. Results suggest that existing grid-size criteria for simulating stream–aquifer interactions are useful for one-layer models, but inadequate for three-dimensional models. The grid dependence of the conductance terms suggests that values for refined models using, for example, finite difference or finite-element methods, cannot be determined from previous coarse-grid models or field measurements. Our examples demonstrate the need for a method of obtaining conductances that can be translated to different grid resolutions and provide definitive test cases for investigating alternative conductance formulations.
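
The traditional conductance calculation whose grid-size dependence the paper examines is usually written C = K·L·W / M. A minimal sketch follows, with the caveat that the paper's central finding is precisely that such values do not transfer across grid resolutions, so refined models need re-estimated conductances rather than subdivided ones.

```python
def streambed_conductance(K, length, width, thickness):
    """Traditional Cauchy (streambed) conductance: C = K * L * W / M.

    K: streambed hydraulic conductivity, length/width: stream footprint
    in the cell, thickness: streambed sediment thickness M.
    """
    return K * length * width / thickness

def leakage(C, stage, head):
    """Stream leakage through one Cauchy boundary cell: Q = C * (h_s - h_a)."""
    return C * (stage - head)
```

Subdividing a reach into n cells partitions the traditionally computed conductance (each cell gets C/n for 1/n of the length), leaving the lumped total unchanged even though the paper shows simulated leakage can still differ substantially as the grid is refined.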

  13. Reliability Assessment of GaN Power Switches

    DTIC Science & Technology

    2015-04-17

    Possibilities for single event burnout testing were examined as well. Device simulation under the conditions of some of the testing was performed on...reverse-bias (HTRB) and single electron burnout (SEE) tests. 8. Refine test structures, circuits, and procedures, and, if possible, develop

  14. Aeroacoustic Simulations of a Nose Landing Gear with FUN3D: A Grid Refinement Study

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Lockard, David P.

    2017-01-01

A systematic grid refinement study is presented for numerical simulations of a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated using the Pointwise® grid generation software are used for the numerical simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest in an effort to minimize errors introduced by numerical artifacts. A set of grids was generated in this manner to create a family of uniformly refined grids. The finest grid was then modified to coarsen the wall-normal spacing to create a grid suitable for the wall-function implementation in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence modeling approach is used for these simulations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. These CFD solutions are used as input to a Ffowcs Williams-Hawkings (FW-H) noise propagation code to compute the far-field noise levels. The agreement of the computed results with the experimental data improves as the grid is refined.

  15. Automatic Parameter Tuning for the Morpheus Vehicle Using Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Birge, B.

    2013-01-01

A high-fidelity simulation using a PC-based Trick framework has been developed for Johnson Space Center's Morpheus test bed flight vehicle. There is an iterative development loop of refining and testing the hardware, refining the software, comparing the software simulation to hardware performance, and adjusting either or both the hardware and the simulation to extract the best performance from the hardware as well as the most realistic representation of the hardware from the software. A Particle Swarm Optimization (PSO) based technique has been developed that increases the speed and accuracy of this iterative development cycle. Parameters in software can be automatically tuned to make the simulation match real-world subsystem data from test flights. Special considerations for scale, linearity, and discontinuities can be all but ignored with this technique, allowing fast turnaround both for simulation tune-up to match hardware changes and during the test and validation phase to help identify hardware issues. Software models with insufficient control authority to match hardware test data can be immediately identified, and the technique requires little to no specialized knowledge of optimization, freeing model developers to concentrate on spacecraft engineering. Integration of the PSO into the Morpheus development cycle is discussed, along with a case study highlighting the tool's effectiveness.
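
A generic PSO loop of the kind the abstract describes might look like the sketch below. The Morpheus/Trick tooling itself is not public, so the `objective` here is a stand-in for the mismatch between simulation output and flight data, and the swarm hyperparameters are conventional defaults rather than NASA's.

```python
import random

def pso(objective, bounds, n_particles=20, n_iter=60,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: minimize objective within bounds.

    bounds: list of (lo, hi) per parameter dimension.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                 # per-particle best positions
    pcost = [objective(x) for x in xs]
    gi = min(range(n_particles), key=pcost.__getitem__)
    gbest, gcost = pbest[gi][:], pcost[gi]     # swarm-wide best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + pull toward personal best + pull toward global best
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            cost = objective(xs[i])
            if cost < pcost[i]:                # update personal best
                pbest[i], pcost[i] = xs[i][:], cost
                if cost < gcost:               # and the global best
                    gbest, gcost = xs[i][:], cost
    return gbest
```

In the use case the abstract describes, each objective evaluation would run the Trick simulation with the candidate parameters and score the deviation from recorded subsystem telemetry.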

  16. Effect of Deformation Parameters on Microstructure and Properties During DIFT of X70HD Pipeline Steel

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Zhu, Wei; Xiao, Hong; Zhang, Liang-liang; Qin, Hao; Yu, Yue

    2018-02-01

Grain refinement is a critical approach to improving the strength of materials without damaging their toughness. The grains of deformation-induced ferrite are considerably smaller than those of proeutectoid ferrite, making grain refinement crucial to the application of deformation-induced ferrite. The balance of ferrite and bainite or martensite is important in controlling the performance of X70HD pipeline steel, and cooling significantly influences the control of their ratio and grain size. After the static and dynamic phase-transition points were determined using a Gleeble-3800 thermal simulator, thermal simulations were performed through two-stage deformations in the austenite zone. Ferrite transformation rules were studied in thermal simulation tests under different deformation and cooling parameters based on the actual production of cumulative deformation, and the influence of the deformation parameters on the microstructure transformation was analyzed. Numerous fine deformation-induced ferrite grains were obtained by regulating various parameters, including deformation temperature, strain rate, cooling rate, and final cooling temperature. Results of metallographic observation and microtensile testing revealed that selecting appropriate parameters can refine the grains and improve the performance of X70HD pipeline steel.

  17. PREFMD: a web server for protein structure refinement via molecular dynamics simulations.

    PubMed

    Heo, Lim; Feig, Michael

    2018-03-15

    Refinement of protein structure models is a long-standing problem in structural bioinformatics. Molecular dynamics-based methods have emerged as an avenue to achieve consistent refinement. The PREFMD web server implements an optimized protocol based on the method successfully tested in CASP11. Validation with recent CASP refinement targets shows consistent and more significant improvement in global structure accuracy over other state-of-the-art servers. PREFMD is freely available as a web server at http://feiglab.org/prefmd. Scripts for running PREFMD as a stand-alone package are available at https://github.com/feiglab/prefmd.git. feig@msu.edu. Supplementary data are available at Bioinformatics online.

  18. Analyzing the Adaptive Mesh Refinement (AMR) Characteristics of a High-Order 2D Cubed-Sphere Shallow-Water Model

    DOE PAGES

    Ferguson, Jared O.; Jablonowski, Christiane; Johansen, Hans; ...

    2016-11-09

Adaptive mesh refinement (AMR) is a technique that has been featured only sporadically in the atmospheric science literature. This study aims to demonstrate the utility of AMR for simulating atmospheric flows. Several test cases are implemented in a 2D shallow-water model on the sphere using the Chombo-AMR dynamical core. This high-order finite-volume model implements adaptive refinement in both space and time on a cubed-sphere grid using a mapped-multiblock mesh technique. The tests consist of the passive advection of a tracer around moving vortices, a steady-state geostrophic flow, an unsteady solid-body rotation, a gravity wave impinging on a mountain, and the interaction of binary vortices. Both static and dynamic refinements are analyzed to determine the strengths and weaknesses of AMR in both complex flows with small-scale features and large-scale smooth flows. The different test cases required different AMR criteria, such as vorticity- or height-gradient-based thresholds, in order to achieve the best accuracy for cost. The simulations show that the model can accurately resolve key local features without requiring global high-resolution grids. The adaptive grids are able to track features of interest reliably without inducing noise or visible distortions at the coarse–fine interfaces. Furthermore, the AMR grids keep any degradation of the large-scale smooth flows to a minimum.
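
A gradient-threshold refinement criterion of the kind compared in the study can be sketched as follows. The stencil and threshold are illustrative, not Chombo's actual flagging code: the sketch simply marks cells where the height-gradient magnitude exceeds a threshold, producing the boolean mask an AMR driver would use to tag blocks for refinement.

```python
import numpy as np

def flag_for_refinement(field, dx, threshold):
    """Flag cells whose gradient magnitude exceeds a threshold.

    field: 2-D array of a flow variable (e.g. fluid height),
    dx: uniform grid spacing, threshold: flagging cutoff.
    Returns a boolean mask of cells to refine.
    """
    gy, gx = np.gradient(field, dx)  # cell-centered finite-difference gradients
    mag = np.hypot(gx, gy)           # gradient magnitude per cell
    return mag > threshold           # boolean refinement mask
```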

  20. Computer simulation of refining process of a high consistency disc refiner based on CFD

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Yang, Jianwei; Wang, Jiahui

    2017-08-01

In order to reduce refining energy consumption, ANSYS CFX was used to simulate the refining process of a high consistency disc refiner. First, the pulp was assumed to be a uniform Newtonian fluid in a turbulent state, described with the k-ɛ turbulence model; the 3-D model of the disc refiner was then meshed and its boundary conditions set; the flow was then simulated and analyzed; finally, the viscosity of the pulp was measured. The results show that the CFD method can be used to analyze the pressure and torque on the disc plate, and thus to calculate the refining power, while streamlines and velocity vectors can also be observed. CFD simulation can optimize the parameters of the bar and groove, which is of great significance for reducing experimental cost and cycle time.
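
The refining-power calculation mentioned above, from the simulated torque on the disc plate, reduces to P = T·ω. A minimal conversion is sketched below, assuming the torque comes out of the CFD post-processing in N·m and the rotor speed is given in rpm.

```python
import math

def refining_power(torque, rpm):
    """Net refining power from rotor torque: P = T * omega.

    torque: net torque on the rotating disc plate [N*m],
    rpm: rotor speed in revolutions per minute.
    Returns power in watts.
    """
    omega = 2.0 * math.pi * rpm / 60.0  # angular speed in rad/s
    return torque * omega
```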

  1. MODPATH-LGR; documentation of a computer program for particle tracking in shared-node locally refined grids by using MODFLOW-LGR

    USGS Publications Warehouse

    Dickinson, Jesse; Hanson, R.T.; Mehl, Steffen W.; Hill, Mary C.

    2011-01-01

    The computer program described in this report, MODPATH-LGR, is designed to allow simulation of particle tracking in locally refined grids. The locally refined grids are simulated by using MODFLOW-LGR, which is based on MODFLOW-2005, the three-dimensional groundwater-flow model published by the U.S. Geological Survey. The documentation includes brief descriptions of the methods used and detailed descriptions of the required input files and how the output files are typically used. The code for this model is available for downloading from the World Wide Web from a U.S. Geological Survey software repository. The repository is accessible from the U.S. Geological Survey Water Resources Information Web page at http://water.usgs.gov/software/ground_water.html. The performance of the MODPATH-LGR program has been tested in a variety of applications. Future applications, however, might reveal errors that were not detected in the test simulations. Users are requested to notify the U.S. Geological Survey of any errors found in this document or the computer program by using the email address available on the Web site. Updates might occasionally be made to this document and to the MODPATH-LGR program, and users should check the Web site periodically.
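
MODPATH-style particle tracking uses Pollock's semi-analytical scheme, in which the velocity varies linearly across each cell, so the travel time to a cell face has a closed form. The one-dimensional sketch below illustrates that step under the assumption of positive flow toward the downstream face; the full method applies the same formula per coordinate direction and takes the minimum exit time.

```python
import math

def pollock_exit_time(x, x1, x2, v1, v2):
    """Travel time from x to the downstream face x2 of a cell.

    v1, v2: face velocities at x1 and x2 (assumed > 0, flow in +x).
    Velocity varies linearly in the cell: v(x) = v1 + A*(x - x1).
    """
    A = (v2 - v1) / (x2 - x1)   # linear velocity gradient within the cell
    vp = v1 + A * (x - x1)      # velocity at the particle position
    if abs(A) < 1e-12:          # uniform velocity: plain kinematics
        return (x2 - x) / vp
    return math.log(v2 / vp) / A  # semi-analytical exit time otherwise
```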

  2. The effect of gas physics on the halo mass function

    NASA Astrophysics Data System (ADS)

    Stanek, R.; Rudd, D.; Evrard, A. E.

    2009-03-01

Cosmological tests based on cluster counts require accurate calibration of the space density of massive haloes, but most calibrations to date have ignored the complex gas physics associated with halo baryons. We explore the sensitivity of the halo mass function to baryon physics using two pairs of gas-dynamic simulations that are likely to bracket the true behaviour. Each pair consists of a baseline model involving only gravity and shock heating, and a refined physics model aimed at reproducing the observed scaling of the hot, intracluster gas phase. One pair consists of billion-particle resimulations of the original 500 h^-1 Mpc Millennium Simulation of Springel et al., run with the smoothed particle hydrodynamics (SPH) code GADGET-2 and using a refined physics treatment approximated by pre-heating (PH) at high redshift. The other pair are high-resolution simulations from the adaptive-mesh refinement code ART, for which the refined treatment includes cooling, star formation and supernova feedback (CSF). We find that, although the mass functions of the gravity-only (GO) treatments are consistent with the recent calibration of Tinker et al. (2008), both pairs of simulations with refined baryon physics show significant deviations. Relative to the GO case, the masses of ~10^14 h^-1 M_solar haloes in the PH and CSF treatments are shifted by averages of -15 +/- 1 and +16 +/- 2 per cent, respectively. These mass shifts cause ~30 per cent deviations in number density relative to the Tinker function, significantly larger than the 5 per cent statistical uncertainty of that calibration.

  3. Discontinuous Galerkin Methods for Turbulence Simulation

    NASA Technical Reports Server (NTRS)

    Collis, S. Scott

    2002-01-01

    A discontinuous Galerkin (DG) method is formulated, implemented, and tested for simulation of compressible turbulent flows. The method is applied to turbulent channel flow at low Reynolds number, where it is found to successfully predict low-order statistics with fewer degrees of freedom than traditional numerical methods. This reduction is achieved by utilizing local hp-refinement such that the computational grid is refined simultaneously in all three spatial coordinates with decreasing distance from the wall. Another advantage of DG is that Dirichlet boundary conditions can be enforced weakly through integrals of the numerical fluxes. Both for a model advection-diffusion problem and for turbulent channel flow, weak enforcement of wall boundaries is found to improve results at low resolution. Such weak boundary conditions may play a pivotal role in wall modeling for large-eddy simulation.

  4. Parallel Adaptive Mesh Refinement for High-Order Finite-Volume Schemes in Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Schwing, Alan Michael

    For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source for error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is made for applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable to unsteady simulations, and refinement and coarsening of the grid do not impact the conservation properties of the underlying numerics. The effects on high-order numerical fluxes of fourth and sixth order are explored. Provided the criteria for refinement are appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depends on a dual-memory approach that is described in detail.
Validation of the solver developed here against a number of motivating problems shows favorable comparisons across a range of regimes. Unsteady and steady applications are considered in both subsonic and supersonic flows. Inviscid and viscous simulations achieve similar results at a much reduced cost when employing dynamic mesh adaptation. Several techniques for guiding adaptation are compared. Detailed analysis of statistics from the instrumented solver enables an understanding of the costs associated with adaptation. Adaptive mesh refinement shows promise for the test cases presented here. It can be considerably faster than using conventional grids and provides accurate results. The procedures for adapting the grid are lightweight enough not to require significant computational time and yield significant reductions in grid size.
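
    The refine/coarsen cycle described above can be sketched in miniature. This is a hypothetical 1-D stand-in for the thesis's unstructured adaptation: the `Cell` class, the error callback, and the threshold are all assumptions, but it illustrates how splitting and merging can preserve the cell-averaged (conserved) quantity:

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    level: int
    value: float          # cell-average of the conserved quantity
    children: list = field(default_factory=list)

def refine(cell):
    """Split a cell into two children; copying the cell average to each
    child preserves the integral, a 1-D stand-in for conservative
    quad/octree splits."""
    cell.children = [Cell(cell.level + 1, cell.value),
                     Cell(cell.level + 1, cell.value)]

def coarsen(cell):
    """Merge children back; averaging the children preserves the
    coarse-cell mean, so coarsening stays conservative."""
    cell.value = sum(c.value for c in cell.children) / len(cell.children)
    cell.children = []

def adapt(cells, error, threshold):
    """Flag and refine leaf cells whose error metric exceeds the
    threshold (hypothetical criterion; the thesis compares several)."""
    for c in cells:
        if not c.children and error(c) > threshold:
            refine(c)
```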

  5. Testing hydrodynamics schemes in galaxy disc simulations

    NASA Astrophysics Data System (ADS)

    Few, C. G.; Dobbs, C.; Pettitt, A.; Konstandin, L.

    2016-08-01

    We examine how three fundamentally different numerical hydrodynamics codes follow the evolution of an isothermal galactic disc with an external spiral potential. We compare an adaptive mesh refinement code (RAMSES), a smoothed particle hydrodynamics code (SPHNG), and a volume-discretized mesh-less code (GIZMO). Using standard refinement criteria, we find that RAMSES produces a disc that is less vertically concentrated and does not reach densities as high as those in the SPHNG or GIZMO runs. The gas surface density in the spiral arms increases at a lower rate for the RAMSES simulations compared to the other codes. There is also a greater degree of substructure in the SPHNG and GIZMO runs and secondary spiral arms are more pronounced. By resolving the Jeans length with a greater number of grid cells, we achieve results more similar to those of the Lagrangian codes used in this study. Other alterations to the refinement scheme (adding extra levels of refinement and refining based on local density gradients) are less successful in reducing the disparity between RAMSES and SPHNG/GIZMO. Although more similar, SPHNG displays different density distributions and vertical mass profiles to all modes of GIZMO (including the smoothed particle hydrodynamics version). This suggests that differences also arise that are not intrinsic to the particular method but rather stem from its implementation. The discrepancies between codes (in particular, the densities reached in the spiral arms) could potentially result in differences in the locations and time-scales for gravitational collapse, and therefore impact star formation activity in more complex galaxy disc simulations.
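
    Refining until the Jeans length is resolved by a fixed number of cells, as described above, can be sketched as follows (cgs units; the sound speed, density, and cells-per-Jeans-length count below are illustrative assumptions, not values from the paper):

```python
import math

G = 6.674e-8  # gravitational constant, cgs units

def jeans_length(c_s, rho):
    """Jeans length lambda_J = c_s * sqrt(pi / (G * rho)) for an
    isothermal gas with sound speed c_s (cm/s) and density rho (g/cm^3)."""
    return c_s * math.sqrt(math.pi / (G * rho))

def needs_refinement(dx, c_s, rho, cells_per_jeans=4):
    """Flag a cell of width dx for refinement when the local Jeans
    length is resolved by fewer than `cells_per_jeans` cells (the count
    is illustrative; the paper varies this number)."""
    return dx > jeans_length(c_s, rho) / cells_per_jeans
```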

  6. Using Adaptive Mesh Refinement to Simulate Storm Surge

    NASA Astrophysics Data System (ADS)

    Mandli, K. T.; Dawson, C.

    2012-12-01

    Coastal hazards related to strong storms such as hurricanes and typhoons are one of the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the subsequent rise in sea level during these storms. Unfortunately these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in the design of infrastructure or forecasting with ensembles of probable storms. One solution to address the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectations about the dynamics of the flow, as well as in particular regions of interest such as harbors. Simulations of many different applications have only been made possible by AMR-type algorithms, which have allowed otherwise impractical simulations to be performed for much less computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.

  7. Simulation and Digitization of a Gas Electron Multiplier Detector Using Geant4 and an Object-Oriented Digitization Program

    NASA Astrophysics Data System (ADS)

    McMullen, Timothy; Liyanage, Nilanga; Xiong, Weizhi; Zhao, Zhiwen

    2017-01-01

    Our research has focused on simulating the response of a Gas Electron Multiplier (GEM) detector using computational methods. GEM detectors provide a cost-effective solution for radiation detection in high rate environments. A detailed simulation of GEM detector response to radiation is essential for the successful adaptation of these detectors to different applications. Using Geant4 Monte Carlo (GEMC), a wrapper around Geant4 which has been successfully used to simulate the Solenoidal Large Intensity Device (SoLID) at Jefferson Lab, we are developing a simulation of a GEM chamber similar to the detectors currently used in our lab. We are also refining an object-oriented digitization program, which translates energy deposition information from GEMC into electronic readout that resembles the readout from our physical detectors. We have run the simulation with beta particles produced by the simulated decay of a 90Sr source, as well as with a simulated bremsstrahlung spectrum. Comparison of the simulation data with real GEM data taken under similar conditions is used to refine the simulation parameters. Comparisons between results from the simulations and results from detector tests will be presented.

  8. Crash testing difference-smoothing algorithm on a large sample of simulated light curves from TDC1

    NASA Astrophysics Data System (ADS)

    Rathna Kumar, S.

    2017-09-01

    In this work, we propose refinements to the difference-smoothing algorithm for the measurement of time delay from the light curves of the images of a gravitationally lensed quasar. The refinements mainly consist of a more pragmatic approach to choosing the smoothing time-scale free parameter, generation of more realistic synthetic light curves for the estimation of time delay uncertainty, and the use of a plot of normalized χ2 computed over a wide range of trial time delay values to assess the reliability of a measured time delay and to identify instances of catastrophic failure. We rigorously tested the difference-smoothing algorithm on a large sample of more than a thousand pairs of simulated light curves with known true time delays between them from the two most difficult 'rungs' - rung3 and rung4 - of the first edition of the Strong Lens Time Delay Challenge (TDC1) and found an inherent tendency of the algorithm to measure the magnitude of the time delay to be higher than the true value. However, we find that this systematic bias is eliminated by applying a correction to each measured time delay according to the magnitude and sign of the systematic error inferred by applying the time delay estimator on synthetic light curves simulating the measured time delay. Following these refinements, the TDC performance metrics for the difference-smoothing algorithm are found to be competitive with those of the best performing submissions of TDC1 for both the tested 'rungs'. The MATLAB codes used in this work and the detailed results are made publicly available.
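
    The χ2-versus-trial-delay scan described above can be illustrated with a much-simplified estimator. This sketch replaces difference smoothing with plain linear interpolation and unit errors, so it is a stand-in for the idea rather than the paper's algorithm:

```python
import math

def interp(x, xs, ys):
    """Piecewise-linear interpolation of (xs, ys) at x; xs must be sorted."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            w = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + w * (ys[i + 1] - ys[i])

def chi2_at_delay(t_a, f_a, t_b, f_b, delay, err=1.0):
    """Normalized chi^2 between image A and a delay-shifted, interpolated
    image B over the overlapping epochs (a simplified stand-in for the
    difference-smoothing statistic)."""
    t_shift = [t - delay for t in t_b]
    lo, hi = min(t_shift), max(t_shift)
    residuals = [(fa - interp(ta, t_shift, f_b)) / err
                 for ta, fa in zip(t_a, f_a) if lo <= ta <= hi]
    return sum(r * r for r in residuals) / max(len(residuals), 1)

def estimate_delay(t_a, f_a, t_b, f_b, trial_delays):
    """Scan trial delays and return the chi^2-minimizing one, mirroring
    the paper's chi^2-versus-trial-delay reliability plot."""
    return min(trial_delays,
               key=lambda d: chi2_at_delay(t_a, f_a, t_b, f_b, d))
```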

  9. Simulation, Model Verification and Controls Development of Brayton Cycle PM Alternator: Testing and Simulation of 2 KW PM Generator with Diode Bridge Output

    NASA Technical Reports Server (NTRS)

    Stankovic, Ana V.

    2003-01-01

    Professor Stankovic will be developing and refining Simulink-based models of the PM alternator and comparing the simulation results with experimental measurements taken from the unit. Her first task is to validate the models using the experimental data. Her next task is to develop alternative control techniques for the application of the Brayton Cycle PM alternator in a nuclear electric propulsion vehicle. The control techniques will first be simulated using the validated models and then tried experimentally with hardware available at NASA. Testing and simulation of a 2 kW PM synchronous generator with diode bridge output are described. The parameters of a synchronous PM generator have been measured and used in simulation. Test procedures have been developed to verify the PM generator model with diode bridge output. Experimental and simulation results are in excellent agreement.

  10. Partial unfolding and refolding for structure refinement: A unified approach of geometric simulations and molecular dynamics.

    PubMed

    Kumar, Avishek; Campitelli, Paul; Thorpe, M F; Ozkan, S Banu

    2015-12-01

    The most successful protein structure prediction methods to date have been template-based modeling (TBM) or homology modeling, which predict protein structure based on experimental structures. These high-accuracy predictions sometimes retain structural errors due to incorrect templates, or a lack of accurate templates in the case of low sequence similarity, making these structures inadequate for drug-design studies or molecular dynamics simulations. We have developed a new physics-based approach to the protein refinement problem by mimicking the mechanism of chaperones that rehabilitate misfolded proteins. The template structure is unfolded by selective (targeted) pulling on different portions of the protein using the geometric simulation technique FRODA, and then refolded using hierarchically restrained replica exchange molecular dynamics simulations (hr-REMD). FRODA unfolding is used to create a diverse set of topologies for surveying near-native-like structures from a template and to provide a set of persistent contacts to be employed during re-folding. We have tested our approach on 13 previous CASP targets and observed that this method of folding an ensemble of partially unfolded structures, through the hierarchical addition of contact restraints (that is, first local and then nonlocal interactions), leads to a refolding of the structure along with refinement in most cases (12/13). Although this approach yields refined models through advancement in sampling, the task of blind selection of the best refined models still needs to be solved. Overall, the method can be useful for improved sampling for low-resolution models where certain portions of the structure are incorrectly modeled. © 2015 Wiley Periodicals, Inc.

  11. Simulation and Analysis of Converging Shock Wave Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramsey, Scott D.; Shashkov, Mikhail J.

    2012-06-21

    Results and analysis pertaining to the simulation of the Guderley converging shock wave test problem (and associated code verification hydrodynamics test problems involving converging shock waves) in the LANL ASC radiation-hydrodynamics code xRAGE are presented. One-dimensional (1D) spherical and two-dimensional (2D) axisymmetric geometric setups are utilized and evaluated in this study, as is an instantiation of the xRAGE adaptive mesh refinement capability. For the 2D simulations, a 'Surrogate Guderley' test problem is developed and used to obviate subtleties inherent to the true Guderley solution's initialization on a square grid, while still maintaining a high degree of fidelity to the original problem, and minimally straining the general credibility of associated analysis and conclusions.

  12. Application of FUN3D Solver for Aeroacoustics Simulation of a Nose Landing Gear Configuration

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Lockard, David P.; Khorrami, Mehdi R.

    2011-01-01

    Numerical simulations have been performed for a nose landing gear configuration corresponding to the experimental tests conducted in the Basic Aerodynamic Research Tunnel at NASA Langley Research Center. A widely used unstructured grid code, FUN3D, is examined for solving the unsteady flow field associated with this configuration. A series of successively finer unstructured grids has been generated to assess the effect of grid refinement. Solutions have been obtained on purely tetrahedral grids as well as mixed element grids using hybrid RANS/LES turbulence models. The agreement of FUN3D solutions with experimental data on the same size mesh is better on mixed element grids compared to pure tetrahedral grids, and in general improves with grid refinement.
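
    A grid-refinement series like the one described above is commonly quantified with a Richardson-style observed order of accuracy. This generic sketch is not part of the FUN3D study itself, merely the standard arithmetic one might apply to such a series of successively finer grids:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Observed order of accuracy p from a scalar quantity computed on
    three grids with a constant refinement ratio r:
    p = ln((f1 - f2) / (f2 - f3)) / ln(r)."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def richardson_extrapolate(f_medium, f_fine, p, r=2.0):
    """Estimate of the grid-converged value from the two finest grids."""
    return f_fine + (f_fine - f_medium) / (r ** p - 1.0)
```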

  13. The Objective Borderline Method: A Probabilistic Method for Standard Setting

    ERIC Educational Resources Information Center

    Shulruf, Boaz; Poole, Phillippa; Jones, Philip; Wilkinson, Tim

    2015-01-01

    A new probability-based standard setting technique, the Objective Borderline Method (OBM), was introduced recently. This was based on a mathematical model of how test scores relate to student ability. The present study refined the model and tested it using 2500 simulated data-sets. The OBM was feasible to use. On average, the OBM performed well…

  14. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure; University of Primorska,; Turk, Dušan, E-mail: dusan.turk@ijs.si

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.
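
    The core of the free-kick idea, randomly displacing atomic coordinates to simulate model error, can be sketched as follows (the 0.3 A kick magnitude and the function name are illustrative assumptions, not values from the paper):

```python
import math
import random

def free_kick(coords, rms=0.3, rng=random):
    """Randomly displace each atom by a Gaussian 'kick' whose expected
    3-D RMS magnitude is `rms` (Angstroms); the displaced model stands
    in for a test set when estimating phase errors.  The 0.3 A default
    is illustrative, not a value from the paper."""
    sigma = rms / math.sqrt(3.0)        # per-component sigma for a 3-D kick
    return [tuple(x + rng.gauss(0.0, sigma) for x in atom) for atom in coords]
```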

  15. Empirical Analysis and Refinement of Expert System Knowledge Bases

    DTIC Science & Technology

    1988-08-31

    Both a simulated case generation program and a random rule basher were developed to enhance rule refinement experimentation; the second fiscal year 88 objective was fully met. The rule refinement system comprises a rule basher, a case generator, stored cases, and an expert system knowledge base. Cases are generated until the rule is satisfied, and may be randomly generated for a given rule or hypothesis.

  16. Cart3D Simulations for the Second AIAA Sonic Boom Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Anderson, George R.; Aftosmis, Michael J.; Nemec, Marian

    2017-01-01

    Simulation results are presented for all test cases prescribed in the Second AIAA Sonic Boom Prediction Workshop. For each of the four nearfield test cases, we compute pressure signatures at specified distances and off-track angles, using an inviscid, embedded-boundary Cartesian-mesh flow solver with output-based mesh adaptation. The cases range in complexity from an axisymmetric body to a full low-boom aircraft configuration with a powered nacelle. For efficiency, boom carpets are decomposed into sets of independent meshes and computed in parallel. This also facilitates the use of more effective meshing strategies - each off-track angle is computed on a mesh with good azimuthal alignment, higher aspect ratio cells, and more tailored adaptation. The nearfield signatures generally exhibit good convergence with mesh refinement. We introduce a local error estimation procedure to highlight regions of the signatures most sensitive to mesh refinement. Results are also presented for the two propagation test cases, which investigate the effects of atmospheric profiles on ground noise. Propagation is handled with an augmented Burgers' equation method (NASA's sBOOM), and ground noise metrics are computed with LCASB.

  17. Parallel grid library for rapid and flexible simulation development

    NASA Astrophysics Data System (ADS)

    Honkonen, I.; von Alfthan, S.; Sandroos, A.; Janhunen, P.; Palmroth, M.

    2013-04-01

    We present an easy to use and flexible grid library for developing highly scalable parallel simulations. The distributed cartesian cell-refinable grid (dccrg) supports adaptive mesh refinement and allows an arbitrary C++ class to be used as cell data. The amount of data in grid cells can vary both in space and time allowing dccrg to be used in very different types of simulations, for example in fluid and particle codes. Dccrg transfers the data between neighboring cells on different processes transparently and asynchronously allowing one to overlap computation and communication. This enables excellent scalability at least up to 32 k cores in magnetohydrodynamic tests depending on the problem and hardware. In the version of dccrg presented here part of the mesh metadata is replicated between MPI processes reducing the scalability of adaptive mesh refinement (AMR) to between 200 and 600 processes. Dccrg is free software that anyone can use, study and modify and is available at https://gitorious.org/dccrg. Users are also kindly requested to cite this work when publishing results obtained with dccrg. Catalogue identifier: AEOM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOM_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU Lesser General Public License version 3 No. of lines in distributed program, including test data, etc.: 54975 No. of bytes in distributed program, including test data, etc.: 974015 Distribution format: tar.gz Programming language: C++. Computer: PC, cluster, supercomputer. Operating system: POSIX. The code has been parallelized using MPI and tested with 1-32768 processes RAM: 10 MB-10 GB per process Classification: 4.12, 4.14, 6.5, 19.3, 19.10, 20. 
External routines: MPI-2 [1], boost [2], Zoltan [3], sfc++ [4] Nature of problem: Grid library supporting arbitrary data in grid cells, parallel adaptive mesh refinement, transparent remote neighbor data updates and load balancing. Solution method: The simulation grid is represented by an adjacency list (graph) with vertices stored into a hash table and edges into contiguous arrays. Message Passing Interface standard is used for parallelization. Cell data is given as a template parameter when instantiating the grid. Restrictions: Logically Cartesian grid. Running time: Running time depends on the hardware, problem and the solution method. Small problems can be solved in under a minute and very large problems can take weeks. The examples and tests provided with the package take less than about one minute using default options. In the version of dccrg presented here the speed of adaptive mesh refinement is at most of the order of 10^6 total created cells per second. http://www.mpi-forum.org/. http://www.boost.org/. K. Devine, E. Boman, R. Heaphy, B. Hendrickson, C. Vaughan, Zoltan data management services for parallel dynamic applications, Comput. Sci. Eng. 4 (2002) 90-97. http://dx.doi.org/10.1109/5992.988653. https://gitorious.org/sfc++.
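
    The solution method above (cells keyed into a hash table, adjacency recovered from cell keys) can be sketched in a few lines. This is a toy single-process illustration, not the dccrg API, and it omits mesh refinement and MPI entirely:

```python
def von_neumann_neighbors(cell):
    """Face neighbors of a logically Cartesian cell keyed by integer
    (i, j, k) coordinates."""
    i, j, k = cell
    return [(i + 1, j, k), (i - 1, j, k), (i, j + 1, k),
            (i, j - 1, k), (i, j, k + 1), (i, j, k - 1)]

class Grid:
    """Minimal sketch of a hash-table-backed grid in the spirit of dccrg:
    arbitrary per-cell data, neighbor data looked up from cell keys."""
    def __init__(self):
        self.cells = {}                  # (i, j, k) -> user data

    def add(self, key, data):
        self.cells[key] = data

    def neighbor_data(self, key):
        """Data of the existing face neighbors of `key`."""
        return {n: self.cells[n] for n in von_neumann_neighbors(key)
                if n in self.cells}
```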

  18. The design, fabrication and delivery of a spacelab neutral buoyancy Instrument Pointing System (IPS) mockup. [underwater training simulator

    NASA Technical Reports Server (NTRS)

    Vanvalkenburgh, C. N.

    1984-01-01

    Underwater simulations of EVA contingency operations such as manual jettison, payload disconnect, and payload clamp actuation were used to define crew aid needs and mockup peculiarities and characteristics, and to verify the validity of simulation using the trainer. A set of mockup instrument pointing system tests was conducted and minor modifications and refinements were made. Flight configuration struts were tested and verified to be operable by the flight crew. Tasks involved in developing the following end items are described: IPS gimbal system, payload, and payload clamp assembly; the igloos (volumetric); spacelab pallets, experiments, and hardware; experiment 7; and EVA hand tools and support hardware (handrails and foot restraints). The test plan preparation and test support are also covered.

  19. Heave-pitch-roll analysis and testing of air cushion landing systems

    NASA Technical Reports Server (NTRS)

    Boghani, A. B.; Captain, K. M.; Wormley, D. N.

    1978-01-01

    The analytical tools (analysis and computer simulation) needed to explain and predict the dynamic operation of air cushion landing systems (ACLS) are described. The following tasks were performed: the development of improved analytical models for the fan and the trunk; formulation of a heave-pitch-roll analysis for the complete ACLS; development of a general purpose computer simulation to evaluate landing and taxi performance of an ACLS-equipped aircraft; and the verification and refinement of the analysis by comparison with test data obtained through lab testing of a prototype cushion. Demonstrations of simulation capabilities through typical landing and taxi simulations of an ACLS aircraft are given. Initial results show that fan dynamics have a major effect on system performance. Comparison with lab test data (zero forward speed) indicates that the analysis can predict most of the key static and dynamic parameters (pressure, deflection, acceleration, etc.) within a margin of 10 to 25 percent.

  20. Adaptive mesh refinement and load balancing based on multi-level block-structured Cartesian mesh

    NASA Astrophysics Data System (ADS)

    Misaka, Takashi; Sasaki, Daisuke; Obayashi, Shigeru

    2017-11-01

    We developed a framework for a distributed-memory parallel computer that enables dynamic data management for adaptive mesh refinement and load balancing. We employed the simple data structure of the building cube method (BCM), where a computational domain is divided into multi-level cubic domains and each cube has the same number of grid points inside, realising a multi-level block-structured Cartesian mesh. Solution-adaptive mesh refinement, which works efficiently with the help of dynamic load balancing, was implemented by dividing cubes based on mesh refinement criteria. The framework was investigated with the Laplace equation in terms of adaptive mesh refinement, load balancing and parallel efficiency. It was then applied to the incompressible Navier-Stokes equations to simulate a turbulent flow around a sphere. We considered wall-adaptive cube refinement, where the non-dimensional wall distance y+ near the sphere is used as a criterion for mesh refinement. The result showed that the load imbalance due to y+ adaptive mesh refinement was corrected by the present approach. To utilise the BCM framework more effectively, we also tested cube-wise algorithm switching, where explicit and implicit time integration schemes are switched depending on the local Courant-Friedrichs-Lewy (CFL) condition in each cube.
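
    The cube-wise switching described in the last sentence can be sketched as a simple per-cube decision (the CFL limit of 1.0 is a typical explicit stability bound, assumed here rather than taken from the paper):

```python
def cfl_number(u_max, dt, dx):
    """Local CFL number in a cube with maximum wave speed u_max,
    time step dt and cell size dx."""
    return u_max * dt / dx

def choose_integrator(u_max, dt, dx, cfl_limit=1.0):
    """Cube-wise switch between explicit and implicit time integration
    based on the local CFL condition, in the spirit of the abstract."""
    return "explicit" if cfl_number(u_max, dt, dx) <= cfl_limit else "implicit"
```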

  1. MODFLOW–LGR—Documentation of ghost node local grid refinement (LGR2) for multiple areas and the boundary flow and head (BFH2) package

    USGS Publications Warehouse

    Mehl, Steffen W.; Hill, Mary C.

    2013-01-01

    This report documents the addition of ghost node Local Grid Refinement (LGR2) to MODFLOW-2005, the U.S. Geological Survey modular, transient, three-dimensional, finite-difference groundwater flow model. LGR2 provides the capability to simulate groundwater flow using multiple block-shaped higher-resolution local grids (child models) within a coarser-grid parent model. LGR2 accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the grid-refinement interface boundary. LGR2 can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined groundwater systems. Traditional one-way coupled telescopic mesh refinement methods can have large, often undetected, inconsistencies in heads and fluxes across the interface between two model grids. The iteratively coupled ghost-node method of LGR2 provides a more rigorous coupling in which the solution accuracy is controlled by convergence criteria defined by the user. In realistic problems, this can result in substantially more accurate solutions but requires an increase in computer processing time. The rigorous coupling enables sensitivity analysis, parameter estimation, and uncertainty analysis that reflect conditions in both model grids. This report describes the method used by LGR2, evaluates accuracy and performance for two- and three-dimensional test cases, provides input instructions, and lists selected input and output files for an example problem. It also presents the Boundary Flow and Head (BFH2) Package, which allows the child and parent models to be simulated independently using the boundary conditions obtained through the iterative process of LGR2.
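
    The iterative parent-child coupling described above can be sketched as a generic fixed-point loop. The callables and convergence test below are placeholders, not the LGR2 interface; the sketch only illustrates heads passing one way and fluxes the other until the interface values stop changing:

```python
def coupled_solve(parent_solve, child_solve, extract_bc, extract_flux,
                  tol=1e-6, max_iter=50):
    """Iteratively coupled parent/child solve in the spirit of LGR2:
    the parent passes interface heads to the child, the child passes
    interface fluxes back, until consecutive interface heads agree to
    within `tol`.  All four callables are model-specific placeholders."""
    flux = None
    prev_bc = None
    for _ in range(max_iter):
        parent_state = parent_solve(flux)       # parent solve with child fluxes
        bc = extract_bc(parent_state)           # interface heads for the child
        child_state = child_solve(bc)           # child solve with parent heads
        flux = extract_flux(child_state)        # interface fluxes for the parent
        if prev_bc is not None and \
                max(abs(a - b) for a, b in zip(bc, prev_bc)) < tol:
            return parent_state, child_state
        prev_bc = bc
    raise RuntimeError("coupling did not converge")
```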

  2. Computer model to simulate testing at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.

    1995-01-01

    A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.

  3. Construction of a 3D model of nattokinase, a novel fibrinolytic enzyme from Bacillus natto. A novel nucleophilic catalytic mechanism for nattokinase.

    PubMed

    Zheng, Zhong-liang; Zuo, Zhen-yu; Liu, Zhi-gang; Tsai, Keng-chang; Liu, Ai-fu; Zou, Guo-lin

    2005-01-01

    A three-dimensional structural model of nattokinase (NK) from Bacillus natto was constructed by homology modeling. High-resolution X-ray structures of Subtilisin BPN' (SB), Subtilisin Carlsberg (SC), Subtilisin E (SE) and Subtilisin Savinase (SS), four proteins with sequence, structural and functional homology, were used as templates. Initial models of NK were built by MODELLER and analyzed by the PROCHECK programs. The best-quality model was chosen for further refinement by constrained molecular dynamics simulations. The overall quality of the refined model was evaluated. The refined model NKC1 was analyzed by different protein analysis programs, including PROCHECK for the evaluation of Ramachandran plot quality, PROSA for testing interaction energies and WHATIF for the calculation of packing quality. This structure was found to be satisfactory and also stable at room temperature, as demonstrated by a 300 ps unconstrained molecular dynamics (MD) simulation. Further docking analysis led to the proposal of a new nucleophilic catalytic mechanism for NK, induced by attack from the hydroxyl-rich catalytic environment and the positioning of S221.

  4. New ghost-node method for linking different models with varied grid refinement

    USGS Publications Warehouse

    James, S.C.; Dickinson, J.E.; Mehl, S.W.; Hill, M.C.; Leake, S.A.; Zyvoloski, G.A.; Eddebbarh, A.-A.

    2006-01-01

    A flexible, robust method for linking grids of locally refined ground-water flow models constructed with different numerical methods is needed to address a variety of hydrologic problems. This work outlines and tests a new ghost-node model-linking method for a refined "child" model that is contained within a larger and coarser "parent" model that is based on the iterative method of Steffen W. Mehl and Mary C. Hill (2002, Advances in Water Res., 25, p. 497-511; 2004, Advances in Water Res., 27, p. 899-912). The method is applicable to steady-state solutions for ground-water flow. Tests are presented for a homogeneous two-dimensional system that has matching grids (parent cells border an integer number of child cells) or nonmatching grids. The coupled grids are simulated by using the finite-difference and finite-element models MODFLOW and FEHM, respectively. The simulations require no alteration of the MODFLOW or FEHM models and are executed using a batch file on Windows operating systems. Results indicate that when the grids are matched spatially so that nodes and child-cell boundaries are aligned, the new coupling technique has error nearly equal to that when coupling two MODFLOW models. When the grids are nonmatching, model accuracy is slightly increased compared to that for matching-grid cases. Overall, results indicate that the ghost-node technique is a viable means to couple distinct models because the overall head and flow errors relative to the analytical solution are less than if only the regional coarse-grid model was used to simulate flow in the child model's domain.

  5. Aerodynamic Characteristics of a Refined Deep-Step Planing-Tail Flying-Boat Hull with Various Forebody and Afterbody Shapes

    NASA Technical Reports Server (NTRS)

    Riebe, John M; Naeseth, Rodger L

    1953-01-01

    An investigation was made in the Langley 300 mph 7-by 10-foot tunnel to determine the aerodynamic characteristics of a refined deep-step planing-tail hull with various forebody and afterbody shapes. For comparison, tests were made on a streamline body simulating the fuselage of a modern transport airplane. The results of the tests, which include the interference effects of a 21-percent-thick support wing, indicated that for corresponding configurations the hull models incorporating a forebody with a length-beam ratio of 7 had lower minimum drag coefficients than the hull models incorporating a forebody with a length-beam ratio of 5. Longitudinal and lateral stability was generally about the same for all hull models tested and about the same as that of a conventional hull.

  6. Adaptive Mesh Refinement for Microelectronic Device Design

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Lou, John; Norton, Charles

    1999-01-01

Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchical elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smooth out and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of the spectrum, as well as active device simulations that model charge transport and Maxwell's equations, will be presented.
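The h-refinement loop described above can be sketched in a few lines. The gradient-jump indicator below is an illustrative stand-in for the reliable a posteriori error estimates the abstract calls for; the function names and the test field are hypothetical.

```python
import numpy as np

def refine_once(x, u_of, frac=0.5):
    """One h-refinement pass on a 1-D mesh: bisect every cell whose
    error indicator exceeds `frac` times the maximum indicator.
    x: sorted node coordinates; u_of: callable returning the field."""
    u = u_of(x)
    grad = np.diff(u) / np.diff(x)        # per-cell gradient
    jump = np.abs(np.diff(grad))          # gradient jump at interior nodes
    indicator = np.zeros(len(x) - 1)      # one indicator per cell
    indicator[:-1] += jump                # spread node jumps to adjacent cells
    indicator[1:] += jump
    flags = indicator > frac * indicator.max()
    mids = 0.5 * (x[:-1] + x[1:])[flags]  # midpoints of flagged cells
    return np.sort(np.concatenate([x, mids]))

# A field with a sharp internal layer, similar to a thin doped region
x = np.linspace(0.0, 1.0, 9)
for _ in range(3):                        # three adaptive passes
    x = refine_once(x, lambda s: np.tanh(20.0 * (s - 0.5)))
print(len(x))                             # nodes cluster near the steep front
```

p-refinement would instead raise the polynomial order within flagged cells, leaving the mesh fixed.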

  7. A Comparison of Spectral Element and Finite Difference Methods Using Statically Refined Nonconforming Grids for the MHD Island Coalescence Instability Problem

    NASA Astrophysics Data System (ADS)

    Ng, C. S.; Rosenberg, D.; Pouquet, A.; Germaschewski, K.; Bhattacharjee, A.

    2009-04-01

A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys. 215, 59-80 (2006)] is applied to simulate the MHD island coalescence instability in two dimensions. Island coalescence is a fundamental MHD process that can produce sharp current layers, with subsequent reconnection and heating, in a high-Lundquist-number plasma such as the solar corona [Ng and Bhattacharjee, Phys. Plasmas, 5, 4028 (1998)]. Because thin current layers form, it is highly desirable to use adaptively or statically refined grids to resolve them while maintaining accuracy. The output of the spectral-element static adaptive refinement simulations is compared with simulations using a finite difference method on the same refined grids, and both methods are compared against pseudo-spectral simulations on uniform grids as baselines. It is shown that with statically refined grids scaling roughly linearly with effective resolution, spectral-element runs can maintain accuracy significantly higher than that of the finite difference runs, in some cases achieving close to full spectral accuracy.

  8. Refinement of Objective Motion Cueing Criteria Investigation Based on Three Flight Tasks

    NASA Technical Reports Server (NTRS)

    Zaal, Petrus M. T.; Schroeder, Jeffery A.; Chung, William W.

    2017-01-01

The objective of this paper is to refine objective motion cueing criteria for commercial transport simulators based on pilots' performance in three flying tasks. Actuator hardware and software algorithms together determine the motion cues. Today, during a simulator qualification, engineers objectively evaluate only the hardware. Pilot inspectors subjectively assess the overall motion cueing system (i.e., hardware plus software); however, attributing any deficiencies that arise to either the hardware or the software is challenging. ICAO 9625 has an Objective Motion Cueing Test (OMCT), now a required test in the FAA's Part 60 regulations for new devices, that evaluates the software and hardware together; however, it lacks accompanying fidelity criteria. Hosman has documented OMCT results for a statistical sample of eight simulators, which is useful, but having validated criteria would be an improvement. In a previous experiment, we developed initial objective motion cueing criteria that this paper aims to refine. Sinacori suggested simple criteria that are in reasonable agreement with much of the literature. These criteria often necessitate motion displacements greater than most training simulators can provide. While some of the previous work used transport aircraft, the majority used fighter aircraft or helicopters, and those that used transport aircraft considered degraded flight characteristics. As a result, earlier criteria lean more towards being sufficient, rather than necessary, criteria for typical transport aircraft training applications. Considering the prevalence of 60-inch, six-legged hexapod training simulators, a relevant question is "what are the necessary criteria that can be used with the ICAO 9625 diagnostic?" This study adds to the literature as follows. First, it examines well-behaved transport aircraft characteristics, but in three challenging tasks.
The tasks are equivalent to the ones used in our previous experiment, allowing us to directly compare the results and add to the previous data. Second, it uses the Vertical Motion Simulator (VMS), the world's largest vertical displacement simulator. This allows inclusion of relatively large motion conditions, much larger than a typical training simulator can provide. Six new motion configurations were used that explore the motion responses between the initial objective motion cueing boundaries found in a previous experiment and what current hexapod simulators typically provide. Finally, a sufficiently large pilot pool added statistical reliability to the results.

  9. MODFLOW-2005, The U.S. Geological Survey Modular Ground-Water Model - Documentation of the Multiple-Refined-Areas Capability of Local Grid Refinement (LGR) and the Boundary Flow and Head (BFH) Package

    USGS Publications Warehouse

    Mehl, Steffen W.; Hill, Mary C.

    2007-01-01

This report documents the addition of the multiple-refined-areas capability to the shared-node Local Grid Refinement (LGR) capability and the Boundary Flow and Head (BFH) Package of MODFLOW-2005, the U.S. Geological Survey modular, three-dimensional, finite-difference ground-water flow model. LGR now provides the capability to simulate ground-water flow by using one or more block-shaped, higher-resolution local grids (child models) within a coarser grid (parent model). LGR accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundaries. The ability to have multiple, nonoverlapping areas of refinement is important in situations where there is more than one area of concern within a regional model. In this circumstance, LGR can be used to simulate these distinct areas with higher-resolution grids. LGR can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined ground-water systems. The BFH Package can be used to simulate these situations by using either the parent or child models independently.

  10. Steel refining possibilities in LF

    NASA Astrophysics Data System (ADS)

    Dumitru, M. G.; Ioana, A.; Constantin, N.; Ciobanu, F.; Pollifroni, M.

    2018-01-01

This article presents the main possibilities for steel refining in the Ladle Furnace (LF). The following are covered: steelmaking stages, steel refining through argon bottom stirring, online control of the bottom stirring, the bottom-stirring diagram during LF treatment of a heat, the influence of the porous plug on argon stirring, the bottom-stirring porous plug, analysis of the arrangement of porous plugs on the ladle bottom surface, bottom-stirring simulation with ANSYS, and bottom-stirring simulation with Autodesk CFD.

  11. Capturing Multiscale Phenomena via Adaptive Mesh Refinement (AMR) in 2D and 3D Atmospheric Flows

    NASA Astrophysics Data System (ADS)

    Ferguson, J. O.; Jablonowski, C.; Johansen, H.; McCorquodale, P.; Ullrich, P. A.; Langhans, W.; Collins, W. D.

    2017-12-01

Extreme atmospheric events such as tropical cyclones are inherently complex multiscale phenomena. Such phenomena are a challenge to simulate in conventional atmosphere models, which typically use rather coarse uniform-grid resolutions. To enable study of these systems, Adaptive Mesh Refinement (AMR) can provide sufficient local resolution by dynamically placing high-resolution grid patches over user-defined features of interest, such as a developing cyclone, while avoiding the computational burden of requiring such high resolution globally. This work explores the use of AMR with a high-order, non-hydrostatic, finite-volume dynamical core, which uses the Chombo AMR library to implement refinement in both space and time on a cubed-sphere grid. The characteristics of the AMR approach are demonstrated via a series of idealized 2D and 3D test cases designed to mimic atmospheric dynamics and multiscale flows. In particular, new shallow-water test cases with forcing mechanisms are introduced to mimic the strengthening of tropical cyclone-like vortices and to include simplified moisture and convection processes. The forced shallow-water experiments quantify the improvements gained from AMR grids, assess how well transient features are preserved across grid boundaries, and determine effective refinement criteria. In addition, results from idealized 3D test cases are shown to characterize the accuracy and stability of the non-hydrostatic 3D AMR dynamical core.
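A refinement criterion of the kind used to place patches over a developing vortex can be sketched as cell tagging with a buffer zone, so transient features are not clipped at patch edges. This is an illustrative stand-in, not Chombo's actual tagging API; the threshold and the idealised vorticity field are assumptions.

```python
import numpy as np

def tag_cells(vort, thresh, buffer=1):
    """Tag cells whose vorticity magnitude exceeds `thresh`, then grow
    the tagged region by `buffer` cells in each direction."""
    tags = np.abs(vort) > thresh
    for _ in range(buffer):
        grown = tags.copy()
        grown[1:, :] |= tags[:-1, :]      # grow downward
        grown[:-1, :] |= tags[1:, :]      # grow upward
        grown[:, 1:] |= tags[:, :-1]      # grow rightward
        grown[:, :-1] |= tags[:, 1:]      # grow leftward
        tags = grown
    return tags

# Idealised vortex: vorticity concentrated near the domain centre
y, x = np.mgrid[0:64, 0:64]
vort = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 30.0)
tags = tag_cells(vort, thresh=0.5)
print(tags.sum(), "cells tagged for refinement")
```

A block-structured AMR library would then cover the tagged region with rectangular high-resolution patches and interpolate the coarse solution onto them.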

  12. Dynamic Modeling, Controls, and Testing for Electrified Aircraft

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph; Stalcup, Erik

    2017-01-01

    Electrified aircraft have the potential to provide significant benefits for efficiency and emissions reductions. To assess these potential benefits, modeling tools are needed to provide rapid evaluation of diverse concepts and to ensure safe operability and peak performance over the mission. The modeling challenge for these vehicles is the ability to show significant benefits over the current highly refined aircraft systems. The STARC-ABL (single-aisle turbo-electric aircraft with an aft boundary layer propulsor) is a new test proposal that builds upon previous N3-X team hybrid designs. This presentation describes the STARC-ABL concept, the NASA Electric Aircraft Testbed (NEAT) which will allow testing of the STARC-ABL powertrain, and the related modeling and simulation efforts to date. Modeling and simulation includes a turbofan simulation, Numeric Propulsion System Simulation (NPSS), which has been integrated with NEAT; and a power systems and control model for predicting testbed performance and evaluating control schemes. Model predictions provide good comparisons with testbed data for an NPSS-integrated test of the single-string configuration of NEAT.

  13. Refined Simulation of Satellite Laser Altimeter Full Echo Waveform

    NASA Astrophysics Data System (ADS)

    Men, H.; Xing, Y.; Li, G.; Gao, X.; Zhao, Y.; Gao, X.

    2018-04-01

The return waveform of a satellite laser altimeter plays a vital role in the design of satellite parameters and in data processing and application. In this paper, a method of refined full-waveform simulation is proposed based on the reflectivity of the ground target, the true emission waveform, and the Laser Profile Array (LPA). ICESat/GLAS data are used for validation, and the simulation accuracy is evaluated with the correlation coefficient. It was found that the accuracy of echo simulation could be significantly improved by considering the reflectivity of the ground target and the emission waveform. However, the laser intensity distribution recorded by the LPA has little effect on the echo-simulation accuracy compared with the distribution of the simulated laser energy. Finally, we propose a refinement idea based on the experimental results, in the hope of providing a reference for the waveform-data simulation and processing of the GF-7 satellite in the future.
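The accuracy measure used above, the correlation coefficient between simulated and observed echoes, can be sketched minimally. The Gaussian-pulse waveforms here are synthetic stand-ins for GLAS echoes, not the paper's data.

```python
import numpy as np

def waveform_correlation(sim, obs):
    """Pearson correlation between two sampled return waveforms."""
    return np.corrcoef(sim, obs)[0, 1]

t = np.linspace(0.0, 1.0, 200)
obs = np.exp(-((t - 0.50) / 0.05) ** 2)          # "observed" echo
sim = 0.9 * np.exp(-((t - 0.51) / 0.05) ** 2)    # slightly shifted, scaled simulation
print(round(waveform_correlation(sim, obs), 3))
```

Because the Pearson coefficient is invariant to amplitude scaling, it rewards matching the echo's shape and timing rather than its absolute intensity.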

  14. Pyrotechnic shock: A literature survey of the Linear Shaped Charge (LSC)

    NASA Technical Reports Server (NTRS)

    Smith, J. L.

    1984-01-01

Linear shaped charge (LSC) literature from the past 20 years is reviewed. The following topics are discussed: (1) LSC configuration; (2) LSC usage; (3) LSC-induced pyroshock; (4) simulated pyrotechnic testing; (5) actual pyrotechnic testing; (6) data collection methods; (7) data analysis techniques; (8) shock reduction methods; and (9) design criteria. Although no new discoveries have been made in LSC research, charge shapes have been improved to allow better cutting performance, testing instrumentation has been refined, and some new explosives for use in LSCs have been formulated.

  15. Implementation of local grid refinement (LGR) for the Lake Michigan Basin regional groundwater-flow model

    USGS Publications Warehouse

    Hoard, C.J.

    2010-01-01

The U.S. Geological Survey is evaluating water availability and use within the Great Lakes Basin. This is a pilot effort to develop new techniques and methods to aid in the assessment of water availability. As part of the pilot program, a regional groundwater-flow model for the Lake Michigan Basin was developed using SEAWAT-2000. The regional model was used as a framework for assessing local-scale water availability through grid-refinement techniques. Two grid-refinement techniques, telescopic mesh refinement and local grid refinement, were used to illustrate the capability of the regional model to evaluate local-scale problems. An intermediate model was developed in central Michigan spanning an area of 454 square miles (mi2) using telescopic mesh refinement. Within the intermediate model, a smaller local model covering an area of 21.7 mi2 was developed and simulated using local grid refinement. Recharge was distributed in space and time using daily output from a modified Thornthwaite-Mather soil-water-balance method. The soil-water-balance method derived recharge estimates from temperature and precipitation data output from an atmosphere-ocean coupled general-circulation model. The particular atmosphere-ocean coupled general-circulation model used simulated climate change caused by high global greenhouse-gas emissions to the atmosphere. The surface-water network simulated in the regional model was refined and simulated using a streamflow-routing package for MODFLOW. The refined models were used to demonstrate streamflow depletion and potential climate change using five scenarios: four streamflow-depletion scenarios and a climate-change scenario. The streamflow-depletion scenarios were (1) natural conditions (no pumping); (2) a pumping well near a stream, screened in surficial glacial deposits; (3) a pumping well near a stream, screened in deeper glacial deposits; and (4) a pumping well near a stream, open to a deep bedrock aquifer.
Results indicated that 59 percent and 50 percent of the water pumped originated from the stream for the shallow glacial and deep bedrock pumping scenarios, respectively. The difference in streamflow reduction between the shallow and deep pumping scenarios was compensated for in the deep well by deriving more water from regional sources. The climate-change scenario simulated only natural conditions from 1991 to 2044, so no pumping stress was simulated. Streamflows calculated for the simulated period indicated that recharge generally increased from the start of the simulation until approximately 2017 and decreased from then to the end of the simulation. Streamflow was highly correlated with recharge, so the lowest streamflows occurred in the later stress periods of the model, when recharge was lowest.
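Depletion percentages like those above come from comparing net stream leakage between the pumping and no-pumping runs. A minimal budget-style sketch follows, with made-up numbers rather than the Lake Michigan model output.

```python
def streamflow_depletion_fraction(gain_no_pump, gain_pump, pump_rate):
    """Fraction of the pumping rate captured from the stream:
    reduction in net stream gain divided by the pumpage."""
    return (gain_no_pump - gain_pump) / pump_rate

# Hypothetical budget numbers, in cubic feet per day
no_pump_gain = 1000.0   # stream gains 1000 ft3/d under natural conditions
pump_gain = 705.0       # stream gain with a 500 ft3/d well pumping nearby
print(streamflow_depletion_fraction(no_pump_gain, pump_gain, 500.0))  # → 0.59
```

A deeper well with the same pumpage would show a smaller fraction, the remainder of its water being captured from regional sources, as in the report's deep-bedrock scenario.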

  16. Simulation, guidance and navigation of the B-737 for rollout and turnoff using MLS measurements

    NASA Technical Reports Server (NTRS)

    Pines, S.; Schmidt, S. F.; Mann, F.

    1975-01-01

    A simulation program is described for the B-737 aircraft in landing approach, a touchdown, rollout and turnoff for normal and CAT III weather conditions. Preliminary results indicate that microwave landing systems can be used in place of instrument landing systems landing aids and that a single magnetic cable can be used for automated rollout and turnoff. Recommendations are made for further refinement of the model and additional testing to finalize a set of guidance laws for rollout and turnoff.

  17. TIMES-SS--recent refinements resulting from an industrial skin sensitisation consortium.

    PubMed

    Patlewicz, G; Kuseva, C; Mehmed, A; Popova, Y; Dimitrova, G; Ellis, G; Hunziker, R; Kern, P; Low, L; Ringeissen, S; Roberts, D W; Mekenyan, O

    2014-01-01

The TImes MEtabolism Simulator platform for predicting Skin Sensitisation (TIMES-SS) is a hybrid expert system, first developed at Bourgas University using funding and data from a consortium of industry and regulators. TIMES-SS encodes structure-toxicity and structure-skin metabolism relationships through a number of transformations, some of which are underpinned by mechanistic 3D QSARs. The model estimates semi-quantitative skin sensitisation potency classes and has been developed with the aim of minimising animal testing, and also to be scientifically valid in accordance with the OECD principles for (Q)SAR validation. In 2007 an external validation exercise was undertaken to fully address these principles. In 2010, a new industry consortium was established to coordinate research efforts in three specific areas: refinement of abiotic reactions in the skin (namely autoxidation), refinement of the manner in which chemical reactivity is captured in the structure-toxicity rules (inclusion of alert reliability parameters), and definition of the domain based on the underlying experimental data (a study of discrepancies between the Local Lymph Node Assay (LLNA) and the Guinea Pig Maximisation Test (GPMT)). The present paper summarises the progress of these activities and explains how the insights derived have been translated into refinements, resulting in increased confidence and transparency in the robustness of the TIMES-SS predictions.

  18. Using toxicokinetic-toxicodynamic modeling as an acute risk assessment refinement approach in vertebrate ecological risk assessment.

    PubMed

    Ducrot, Virginie; Ashauer, Roman; Bednarska, Agnieszka J; Hinarejos, Silvia; Thorbek, Pernille; Weyman, Gabriel

    2016-01-01

    Recent guidance identified toxicokinetic-toxicodynamic (TK-TD) modeling as a relevant approach for risk assessment refinement. Yet, its added value compared to other refinement options is not detailed, and how to conduct the modeling appropriately is not explained. This case study addresses these issues through 2 examples of individual-level risk assessment for 2 hypothetical plant protection products: 1) evaluating the risk for small granivorous birds and small omnivorous mammals of a single application, as a seed treatment in winter cereals, and 2) evaluating the risk for fish after a pulsed treatment in the edge-of-field zone. Using acute test data, we conducted the first tier risk assessment as defined in the European Food Safety Authority (EFSA) guidance. When first tier risk assessment highlighted a concern, refinement options were discussed. Cases where the use of models should be preferred over other existing refinement approaches were highlighted. We then practically conducted the risk assessment refinement by using 2 different models as examples. In example 1, a TK model accounting for toxicokinetics and relevant feeding patterns in the skylark and in the wood mouse was used to predict internal doses of the hypothetical active ingredient in individuals, based on relevant feeding patterns in an in-crop situation, and identify the residue levels leading to mortality. In example 2, a TK-TD model accounting for toxicokinetics, toxicodynamics, and relevant exposure patterns in the fathead minnow was used to predict the time-course of fish survival for relevant FOCUS SW exposure scenarios and identify which scenarios might lead to mortality. Models were calibrated using available standard data and implemented to simulate the time-course of internal dose of active ingredient or survival for different exposure scenarios. Simulation results were discussed and used to derive the risk assessment refinement endpoints used for decision. 
Finally, we compared the "classical" risk assessment approach with the model-based approach. These comparisons showed that TK and TK-TD models can bring more realism to the risk assessment through the possibility to study realistic exposure scenarios and to simulate relevant mechanisms of effects (including delayed toxicity and recovery). Noticeably, using TK-TD models is currently the most relevant way to directly connect realistic exposure patterns to effects. We conclude with recommendations on how to properly use TK and TK-TD model in acute risk assessment for vertebrates. © 2015 SETAC.
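The TK models the case study builds on can be illustrated with a minimal one-compartment sketch, in which the internal concentration C follows dC/dt = k_in * C_ext(t) - k_out * C. The rate constants and the pulsed exposure pattern below are hypothetical, not taken from the paper.

```python
import numpy as np

def simulate_tk(c_ext, k_in, k_out, dt):
    """Explicit-Euler integration of a one-compartment toxicokinetic
    model: uptake proportional to external concentration, first-order
    elimination of the internal concentration."""
    c = 0.0
    out = []
    for ce in c_ext:
        c += dt * (k_in * ce - k_out * c)
        out.append(c)
    return np.array(out)

# Pulsed exposure: 2 days in contaminated water, then clean water
t = np.arange(0.0, 10.0, 0.01)
c_ext = np.where(t < 2.0, 1.0, 0.0)
internal = simulate_tk(c_ext, k_in=0.8, k_out=0.4, dt=0.01)
print(round(internal.max(), 3))   # peak internal dose near the end of the pulse
```

A TK-TD model would add a toxicodynamic layer linking this internal dose to a survival probability over time, which is what connects realistic exposure patterns directly to effects.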

  19. 3D magnetospheric parallel hybrid multi-grid method applied to planet–plasma interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leclercq, L., E-mail: ludivine.leclercq@latmos.ipsl.fr; Modolo, R., E-mail: ronan.modolo@latmos.ipsl.fr; Leblanc, F.

    2016-03-15

We present a new method to exploit multiple refinement levels within a 3D parallel hybrid model developed to study planet–plasma interactions. This model is based on the hybrid formalism: ions are treated kinetically whereas electrons are considered an inertia-less fluid. Generally, ions are represented by numerical particles whose size equals the volume of the cells. Particles that leave a coarse grid and enter a refined region are split into particles whose volume corresponds to the volume of the refined cells. The number of refined particles created from a coarse particle depends on the grid refinement rate. In order to conserve velocity distribution functions and to avoid calculations of average velocities, particles are not coalesced. Moreover, to ensure the constancy of particles' shape-function sizes, the hybrid method is adapted to allow refined particles to move within a coarse region. Another innovation of this approach is the method developed to compute grid moments at interfaces between two refinement levels. Indeed, the hybrid method is adapted to accurately account for the special grid structure at the interfaces, avoiding any overlapping-grid considerations. Some fundamental test runs were performed to validate our approach (e.g. quiet plasma flow, Alfven wave propagation). Lastly, we also show a planetary application of the model, simulating the interaction between Jupiter's moon Ganymede and the Jovian plasma.
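The particle-splitting step described above can be sketched as follows: a coarse macro-particle entering a region refined by a factor r along each of the three dimensions is split into r**3 refined particles that share its velocity (preserving the velocity distribution function without averaging) and divide its weight equally. This is a hedged illustration, not the authors' code.

```python
import numpy as np

def split_particle(weight, velocity, refinement=2, dims=3):
    """Split one coarse macro-particle into refinement**dims refined
    particles of equal weight and identical velocity."""
    n = refinement ** dims
    return [(weight / n, velocity) for _ in range(n)]

children = split_particle(weight=8.0, velocity=np.array([400.0, 0.0, 0.0]))
total = sum(w for w, _ in children)
print(len(children), total)   # 8 children, weight conserved
```

Because no averaging occurs, moments accumulated on the refined grid from the children are identical to those the parent would have contributed, which is the conservation property the abstract emphasises.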

  20. Princeton_TIGRESS 2.0: High refinement consistency and net gains through support vector machines and molecular dynamics in double-blind predictions during the CASP11 experiment.

    PubMed

    Khoury, George A; Smadbeck, James; Kieslich, Chris A; Koskosidis, Alexandra J; Guzman, Yannis A; Tamamis, Phanourios; Floudas, Christodoulos A

    2017-06-01

Protein structure refinement is the challenging problem of operating on any protein structure prediction to improve its accuracy with respect to the native structure in a blind fashion. Although many approaches have been developed and tested during the last four CASP experiments, a majority of the methods continue to degrade models rather than improve them. Princeton_TIGRESS (Khoury et al., Proteins 2014;82:794-814) was developed previously and utilizes separate sampling and selection stages involving Monte Carlo and molecular dynamics simulations and classification using an SVM predictor. The initial implementation was shown to consistently refine protein structures 76% of the time in our own internal benchmarking on CASP 7-10 targets. In this work, we improved the sampling and selection stages and tested the method in blind predictions during CASP11. We added a decomposition of physics-based and hybrid energy functions, as well as a coordinate-free representation of the protein structure through distance-binning Cα-Cα distances to capture fine-grained movements. We performed parameter estimation to optimize the adjustable SVM parameters to maximize precision while balancing sensitivity and specificity across all cross-validated data sets, finding enrichment in our ability to select models from the populations of similar decoys generated for targets in CASPs 7-10. The MD stage was enhanced such that larger structures could be further refined. Among refinement methods that are currently implemented as web-servers, Princeton_TIGRESS 2.0 demonstrated the most consistent and most substantial net refinement in blind predictions during CASP11. The enhanced refinement protocol Princeton_TIGRESS 2.0 is freely available as a web server at http://atlas.engr.tamu.edu/refinement/. Proteins 2017; 85:1078-1098. © 2017 Wiley Periodicals, Inc.

  1. The Mind as Black Box: A Simulation of Theory Building in Psychology.

    ERIC Educational Resources Information Center

    Hildebrandt, Carolyn; Oliver, Jennifer

    2000-01-01

    Discusses an activity that uses the metaphor "the mind is a black box," in which students work in groups to discover what is inside a sealed, black, plastic box. States that the activity enables students to understand the need for theories in psychology and to comprehend how psychologists build, test, and refine those theories. (CMK)

  2. Technical support package: Large, easily deployable structures. NASA Tech Briefs, Fall 1982, volume 7, no. 1

    NASA Technical Reports Server (NTRS)

    1982-01-01

Design and test data for packaging, deploying, and assembling structures for near-term space platform systems were provided by testing light-type hardware in the Neutral Buoyancy Simulator. An optimum or near-optimum structural configuration for varying degrees of deployment, utilizing different levels of EVA and RMS, was achieved. The design of joints and connectors and their lock/release mechanisms was refined to improve performance and operational convenience. The incorporation of utilities into structural modules was evaluated to determine their effects on packaging and deployment. Through simulation tests, data were obtained for stowage, deployment, and assembly of the final structural system design to determine construction timelines and to evaluate system functioning and techniques.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun

This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using a variable-resolution (VR) mesh with local refinement at 30 km over North America and South America, embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes developed only near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing, such as topography and surface heterogeneity, for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through the large-scale circulation, but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.

  4. Distributed Simulation as a modelling tool for the development of a simulation-based training programme for cardiovascular specialties.

    PubMed

    Kelay, Tanika; Chan, Kah Leong; Ako, Emmanuel; Yasin, Mohammad; Costopoulos, Charis; Gold, Matthew; Kneebone, Roger K; Malik, Iqbal S; Bello, Fernando

    2017-01-01

Distributed Simulation is the concept of portable, high-fidelity immersive simulation. Here, it is used for the development of a simulation-based training programme for cardiovascular specialities. We present an evidence base for how accessible, portable and self-contained simulated environments can be effectively utilised for the modelling, development and testing of a complex training framework and assessment methodology. Iterative user feedback through mixed-methods evaluation techniques resulted in the implementation of the training programme. Four phases were involved in the development of our immersive simulation-based training programme: (1) an initial conceptual stage for mapping structural criteria and parameters of the simulation training framework and scenario development (n = 16); (2) training facility design using Distributed Simulation; (3) test cases with clinicians (n = 8) and collaborative design, where evaluation and user feedback involved a mixed-methods approach featuring (a) quantitative surveys to evaluate the realism and perceived educational relevance of the simulation format and framework for training and (b) qualitative semi-structured interviews to capture detailed feedback, including changes and scope for development. Refinements were made iteratively to the simulation framework based on user feedback, resulting in (4) transition towards implementation of the simulation training framework, involving consistent quantitative evaluation techniques for clinicians (n = 62). For comparative purposes, clinicians' initial quantitative mean evaluation scores for realism of the simulation training framework, realism of the training facility and relevance for training (n = 8) are presented longitudinally, alongside feedback throughout the development stages from concept to delivery, including the implementation stage (n = 62). Initially, mean evaluation scores fluctuated from low to average, rising incrementally.
This corresponded with the qualitative component, which augmented the quantitative findings; trainees' user feedback was used to perform iterative refinements to the simulation design and components (collaborative design), resulting in higher mean evaluation scores leading up to the implementation phase. Through application of innovative Distributed Simulation techniques, collaborative design, and consistent evaluation techniques from conceptual, development, and implementation stages, fully immersive simulation techniques for cardiovascular specialities are achievable and have the potential to be implemented more broadly.

  5. Advances in Rotor Performance and Turbulent Wake Simulation Using DES and Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    2012-01-01

    Time-dependent Navier-Stokes simulations have been carried out for a rigid V22 rotor in hover, and a flexible UH-60A rotor in forward flight. Emphasis is placed on understanding and characterizing the effects of high-order spatial differencing, grid resolution, and Spalart-Allmaras (SA) detached eddy simulation (DES) in predicting the rotor figure of merit (FM) and resolving the turbulent rotor wake. The FM was accurately predicted within experimental error using SA-DES. Moreover, a new adaptive mesh refinement (AMR) procedure revealed a complex and more realistic turbulent rotor wake, including the formation of turbulent structures resembling vortical worms. Time-dependent flow visualization played a crucial role in understanding the physical mechanisms involved in these complex viscous flows. The predicted vortex core growth with wake age was in good agreement with experiment. High-resolution wakes for the UH-60A in forward flight exhibited complex turbulent interactions and turbulent worms, similar to the V22. The normal force and pitching moment coefficients were in good agreement with flight-test data.

  6. KSC-08pd0089

    NASA Image and Video Library

    2008-01-24

    KENNEDY SPACE CENTER, FLA. -- In the hypergolic maintenance facility at NASA's Kennedy Space Center, technicians monitor equipment during testing of the Ares I-X Roll Control System, or RoCS. The RoCS Servicing Simulation Test is to gather data that will be used to help certify the ground support equipment design and validate the servicing requirements and processes. The RoCS is part of the Interstage structure, the lowest axial segment of the Upper Stage Simulator. In an effort to reduce costs and meet the schedule, most of the ground support equipment that will be used for the RoCS servicing is of space shuttle heritage. This high-fidelity servicing simulation will provide confidence that servicing requirements can be met with the heritage system. At the same time, the test will gather process data that will be used to modify or refine the equipment and processes to be used for the actual flight element. Photo credit: NASA/Kim Shiflett

  7. KSC-08pd0081

    NASA Image and Video Library

    2008-01-24

    KENNEDY SPACE CENTER, FLA. -- In the hypergolic maintenance facility at NASA's Kennedy Space Center, elements of the ARES I-X Roll Control System, or RoCS, will undergo testing. The RoCS Servicing Simulation Test is to gather data that will be used to help certify the ground support equipment design and validate the servicing requirements and processes. The RoCS is part of the Interstage structure, the lowest axial segment of the Upper Stage Simulator. In an effort to reduce costs and meet the schedule, most of the ground support equipment that will be used for the RoCS servicing is of space shuttle heritage. This high-fidelity servicing simulation will provide confidence that servicing requirements can be met with the heritage system. At the same time, the test will gather process data that will be used to modify or refine the equipment and processes to be used for the actual flight element. Photo credit: NASA/Kim Shiflett

  8. KSC-08pd0092

    NASA Image and Video Library

    2008-01-24

    KENNEDY SPACE CENTER, FLA. -- In the hypergolic maintenance facility at NASA's Kennedy Space Center, technicians monitor equipment during testing of the Ares I-X Roll Control System, or RoCS. The RoCS Servicing Simulation Test is to gather data that will be used to help certify the ground support equipment design and validate the servicing requirements and processes. The RoCS is part of the Interstage structure, the lowest axial segment of the Upper Stage Simulator. In an effort to reduce costs and meet the schedule, most of the ground support equipment that will be used for the RoCS servicing is of space shuttle heritage. This high-fidelity servicing simulation will provide confidence that servicing requirements can be met with the heritage system. At the same time, the test will gather process data that will be used to modify or refine the equipment and processes to be used for the actual flight element. Photo credit: NASA/Kim Shiflett

  9. KSC-08pd0087

    NASA Image and Video Library

    2008-01-24

    KENNEDY SPACE CENTER, FLA. -- In the hypergolic maintenance facility at NASA's Kennedy Space Center, a technician adjusts equipment during testing of the Ares I-X Roll Control System, or RoCS. The RoCS Servicing Simulation Test is to gather data that will be used to help certify the ground support equipment design and validate the servicing requirements and processes. The RoCS is part of the Interstage structure, the lowest axial segment of the Upper Stage Simulator. In an effort to reduce costs and meet the schedule, most of the ground support equipment that will be used for the RoCS servicing is of space shuttle heritage. This high-fidelity servicing simulation will provide confidence that servicing requirements can be met with the heritage system. At the same time, the test will gather process data that will be used to modify or refine the equipment and processes to be used for the actual flight element. Photo credit: NASA/Kim Shiflett

  10. KSC-08pd0090

    NASA Image and Video Library

    2008-01-24

    KENNEDY SPACE CENTER, FLA. -- In the hypergolic maintenance facility at NASA's Kennedy Space Center, a technician (right) adjusts equipment during testing of the Ares I-X Roll Control System, or RoCS. The RoCS Servicing Simulation Test is to gather data that will be used to help certify the ground support equipment design and validate the servicing requirements and processes. The RoCS is part of the Interstage structure, the lowest axial segment of the Upper Stage Simulator. In an effort to reduce costs and meet the schedule, most of the ground support equipment that will be used for the RoCS servicing is of space shuttle heritage. This high-fidelity servicing simulation will provide confidence that servicing requirements can be met with the heritage system. At the same time, the test will gather process data that will be used to modify or refine the equipment and processes to be used for the actual flight element. Photo credit: NASA/Kim Shiflett

  11. KSC-08pd0088

    NASA Image and Video Library

    2008-01-24

    KENNEDY SPACE CENTER, FLA. -- In the hypergolic maintenance facility at NASA's Kennedy Space Center, technicians monitor equipment during testing of the Ares I-X Roll Control System, or RoCS. The RoCS Servicing Simulation Test is to gather data that will be used to help certify the ground support equipment design and validate the servicing requirements and processes. The RoCS is part of the Interstage structure, the lowest axial segment of the Upper Stage Simulator. In an effort to reduce costs and meet the schedule, most of the ground support equipment that will be used for the RoCS servicing is of space shuttle heritage. This high-fidelity servicing simulation will provide confidence that servicing requirements can be met with the heritage system. At the same time, the test will gather process data that will be used to modify or refine the equipment and processes to be used for the actual flight element. Photo credit: NASA/Kim Shiflett

  12. KSC-08pd0085

    NASA Image and Video Library

    2008-01-24

    KENNEDY SPACE CENTER, FLA. -- In the hypergolic maintenance facility at NASA's Kennedy Space Center, a technician monitors equipment during testing of the Ares I-X Roll Control System, or RoCS. The RoCS Servicing Simulation Test is to gather data that will be used to help certify the ground support equipment design and validate the servicing requirements and processes. The RoCS is part of the Interstage structure, the lowest axial segment of the Upper Stage Simulator. In an effort to reduce costs and meet the schedule, most of the ground support equipment that will be used for the RoCS servicing is of space shuttle heritage. This high-fidelity servicing simulation will provide confidence that servicing requirements can be met with the heritage system. At the same time, the test will gather process data that will be used to modify or refine the equipment and processes to be used for the actual flight element. Photo credit: NASA/Kim Shiflett

  13. KSC-08pd0091

    NASA Image and Video Library

    2008-01-24

    KENNEDY SPACE CENTER, FLA. -- In the hypergolic maintenance facility at NASA's Kennedy Space Center, a technician adjusts equipment during testing of the Ares I-X Roll Control System, or RoCS. The RoCS Servicing Simulation Test is to gather data that will be used to help certify the ground support equipment design and validate the servicing requirements and processes. The RoCS is part of the Interstage structure, the lowest axial segment of the Upper Stage Simulator. In an effort to reduce costs and meet the schedule, most of the ground support equipment that will be used for the RoCS servicing is of space shuttle heritage. This high-fidelity servicing simulation will provide confidence that servicing requirements can be met with the heritage system. At the same time, the test will gather process data that will be used to modify or refine the equipment and processes to be used for the actual flight element. Photo credit: NASA/Kim Shiflett

  14. KSC-08pd0084

    NASA Image and Video Library

    2008-01-24

    KENNEDY SPACE CENTER, FLA. -- In the hypergolic maintenance facility at NASA's Kennedy Space Center, technicians get ready to begin testing elements of the Ares I-X Roll Control System, or RoCS. The RoCS Servicing Simulation Test is to gather data that will be used to help certify the ground support equipment design and validate the servicing requirements and processes. The RoCS is part of the Interstage structure, the lowest axial segment of the Upper Stage Simulator. In an effort to reduce costs and meet the schedule, most of the ground support equipment that will be used for the RoCS servicing is of space shuttle heritage. This high-fidelity servicing simulation will provide confidence that servicing requirements can be met with the heritage system. At the same time, the test will gather process data that will be used to modify or refine the equipment and processes to be used for the actual flight element. Photo credit: NASA/Kim Shiflett

  15. GWM-2005 - A Groundwater-Management Process for MODFLOW-2005 with Local Grid Refinement (LGR) Capability

    USGS Publications Warehouse

    Ahlfeld, David P.; Baker, Kristine M.; Barlow, Paul M.

    2009-01-01

    This report describes the Groundwater-Management (GWM) Process for MODFLOW-2005, the 2005 version of the U.S. Geological Survey modular three-dimensional groundwater model. GWM can solve a broad range of groundwater-management problems by combined use of simulation- and optimization-modeling techniques. These problems include limiting groundwater-level declines or streamflow depletions, managing groundwater withdrawals, and conjunctively using groundwater and surface-water resources. GWM was initially released for the 2000 version of MODFLOW. Several modifications and enhancements have been made to GWM since its initial release to increase the scope of the program's capabilities and to improve its operation and reporting of results. The new code, which is called GWM-2005, also was designed to support the local grid refinement capability of MODFLOW-2005. Local grid refinement allows for the simulation of one or more higher resolution local grids (referred to as child models) within a coarser grid parent model. Local grid refinement is often needed to improve simulation accuracy in regions where hydraulic gradients change substantially over short distances or in areas requiring detailed representation of aquifer heterogeneity. GWM-2005 can be used to formulate and solve groundwater-management problems that include components in both parent and child models. Although local grid refinement increases simulation accuracy, it can also substantially increase simulation run times.
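
    GWM couples MODFLOW simulations with optimization to find pumping strategies that honor constraints such as drawdown limits. As a loose illustration of the response-matrix idea underlying such formulations (a deliberate simplification; GWM itself uses formal linear and nonlinear programming solvers coupled to MODFLOW, and every name and number below is hypothetical), the sketch finds the largest uniform scaling of proposed pumping rates that keeps every drawdown constraint satisfied:

```python
def scale_rates(rates, response, limits):
    """Largest uniform factor by which proposed pumping rates can be
    scaled without violating any drawdown limit.

    response[i][j] is the drawdown at control point i per unit pumping
    at well j; the aquifer response is assumed linear and superposable,
    as in a response-matrix formulation of the management problem.
    All inputs here are hypothetical illustration values.
    """
    factors = []
    for row, limit in zip(response, limits):
        # Superpose the contribution of every well at this control point.
        drawdown = sum(r * q for r, q in zip(row, rates))
        if drawdown > 0:
            factors.append(limit / drawdown)
    # No binding constraint means the rates can grow without bound here.
    return min(factors, default=float("inf"))
```

    For example, two wells pumping at 100 and 50 units against two drawdown limits would be scaled by whichever control point binds first.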

  16. Analysis and improvements of Adaptive Particle Refinement (APR) through CPU time, accuracy and robustness considerations

    NASA Astrophysics Data System (ADS)

    Chiron, L.; Oger, G.; de Leffe, M.; Le Touzé, D.

    2018-02-01

    While smoothed-particle hydrodynamics (SPH) simulations are usually performed using uniform particle distributions, local particle refinement techniques have been developed to concentrate fine spatial resolutions in identified areas of interest. Although the formalism of this method is relatively easy to implement, its robustness at coarse/fine interfaces can be problematic. Analysis performed in [16] shows that the radius of refined particles should be greater than half the radius of unrefined particles to ensure robustness. In this article, the basics of an Adaptive Particle Refinement (APR) technique, inspired by AMR in mesh-based methods, are presented. This approach ensures robustness with alleviated constraints. Simulations applying the new formalism proposed achieve accuracy comparable to fully refined spatial resolutions, together with robustness, low CPU times and maintained parallel efficiency.
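
    The interface-robustness constraint cited above (refined-particle radius greater than half the unrefined radius) is straightforward to enforce when a coarse particle is split. The helper below is a hypothetical sketch, assuming an equal mass split and a fixed child-to-parent radius ratio; real APR implementations also position daughter particles on a stencil and smooth quantities across the coarse/fine interface:

```python
def split_particle(mass, radius, n_children=4, alpha=0.6):
    """Split one coarse SPH particle into n_children refined particles.

    alpha is the child-to-parent smoothing-radius ratio; per the
    robustness analysis discussed above it should stay above 0.5
    (the exact bound is method-dependent). Mass is distributed
    equally so total mass is conserved.
    """
    if alpha <= 0.5:
        raise ValueError("child radius ratio should exceed 0.5 for robustness")
    child_mass = mass / n_children
    child_radius = alpha * radius
    return [(child_mass, child_radius) for _ in range(n_children)]
```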

  17. A Cartesian cut cell method for rarefied flow simulations around moving obstacles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechristé, G.; Mieussens, L. (CNRS, IMB, UMR 5251, F-33400 Talence)

    2016-06-01

    For accurate simulations of rarefied gas flows around moving obstacles, we propose a cut cell method on Cartesian grids: it allows exact conservation and accurate treatment of boundary conditions. Our approach is designed to treat Cartesian cells and various kinds of cut cells by the same algorithm, with no need to identify the specific shape of each cut cell. This makes the implementation quite simple, and allows a direct extension to 3D problems. Such simulations are also made possible by using an adaptive mesh refinement technique and a hybrid parallel implementation. This is illustrated by several test cases, including a 3D unsteady simulation of the Crookes radiometer.

  18. Modifications and Modelling of the Fission Surface Power Primary Test Circuit (FSP-PTC)

    NASA Technical Reports Server (NTRS)

    Garber, Ann E.

    2008-01-01

    An actively pumped alkali metal flow circuit, designed and fabricated at the NASA Marshall Space Flight Center, underwent a range of tests at MSFC in early 2007. During this period, system transient responses and the performance of the liquid metal pump were evaluated. In May of 2007, the circuit was drained and cleaned to prepare for multiple modifications: the addition of larger upper and lower reservoirs, the installation of an annular linear induction pump (ALIP), and the inclusion of the Single Flow Cell Test Apparatus (SFCTA) in the test section. Performance of the ALIP, provided by Idaho National Laboratory (INL), will be evaluated when testing resumes. The SFCTA, which will be tested simultaneously, will provide data on alkali metal flow behavior through the simulated core channels and assist in the development of a second generation thermal simulator. Additionally, data from the first round of testing has been used to refine the working system model, developed using the Generalized Fluid System Simulation Program (GFSSP). This paper covers the modifications of the FSP-PTC and the updated GFSSP system model.

  19. Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Leng, W.; Zhong, S.

    2008-12-01

    In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. Adaptive mesh refinement (AMR) techniques allow local mesh refinement wherever high resolution is needed, while leaving other regions at relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement AMR techniques in 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (7,492 AMR elements vs. 65,536 uniform elements) and for temperature-dependent viscosity cases (14,620 AMR elements vs. 65,536 uniform elements). We further implement a tracer method in the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results of van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (7,552 AMR elements vs. 16,384 uniform elements, and ~83,000 tracers vs. ~410,000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to trace their evolution precisely. Our AMR code is thus well suited to thermal-chemical convection problems that require high resolution to resolve the evolution of chemical boundaries, such as entrainment problems [Sleep, 1988].
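
    The core of a quadtree (or, in 3-D, octree) AMR scheme like the one described is a recursive split of cells flagged by a refinement criterion. The 2-D sketch below is illustrative only: the Cell class and criterion are hypothetical, and a real mantle-convection code would add solution transfer between levels and hanging-node handling:

```python
class Cell:
    """A square cell in a 2-D quadtree AMR mesh."""

    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

    def refine(self, needs_refinement, max_level=4):
        """Recursively split flagged cells into four half-size children."""
        if self.level >= max_level or not needs_refinement(self):
            return
        half = self.size / 2.0
        for dx in (0.0, half):
            for dy in (0.0, half):
                child = Cell(self.x + dx, self.y + dy, half, self.level + 1)
                child.refine(needs_refinement, max_level)
                self.children.append(child)

    def leaves(self):
        """Collect the active (unrefined) cells of the mesh."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]
```

    Refining only cells near a feature (say, a chemical boundary at x = 0.25) concentrates elements there while the rest of the domain stays coarse, which is what yields the element counts quoted above.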

  20. Extraction-Separation Performance and Dynamic Modeling of Orion Test Vehicles with Adams Simulation: 3rd Edition

    NASA Technical Reports Server (NTRS)

    Varela, Jose G.; Reddy, Satish; Moeller, Enrique; Anderson, Keith

    2017-01-01

    NASA's Orion Capsule Parachute Assembly System (CPAS) Project is now in the qualification phase of testing, and the Adams simulation has continued to evolve to model the complex dynamics experienced during the test article extraction and separation phases of flight. The ability to initiate tests near the upper altitude limit of the Orion parachute deployment envelope requires extractions from the aircraft at 35,000 ft MSL. Engineering development phase testing of the Parachute Test Vehicle (PTV) carried by the Carriage Platform Separation System (CPSS) at altitude resulted in test support equipment hardware failures due to the increased energy caused by higher true airspeeds. As a result, hardware modifications became a necessity, requiring ground static testing of the textile components and a new ground dynamic test of the extraction system. Force-displacement curves from the static tests were incorporated into the Adams simulations, allowing prediction of the loads, velocities and margins encountered during both flight and ground dynamic tests. The Adams simulation was then further refined by tuning the damping terms to match the peak loads recorded in the ground dynamic tests. The failure observed in flight testing was successfully replicated in ground testing, and the true safety margins of the textile components were revealed. A multi-loop energy modulator was then incorporated into the system-level Adams simulation model so that its effect on improving test margins could be properly evaluated, leading to high-confidence ground verification testing of the final design solution.

  1. Simulation Model Development for Icing Effects Flight Training

    NASA Technical Reports Server (NTRS)

    Barnhart, Billy P.; Dickes, Edward G.; Gingras, David R.; Ratvasky, Thomas P.

    2003-01-01

    A high-fidelity simulation model for icing effects flight training was developed from wind tunnel data for the DeHavilland DHC-6 Twin Otter aircraft. First, a flight model of the un-iced airplane was developed and then modifications were generated to model the icing conditions. The models were validated against data records from the NASA Twin Otter Icing Research flight test program with only minimal refinements being required. The goals of this program were to demonstrate the effectiveness of such a simulator for training pilots to recognize and recover from icing situations and to establish a process for modeling icing effects to be used for future training devices.

  2. Assessment of a human computer interface prototyping environment

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.

    1993-01-01

    A Human Computer Interface (HCI) prototyping environment with embedded evaluation capability has been successfully assessed. It will be valuable in developing and refining HCI standards and in evaluating program/project interface development, especially Space Station Freedom on-board displays for payload operations. The HCI prototyping environment is designed to include four components: (1) an HCI format development tool, (2) a test and evaluation simulator development tool, (3) a dynamic, interactive interface between the HCI prototype and the simulator, and (4) an embedded evaluation capability to evaluate the adequacy of an HCI based on a user's performance.

  3. Active damping of modal vibrations by force apportioning

    NASA Technical Reports Server (NTRS)

    Hallauer, W. L., Jr.

    1980-01-01

    Force apportioning, a method of active structural damping based on the multiple-shaker excitation technique used to isolate modes in modal vibration testing, was analyzed and numerically simulated. A distribution of as few forces as possible is chosen on the structure so as to maximally affect selected vibration modes while minimally exciting all other modes. The accuracy of numerical simulations of active damping, active damping of higher-frequency modes, and studies of imperfection sensitivity are discussed. The computer programs developed are described, and possible refinements of the research are examined.

  4. Experimental and analytical studies of advanced air cushion landing systems

    NASA Technical Reports Server (NTRS)

    Lee, E. G. S.; Boghani, A. B.; Captain, K. M.; Rutishauser, H. J.; Farley, H. L.; Fish, R. B.; Jeffcoat, R. L.

    1981-01-01

    Several concepts are developed for air cushion landing systems (ACLS) which have the potential for improving performance characteristics (roll stiffness, heave damping, and trunk flutter) and reducing fabrication cost and complexity. After an initial screening, the following five concepts were evaluated in detail: damped trunk, filled trunk, compartmented trunk, segmented trunk, and roll feedback control. The evaluation was based on tests performed on scale models. An ACLS dynamic simulation developed earlier is updated so that it can be used to predict the performance of full-scale ACLS incorporating these refinements. The simulation was validated through scale-model tests. A full-scale ACLS based on the segmented trunk concept was fabricated and installed on the NASA ACLS test vehicle, where it is used to support advanced system development. A geometrically scaled model (one-third full scale) of the NASA test vehicle was fabricated and tested. This model, evaluated by means of a series of static and dynamic tests, is used to investigate scaling relationships between reduced- and full-scale models. The analytical model developed earlier is applied to simulate both the one-third-scale and full-scale responses.

  5. Evaluation of unrestrained replica-exchange simulations using dynamic walkers in temperature space for protein structure refinement.

    PubMed

    Olson, Mark A; Lee, Michael S

    2014-01-01

    A central problem of computational structural biology is the refinement of modeled protein structures taken from either comparative modeling or knowledge-based methods. Simulations are commonly used to achieve higher resolution of the structures at the all-atom level, yet methodologies that consistently yield accurate results remain elusive. In this work, we assess an adaptive temperature-based replica exchange simulation method in which the temperature clients dynamically walk in temperature space to enrich their population, and hence exchanges, near steep energetic barriers. This approach is compared to earlier work applying the conventional method of static temperature clients to refine a dataset of conformational decoys. Our results show that, while an adaptive method has many theoretical advantages over a static distribution of client temperatures, only limited improvement was gained from this strategy in driving excursions into the downhill refinement regime that increase the fraction of native contacts. To illustrate the sampling differences between the two simulation methods, energy landscapes are presented along with their temperature client profiles.
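
    Whether the temperature clients are static or walk dynamically, replica exchange accepts a swap between two replicas with the standard Metropolis criterion p = min(1, exp[(1/kT_i - 1/kT_j)(E_i - E_j)]). A minimal sketch of that acceptance rule follows; the helper names are illustrative, and the Boltzmann constant is assumed to be in kcal/(mol K):

```python
import math
import random

# Boltzmann constant in kcal/(mol*K); an assumption for illustration.
K_B = 0.0019872


def exchange_probability(energy_i, energy_j, temp_i, temp_j):
    """Metropolis acceptance probability for swapping two replicas:
    p = min(1, exp[(1/kT_i - 1/kT_j) * (E_i - E_j)])."""
    delta = (1.0 / (K_B * temp_i) - 1.0 / (K_B * temp_j)) * (energy_i - energy_j)
    if delta >= 0.0:
        return 1.0  # favorable swaps are always accepted (also avoids overflow)
    return math.exp(delta)


def attempt_swap(replicas, i, j, rng=random.random):
    """Swap the temperatures of replicas i and j if the move is accepted."""
    p = exchange_probability(replicas[i]["energy"], replicas[j]["energy"],
                             replicas[i]["temp"], replicas[j]["temp"])
    if rng() < p:
        replicas[i]["temp"], replicas[j]["temp"] = \
            replicas[j]["temp"], replicas[i]["temp"]
        return True
    return False
```

    An adaptive scheme differs only in how the ladder of client temperatures is chosen and updated over time, not in this acceptance rule.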

  6. Chemical composition analysis and product consistency tests supporting refinement of the Nepheline model for the high aluminum Hanford Glass composition region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, K. M.; Edwards, T. B.; Mcclane, D. L.

    2016-02-17

    In this report, SRNL provides chemical analyses and Product Consistency Test (PCT) results for a series of simulated HLW glasses fabricated by Pacific Northwest National Laboratory (PNNL) as part of an ongoing nepheline crystallization study. The results of these analyses will be used to improve the ability to predict crystallization of nepheline as a function of composition and heat treatment for glasses formulated at high alumina concentrations.

  7. Chemical composition analysis and product consistency tests supporting refinement of the Nepheline Model for the high aluminum Hanford glass composition region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, K. M.; Edwards, T. B.; Mcclane, D. L.

    2016-03-01

    In this report, Savannah River National Laboratory provides chemical analyses and Product Consistency Test (PCT) results for a series of simulated high level waste (HLW) glasses fabricated by Pacific Northwest National Laboratory (PNNL) as part of an ongoing nepheline crystallization study. The results of these analyses will be used to improve the ability to predict crystallization of nepheline as a function of composition and heat treatment for glasses formulated at high alumina concentrations.

  8. Improved cryoEM-Guided Iterative Molecular Dynamics–Rosetta Protein Structure Refinement Protocol for High Precision Protein Structure Prediction

    PubMed Central

    2016-01-01

    Many excellent methods exist that incorporate cryo-electron microscopy (cryoEM) data to constrain computational protein structure prediction and refinement. Previously, it was shown that iteration of two such orthogonal sampling and scoring methods – Rosetta and molecular dynamics (MD) simulations – facilitated exploration of conformational space in principle. Here, we go beyond a proof-of-concept study and address significant remaining limitations of the iterative MD–Rosetta protein structure refinement protocol. Specifically, all parts of the iterative refinement protocol are now guided by medium-resolution cryoEM density maps, and previous knowledge about the native structure of the protein is no longer necessary. Models are identified solely based on score or simulation time. All four benchmark proteins showed substantial improvement through three rounds of the iterative refinement protocol. The best-scoring final models of two proteins had sub-Ångstrom RMSD to the native structure over residues in secondary structure elements. Molecular dynamics was most efficient in refining secondary structure elements and was thus highly complementary to the Rosetta refinement which is most powerful in refining side chains and loop regions. PMID:25883538

  9. A Description of the "Crow's Foot" Tunnel Concept

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Williams, Steven P.; Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.; Prinzel, Lawrence J., III; Norman, R. Michael

    2006-01-01

    NASA Langley Research Center has actively pursued the development and the use of pictorial or three-dimensional perspective displays of tunnel-, pathway- or highway-in-the-sky concepts for presenting flight path information to pilots in all aircraft categories (e.g., transports, General Aviation, rotorcraft) since the late 1970s. Prominent among these efforts has been the development of the crow's foot tunnel concept. The crow's foot tunnel concept emerged as the consensus pathway concept from a series of interactive workshops that brought together government and industry display designers, test pilots, and airline pilots to iteratively design, debate, and fly various pathway concepts. Over years of use in many simulation and flight test activities at NASA and elsewhere, modifications have refined and adapted the tunnel concept for different applications and aircraft categories (i.e., conventional transports, High Speed Civil Transport, General Aviation). A description of those refinements follows the definition of the original tunnel concept.

  10. MODFLOW-2005, the U.S. Geological Survey modular ground-water model - documentation of shared node local grid refinement (LGR) and the boundary flow and head (BFH) package

    USGS Publications Warehouse

    Mehl, Steffen W.; Hill, Mary C.

    2006-01-01

    This report documents the addition of shared node Local Grid Refinement (LGR) to MODFLOW-2005, the U.S. Geological Survey modular, transient, three-dimensional, finite-difference ground-water flow model. LGR provides the capability to simulate ground-water flow using one block-shaped higher-resolution local grid (a child model) within a coarser-grid parent model. LGR accomplishes this by iteratively coupling two separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundary. LGR can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined ground-water systems. Traditional one-way coupled telescopic mesh refinement (TMR) methods can have large, often undetected, inconsistencies in heads and fluxes across the interface between two model grids. The iteratively coupled shared-node method of LGR provides a more rigorous coupling in which the solution accuracy is controlled by convergence criteria defined by the user. In realistic problems, this can result in substantially more accurate solutions, at the cost of increased computer processing time. The rigorous coupling enables sensitivity analysis, parameter estimation, and uncertainty analysis that reflects conditions in both model grids. This report describes the method used by LGR, evaluates LGR accuracy and performance for two- and three-dimensional test cases, provides input instructions, and lists selected input and output files for an example problem. It also presents the Boundary Flow and Head (BFH) Package, which allows the child and parent models to be simulated independently using the boundary conditions obtained through the iterative process of LGR.
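
    The iterative shared-node coupling alternates parent and child solves, passing interface heads one way and interface fluxes the other, until successive interface values agree to a user-set tolerance. The scalar toy below mimics that loop structure only; the two callables stand in for full MODFLOW grid solves and are purely illustrative:

```python
def couple_grids(solve_parent, solve_child, head_guess, tol=1e-6, max_iter=100):
    """Schematic of an LGR-style iterative two-grid coupling.

    solve_parent(flux) -> interface head; solve_child(head) -> interface flux.
    Iterates until the interface head changes by less than tol between
    successive passes. This is a sketch of the iteration pattern, not
    MODFLOW-LGR code; real models exchange vectors of heads and fluxes
    along the shared boundary.
    """
    head = head_guess
    flux = solve_child(head)
    for iteration in range(1, max_iter + 1):
        new_head = solve_parent(flux)   # parent solve with child fluxes applied
        flux = solve_child(new_head)    # child solve with parent heads applied
        if abs(new_head - head) < tol:
            return new_head, flux, iteration
        head = new_head
    raise RuntimeError("coupling did not converge within max_iter passes")
```

    With contractive stand-in solves the loop converges geometrically, which is the behavior the user-defined closure criteria are meant to detect.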

  11. High-resolution coupled ice sheet-ocean modeling using the POPSICLES model

    NASA Astrophysics Data System (ADS)

    Ng, E. G.; Martin, D. F.; Asay-Davis, X.; Price, S. F.; Collins, W.

    2014-12-01

    It is expected that a primary driver of future change of the Antarctic ice sheet will be changes in submarine melting driven by incursions of warm ocean water into sub-ice shelf cavities. Correctly modeling this response on a continental scale will require high-resolution modeling of the coupled ice-ocean system. We describe the computational and modeling challenges in our simulations of the full Southern Ocean coupled to a continental-scale Antarctic ice sheet model at unprecedented spatial resolutions (0.1 degree for the ocean model and adaptive mesh refinement down to 500m in the ice sheet model). The POPSICLES model couples the POP2x ocean model, a modified version of the Parallel Ocean Program (Smith and Gent, 2002), with the BISICLES ice-sheet model (Cornford et al., 2012) using a synchronous offline-coupling scheme. Part of the PISCEES SciDAC project and built on the Chombo framework, BISICLES makes use of adaptive mesh refinement to fully resolve dynamically-important regions like grounding lines and employs a momentum balance similar to the vertically-integrated formulation of Schoof and Hindmarsh (2009). Results of BISICLES simulations have compared favorably to comparable simulations with a Stokes momentum balance in both idealized tests like MISMIP3D (Pattyn et al., 2013) and realistic configurations (Favier et al. 2014). POP2x includes sub-ice-shelf circulation using partial top cells (Losch, 2008) and boundary layer physics following Holland and Jenkins (1999), Jenkins (2001), and Jenkins et al. (2010). Standalone POP2x output compares well with standard ice-ocean test cases (e.g., ISOMIP; Losch, 2008) and other continental-scale simulations and melt-rate observations (Kimura et al., 2013; Rignot et al., 2013). For the POPSICLES Antarctic-Southern Ocean simulations, ice sheet and ocean models communicate at one-month coupling intervals.

  12. Comparison of local grid refinement methods for MODFLOW

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.; Leake, S.A.

    2006-01-01

    Many ground water modeling efforts use a finite-difference method to solve the ground water flow equation, and many of these models require a relatively fine-grid discretization to accurately represent the selected process in limited areas of interest. Use of a fine grid over the entire domain can be computationally prohibitive; using a variably spaced grid can lead to cells with a large aspect ratio and refinement in areas where detail is not needed. One solution is to use local-grid refinement (LGR) whereby the grid is only refined in the area of interest. This work reviews some LGR methods and identifies advantages and drawbacks in test cases using MODFLOW-2000. The first test case is two dimensional and heterogeneous; the second is three dimensional and includes interaction with a meandering river. Results include simulations using a uniform fine grid, a variably spaced grid, a traditional method of LGR without feedback, and a new shared node method with feedback. Discrepancies from the solution obtained with the uniform fine grid are investigated. For the models tested, the traditional one-way coupled approaches produced discrepancies in head up to 6.8% and discrepancies in cell-to-cell fluxes up to 7.1%, while the new method has head and cell-to-cell flux discrepancies of 0.089% and 0.14%, respectively. Additional results highlight the accuracy, flexibility, and CPU time trade-off of these methods and demonstrate how the new method can be successfully implemented to model surface water-ground water interactions. Copyright © 2006 The Author(s).
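    A percent-discrepancy metric like the ones quoted above can be computed directly from two head fields. The paper does not state its exact normalization, so this sketch uses one common choice, the maximum difference as a percent of the reference solution's head range; the head values are hypothetical.

```python
import numpy as np

# hypothetical head fields along a row of cells
h_fine = np.array([10.0, 9.2, 8.7, 8.1, 7.5])    # uniform fine grid (reference)
h_lgr  = np.array([10.0, 9.21, 8.68, 8.12, 7.5])  # locally refined solution

def pct_discrepancy(ref, test):
    """Maximum discrepancy as a percent of the reference head range."""
    return 100.0 * np.max(np.abs(test - ref)) / (ref.max() - ref.min())

print(round(pct_discrepancy(h_fine, h_lgr), 3))  # -> 0.8
```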

  13. Effect of Ultra-Fast Cooling on Microstructure and Properties of High Strength Steel for Shipbuilding

    NASA Astrophysics Data System (ADS)

    Zhou, Cheng; Ye, Qibin; Yan, Ling

    The effects of ultra-fast cooling (UFC) and conventional accelerated cooling (AcC) on the mechanical properties and microstructure of controlled-rolled AH32 grade steel plates were compared on an industrial scale using tensile tests, Charpy impact tests, welding thermal simulation, and microscopic analysis. The results show that the properties of the plate produced by UFC are improved considerably compared to those of the plate produced by AcC. The yield strength is increased by 54 MPa without deterioration in ductility, and the impact energy is improved to more than 260 J at -60 °C with a much lower ductile-to-brittle transition temperature (DBTT). The ferrite grain size is refined to ASTM No. 11.5 in the UFC steel, with a uniform microstructure throughout the thickness direction, while that of the AcC steel is ASTM No. 9.5. Analysis of the nucleation kinetics of α-ferrite indicates that the microstructure is refined because the much lower γ→α transition temperature of the UFC process increases the nucleation rate of α-ferrite. The Hall-Petch relation quantifies the contribution of this grain refinement to the improved strength and toughness of the UFC steel.
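    The Hall-Petch contribution can be estimated from the reported grain-size numbers. In this sketch the ASTM E112 relation converts grain-size number to a mean diameter, and the Hall-Petch coefficient k_y is an assumed typical value for ferritic steel, not a number from the abstract.

```python
import math

def astm_to_diameter_m(G):
    """Mean grain diameter (m) from ASTM E112 grain-size number G:
    N = 2**(G-1) grains per square inch at 100x magnification."""
    grains_per_in2_at_100x = 2.0 ** (G - 1.0)
    area_in2 = 1.0 / (grains_per_in2_at_100x * 100.0 ** 2)  # mean area at 1x
    return math.sqrt(area_in2) * 0.0254                     # inches -> metres

d_acc = astm_to_diameter_m(9.5)    # coarser AcC microstructure, ~13 um
d_ufc = astm_to_diameter_m(11.5)   # finer UFC microstructure, ~7 um

k_y = 0.6e6  # Pa*sqrt(m); assumed typical Hall-Petch coefficient for ferrite
delta_sigma_MPa = k_y * (d_ufc ** -0.5 - d_acc ** -0.5) / 1e6
# the estimate is the same order as the ~54 MPa yield-strength gain reported
```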

  14. Auto-adaptive finite element meshes

    NASA Technical Reports Server (NTRS)

    Richter, Roland; Leyland, Penelope

    1995-01-01

    Accurate capturing of discontinuities within compressible flow computations is achieved by coupling a suitable solver with an automatic adaptive mesh algorithm for unstructured triangular meshes. The mesh adaptation procedures developed rely on non-hierarchical dynamical local refinement/derefinement techniques, which hence enable structural optimization as well as geometrical optimization. The methods described are applied to a number of the ICASE test cases, which are particularly interesting for unsteady flow simulations.

  15. STARE CubeSat Communications Testing, Simulation and Analysis

    DTIC Science & Technology

    2012-09-01

    [Only extracted front matter survives in this record: a figure-list entry ("Figure 24. STK MC3 Ground Station Locations"), acronym definitions (STK, Satellite Tool Kit; VPN, Virtual Private Network), and a fragment noting that the radio was tested using a signal attenuator to decrease signal strength in 10 dB increments and a spectrum analyzer to view a visual representation of the signal.]

  16. Geohydrology of, and simulation of ground-water flow in, the Milford-Souhegan glacial-drift aquifer, Milford, New Hampshire

    USGS Publications Warehouse

    Harte, P.T.; Mack, Thomas J.

    1992-01-01

    Hydrogeologic data collected since 1990 were assessed and a ground-water-flow model was refined in this study of the Milford-Souhegan glacial-drift aquifer in Milford, New Hampshire. The hydrogeologic data collected were used to refine estimates of hydraulic conductivity and saturated thickness of the aquifer, which were previously calculated during 1988-90. In October 1990, water levels were measured at 124 wells and piezometers, and at 45 stream-seepage sites on the main stem of the Souhegan River and on small tributary streams overlying the aquifer, to improve an understanding of ground-water-flow patterns and stream-seepage gains and losses. Refinement of the ground-water-flow model included a reduction in the number of active cells in layer 2 in the central part of the aquifer, a revision of simulated hydraulic conductivity in the model layers representing the aquifer, incorporation of a new block-centered finite-difference ground-water-flow model, and incorporation of a new solution algorithm and solver (a preconditioned conjugate-gradient algorithm). Refinements to the model resulted in decreases in the difference between calculated and measured heads at 22 wells. The distribution of gains and losses of stream seepage calculated in simulation with the refined model is similar to that calculated in the previous model simulation. The contributing area to the Savage well, under average pumping conditions, decreased by 0.021 square miles from the area calculated in the previous model simulation. The small difference in the contributing recharge area indicates that the additional data did not enhance model simulation and that the conceptual framework for the previous model is accurate.

  17. High accuracy mantle convection simulation through modern numerical methods - II: realistic models and problems

    NASA Astrophysics Data System (ADS)

    Heister, Timo; Dannberg, Juliane; Gassmöller, Rene; Bangerth, Wolfgang

    2017-08-01

    Computations have helped elucidate the dynamics of Earth's mantle for several decades already. The numerical methods that underlie these simulations have greatly evolved within this time span, and today include dynamically changing and adaptively refined meshes, sophisticated and efficient solvers, and parallelization to large clusters of computers. At the same time, many of the methods - discussed in detail in a previous paper in this series - were developed and tested primarily using model problems that lack many of the complexities that are common to the realistic models our community wants to solve today. With several years of experience solving complex and realistic models, we here revisit some of the algorithm designs of the earlier paper and discuss the incorporation of more complex physics. In particular, we re-consider time stepping and mesh refinement algorithms, evaluate approaches to incorporate compressibility, and discuss dealing with strongly varying material coefficients, latent heat, and how to track chemical compositions and heterogeneities. Taken together and implemented in a high-performance, massively parallel code, the techniques discussed in this paper then allow for high resolution, 3-D, compressible, global mantle convection simulations with phase transitions, strongly temperature dependent viscosity and realistic material properties based on mineral physics data.

  18. An adaptively refined phase-space element method for cosmological simulations and collisionless dynamics

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Angulo, Raul E.

    2016-01-01

    N-body simulations are essential for understanding the formation and evolution of structure in the Universe. However, the discrete nature of these simulations affects their accuracy when modelling collisionless systems. We introduce a new approach to simulate the gravitational evolution of cold collisionless fluids by solving the Vlasov-Poisson equations in terms of adaptively refineable `Lagrangian phase-space elements'. These geometrical elements are piecewise smooth maps between Lagrangian space and Eulerian phase-space and approximate the continuum structure of the distribution function. They allow for dynamical adaptive splitting to accurately follow the evolution even in regions of very strong mixing. We discuss in detail various one-, two- and three-dimensional test problems to demonstrate the performance of our method. Its advantages compared to N-body algorithms are: (I) explicit tracking of the fine-grained distribution function, (II) natural representation of caustics, (III) intrinsically smooth gravitational potential fields, thus (IV) eliminating the need for any type of ad hoc force softening. We show the potential of our method by simulating structure formation in a warm dark matter scenario. We discuss how spurious collisionality and large-scale discreteness noise of N-body methods are both strongly suppressed, which eliminates the artificial fragmentation of filaments. Therefore, we argue that our new approach improves on the N-body method when simulating self-gravitating cold and collisionless fluids, and is the first method that allows us to explicitly follow the fine-grained evolution in six-dimensional phase-space.

  19. Simulation of the XV-15 tilt rotor research aircraft

    NASA Technical Reports Server (NTRS)

    Churchill, G. B.; Dugan, D. C.

    1982-01-01

    The effective use of simulation from issuance of the request for proposal through conduct of a flight test program for the XV-15 Tilt Rotor Research Aircraft is discussed. From program inception, simulation complemented all phases of XV-15 development. The initial simulation evaluations during the source evaluation board proceedings contributed significantly to performance and stability and control evaluations. Eight subsequent simulation periods provided major contributions in the areas of control concepts; cockpit configuration; handling qualities; pilot workload; failure effects and recovery procedures; and flight boundary problems and recovery procedures. The fidelity of the simulation also made it a valuable pilot training aid, as well as a suitable tool for military and civil mission evaluations. Simulation also provided valuable design data for refinement of automatic flight control systems. Throughout the program, fidelity was a prime issue and resulted in unique data and methods for fidelity evaluation which are presented and discussed.

  20. Simulation of Auger electron emission from nanometer-size gold targets using the Geant4 Monte Carlo simulation toolkit

    NASA Astrophysics Data System (ADS)

    Incerti, S.; Suerfu, B.; Xu, J.; Ivantchenko, V.; Mantero, A.; Brown, J. M. C.; Bernal, M. A.; Francis, Z.; Karamitros, M.; Tran, H. N.

    2016-04-01

    A revised atomic deexcitation framework for the Geant4 general purpose Monte Carlo toolkit capable of simulating full Auger deexcitation cascades was implemented in the June 2015 release (version 10.2 Beta). An overview of this refined framework and testing of its capabilities is presented for the irradiation of gold nanoparticles (NP) with keV photon and MeV proton beams. The resultant energy spectra of secondary particles created within and that escape the NP are analyzed and discussed. It is anticipated that this new functionality will improve and increase the use of Geant4 in the medical physics, radiobiology, nanomedicine research and other low energy physics fields.

  1. Refinements to the Graves and Pitarka (2010) Broadband Ground-Motion Simulation Method

    DOE PAGES

    Graves, Robert; Pitarka, Arben

    2014-12-17

    This brief article describes refinements to the Graves and Pitarka (2010) broadband ground-motion simulation methodology (GP2010 hereafter) that have been implemented in version 14.3 of the Southern California Earthquake Center (SCEC) Broadband Platform (BBP). The updated version of our method on the current SCEC BBP is referred to as GP14.3. Here, our simulation technique is a hybrid approach that combines low- and high-frequency motions computed with different methods into a single broadband response.

  2. KSC-08pd0083

    NASA Image and Video Library

    2008-01-24

    KENNEDY SPACE CENTER, FLA. -- In the hypergolic maintenance facility at NASA's Kennedy Space Center, technicians look at some of the elements to be tested in the Ares I-X Roll Control System, or RoCS. The RoCS Servicing Simulation Test is to gather data that will be used to help certify the ground support equipment design and validate the servicing requirements and processes. The RoCS is part of the Interstage structure, the lowest axial segment of the Upper Stage Simulator. In an effort to reduce costs and meet the schedule, most of the ground support equipment that will be used for the RoCS servicing is of space shuttle heritage. This high-fidelity servicing simulation will provide confidence that servicing requirements can be met with the heritage system. At the same time, the test will gather process data that will be used to modify or refine the equipment and processes to be used for the actual flight element. Photo credit: NASA/Kim Shiflett

  3. KSC-08pd0082

    NASA Image and Video Library

    2008-01-24

    KENNEDY SPACE CENTER, FLA. -- In the hypergolic maintenance facility at NASA's Kennedy Space Center, some of the internal elements seen here of the ARES I-X Roll Control System, or RoCS, will undergo testing. The RoCS Servicing Simulation Test is to gather data that will be used to help certify the ground support equipment design and validate the servicing requirements and processes. The RoCS is part of the Interstage structure, the lowest axial segment of the Upper Stage Simulator. In an effort to reduce costs and meet the schedule, most of the ground support equipment that will be used for the RoCS servicing is of space shuttle heritage. This high-fidelity servicing simulation will provide confidence that servicing requirements can be met with the heritage system. At the same time, the test will gather process data that will be used to modify or refine the equipment and processes to be used for the actual flight element. Photo credit: NASA/Kim Shiflett

  4. Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods

    NASA Astrophysics Data System (ADS)

    Kozdon, J. E.; Wilcox, L.

    2013-12-01

    Our goal is to develop scalable and adaptive (spatial and temporal) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite-volume based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite-volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library and temporal adaptivity will be accomplished through local time stepping. In this presentation we will present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.

  5. Pilot Aircraft Interface Objectives/Rationale

    NASA Technical Reports Server (NTRS)

    Shively, Jay

    2010-01-01

    Objective: Database and proof of concept for guidelines for GCS compliance a) Rationale: 1) Provide research test-bed to develop guidelines. 2) Modify GCS for NAS Compliance to provide proof of concept. b) Approach: 1) Assess current state of GCS technology. 2) Information Requirements Definition. 3) SME Workshop. 4) Modify an Existing GCS for NAS Compliance. 5) Define exemplar UAS (choose system to develop prototype). 6) Define Candidate Displays & Controls. 7) Evaluate/refine in Simulations. 8) Demonstrate in flight. c) Deliverables: 1) Information Requirements Report. 2) Workshop Proceedings. 3) Technical Reports/papers on Simulations & Flight Demo. 4) Database for guidelines.

  6. Experimental clean combustor program, alternate fuels addendum, phase 2

    NASA Technical Reports Server (NTRS)

    Gleason, C. C.; Bahr, D. W.

    1976-01-01

    The characteristics of current and advanced low-emissions combustors when operated with special test fuels simulating the broader-range combustion properties of petroleum- or coal-derived fuels were studied. Five fuels were evaluated: conventional JP-5; conventional No. 2 Diesel; two different blends of Jet A with commercial aromatic mixtures (xylene bottoms and naphthalene charge stock); and a fuel derived from shale oil crude which was refined to Jet A specifications. Three CF6-50 engine-size combustor types were evaluated: the standard production combustor, a radial/axial staged combustor, and a double annular combustor. Performance and pollutant emissions characteristics at idle and simulated takeoff conditions were evaluated in a full annular combustor rig. Altitude relight characteristics were evaluated in a 60-degree sector combustor rig. Carboning and flashback characteristics at simulated takeoff conditions were evaluated in a 12-degree sector combustor rig. For the five fuels tested, effects were moderate but well defined.

  7. Materials Evaluation in the Tri-Service Thermal Radiation Test Facility.

    DTIC Science & Technology

    1981-02-28

    Degradation of materials exposed to the radiant heating generated by a nuclear blast can vary enormously. The intense radiation needed to simulate a ... of surface degradation was accomplished with limited success during the current contract effort. Procedures still need refining to make surface ... [The remainder of the extracted record is a test-log fragment listing facility calibration runs and specimens: aluminized tape (no coating), aluminum, NBR/EPDM blends, Vamac, and wind-tunnel tests.]

  8. Response of the Antarctic ice sheet to ocean forcing using the POPSICLES coupled ice sheet-ocean model

    NASA Astrophysics Data System (ADS)

    Martin, D. F.; Asay-Davis, X.; Price, S. F.; Cornford, S. L.; Maltrud, M. E.; Ng, E. G.; Collins, W.

    2014-12-01

    We present the response of the continental Antarctic ice sheet to sub-shelf-melt forcing derived from POPSICLES simulation results covering the full Antarctic Ice Sheet and the Southern Ocean spanning the period 1990 to 2010. Simulations are performed at 0.1 degree (~5 km) ocean resolution and ice sheet resolution as fine as 500 m using adaptive mesh refinement. A comparison of fully-coupled and comparable standalone ice-sheet model results demonstrates the importance of two-way coupling between the ice sheet and the ocean. The POPSICLES model couples the POP2x ocean model, a modified version of the Parallel Ocean Program (Smith and Gent, 2002), and the BISICLES ice-sheet model (Cornford et al., 2012). BISICLES makes use of adaptive mesh refinement to fully resolve dynamically-important regions like grounding lines and employs a momentum balance similar to the vertically-integrated formulation of Schoof and Hindmarsh (2009). Results of BISICLES simulations have compared favorably to comparable simulations with a Stokes momentum balance in both idealized tests like MISMIP3D (Pattyn et al., 2013) and realistic configurations (Favier et al. 2014). POP2x includes sub-ice-shelf circulation using partial top cells (Losch, 2008) and boundary layer physics following Holland and Jenkins (1999), Jenkins (2001), and Jenkins et al. (2010). Standalone POP2x output compares well with standard ice-ocean test cases (e.g., ISOMIP; Losch, 2008) and other continental-scale simulations and melt-rate observations (Kimura et al., 2013; Rignot et al., 2013). A companion presentation, "Present-day circum-Antarctic simulations using the POPSICLES coupled land ice-ocean model" in session C027 describes the ocean-model perspective of this work, while we focus on the response of the ice sheet and on details of the model. The figure shows the BISICLES-computed vertically-integrated ice velocity field about 1 month into a 20-year coupled Antarctic run. Grounding lines are shown in green.

  9. High Fidelity Thermal Simulators for Non-Nuclear Testing: Analysis and Initial Results

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David

    2007-01-01

    Non-nuclear testing can be a valuable tool in the development of a space nuclear power system, providing system characterization data and allowing one to work through various fabrication, assembly and integration issues without the cost and time associated with a full ground nuclear test. In a non-nuclear test bed, electric heaters are used to simulate the heat from nuclear fuel. Testing with non-optimized heater elements allows one to assess thermal, heat transfer, and stress related attributes of a given system, but fails to demonstrate the dynamic response that would be present in an integrated, fueled reactor system. High fidelity thermal simulators that match both the static and the dynamic fuel pin performance that would be observed in an operating, fueled nuclear reactor can vastly increase the value of non-nuclear test results. With optimized simulators, the integration of thermal hydraulic hardware tests with simulated neutronic response provides a bridge between electrically heated testing and fueled nuclear testing, providing a better assessment of system integration issues, characterization of integrated system response times and response characteristics, and assessment of potential design improvements at a relatively small fiscal investment. Initial conceptual thermal simulator designs are determined by simple one-dimensional analysis at a single axial location and at steady state conditions; feasible concepts are then input into a detailed three-dimensional model for comparison to expected fuel pin performance. Static and dynamic fuel pin performance for a proposed reactor design is determined using SINDA/FLUINT thermal analysis software, and comparison is made between the expected nuclear performance and the performance of conceptual thermal simulator designs. Through a series of iterative analyses, a conceptual high fidelity design can be developed.
    Test results presented in this paper correspond to a "first cut" simulator design for a potential liquid-metal (NaK) cooled reactor that could be applied for lunar surface power. Proposed refinements to this simulator design are also presented.
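    The one-dimensional, single-axial-location steady-state analysis described above can be sketched as a radial-conduction estimate across a simulator sleeve. Every number below is an illustrative assumption for the sketch, not a value from the test program.

```python
import math

# 1-D radial conduction across a heater-simulator sleeve at one axial
# location, steady state; all inputs are assumed, illustrative values
q_pin = 1500.0                # W, assumed electric heater power per pin
heated_len = 0.4              # m, assumed heated length
k_sleeve = 20.0               # W/(m*K), assumed sleeve conductivity
r_in, r_out = 0.004, 0.006    # m, assumed inner/outer sleeve radii

q_per_len = q_pin / heated_len                    # linear power, W/m
dT = q_per_len * math.log(r_out / r_in) / (2.0 * math.pi * k_sleeve)
# dT is the steady-state temperature drop across the sleeve (~12 K here);
# feasible concepts would then go into a 3-D model for detailed comparison
```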

  10. Large-eddy simulation of wind turbine wake interactions on locally refined Cartesian grids

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2014-11-01

    Performing high-fidelity numerical simulations of turbulent flow in wind farms remains a challenging issue mainly because of the large computational resources required to accurately simulate the turbine wakes and turbine/turbine interactions. The discretization of the governing equations on structured grids for mesoscale calculations may not be the most efficient approach for resolving the large disparity of spatial scales. A 3D Cartesian grid refinement method enabling the efficient coupling of the Actuator Line Model (ALM) with locally refined unstructured Cartesian grids adapted to accurately resolve tip vortices and multi-turbine interactions, is presented. Second order schemes are employed for the discretization of the incompressible Navier-Stokes equations in a hybrid staggered/non-staggered formulation coupled with a fractional step method that ensures the satisfaction of local mass conservation to machine zero. The current approach enables multi-resolution LES of turbulent flow in multi-turbine wind farms. The numerical simulations are in good agreement with experimental measurements and are able to resolve the rich dynamics of turbine wakes on grids containing only a small fraction of the grid nodes that would be required in simulations without local mesh refinement. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the National Science Foundation under Award number NSF PFI:BIC 1318201.

  11. Sources and pathways of the upscale effects on the Southern Hemisphere jet in MPAS-CAM4 variable-resolution simulations

    DOE PAGES

    Sakaguchi, Koichi; Lu, Jian; Leung, L. Ruby; ...

    2016-10-22

    Impacts of regional grid refinement on large-scale circulations (“upscale effects”) were detected in a previous study that used the Model for Prediction Across Scales-Atmosphere coupled to the physics parameterizations of the Community Atmosphere Model version 4. The strongest upscale effect was identified in the Southern Hemisphere jet during austral winter. This study examines the detailed underlying processes by comparing two simulations at quasi-uniform resolutions of 30 and 120 km to three variable-resolution simulations in which the horizontal grids are regionally refined to 30 km in North America, South America, or Asia from 120 km elsewhere. In all the variable-resolution simulations, precipitation increases in convective areas inside the high-resolution domains, as in the reference quasi-uniform high-resolution simulation. With grid refinement encompassing the tropical Americas, the increased condensational heating expands the local divergent circulations (Hadley cell) meridionally such that their descending branch is shifted poleward, which also pushes the baroclinically unstable regions, momentum flux convergence, and the eddy-driven jet poleward. This teleconnection pathway is not found in the reference high-resolution simulation due to a strong resolution sensitivity of cloud radiative forcing that dominates the aforementioned teleconnection signals. The regional refinement over Asia enhances Rossby wave sources and strengthens the upper level southerly flow, both facilitating the cross-equatorial propagation of stationary waves. Evidence indicates that this teleconnection pathway is also found in the reference high-resolution simulation. Lastly, the result underlines the intricate diagnoses needed to understand the upscale effects in global variable-resolution simulations, with implications for science investigations using the computationally efficient modeling framework.

  13. Evaluation of MODFLOW-LGR in connection with a synthetic regional-scale model

    USGS Publications Warehouse

    Vilhelmsen, T.N.; Christensen, S.; Mehl, S.W.

    2012-01-01

    This work studies costs and benefits of utilizing local-grid refinement (LGR) as implemented in MODFLOW-LGR to simulate groundwater flow in a buried tunnel valley interacting with a regional aquifer. Two alternative LGR methods were used: the shared-node (SN) method and the ghost-node (GN) method. To conserve flows the SN method requires correction of sources and sinks in cells at the refined/coarse-grid interface. We found that the optimal correction method is case dependent and difficult to identify in practice. However, the results showed little difference and suggest that identifying the optimal method was of minor importance in our case. The GN method does not require corrections at the models' interface, and it uses a simpler head interpolation scheme than the SN method. The simpler scheme is faster but less accurate so that more iterations may be necessary. However, the GN method solved our flow problem more efficiently than the SN method. The MODFLOW-LGR results were compared with the results obtained using a globally coarse (GC) grid. The LGR simulations required one to two orders of magnitude longer run times than the GC model. However, the improvements of the numerical resolution around the buried valley substantially increased the accuracy of simulated heads and flows compared with the GC simulation. Accuracy further increased locally around the valley flanks when improving the geological resolution using the refined grid. Finally, comparing MODFLOW-LGR simulation with a globally refined (GR) grid showed that the refinement proportion of the model should not exceed 10% to 15% in order to secure method efficiency. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.

  14. Sampling Enrichment toward Target Structures Using Hybrid Molecular Dynamics-Monte Carlo Simulations

    PubMed Central

    Yang, Kecheng; Różycki, Bartosz; Cui, Fengchao; Shi, Ce; Chen, Wenduo; Li, Yunqi

    2016-01-01

    Sampling enrichment toward a target state, an analogue of the improvement of sampling efficiency (SE), is critical in both the refinement of protein structures and the generation of near-native structure ensembles for the exploration of structure-function relationships. We developed a hybrid molecular dynamics (MD)-Monte Carlo (MC) approach to enrich the sampling toward the target structures. In this approach, the higher SE is achieved by perturbing the conventional MD simulations with a MC structure-acceptance judgment, which is based on the degree of coincidence between the small angle x-ray scattering (SAXS) intensity profiles of the simulation structures and the target structure. We found that the hybrid simulations could significantly improve SE by making the top-ranked models much closer to the target structures in both the secondary and tertiary structures. Specifically, for the 20 mono-residue peptides, when the initial structures had a root-mean-squared deviation (RMSD) from the target structure smaller than 7 Å, the hybrid MD-MC simulations produced models that were, on average, 0.83 Å and 1.73 Å closer in RMSD to the target than the parallel MD simulations at 310 K and 370 K, respectively. Meanwhile, the average SE values also increased by 13.2% and 15.7%. The enrichment of sampling becomes more significant when the target states are gradually detectable in the MD-MC simulations in comparison with the parallel MD simulations, providing a >200% improvement in SE. We also tested the hybrid MD-MC approach on real protein systems; the results showed that SE was improved for 3 out of 5 real proteins. Overall, this work presents an efficient way of utilizing solution SAXS to improve protein structure prediction and refinement, as well as the generation of near-native structures for function annotation. PMID:27227775
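    The MC structure-acceptance judgment described above lends itself to a Metropolis-style rule in which agreement between simulated and target SAXS profiles plays the role of an energy. The sketch below is illustrative only: the chi-square metric, the `temperature` parameter, and the function names are assumptions, not the authors' actual implementation.

```python
import math
import random

def chi_square(i_sim, i_target):
    """Discrepancy between simulated and target SAXS intensity profiles
    (hypothetical uniform-weight form; the paper's exact metric may differ)."""
    return sum((s - t) ** 2 for s, t in zip(i_sim, i_target)) / len(i_target)

def accept_structure(chi2_new, chi2_old, temperature=1.0):
    """Metropolis-style judgment: always keep moves toward the target
    profile, occasionally keep worse ones to avoid getting trapped."""
    if chi2_new <= chi2_old:
        return True
    return random.random() < math.exp(-(chi2_new - chi2_old) / temperature)
```

    Lowering `temperature` makes the judgment stricter, biasing the trajectory more aggressively toward the target profile at the cost of sampling breadth.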

  15. Sampling Enrichment toward Target Structures Using Hybrid Molecular Dynamics-Monte Carlo Simulations.

    PubMed

    Yang, Kecheng; Różycki, Bartosz; Cui, Fengchao; Shi, Ce; Chen, Wenduo; Li, Yunqi

    2016-01-01

    Sampling enrichment toward a target state, an analogue of the improvement of sampling efficiency (SE), is critical in both the refinement of protein structures and the generation of near-native structure ensembles for the exploration of structure-function relationships. We developed a hybrid molecular dynamics (MD)-Monte Carlo (MC) approach to enrich the sampling toward the target structures. In this approach, the higher SE is achieved by perturbing the conventional MD simulations with a MC structure-acceptance judgment, which is based on the degree of coincidence between the small angle x-ray scattering (SAXS) intensity profiles of the simulation structures and the target structure. We found that the hybrid simulations could significantly improve SE by making the top-ranked models much closer to the target structures in both the secondary and tertiary structures. Specifically, for the 20 mono-residue peptides, when the initial structures had a root-mean-squared deviation (RMSD) from the target structure smaller than 7 Å, the hybrid MD-MC simulations produced models that were, on average, 0.83 Å and 1.73 Å closer in RMSD to the target than the parallel MD simulations at 310 K and 370 K, respectively. Meanwhile, the average SE values also increased by 13.2% and 15.7%. The enrichment of sampling becomes more significant when the target states are gradually detectable in the MD-MC simulations in comparison with the parallel MD simulations, providing a >200% improvement in SE. We also tested the hybrid MD-MC approach on real protein systems; the results showed that SE was improved for 3 out of 5 real proteins. Overall, this work presents an efficient way of utilizing solution SAXS to improve protein structure prediction and refinement, as well as the generation of near-native structures for function annotation.

  16. MODFLOW-LGR-Modifications to the streamflow-routing package (SFR2) to route streamflow through locally refined grids

    USGS Publications Warehouse

    Mehl, Steffen W.; Hill, Mary C.

    2011-01-01

    This report documents modifications to the Streamflow-Routing Package (SFR2) to route streamflow through grids constructed using the multiple-refined-areas capability of shared node Local Grid Refinement (LGR) of MODFLOW-2005. MODFLOW-2005 is the U.S. Geological Survey modular, three-dimensional, finite-difference groundwater-flow model. LGR provides the capability to simulate groundwater flow by using one or more block-shaped, higher resolution local grids (child model) within a coarser grid (parent model). LGR accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundaries. Compatibility with SFR2 allows for streamflow routing across grids. LGR can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined groundwater systems.

  17. A mass and momentum conserving unsplit semi-Lagrangian framework for simulating multiphase flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owkes, Mark, E-mail: mark.owkes@montana.edu; Desjardins, Olivier

    In this work, we present a computational methodology for convection and advection that handles discontinuities with second order accuracy and maintains conservation to machine precision. This method can transport a variety of discontinuous quantities and is used in the context of an incompressible gas–liquid flow to transport the phase interface, momentum, and scalars. The proposed method provides a modification to the three-dimensional, unsplit, second-order semi-Lagrangian flux method of Owkes & Desjardins (JCP, 2014). The modification adds a refined grid that provides consistent fluxes of mass and momentum defined on a staggered grid and discrete conservation of mass and momentum, even for flows with large density ratios. Additionally, the refined grid doubles the resolution of the interface without significantly increasing the computational cost over previous non-conservative schemes. This is possible due to a novel partitioning of the semi-Lagrangian fluxes into a small number of simplices. The proposed scheme is tested using canonical verification tests, rising bubbles, and an atomizing liquid jet.

  18. Simulation of Auger electron emission from nanometer-size gold targets using the Geant4 Monte Carlo simulation toolkit

    DOE PAGES

    Incerti, S.; Suerfu, B.; Xu, J.; ...

    2016-02-16

    We report that a revised atomic deexcitation framework for the Geant4 general purpose Monte Carlo toolkit capable of simulating full Auger deexcitation cascades was implemented in the June 2015 release (version 10.2 Beta). An overview of this refined framework and testing of its capabilities is presented for the irradiation of gold nanoparticles (NP) with keV photon and MeV proton beams. The resultant energy spectra of secondary particles created within and that escape the NP are analyzed and discussed. It is anticipated that this new functionality will improve and increase the use of Geant4 in medical physics, radiobiology, nanomedicine research, and other low-energy physics fields.

  19. JT9D performance deterioration results from a simulated aerodynamic load test

    NASA Technical Reports Server (NTRS)

    Stakolich, E. G.; Stromberg, W. J.

    1981-01-01

    The results of testing to identify the effects of simulated aerodynamic flight loads on JT9D engine performance are presented. The test results were also used to refine previous analytical studies on the impact of aerodynamic flight loads on performance losses. To accomplish these objectives, a JT9D-7AH engine was assembled with average production clearances and new seals as well as extensive instrumentation to monitor engine performance, case temperatures, and blade tip clearance changes. A special loading device was designed and constructed to permit application of known moments and shear forces to the engine by the use of cables placed around the flight inlet. The test was conducted in the Pratt & Whitney Aircraft X-Ray Test Facility to permit the use of X-ray techniques in conjunction with laser blade tip proximity probes to monitor important engine clearance changes. Upon completion of the test program, the test engine was disassembled, and the condition of gas path parts and final clearances were documented. The test results indicate that the engine lost 1.1 percent in thrust specific fuel consumption (TSFC), as measured under sea level static conditions, due to increased operating clearances caused by simulated flight loads. This compares with 0.9 percent predicted by the analytical model and previous study efforts.

  20. Can competing diversity indices inform us about why ethnic diversity erodes social cohesion? A test of five diversity indices in Germany.

    PubMed

    Schaeffer, Merlin

    2013-05-01

    An ever-growing number of studies investigates the relation between ethnic diversity and social cohesion, but these studies have produced mixed results. In cross-national research, some scholars have recently started to investigate more refined and informative indices of ethnic diversity than the commonly used Hirschman-Herfindahl Index. These refined indices make it possible to test competing theoretical explanations of why ethnic diversity is associated with declines in social cohesion. This study assesses the applicability of this approach for sub-national analyses. Generally, the results confirm a negative association between social cohesion and ethnic diversity. However, the competing indices are empirically indistinguishable and thus insufficient to test different theories against one another. Follow-up simulations suggest the general conclusion that the competing indices are meaningful operationalizations only if a sample includes: (1) contextual units with small and contextual units with large minority shares, as well as (2) contextual units with diverse and contextual units with polarized ethnic compositions. The results are thus instructive to all researchers who wish to apply different diversity indices and thereby test competing theories. Copyright © 2013 Elsevier Inc. All rights reserved.
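    For context, the commonly used Hirschman-Herfindahl-based diversity measure, and a polarization-style alternative of the kind the study contrasts with it, can be computed directly from group population shares. This is a generic sketch of the standard textbook formulas, not the specific refined indices tested in the paper:

```python
def fractionalization(shares):
    """Hirschman-Herfindahl-based fractionalization: the probability that
    two randomly drawn individuals belong to different ethnic groups."""
    return 1.0 - sum(p * p for p in shares)

def polarization(shares):
    """Reynal-Querol-style polarization: maximal when the population
    splits into two equally sized groups."""
    return 4.0 * sum(p * p * (1.0 - p) for p in shares)
```

    With two groups at 50/50 shares, fractionalization is 0.5 while polarization reaches its maximum of 1.0; samples that lack such polarized compositions are exactly the ones in which the competing indices cannot be separated empirically.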

  1. Evaluation of tropical channel refinement using MPAS-A aquaplanet simulations

    DOE PAGES

    Martini, Matus N.; Gustafson, Jr., William I.; O'Brien, Travis A.; ...

    2015-09-13

    Climate models with variable-resolution grids offer a computationally less expensive way to provide more detailed information at regional scales and increased accuracy for processes that cannot be resolved by a coarser grid. This study uses the Model for Prediction Across Scales–Atmosphere (MPAS-A), consisting of a nonhydrostatic dynamical core and a subset of Advanced Research Weather Research and Forecasting (ARW-WRF) model atmospheric physics that have been modified to include the Community Atmosphere Model version 5 (CAM5) cloud fraction parameterization, to investigate the potential benefits of using increased resolution in a tropical channel. The simulations are performed with an idealized aquaplanet configuration using two quasi-uniform grids, with 30 km and 240 km grid spacing, and two variable-resolution grids spanning the same grid spacing range; one with a narrow (20°S–20°N) and one with a wide (30°S–30°N) tropical channel refinement. Results show that increasing resolution in the tropics impacts both the tropical and extratropical circulation. Compared to the quasi-uniform coarse grid, the narrow-channel simulation exhibits stronger updrafts in the Ferrel cell as well as in the middle of the upward branch of the Hadley cell. The wider tropical channel has a closer correspondence to the 30 km quasi-uniform simulation. However, the total atmospheric poleward energy transports are similar in all simulations. The largest differences are in the low-level cloudiness. The refined channel simulations show improved tropical and extratropical precipitation relative to the global 240 km simulation when compared to the global 30 km simulation. All simulations have a single ITCZ. Furthermore, the relatively small differences in mean global and tropical precipitation rates among the simulations are a promising result, and the evidence points to the tropical channel being an effective method for avoiding the extraneous numerical artifacts seen in earlier studies that refined only a portion of the tropics.

  2. Validation and Verification of LADEE Models and Software

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen

    2013-01-01

    The Lunar Atmosphere Dust Environment Explorer (LADEE) mission will orbit the moon in order to measure the density, composition and time variability of the lunar dust environment. The ground-side and onboard flight software for the mission is being developed using a Model-Based Software methodology. In this technique, models of the spacecraft and flight software are developed in a graphical dynamics modeling package. Flight Software requirements are prototyped and refined using the simulated models. After the model is shown to work as desired in this simulation framework, C-code software is automatically generated from the models. The generated software is then tested in real-time Processor-in-the-Loop and Hardware-in-the-Loop test beds. Travelling Road Show test beds were used for early integration tests with payloads and other subsystems. Traditional techniques for verifying computational sciences models are used to characterize the spacecraft simulation. A lightweight set of formal methods analysis, static analysis, formal inspection and code coverage analyses are utilized to further reduce defects in the onboard flight software artifacts. These techniques are applied early and often in the development process, iteratively increasing the capabilities of the software and the fidelity of the vehicle models and test beds.

  3. High Fidelity, Fuel-Like Thermal Simulators for Non-Nuclear Testing: Analysis and Initial Test Results

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Kapernick, Richard

    2007-01-01

    Non-nuclear testing can be a valuable tool in the development of a space nuclear power system, providing system characterization data and allowing one to work through various fabrication, assembly and integration issues without the cost and time associated with a full ground nuclear test. In a non-nuclear test bed, electric heaters are used to simulate the heat from nuclear fuel. Testing with non-optimized heater elements allows one to assess thermal, heat transfer, and stress-related attributes of a given system, but fails to demonstrate the dynamic response that would be present in an integrated, fueled reactor system. High fidelity thermal simulators that match both the static and the dynamic fuel pin performance that would be observed in an operating, fueled nuclear reactor can vastly increase the value of non-nuclear test results. With optimized simulators, the integration of thermal hydraulic hardware tests with simulated neutronic response provides a bridge between electrically heated testing and fueled nuclear testing. By implementing a neutronic response model to simulate the dynamic response that would be expected in a fueled reactor system, one can better understand system integration issues, characterize integrated system response times and response characteristics and assess potential design improvements at relatively small fiscal investment. Initial conceptual thermal simulator designs are determined by simple one-dimensional analysis at a single axial location and at steady state conditions; feasible concepts are then input into a detailed three-dimensional model for comparison to expected fuel pin performance. Static and dynamic fuel pin performance for a proposed reactor design is determined using SINDA/FLUINT thermal analysis software, and comparison is made between the expected nuclear performance and the performance of conceptual thermal simulator designs. 
Through a series of iterative analyses, a conceptual high fidelity design is developed; this is followed by engineering design, fabrication, and testing to validate the overall design process. Test results presented in this paper correspond to a "first cut" simulator design for a potential liquid metal (NaK) cooled reactor design that could be applied for Lunar surface power. Proposed refinements to this simulator design are also presented.

  4. V-SUIT Model Validation Using PLSS 1.0 Test Results

    NASA Technical Reports Server (NTRS)

    Olthoff, Claas

    2015-01-01

    The dynamic portable life support system (PLSS) simulation software Virtual Space Suit (V-SUIT) has been under development at the Technische Universität München since 2011 as a spin-off from the Virtual Habitat (V-HAB) project. The MATLAB™-based V-SUIT simulates space suit portable life support systems and their interaction with a detailed and also dynamic human model, as well as the dynamic external environment of a space suit moving on a planetary surface. To demonstrate the feasibility of a large, system level simulation like V-SUIT, a model of NASA's PLSS 1.0 prototype was created. This prototype was run through an extensive series of tests in 2011. Since the test setup was heavily instrumented, it produced a wealth of data making it ideal for model validation. The implemented model includes all components of the PLSS in both the ventilation and thermal loops. The major components are modeled in greater detail, while smaller and ancillary components are low fidelity black box models. The major components include the Rapid Cycle Amine (RCA) CO2 removal system, the Primary and Secondary Oxygen Assembly (POS/SOA), the Pressure Garment System Volume Simulator (PGSVS), the Human Metabolic Simulator (HMS), the heat exchanger between the ventilation and thermal loops, the Space Suit Water Membrane Evaporator (SWME) and finally the Liquid Cooling Garment Simulator (LCGS). Using the created model, dynamic simulations were performed using the same test points used during PLSS 1.0 testing. The results of the simulation were then compared to the test data with special focus on absolute values during the steady state phases and dynamic behavior during the transition between test points. Quantified simulation results are presented that demonstrate which areas of the V-SUIT model are in need of further refinement and those that are sufficiently close to the test results. 
Finally, lessons learned from the modelling and validation process are given in combination with implications for the future development of other PLSS models in V-SUIT.

  5. Simulating Space Capsule Water Landing with Explicit Finite Element Method

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Lyle, Karen H.

    2007-01-01

    A study of using an explicit nonlinear dynamic finite element code for simulating the water landing of a space capsule was performed. The finite element model contains Lagrangian shell elements for the space capsule and Eulerian solid elements for the water and air. An Arbitrary Lagrangian Eulerian (ALE) solver and a penalty coupling method were used for predicting the fluid and structure interaction forces. The space capsule was first assumed to be rigid, so the numerical results could be correlated with closed form solutions. The water and air meshes were continuously refined until the solution was converged. The converged maximum deceleration predicted is bounded by the classical von Karman and Wagner solutions and is considered to be an adequate solution. The refined water and air meshes were then used in the models for simulating the water landing of a capsule model that has a flexible bottom. For small pitch angle cases, the maximum deceleration from the flexible capsule model was found to be significantly greater than the maximum deceleration obtained from the corresponding rigid model. For large pitch angle cases, the difference between the maximum deceleration of the flexible model and that of its corresponding rigid model is smaller. Test data of Apollo space capsules with a flexible heat shield qualitatively support the findings presented in this paper.

  6. Engineering uses of physics-based ground motion simulations

    USGS Publications Warehouse

    Baker, Jack W.; Luco, Nicolas; Abrahamson, Norman A.; Graves, Robert W.; Maechling, Phillip J.; Olsen, Kim B.

    2014-01-01

    This paper summarizes validation methodologies focused on enabling ground motion simulations to be used with confidence in engineering applications such as seismic hazard analysis and dynamic analysis of structural and geotechnical systems. Numerical simulation of ground motion from large earthquakes, utilizing physics-based models of earthquake rupture and wave propagation, is an area of active research in the earth science community. Refinement and validation of these models require collaboration between earthquake scientists and engineering users, and testing/rating methodologies for simulated ground motions to be used with confidence in engineering applications. This paper provides an introduction to this field and an overview of current research activities being coordinated by the Southern California Earthquake Center (SCEC). These activities are related both to advancing the science and computational infrastructure needed to produce ground motion simulations, as well as to engineering validation procedures. Current research areas and anticipated future achievements are also discussed.

  7. Temporal Evolution of the Plasma Sheath Surrounding Solar Cells in Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Willis, Emily M.; Pour, Maria Z. A.

    2017-01-01

    Initial results from the PIC simulation and the LEM simulation have been presented. The PIC simulation results show that more detailed study is required to refine the ISS solar array current collection model and to understand the development of the current collection in time. The initial results from the LEM demonstrate that it is possible the transients are caused by solar array interaction with the environment, but there are presently too many assumptions in the model to be certain. Continued work on the PIC simulation will provide valuable information on the development of the barrier potential, which will allow refinement of the LEM simulation and a better understanding of the causes and effects of the transients.

  8. Guidance simulation and test support for differential GPS flight experiment

    NASA Technical Reports Server (NTRS)

    Geier, G. J.; Loomis, P. V. W.; Cabak, A.

    1987-01-01

    Three separate tasks which supported the test preparation, test operations, and post test analysis of the NASA Ames flight test evaluation of the differential Global Positioning System (GPS) are presented. Task 1 consisted of a navigation filter design, coding, and testing to optimally make use of GPS in a differential mode. The filter can be configured to accept inputs from external sensors such as an accelerometer and a barometric or radar altimeter. The filter runs in real time onboard a NASA helicopter. It processes raw pseudorange and delta range measurements from a single channel sequential GPS receiver. The Kalman filter software interfaces are described in detail, followed by a description of the filter algorithm, including the basic propagation and measurement update equations. The performance during flight tests is reviewed and discussed. Task 2 describes a refinement performed on the lateral and vertical steering algorithms developed on a previous contract. The refinements include modification of the internal logic to allow more diverse inflight initialization procedures, further data smoothing and compensation for system induced time delays. Task 3 describes the TAU Corp participation in the analysis of the real time Kalman navigation filter. The performance was compared to that of the Z-set filter in flight and to the laser tracker position data during post test analysis. This analysis allowed a more optimal selection of the filter parameters.
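    Task 1's navigation filter follows the standard propagate-then-update structure common to all Kalman filters. As a minimal illustration of that cycle (a scalar toy model, not the actual multi-state GPS navigation filter; all parameter values below are placeholders):

```python
def kalman_update(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1):
    """One propagate-then-update cycle of a scalar Kalman filter.

    x, P : prior state estimate and its variance
    z    : new measurement
    F, Q : state transition and process-noise terms (propagation)
    H, R : measurement model and measurement-noise variance (update)
    """
    # Propagation (time update)
    x_pred = F * x
    P_pred = F * P * F + Q
    # Measurement update
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```

    In the flight filter, the measurement update would be driven by the raw pseudorange and delta range observations rather than a single scalar measurement, but the propagation/update structure is the same.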

  9. Field Test of a Hybrid Finite-Difference and Analytic Element Regional Model.

    PubMed

    Abrams, D B; Haitjema, H M; Feinstein, D T; Hunt, R J

    2016-01-01

    Regional finite-difference models often have cell sizes that are too large to sufficiently model well-stream interactions. Here, a steady-state hybrid model is applied whereby the upper layer or layers of a coarse MODFLOW model are replaced by the analytic element model GFLOW, which represents surface waters and wells as line and point sinks. The two models are coupled by transferring cell-by-cell leakage obtained from the original MODFLOW model to the bottom of the GFLOW model. A real-world test of the hybrid model approach is applied on a subdomain of an existing model of the Lake Michigan Basin. The original (coarse) MODFLOW model consists of six layers, the top four of which are aggregated into GFLOW as a single layer, while the bottom two layers remain part of MODFLOW in the hybrid model. The hybrid model and a refined "benchmark" MODFLOW model simulate similar baseflows. The hybrid and benchmark models also simulate similar baseflow reductions due to nearby pumping when the well is located within the layers represented by GFLOW. However, the benchmark model requires refinement of the model grid in the local area of interest, while the hybrid approach uses a gridless top layer and is thus unaffected by grid discretization errors. The hybrid approach is well suited to facilitate cost-effective retrofitting of existing coarse grid MODFLOW models commonly used for regional studies because it leverages the strengths of both finite-difference and analytic element methods for predictions in mildly heterogeneous systems that can be simulated with steady-state conditions. © 2015, National Ground Water Association.

  10. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g. inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  11. Satellite SAR geocoding with refined RPC model

    NASA Astrophysics Data System (ADS)

    Zhang, Lu; Balz, Timo; Liao, Mingsheng

    2012-04-01

    Recent studies have proved that the Rational Polynomial Camera (RPC) model is able to act as a reliable replacement of the rigorous Range-Doppler (RD) model for the geometric processing of satellite SAR datasets. But its capability in absolute geolocation of SAR images has not been evaluated quantitatively. Therefore, in this article the problems of error analysis and refinement of the SAR RPC model are primarily investigated to improve the absolute accuracy of SAR geolocation. Range propagation delay and azimuth timing error are identified as two major error sources for SAR geolocation. An approach based on SAR image simulation and real-to-simulated image matching is developed to estimate and correct these two errors. Afterwards a refined RPC model can be built from the error-corrected RD model and then used in satellite SAR geocoding. Three experiments with different settings are designed and conducted to comprehensively evaluate the accuracies of SAR geolocation with both ordinary and refined RPC models. All the experimental results demonstrate that with RPC model refinement the absolute location accuracies of geocoded SAR images can be improved significantly, particularly in the Easting direction. In another experiment the computation efficiencies of SAR geocoding with both RD and RPC models are compared quantitatively. The results show that by using the RPC model such efficiency can be remarkably improved by at least 16 times. In addition the problem of DEM data selection for SAR image simulation in RPC model refinement is studied by a comparative experiment. The results reveal that the best choice is to use DEM datasets with a spatial resolution comparable to that of the SAR images.
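    An RPC model of the kind refined here expresses each image coordinate as a ratio of two polynomials in normalized ground coordinates. The following toy sketch shows only that functional form, with made-up coefficient layouts and first-order terms for brevity; operational satellite RPC models use 20-term cubic numerators and denominators:

```python
def rpc_coordinate(num_coeffs, den_coeffs, lat, lon, h):
    """Evaluate one normalized image coordinate (row or column) of a
    Rational Polynomial Camera model as a ratio of two polynomials in
    normalized ground coordinates (illustrative first-order form only)."""
    def poly(c):
        # c = [constant, lon term, lat term, height term]
        return c[0] + c[1] * lon + c[2] * lat + c[3] * h
    return poly(num_coeffs) / poly(den_coeffs)
```

    Refinement as described in the article amounts to correcting the underlying RD geometry (range delay, azimuth timing) before the RPC coefficients are re-fitted from it.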

  12. Interplay of I-TASSER and QUARK for template-based and ab initio protein structure prediction in CASP10

    PubMed Central

    Zhang, Yang

    2014-01-01

    We develop and test a new pipeline in CASP10 to predict protein structures based on an interplay of I-TASSER and QUARK for both free-modeling (FM) and template-based modeling (TBM) targets. The most noteworthy observation is that sorting through the threading template pool using the QUARK-based ab initio models as probes allows the detection of distant-homology templates which might be ignored by the traditional sequence profile-based threading alignment algorithms. Further template assembly refinement by I-TASSER resulted in successful folding of two medium-sized FM targets with >150 residues. For TBM, the multiple threading alignments from LOMETS are, for the first time, incorporated into the ab initio QUARK simulations, which were further refined by I-TASSER assembly refinement. Compared with the traditional threading assembly refinement procedures, the inclusion of the threading-constrained ab initio folding models can consistently improve the quality of the full-length models as assessed by the GDT-HA and hydrogen-bonding scores. Despite the success, significant challenges still exist in domain boundary prediction and consistent folding of medium-size proteins (especially beta-proteins) for nonhomologous targets. Further developments of sensitive fold-recognition and ab initio folding methods are critical for solving these problems. PMID:23760925

  13. Interplay of I-TASSER and QUARK for template-based and ab initio protein structure prediction in CASP10.

    PubMed

    Zhang, Yang

    2014-02-01

    We develop and test a new pipeline in CASP10 to predict protein structures based on an interplay of I-TASSER and QUARK for both free-modeling (FM) and template-based modeling (TBM) targets. The most noteworthy observation is that sorting through the threading template pool using the QUARK-based ab initio models as probes allows the detection of distant-homology templates which might be ignored by the traditional sequence profile-based threading alignment algorithms. Further template assembly refinement by I-TASSER resulted in successful folding of two medium-sized FM targets with >150 residues. For TBM, the multiple threading alignments from LOMETS are, for the first time, incorporated into the ab initio QUARK simulations, which were further refined by I-TASSER assembly refinement. Compared with the traditional threading assembly refinement procedures, the inclusion of the threading-constrained ab initio folding models can consistently improve the quality of the full-length models as assessed by the GDT-HA and hydrogen-bonding scores. Despite the success, significant challenges still exist in domain boundary prediction and consistent folding of medium-size proteins (especially beta-proteins) for nonhomologous targets. Further developments of sensitive fold-recognition and ab initio folding methods are critical for solving these problems. Copyright © 2013 Wiley Periodicals, Inc.

  14. Physical Simulation of a Duplex Stainless Steel Friction Stir Welding by the Numerical and Experimental Analysis of Hot Torsion Tests

    NASA Astrophysics Data System (ADS)

    da Fonseca, Eduardo Bertoni; Santos, Tiago Felipe Abreu; Button, Sergio Tonini; Ramirez, Antonio Jose

    2016-09-01

    Physical simulation of friction stir welding (FSW) by means of hot torsion tests was performed on UNS S32205 duplex stainless steel. A Gleeble 3800® thermomechanical simulator with a custom-built liquid nitrogen cooling system was employed to reproduce the thermal cycle measured during FSW and to carry out the torsion tests. Microstructures were compared by means of light optical microscopy and electron backscatter diffraction. True strain and strain rate were calculated by numerical simulation of the torsion tests. The thermomechanically affected zone (TMAZ) was reproduced at a peak temperature of 1303 K (1030 °C), rotational speeds of 52.4 rad s-1 (500 rpm) and 74.5 rad s-1 (750 rpm), and 0.5 to 0.75 revolutions, which correspond to strain rates between 10 and 16 s-1 and true strains between 0.5 and 0.8. Strong grain refinement, similar to that observed in the stir zone (SZ), was attained at a peak temperature of 1403 K (1130 °C), a rotational speed of 74.5 rad s-1 (750 rpm), and 1.2 revolutions, which correspond to a strain rate of 19 s-1 and a true strain of 1.3. Continuous dynamic recrystallization in ferrite and dynamic recrystallization in austenite were observed in the TMAZ simulation. At the higher temperature, dynamic recovery of austenite was also observed.

  15. Mechanism test bed. Flexible body model report

    NASA Technical Reports Server (NTRS)

    Compton, Jimmy

    1991-01-01

    The Space Station Mechanism Test Bed is a six degree-of-freedom motion simulation facility used to evaluate docking and berthing hardware mechanisms. A generalized rigid body math model was developed which allowed the computation of vehicle relative motion in six DOF due to forces and moments from mechanism contact, attitude control systems, and gravity. No vehicle size limitations were imposed in the model. The equations of motion were based on Hill's equations for translational motion with respect to a nominal circular earth orbit and Newton-Euler equations for rotational motion. This rigid body model and supporting software were being refined.
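
    Hill's equations mentioned above have the standard Clohessy-Wiltshire form for relative translational motion about a circular reference orbit. The sketch below propagates that relative state under a constant external acceleration; the function names and constant forcing are illustrative, not the facility's actual software.

```python
import numpy as np

def cw_derivs(state, n, accel):
    """Hill's (Clohessy-Wiltshire) equations for relative translational motion
    about a nominal circular orbit with mean motion n.
    state = [x, y, z, vx, vy, vz] in the local orbital frame (x radial,
    y along-track, z cross-track); accel is an external acceleration, e.g.
    from mechanism contact forces or attitude-control thrusters."""
    x, y, z, vx, vy, vz = state
    ax = 3.0 * n**2 * x + 2.0 * n * vy + accel[0]
    ay = -2.0 * n * vx + accel[1]
    az = -n**2 * z + accel[2]
    return np.array([vx, vy, vz, ax, ay, az])

def propagate(state, n, accel, dt, steps):
    """Fourth-order Runge-Kutta propagation of the relative state."""
    s = np.asarray(state, dtype=float)
    a = np.asarray(accel, dtype=float)
    for _ in range(steps):
        k1 = cw_derivs(s, n, a)
        k2 = cw_derivs(s + 0.5 * dt * k1, n, a)
        k3 = cw_derivs(s + 0.5 * dt * k2, n, a)
        k4 = cw_derivs(s + dt * k3, n, a)
        s = s + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return s
```

    A quick check: a pure along-track offset with zero velocity and no forcing is an equilibrium of the CW equations, so the state should not drift under propagation.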

  16. On-Road Validation of a Simplified Model for Estimating Real-World Fuel Economy: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Eric; Gonder, Jeff; Jehlik, Forrest

    On-road fuel economy is known to vary significantly between individual trips in real-world driving conditions. This work introduces a methodology for rapidly simulating a specific vehicle's fuel economy over the wide range of real-world conditions experienced across the country. On-road test data collected using a highly instrumented vehicle is used to refine and validate this modeling approach. Model accuracy relative to on-road data collection is relevant to the estimation of 'off-cycle credits' that compensate for real-world fuel economy benefits that are not observed during certification testing on a chassis dynamometer.

  17. On-Road Validation of a Simplified Model for Estimating Real-World Fuel Economy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Eric; Gonder, Jeffrey; Jehlik, Forrest

    On-road fuel economy is known to vary significantly between individual trips in real-world driving conditions. This work introduces a methodology for rapidly simulating a specific vehicle's fuel economy over the wide range of real-world conditions experienced across the country. On-road test data collected using a highly instrumented vehicle is used to refine and validate this modeling approach. Here, model accuracy relative to on-road data collection is relevant to the estimation of 'off-cycle credits' that compensate for real-world fuel economy benefits that are not observed during certification testing on a chassis dynamometer.

  18. On-Road Validation of a Simplified Model for Estimating Real-World Fuel Economy

    DOE PAGES

    Wood, Eric; Gonder, Jeffrey; Jehlik, Forrest

    2017-03-28

    On-road fuel economy is known to vary significantly between individual trips in real-world driving conditions. This work introduces a methodology for rapidly simulating a specific vehicle's fuel economy over the wide range of real-world conditions experienced across the country. On-road test data collected using a highly instrumented vehicle is used to refine and validate this modeling approach. Here, model accuracy relative to on-road data collection is relevant to the estimation of 'off-cycle credits' that compensate for real-world fuel economy benefits that are not observed during certification testing on a chassis dynamometer.

  19. Practical aspects of modeling aircraft dynamics from flight data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1984-01-01

    The purpose of parameter estimation, a subset of system identification, is to estimate the coefficients (such as stability and control derivatives) of the aircraft differential equations of motion from sampled measured dynamic responses. In the past, the primary reason for estimating stability and control derivatives from flight tests was to make comparisons with wind tunnel estimates. As aircraft became more complex, and as flight envelopes were expanded to include flight regimes that were not well understood, new requirements for the derivative estimates evolved. For many years, the flight determined derivatives were used in simulations to aid in flight planning and in pilot training. The simulations were particularly important in research flight test programs in which an envelope expansion into new flight regimes was required. Parameter estimation techniques for estimating stability and control derivatives from flight data became more sophisticated to support the flight test programs. As knowledge of these new flight regimes increased, more complex aircraft were flown. Much of this increased complexity was in sophisticated flight control systems. The design and refinement of the control system required higher fidelity simulations than were previously required.

  20. Penetration of rod projectiles in semi-infinite targets : a validation test for Eulerian X-FEM in ALEGRA.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Byoung Yoon; Leavy, Richard Brian; Niederhaus, John Henry J.

    2013-03-01

    The finite-element shock hydrodynamics code ALEGRA has recently been upgraded to include an X-FEM implementation in 2D for simulating impact, sliding, and release between materials in the Eulerian frame. For validation testing purposes, the problem of long-rod penetration in semi-infinite targets is considered in this report, at velocities of 500 to 3000 m/s. We describe testing simulations done using ALEGRA with and without the X-FEM capability, in order to verify its adequacy by showing that X-FEM recovers the good results found with the standard ALEGRA formulation. The X-FEM results for depth of penetration differ from previously measured experimental data by less than 2%, and from the standard formulation results by less than 1%. They converge monotonically under mesh refinement at first order. Sensitivities to domain size and rear boundary condition are investigated and shown to be small. Aside from some simulation stability issues, X-FEM is found to produce good results for this classical impact and penetration problem.

  1. Advective transport observations with MODPATH-OBS--documentation of the MODPATH observation process

    USGS Publications Warehouse

    Hanson, R.T.; Kauffman, L.K.; Hill, M.C.; Dickinson, J.E.; Mehl, S.W.

    2013-01-01

    The MODPATH-OBS computer program described in this report is designed to calculate simulated equivalents for observations related to advective groundwater transport that can be represented in a quantitative way by using simulated particle-tracking data. The simulated equivalents supported by MODPATH-OBS are (1) distance from a source location at a defined time, or proximity to an observed location; (2) time of travel from an initial location to defined locations, areas, or volumes of the simulated system; (3) concentrations used to simulate groundwater age; and (4) percentages of water derived from contributing source areas. Although particle tracking only simulates the advective component of conservative transport, effects of non-conservative processes such as retardation can be approximated through manipulation of the effective-porosity value used to calculate velocity based on the properties of selected conservative tracers. This program can also account for simple decay or production, but it cannot account for diffusion. Dispersion can be represented through direct simulation of subsurface heterogeneity and the use of many particles. MODPATH-OBS acts as a postprocessor to MODPATH, so the sequence of model runs generally required is MODFLOW, MODPATH, and MODPATH-OBS. The versions of MODFLOW and MODPATH that support the version of MODPATH-OBS presented in this report are MODFLOW-2005 or MODFLOW-LGR, and MODPATH-LGR. MODFLOW-LGR is derived from MODFLOW-2005 and supports local grid refinement. MODPATH-LGR is derived from MODPATH 5 and MODPATH 6. It supports the forward and backward tracking of particles through locally refined grids and provides the output needed for MODPATH-OBS. For a single grid and no observations, MODPATH-LGR results are equivalent to MODPATH 5.
MODPATH-LGR and MODPATH-OBS simulations can use nearly all of the capabilities of MODFLOW-2005 and MODFLOW-LGR; for example, simulations may be steady-state, transient, or a combination. Though the program name MODPATH-OBS specifically refers to observations, the program also can be used to calculate model predictions of observations. MODPATH-OBS is primarily intended for use with separate programs that conduct sensitivity analysis, data needs assessment, parameter estimation, and uncertainty analysis, such as UCODE_2005 and PEST. In many circumstances, refined grids in selected parts of a model are important to simulated hydraulics, detailed inflows and outflows, or other system characteristics. MODFLOW-LGR and MODPATH-LGR support accurate local grid refinement in which both mass (flows) and energy (head) are conserved across the local grid boundary. MODPATH-OBS is designed to take advantage of these capabilities. For example, particles tracked between a pumping well and a nearby stream, which are simulated poorly if the river and well are located in a single large grid cell, can be simulated with improved accuracy using a locally refined grid in MODFLOW-LGR, MODPATH-LGR, and MODPATH-OBS. The locally-refined-grid approach can provide more accurate simulated equivalents to observed transport between the well and the river. The documentation presented here includes a brief discussion of previous work, a description of the methods, and detailed descriptions of the required input files and how the output files are typically used.
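
    The time-of-travel simulated equivalent described above can be illustrated schematically: sum segment length over seepage velocity along the tracked path, where seepage velocity is Darcy flux divided by effective porosity. This is a minimal sketch of the idea, not MODPATH-OBS's actual algorithm; the function name and constant-flux assumption are illustrative.

```python
import math

def travel_time(path_points, darcy_flux, porosity):
    """Schematic travel-time 'simulated equivalent' along a particle path.

    path_points: list of (x, y, z) positions along the track.
    darcy_flux: Darcy flux magnitude, here assumed constant per segment.
    porosity: effective porosity converting flux to seepage velocity
    (v = q / theta). Raising the effective porosity slows the particle,
    which is how retardation can be approximated, as the abstract notes.
    """
    total = 0.0
    for p0, p1 in zip(path_points, path_points[1:]):
        length = math.dist(p0, p1)
        velocity = darcy_flux / porosity  # seepage (linear) velocity
        total += length / velocity
    return total
```

    For example, a 100 m straight path at q = 0.5 m/d and theta = 0.25 gives a 2 m/d seepage velocity and hence a 50-day travel time; doubling the effective porosity doubles the travel time.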

  2. Adaptive temporal refinement in injection molding

    NASA Astrophysics Data System (ADS)

    Karyofylli, Violeta; Schmitz, Mauritius; Hopmann, Christian; Behr, Marek

    2018-05-01

    Mold filling is an injection molding stage of great significance, because many defects of the plastic components (e.g., weld lines, burrs, or insufficient filling) can occur during this process step. Therefore, it plays an important role in determining the quality of the produced parts. Our goal is temporal refinement in the vicinity of the evolving melt front, in the context of 4D simplex-type space-time grids [1, 2]. This novel discretization method has an inherent flexibility to employ completely unstructured meshes with varying levels of resolution both in the spatial dimensions and in the time dimension, thus allowing the use of local time-stepping during the simulations. This can lead to higher simulation precision while preserving calculation efficiency. A 3D benchmark case, which concerns the filling of a plate-shaped geometry, is used to verify our numerical approach [3]. The simulation results obtained with the fully unstructured space-time discretization are compared to those obtained with the standard space-time method and to Moldflow simulation results. This example also serves to provide reliable timing measurements and to assess the efficiency of filling simulations of complex 3D molds under adaptive temporal refinement.

  3. Development and Application of a Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane

    2007-01-01

    This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.

  4. DiffPy-CMI-Python libraries for Complex Modeling Initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Billinge, Simon; Juhas, Pavol; Farrow, Christopher

    2014-02-01

    Software to manipulate and describe crystal and molecular structures and set up structural refinements from multiple experimental inputs. Calculation and simulation of structure derived physical quantities. Library for creating customized refinements of atomic structures from available experimental and theoretical inputs.

  5. Neutron powder diffraction and molecular simulation study of the structural evolution of ammonia borane from 15 to 340 K.

    PubMed

    Hess, Nancy J; Schenter, Gregory K; Hartman, Michael R; Daemen, Luc L; Proffen, Thomas; Kathmann, Shawn M; Mundy, Christopher J; Hartl, Monika; Heldebrant, David J; Stowe, Ashley C; Autrey, Tom

    2009-05-14

    The structural behavior of ¹¹B- and ²H-enriched ammonia borane, ND₃¹¹BD₃, over the temperature range from 15 to 340 K was investigated using a combination of neutron powder diffraction and ab initio molecular dynamics simulations. In the low-temperature orthorhombic phase, a progressive displacement of the borane group under the amine group was observed, leading to alignment of the B-N bond nearly parallel to the c-axis. The orthorhombic-to-tetragonal structural phase transition at 225 K is marked by a dramatic change in the dynamics of both the amine and borane groups. The resulting hydrogen disorder is difficult to extract from the metrics provided by Rietveld refinement but is readily apparent in molecular dynamics simulation and in difference Fourier transform maps. At the phase transition, Rietveld refinement does indicate a disruption of one of the two dihydrogen bonds that link adjacent ammonia borane molecules. Metrics determined by Rietveld refinement are in excellent agreement with those determined from molecular simulation. This study highlights the valuable insights added by coupled experimental and computational studies.

  6. Bayesian ensemble refinement by replica simulations and reweighting.

    PubMed

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-28

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
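
    The maximum-entropy reweighting at the core of the EROS-type refinement described above can be sketched generically: choose weights w_i proportional to exp(-lambda * O_i) relative to a prior and solve for the Lagrange multiplier lambda that makes the reweighted ensemble average match a target observable. This is a minimal single-observable sketch under a uniform-prior default, not the authors' code; the function name and bisection bracket are assumptions.

```python
import numpy as np

def maxent_reweight(obs, target, w0=None, tol=1e-10):
    """Maximum-entropy reweighting of an ensemble to match one
    ensemble-averaged observable.

    obs: observable value for each ensemble member.
    target: desired reweighted average (must lie within [min(obs), max(obs)]).
    w0: prior weights (uniform by default).
    Returns normalized weights w_i proportional to w0_i * exp(-lam * obs_i),
    with lam found by bisection.
    """
    obs = np.asarray(obs, dtype=float)
    if w0 is None:
        w0 = np.full(obs.size, 1.0 / obs.size)

    def avg(lam):
        # Shift by the mean for numerical stability; normalization removes it.
        w = w0 * np.exp(-lam * (obs - obs.mean()))
        w /= w.sum()
        return w, np.dot(w, obs)

    lo, hi = -50.0, 50.0  # assumed bracket for the multiplier
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        _, mean = avg(mid)
        # The reweighted average decreases monotonically in lam.
        if mean > target:
            lo = mid
        else:
            hi = mid
    w, _ = avg(0.5 * (lo + hi))
    return w
```

    The abstract's replica formulation reaches the same optimal distribution in the limit of many replicas with restraint strength scaling linearly in the replica count; the direct reweighting above is the zero-replica analogue for discrete configurations.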

  7. Bayesian ensemble refinement by replica simulations and reweighting

    NASA Astrophysics Data System (ADS)

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-01

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.

  8. Direction-aware Slope Limiter for 3D Cubic Grids with Adaptive Mesh Refinement

    DOE PAGES

    Velechovsky, Jan; Francois, Marianne M.; Masser, Thomas

    2018-06-07

    In the context of finite volume methods for hyperbolic systems of conservation laws, slope limiters are an effective way to suppress creation of unphysical local extrema and/or oscillations near discontinuities. We investigate properties of these limiters as applied to piecewise linear reconstructions of conservative fluid quantities in three-dimensional simulations. In particular, we are interested in linear reconstructions on Cartesian adaptively refined meshes, where a reconstructed fluid quantity at a face center depends on more than a single gradient component of the quantity. We design a new slope limiter, which combines the robustness of a minmod limiter with the accuracy of a van Leer limiter. The limiter is called the Direction-Aware Limiter (DAL), because the combination is based on a principal flow direction. In particular, DAL is useful in situations where the Barth–Jespersen limiter for general meshes fails to maintain global linear functions, such as on cubic computational meshes with stencils including only face-neighboring cells. Here, we verify the new slope limiter on a suite of standard hydrodynamic test problems on Cartesian adaptively refined meshes. Lastly, we demonstrate reduced mesh imprinting; for radially symmetric problems such as the Sedov blast wave or the Noh implosion test cases, the results with DAL show better preservation of radial symmetry compared to the other standard methods on Cartesian meshes.
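
    The two one-dimensional limiters that DAL combines are standard textbook formulas; a minimal sketch of each is below. The direction-aware blending by principal flow direction is the paper's contribution and is not reproduced here.

```python
def minmod(a, b):
    """Minmod limiter: the most dissipative TVD choice. Returns the
    smaller-magnitude slope when the two one-sided slopes a and b agree
    in sign, and zero (flat reconstruction) otherwise."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def van_leer(a, b):
    """Van Leer (harmonic-mean) limiter: less dissipative than minmod,
    yet still vanishing at extrema, where a and b differ in sign."""
    if a * b <= 0.0:
        return 0.0
    return 2.0 * a * b / (a + b)
```

    Both limiters return zero at local extrema (opposite-signed slopes), which is what suppresses the creation of new unphysical extrema; van Leer approaches the smooth slope faster when a and b are close, which is the accuracy advantage the abstract refers to.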

  9. Direction-aware Slope Limiter for 3D Cubic Grids with Adaptive Mesh Refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Velechovsky, Jan; Francois, Marianne M.; Masser, Thomas

    In the context of finite volume methods for hyperbolic systems of conservation laws, slope limiters are an effective way to suppress creation of unphysical local extrema and/or oscillations near discontinuities. We investigate properties of these limiters as applied to piecewise linear reconstructions of conservative fluid quantities in three-dimensional simulations. In particular, we are interested in linear reconstructions on Cartesian adaptively refined meshes, where a reconstructed fluid quantity at a face center depends on more than a single gradient component of the quantity. We design a new slope limiter, which combines the robustness of a minmod limiter with the accuracy of a van Leer limiter. The limiter is called the Direction-Aware Limiter (DAL), because the combination is based on a principal flow direction. In particular, DAL is useful in situations where the Barth–Jespersen limiter for general meshes fails to maintain global linear functions, such as on cubic computational meshes with stencils including only face-neighboring cells. Here, we verify the new slope limiter on a suite of standard hydrodynamic test problems on Cartesian adaptively refined meshes. Lastly, we demonstrate reduced mesh imprinting; for radially symmetric problems such as the Sedov blast wave or the Noh implosion test cases, the results with DAL show better preservation of radial symmetry compared to the other standard methods on Cartesian meshes.

  10. An adaptive mesh refinement-multiphase lattice Boltzmann flux solver for simulation of complex binary fluid flows

    NASA Astrophysics Data System (ADS)

    Yuan, H. Z.; Wang, Y.; Shu, C.

    2017-12-01

    This paper presents an adaptive mesh refinement-multiphase lattice Boltzmann flux solver (AMR-MLBFS) for effective simulation of complex binary fluid flows at large density ratios. In this method, an AMR algorithm is proposed by introducing a simple indicator on the root block for grid refinement and two possible statuses for each block. Unlike available block-structured AMR methods, which refine their mesh by spawning or removing four child blocks simultaneously, the present method is able to refine its mesh locally by spawning or removing one to four child blocks independently when the refinement indicator is triggered. As a result, the AMR mesh used in this work can be more focused on the flow region near the phase interface and its size is further reduced. In each block of mesh, the recently proposed MLBFS is applied for the solution of the flow field and the level-set method is used for capturing the fluid interface. As compared with existing AMR-lattice Boltzmann models, the present method avoids both spatial and temporal interpolations of density distribution functions so that converged solutions on different AMR meshes and uniform grids can be obtained. The proposed method has been successfully validated by simulating a static bubble immersed in another fluid, a falling droplet, instabilities of two-layered fluids, a bubble rising in a box, and a droplet splashing on a thin film with large density ratios and high Reynolds numbers. Good agreement with the theoretical solution, the uniform-grid result, and/or the published data has been achieved. Numerical results also show its effectiveness in saving computational time and virtual memory as compared with computations on uniform meshes.

  11. MFIX simulation of NETL/PSRI challenge problem of circulating fluidized bed

    DOE PAGES

    Li, Tingwen; Dietiker, Jean-François; Shahnam, Mehrdad

    2012-12-01

    In this paper, numerical simulations of the NETL/PSRI challenge problem of a circulating fluidized bed (CFB) using the open-source code Multiphase Flow with Interphase eXchange (MFIX) are reported. Two rounds of simulation results are reported, including the first-round blind test and the second-round modeling refinement. Three-dimensional high-fidelity simulations are conducted to model a 12-inch diameter pilot-scale CFB riser. Detailed comparisons between numerical results and experimental data are made with respect to the axial pressure gradient profile and radial profiles of solids velocity and solids mass flux along different radial directions at various elevations, for operating conditions covering different fluidization regimes. Overall, the numerical results show that CFD can predict the complex gas-solids flow behavior in the CFB riser reasonably well. In addition, lessons learnt from modeling this challenge problem are presented.

  12. A Multiprocessor Operating System Simulator

    NASA Technical Reports Server (NTRS)

    Johnston, Gary M.; Campbell, Roy H.

    1988-01-01

    This paper describes a multiprocessor operating system simulator that was developed by the authors in the Fall semester of 1987. The simulator was built in response to the need to provide students with an environment in which to build and test operating system concepts as part of the coursework of a third-year undergraduate operating systems course. Written in C++, the simulator uses the co-routine style task package that is distributed with the AT&T C++ Translator to provide a hierarchy of classes that represents a broad range of operating system software and hardware components. The class hierarchy closely follows that of the 'Choices' family of operating systems for loosely- and tightly-coupled multiprocessors. During an operating system course, these classes are refined and specialized by students in homework assignments to facilitate experimentation with different aspects of operating system design and policy decisions. The current implementation runs on the IBM RT PC under 4.3bsd UNIX.

  13. Control structural interaction testbed: A model for multiple flexible body verification

    NASA Technical Reports Server (NTRS)

    Chory, M. A.; Cohen, A. L.; Manning, R. A.; Narigon, M. L.; Spector, V. A.

    1993-01-01

    Conventional end-to-end ground tests for verification of control system performance become increasingly complicated with the development of large, multiple flexible body spacecraft structures. The expense of accurately reproducing the on-orbit dynamic environment and the attendant difficulties in reducing and accounting for ground test effects limits the value of these tests. TRW has developed a building block approach whereby a combination of analysis, simulation, and test has replaced end-to-end performance verification by ground test. Tests are performed at the component, subsystem, and system level on engineering testbeds. These tests are aimed at authenticating models to be used in end-to-end performance verification simulations: component and subassembly engineering tests and analyses establish models and critical parameters, unit level engineering and acceptance tests refine models, and subsystem level tests confirm the models' overall behavior. The Precision Control of Agile Spacecraft (PCAS) project has developed a control structural interaction testbed with a multibody flexible structure to investigate new methods of precision control. This testbed is a model for TRW's approach to verifying control system performance. This approach has several advantages: (1) no allocation for test measurement errors is required, increasing flight hardware design allocations; (2) the approach permits greater latitude in investigating off-nominal conditions and parametric sensitivities; and (3) the simulation approach is cost effective, because the investment is in understanding the root behavior of the flight hardware and not in the ground test equipment and environment.

  14. GRAMM-X public web server for protein–protein docking

    PubMed Central

    Tovchigrechko, Andrey; Vakser, Ilya A.

    2006-01-01

Protein docking software GRAMM-X and its web interface () extend the original GRAMM Fast Fourier Transformation methodology by employing smoothed potentials, a refinement stage, and knowledge-based scoring. The web server frees users from the complex installation of database-dependent parallel software and from maintaining the large hardware resources needed for protein docking simulations. Docking problems submitted to the GRAMM-X server are processed by a 320-processor Linux cluster. The server was extensively tested by benchmarking, several months of public use, and participation in the CAPRI server track. PMID:16845016

  15. A user-centred design process of new cold-protective clothing for offshore petroleum workers operating in the Barents Sea

    PubMed Central

    NAESGAARD, Ole Petter; STORHOLMEN, Tore Christian Bjørsvik; WIGGEN, Øystein Nordrum; REITAN, Jarl

    2017-01-01

    Petroleum operations in the Barents Sea require personal protective clothing (PPC) to ensure the safety and performance of the workers. This paper describes the accomplishment of a user-centred design process of new PPC for offshore workers operating in this area. The user-centred design process was accomplished by mixed-methods. Insights into user needs and context of use were established by group interviews and on-the-job observations during a field-trip. The design was developed based on these insights, and refined by user feedback and participatory design. The new PPC was evaluated via field-tests and cold climate chamber tests. The insight into user needs and context of use provided useful input to the design process and contributed to tailored solutions. Providing users with clothing prototypes facilitated participatory design and iterations of design refinement. The group interviews following the final field test showed consensus of enhanced user satisfaction compared to PPC in current use. The final cold chamber test indicated that the new PPC provides sufficient thermal protection during the 60 min of simulated work in a wind-chill temperature of −25°C. Conclusion: Accomplishing a user-centred design process contributed to new PPC with enhanced user satisfaction and included relevant functional solutions. PMID:29046494

  16. A user-centred design process of new cold-protective clothing for offshore petroleum workers operating in the Barents Sea.

    PubMed

    Naesgaard, Ole Petter; Storholmen, Tore Christian Bjørsvik; Wiggen, Øystein Nordrum; Reitan, Jarl

    2017-12-07

    Petroleum operations in the Barents Sea require personal protective clothing (PPC) to ensure the safety and performance of the workers. This paper describes the accomplishment of a user-centred design process of new PPC for offshore workers operating in this area. The user-centred design process was accomplished by mixed-methods. Insights into user needs and context of use were established by group interviews and on-the-job observations during a field-trip. The design was developed based on these insights, and refined by user feedback and participatory design. The new PPC was evaluated via field-tests and cold climate chamber tests. The insight into user needs and context of use provided useful input to the design process and contributed to tailored solutions. Providing users with clothing prototypes facilitated participatory design and iterations of design refinement. The group interviews following the final field test showed consensus of enhanced user satisfaction compared to PPC in current use. The final cold chamber test indicated that the new PPC provides sufficient thermal protection during the 60 min of simulated work in a wind-chill temperature of -25°C. Accomplishing a user-centred design process contributed to new PPC with enhanced user satisfaction and included relevant functional solutions.

  17. Adaptive Grid Refinement for Atmospheric Boundary Layer Simulations

    NASA Astrophysics Data System (ADS)

    van Hooft, Antoon; van Heerwaarden, Chiel; Popinet, Stephane; van der linden, Steven; de Roode, Stephan; van de Wiel, Bas

    2017-04-01

We validate and benchmark an adaptive mesh refinement (AMR) algorithm for numerical simulations of the atmospheric boundary layer (ABL). The AMR technique aims to distribute the computational resources efficiently over a domain by refining and coarsening the numerical grid locally and in time. This can be beneficial for studying cases in which length scales vary significantly in time and space. We present the results for a case describing the growth and decay of a convective boundary layer. The AMR results are benchmarked against two runs using a fixed, finely meshed grid: first with the same numerical formulation as the AMR code, and second with a code dedicated to ABL studies. Compared to the fixed and isotropic grid runs, the AMR algorithm can coarsen and refine the grid such that accurate results are obtained whilst using only a fraction of the grid cells. Performance-wise, the AMR run was cheaper than the fixed and isotropic grid run with a similar numerical formulation. However, for this specific case, the dedicated code outperformed both aforementioned runs.
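The refine/coarsen decision at the heart of AMR can be sketched in a few lines. The gradient-based error estimate and both thresholds below are illustrative assumptions, not the actual criterion used by the code in the abstract.

```python
# Hedged sketch of an AMR refine/coarsen criterion: flag cells whose local
# error estimate (here, a simple gradient magnitude) is large for refinement
# and cells where the field is smooth for coarsening.

import numpy as np

def amr_flags(field, refine_tol=0.1, coarsen_tol=0.01):
    """Flag each cell: +1 refine, -1 coarsen, 0 keep unchanged."""
    err = np.abs(np.gradient(field))       # crude per-cell error estimate
    flags = np.zeros(field.size, dtype=int)
    flags[err > refine_tol] = 1
    flags[err < coarsen_tol] = -1
    return flags

x = np.linspace(0.0, 1.0, 101)
field = np.tanh((x - 0.5) / 0.02)   # sharp internal layer, e.g. an inversion
flags = amr_flags(field)
# cells near x = 0.5 are flagged for refinement; the smooth far field
# is flagged for coarsening, concentrating grid cells where they matter
```

This is the mechanism by which an AMR run can match a fixed fine grid's accuracy with only a fraction of the cells: resolution follows the sharp features in time and space.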

  18. High frequency dynamic engine simulation. [TF-30 engine

    NASA Technical Reports Server (NTRS)

    Schuerman, J. A.; Fischer, K. E.; Mclaughlin, P. W.

    1977-01-01

A digital computer simulation of a mixed-flow, twin-spool turbofan engine was assembled to evaluate and improve the dynamic characteristics of the engine simulation to disturbance frequencies of at least 100 Hz. One-dimensional forms of the dynamic mass, momentum, and energy equations were used to model the engine. A TF30 engine was simulated so that dynamic characteristics could be evaluated against results obtained from testing of the TF30 engine at the NASA Lewis Research Center. The dynamic characteristics of the engine simulation were improved by modifying the compression system model. Modifications to the compression system model were established by investigating the influence of the size and number of finite dynamic elements. Based on the results of this program, high-frequency engine simulations using finite dynamic elements can be assembled so that the engine dynamic configuration is optimum with respect to dynamic characteristics and computer execution time. Resizing of the compression system's finite elements improved the dynamic characteristics of the engine simulation but showed that additional refinements are required to obtain close agreement between simulated and actual engine dynamic characteristics.

  19. Preliminary Evaluation of the DUSTRAN Modeling Suite for Modeling Atmospheric Chloride Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Philip; Tran, Tracy; Fritz, Bradley

    2016-05-03

This study investigates the potential of DUSTRAN, a dust dispersion modeling system developed by Pacific Northwest National Laboratory, to model the transport of sea salt aerosols (SSA). Results from DUSTRAN simulations run with historical meteorological data were compared against privately-measured chloride data at the near-coastal Maine Yankee Nuclear Power Plant (NPP) and the Environmental Protection Agency-measured CASTNET data from Acadia National Park (NP). The comparisons have provided both encouragement as to the practical value of DUSTRAN’s CALPUFF model and suggestions for further software development opportunities. All modeled concentrations were within one order of magnitude of those measured, and a few test cases showed excellent agreement between modeled and measured concentrations. However, the discrepancies are not consistent, which may be due to inaccurate extrapolation of meteorological data, the underlying model physics, and the source term. Future research will refine the software to better capture the physical phenomena. Overall, results indicate that with parameter refinement, DUSTRAN has the potential to simulate atmospheric chloride transport from known sources to inland sites for the purpose of determining the corrosion susceptibility of various structures, systems, and components at the site.

  20. Wind Farm LES Simulations Using an Overset Methodology

    NASA Astrophysics Data System (ADS)

    Ananthan, Shreyas; Yellapantula, Shashank

    2017-11-01

Accurate simulation of wind farm wakes under realistic atmospheric inflow conditions and complex terrain requires modeling a wide range of length and time scales. The computational domain can span several kilometers while requiring mesh resolutions of O(10^-6) to adequately resolve the boundary layer on the blade surface. Overset mesh methodology offers an attractive option for addressing this disparate range of length scales; it allows embedding body-conforming meshes around turbine geometries within nested wake-capturing meshes of varying resolutions necessary to accurately model the inflow turbulence and the resulting wake structures. Dynamic overset hole-cutting algorithms permit the relative mesh motion that allows this nested mesh structure to track unsteady inflow direction changes, turbine control changes (yaw and pitch), and wake propagation. An LES model with overset meshes for localized mesh refinement is used to analyze wind farm wakes and performance, and is compared with local mesh refinement using non-conformal (hanging-node) unstructured meshes. Turbine structures will be modeled using both actuator line approaches and fully-resolved structures to test the efficacy of overset methods for wind farm applications. Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations - the Office of Science and the National Nuclear Security Administration.

  1. Simulation of Pressure-swing Distillation for Separation of Ethyl Acetate-Ethanol-Water

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Zhou, Menglin; Wang, Yujie; Zhang, Xi; Wu, Gang

    2017-12-01

In light of the azeotropes of ethyl acetate-ethanol-water, a pressure-swing distillation process is proposed. The separation process is simulated with Aspen Plus, and the effects of theoretical stage number, reflux ratio, and feed stage on the pressure-swing distillation are optimized. The better process parameters are as follows: for the ethyl acetate refining tower, the pressure is 500.0 kPa, the theoretical stage number is 16, the reflux ratio is 0.6, and the feed stage is 5; for the crude ethanol tower, the pressure is 101.3 kPa, the theoretical stage number is 15, the reflux ratio is 0.3, and the feed stage is 4; for the ethanol tower, the pressure is 101.3 kPa, the theoretical stage number is 25, the reflux ratio is 1.2, and the feed stage is 10. The mass fraction of ethyl acetate in the bottom of the ethyl acetate refining tower reaches 0.9990, the mass fraction of ethanol in the top of the ethanol tower reaches 0.9017, the mass fraction of water in the bottom of the ethanol tower reaches 0.9622, and there is no ethyl acetate in the bottom of the ethanol tower. In laboratory tests, experimental results are in good agreement with the simulation results, which indicates that the separation of ethyl acetate-ethanol-water can be realized by the pressure-swing distillation separation process. Moreover, it has practical significance for industrial practice.

  2. The Aircraft Simulation Role in Improving Flight Safety Through Control Room Training

    NASA Technical Reports Server (NTRS)

    Shy, Karla S.; Hageman, Jacob J.; Le, Jeanette H.; Sitz, Joel (Technical Monitor)

    2002-01-01

    NASA Dryden Flight Research Center uses its six-degrees-of-freedom (6-DOF) fixed-base simulations for mission control room training to improve flight safety and operations. This concept is applied to numerous flight projects such as the F-18 High Alpha Research Vehicle (HARV), the F-15 Intelligent Flight Control System (IFCS), the X-38 Actuator Control Test (XACT), and X-43A (Hyper-X). The Dryden 6-DOF simulations are typically used through various stages of a project, from design to ground tests. The roles of these simulations have expanded to support control room training, reinforcing flight safety by building control room staff proficiency. Real-time telemetry, radar, and video data are generated from flight vehicle simulation models. These data are used to drive the control room displays. Nominal static values are used to complete information where appropriate. Audio communication is also an integral part of training sessions. This simulation capability is used to train control room personnel and flight crew for nominal missions and emergency situations. Such training sessions are also opportunities to refine flight cards and control room display pages, exercise emergency procedures, and practice control room setup for the day of flight. This paper describes this technology as it is used in the X-43A and F-15 IFCS and XACT projects.

  3. Impact of Variable-Resolution Meshes on Regional Climate Simulations

    NASA Astrophysics Data System (ADS)

    Fowler, L. D.; Skamarock, W. C.; Bruyere, C. L.

    2014-12-01

The Model for Prediction Across Scales (MPAS) is currently being used for seasonal-scale simulations on globally-uniform and regionally-refined meshes. Our ongoing research aims at analyzing simulations of tropical convective activity and tropical cyclone development during one hurricane season over the North Atlantic Ocean, contrasting statistics obtained with a variable-resolution mesh against those obtained with a quasi-uniform mesh. Analyses focus on the spatial distribution, frequency, and intensity of convective and grid-scale precipitation, and their relative contributions to the total precipitation as a function of the horizontal scale. Multi-month simulations initialized on May 1st 2005 using ERA-Interim re-analyses indicate that MPAS performs satisfactorily as a regional climate model for different combinations of horizontal resolutions and transitions between the coarse and refined meshes. Results highlight seamless transitions for convection, cloud microphysics, radiation, and land-surface processes between the quasi-uniform and locally-refined meshes, despite the fact that the physics parameterizations were not developed for variable-resolution meshes. Our goal of analyzing the performance of MPAS is twofold. First, we want to establish that MPAS can be successfully used as a regional climate model, bypassing the need for the nesting and nudging techniques at the edges of the computational domain used in traditional regional climate modeling. Second, we want to assess the performance of our convective and cloud microphysics parameterizations as the horizontal resolution varies between the lower-resolution quasi-uniform and higher-resolution locally-refined areas of the global domain.

  4. Impact of Variable-Resolution Meshes on Regional Climate Simulations

    NASA Astrophysics Data System (ADS)

    Fowler, L. D.; Skamarock, W. C.; Bruyere, C. L.

    2013-12-01

The Model for Prediction Across Scales (MPAS) is currently being used for seasonal-scale simulations on globally-uniform and regionally-refined meshes. Our ongoing research aims at analyzing simulations of tropical convective activity and tropical cyclone development during one hurricane season over the North Atlantic Ocean, contrasting statistics obtained with a variable-resolution mesh against those obtained with a quasi-uniform mesh. Analyses focus on the spatial distribution, frequency, and intensity of convective and grid-scale precipitation, and their relative contributions to the total precipitation as a function of the horizontal scale. Multi-month simulations initialized on May 1st 2005 using NCEP/NCAR re-analyses indicate that MPAS performs satisfactorily as a regional climate model for different combinations of horizontal resolutions and transitions between the coarse and refined meshes. Results highlight seamless transitions for convection, cloud microphysics, radiation, and land-surface processes between the quasi-uniform and locally-refined meshes, despite the fact that the physics parameterizations were not developed for variable-resolution meshes. Our goal of analyzing the performance of MPAS is twofold. First, we want to establish that MPAS can be successfully used as a regional climate model, bypassing the need for the nesting and nudging techniques at the edges of the computational domain used in traditional regional climate modeling. Second, we want to assess the performance of our convective and cloud microphysics parameterizations as the horizontal resolution varies between the lower-resolution quasi-uniform and higher-resolution locally-refined areas of the global domain.

  5. Evaluation of model predictions of the ecological effects of 4-nonylphenol -- before and after model refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanratty, M.P.; Liber, K.

    1994-12-31

The Littoral Ecosystem Risk Assessment Model (LERAM) is a bioenergetic ecosystem effects model. It links single-species toxicity data to a bioenergetic model of the trophic structure of an ecosystem in order to simulate community- and ecosystem-level effects of chemical stressors. LERAM was used in 1992 to simulate the ecological effects of diflubenzuron. When compared to the results from a littoral enclosure study, the model exaggerated the cascading of effects through the trophic levels of the littoral ecosystem. It was hypothesized that this could be corrected by making minor changes in the representation of the littoral food web. Two refinements of the model were therefore performed: (1) the plankton and macroinvertebrate model populations [e.g., predatory Copepoda, herbivorous Insecta, green phytoplankton, etc.] were changed to better represent the habitat and feeding preferences of the endemic taxa; and (2) the method for modeling the microbial degradation of detritus (and the resulting nutrient remineralization) was changed from simulating bacterial populations to simulating bacterial function. Model predictions of the ecological effects of 4-nonylphenol were made before and after these refinements. Both sets of predictions were then compared to the results from a littoral enclosure study of the ecological effects of 4-nonylphenol. The changes in the LERAM predictions were then used to determine the success of the refinements, to guide future research, and to further define LERAM's domain of application.

  6. Massive black hole and gas dynamics in galaxy nuclei mergers - I. Numerical implementation

    NASA Astrophysics Data System (ADS)

    Lupi, Alessandro; Haardt, Francesco; Dotti, Massimo

    2015-01-01

Numerical effects are known to plague adaptive mesh refinement (AMR) codes when treating massive particles, e.g. those representing massive black holes (MBHs). In an evolving background, they can experience strong, spurious perturbations and then follow unphysical orbits. We study by means of numerical simulations the dynamical evolution of a pair of MBHs in the rapidly and violently evolving gaseous and stellar background that follows a galaxy major merger. We confirm that spurious numerical effects alter the MBH orbits in AMR simulations, and show that the numerical issues are ultimately due to a drop in the spatial resolution during the simulation, which drastically reduces the accuracy of the gravitational force computation. We therefore propose a new refinement criterion suited for massive particles, able to solve for their orbits in highly dynamical backgrounds in a fast and precise way. The new refinement criterion forces the region around each massive particle to remain at the maximum allowed resolution, independently of the local gas density. These maximally resolved regions then follow the MBHs along their orbits, effectively avoiding all spurious effects caused by resolution changes. Our suite of high-resolution AMR hydrodynamic simulations, including different prescriptions for the sub-grid gas physics, shows that the new refinement implementation has the advantage of not altering the physical evolution of the MBHs, accounting for all the non-trivial physical processes taking place in violent dynamical scenarios, such as the final stages of a galaxy major merger.
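The particle-based refinement idea described above can be sketched simply: force cells within a fixed radius of any massive particle to the maximum refinement level, regardless of the local gas density. The function name, the radius parameter, and the density-based fallback rule below are illustrative assumptions, not the actual implementation.

```python
# Sketch of a refinement criterion for massive particles: the region around
# each particle is pinned at the maximum AMR level so the local resolution
# (and hence the gravitational force accuracy) never drops along its orbit.

import numpy as np

def refinement_level(cell_centers, particle_positions, radius,
                     gas_density, max_level=8):
    """Return a target AMR level per cell: density-based by default,
    overridden to max_level near each massive particle."""
    # default: crude density-based criterion (illustrative placeholder)
    levels = np.clip(np.log2(1.0 + gas_density).astype(int), 0, max_level)
    for p in particle_positions:
        near = np.linalg.norm(cell_centers - p, axis=1) <= radius
        levels[near] = max_level   # force maximum resolution around the MBH
    return levels
```

Because the override depends only on distance to the particle, the maximally resolved region tracks the particle through an arbitrarily evolving background, which is the property the abstract credits with removing the spurious orbital perturbations.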

  7. Experimental determination of heat transfer coefficients in roll bite and air cooling for computer simulations of 1100 MPa carbon steel rolling

    NASA Astrophysics Data System (ADS)

    Leinonen, Olli; Ilmola, Joonas; Seppälä, Oskari; Pohjonen, Aarne; Paavola, Jussi; Koskenniska, Sami; Larkiola, Jari

    2018-05-01

In the modeling of hot rolling pass schedules the heat transfer phenomena have to be known. Radiation to ambient and between the rolls and a steel slab, as well as heat transfer in contacts, must be considered to achieve an accurate temperature distribution and thereby accurate material behavior in simulations. Additional heat is generated by friction between the slab and the work roll and by plastic deformation. These phenomena must be taken into account when the effective heat transfer coefficient is determined from experimental data. In this paper we determine the effective heat transfer coefficient at the contact interface and the emissivity factor of the slab surface for 1100 MPa strength carbon steel for hot rolling simulations. Experimental pilot rolling tests were carried out, and slab temperatures were gathered right below the interface and at the mid-thickness of the slab. Emissivity factor tests were carried out in the same manner but without rolling. The experimental data are utilized to derive the contact heat transfer coefficient at the interface and the emissivity factor of the slab surface. The pilot rolling test is reproduced in FE analysis to further refine the heat transfer coefficient and emissivity factor. Material mechanical properties at rolling temperatures were determined by a Gleeble™ thermo-mechanical simulator and the IDS thermodynamic-kinetic-empirical software.

  8. Relative efficiency and accuracy of two Navier-Stokes codes for simulating attached transonic flow over wings

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Wornom, Stephen F.

    1991-01-01

    Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.

  9. Using Modeling and Simulation to Complement Testing for Increased Understanding of Weapon Subassembly Response.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Michael K.; Davidson, Megan

As part of Sandia’s nuclear deterrence mission, the B61-12 Life Extension Program (LEP) aims to modernize the aging weapon system. Modernization requires requalification, and Sandia is using high performance computing to perform advanced computational simulations to better understand, evaluate, and verify weapon system performance in conjunction with limited physical testing. The Nose Bomb Subassembly (NBSA) of the B61-12 is responsible for producing a fuzing signal upon ground impact. The fuzing signal is dependent upon electromechanical impact sensors producing valid electrical fuzing signals at impact. Computer-generated models were used to assess the timing between the impact sensor’s response to the deceleration of impact and damage to major components and system subassemblies. The modeling and simulation team worked alongside the physical test team to design a large-scale reverse ballistic test to not only assess system performance, but also to validate their computational models. The reverse ballistic test conducted at Sandia’s sled test facility sent a rocket sled with a representative target into a stationary B61-12 (NBSA) to characterize the nose crush and functional response of NBSA components. Data obtained from data recorders and high-speed photometrics were integrated with previously generated computer models in order to refine and validate the model’s ability to reliably simulate real-world effects. Large-scale tests are impractical to conduct for every single impact scenario. By creating reliable computer models, we can perform simulations that identify trends and produce estimates of outcomes over the entire range of required impact conditions. Sandia’s HPCs enable geometric resolution that was unachievable before, allowing for more fidelity and detail, and creating simulations that can provide insight to support evaluation of requirements and performance margins.
As computing resources continue to improve, researchers at Sandia hope to refine these simulations so that they provide increasingly credible analyses of the system response and performance over the full range of conditions.

  10. Aerodynamic Characteristics of a Refined Deep-step Planing-tail Flying-boat Hull with Various Forebody and Afterbody Shapes

    NASA Technical Reports Server (NTRS)

    Riebe, John M; Naeseth, Rodger L

    1952-01-01

    An investigation was made in the Langley 300-mph 7- by 10-foot tunnel to determine the aerodynamic characteristics of a refined deep-step planing-tail hull with various forebody and afterbody shapes and, for comparison, a streamline body simulating the fuselage of a modern transport airplane. The results of the tests indicated that the configurations incorporating a forebody with a length-beam ratio of 7 had lower minimum drag coefficients than the configurations incorporating a forebody with length-beam ratio of 5. The lowest minimum drag coefficients, which were considerably less than that of a conventional hull and slightly less than that of a streamline body, were obtained on the length-beam-ratio-7 forebody, alone and with round center boom. Drag coefficients and longitudinal- and lateral-stability parameters presented include the interference of a 21-percent-thick support wing.

  11. Determination of component volumes of lipid bilayers from simulations.

    PubMed Central

    Petrache, H I; Feller, S E; Nagle, J F

    1997-01-01

An efficient method for extracting volumetric data from simulations is developed. The method is illustrated using a recent atomic-level molecular dynamics simulation of an L-alpha phase 1,2-dipalmitoyl-sn-glycero-3-phosphocholine bilayer. Results from this simulation are obtained for the volumes of water (VW), lipid (VL), chain methylenes (V2), chain terminal methyls (V3), and lipid headgroups (VH), including separate volumes for carboxyl (Vcoo), glyceryl (Vgl), phosphoryl (VPO4), and choline (Vchol) groups. The method assumes only that each group has the same average volume regardless of its location in the bilayer, and this assumption is then tested with the current simulation. The volumes obtained agree well with the values of VW and VL that have been obtained directly from experiment, as well as with the volumes VH, V2, and V3 that require certain assumptions in addition to the experimental data. This method should help to support and refine some assumptions that are necessary when interpreting experimental data. PMID:9129826
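The constant-group-volume assumption makes the decomposition a linear problem: the volume of each slab along the bilayer normal is a linear combination of group counts, so the group volumes follow from a least-squares fit. The counts and volumes below are synthetic illustration data, not values from the paper.

```python
# Sketch of the volume-decomposition idea: slab volume = counts @ group_volumes,
# solved for the per-group volumes by least squares.

import numpy as np

# rows: slabs along the bilayer normal; columns: groups (water, CH2, CH3)
counts = np.array([
    [30.0,  0.0, 0.0],
    [10.0, 20.0, 2.0],
    [ 0.0, 40.0, 8.0],
])
true_vols = np.array([30.0, 27.0, 54.0])   # per-group volumes, illustrative
slab_vols = counts @ true_vols             # "measured" slab volumes

# recover the group volumes from the slab data
fitted, *_ = np.linalg.lstsq(counts, slab_vols, rcond=None)
# fitted matches true_vols exactly when the constant-volume assumption holds
```

With real simulation data the fit is overdetermined (many slabs, few groups), and the size of the residuals is one way to test the constant-volume assumption the abstract describes.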

  12. Development of the simulation system {open_quotes}IMPACT{close_quotes} for analysis of nuclear power plant severe accidents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naitoh, Masanori; Ujita, Hiroshi; Nagumo, Hiroichi

    1997-07-01

The Nuclear Power Engineering Corporation (NUPEC) has initiated a long-term program to develop the simulation system "IMPACT" for the analysis of hypothetical severe accidents in nuclear power plants. IMPACT employs advanced methods of physical modeling and numerical computation, and can simulate a wide spectrum of scenarios ranging from normal operation to hypothetical, beyond-design-basis-accident events. Designed as a large-scale system of interconnected, hierarchical modules, IMPACT's distinguishing features include mechanistic models based on first principles and high-speed simulation on parallel processing computers. The present plan is a ten-year program starting from 1993, consisting of an initial year of preparatory work followed by three technical phases: Phase 1 for development of a prototype system; Phase 2 for completion of the simulation system, incorporating new achievements from basic studies; and Phase 3 for refinement through extensive verification and validation against test results and available real plant data.

  13. Library reuse in a rapid development environment

    NASA Technical Reports Server (NTRS)

    Uhde, JO; Weed, Daniel; Gottlieb, Robert; Neal, Douglas

    1995-01-01

The Aeroscience and Flight Mechanics Division (AFMD) established a Rapid Development Laboratory (RDL) to investigate and improve new 'rapid development' software production processes and refine the use of commercial, off-the-shelf (COTS) tools. These tools and processes take an avionics design project from initial inception through high-fidelity, real-time, hardware-in-the-loop (HIL) testing. One central theme of a rapid development process is the use and integration of a variety of COTS tools. This paper discusses the RDL MATRIXx(R) libraries, as well as the techniques for managing and documenting these libraries. This paper also shows the methods used for building simulations with the Advanced Simulation Development System (ASDS) libraries, and provides metrics to illustrate the amount of reuse for five complete simulations. Combining ASDS libraries with MATRIXx(R) libraries is discussed.

  14. Comparison of Two Grid Refinement Approaches for High Resolution Regional Climate Modeling: MPAS vs WRF

    NASA Astrophysics Data System (ADS)

    Leung, L.; Hagos, S. M.; Rauscher, S.; Ringler, T.

    2012-12-01

This study compares two grid refinement approaches for high-resolution regional climate modeling: a global variable-resolution model and nesting. The global variable-resolution model, the Model for Prediction Across Scales (MPAS), and the limited-area model, the Weather Research and Forecasting (WRF) model, are compared in an idealized aqua-planet context with a focus on the spatial and temporal characteristics of tropical precipitation simulated by the models using the same physics package from the Community Atmosphere Model (CAM4). For MPAS, simulations have been performed with a quasi-uniform-resolution global domain at coarse (1 degree) and high (0.25 degree) resolution, and with a variable-resolution domain with a high-resolution region at 0.25 degree configured inside a coarse-resolution global domain at 1 degree resolution. Similarly, WRF has been configured to run on coarse (1 degree) and high (0.25 degree) resolution tropical channel domains as well as on a nested domain with a high-resolution region at 0.25 degree nested two-way inside the coarse-resolution (1 degree) tropical channel. The variable-resolution and nested simulations are compared against the high-resolution simulations that serve as virtual reality. Both MPAS and WRF simulate 20-day Kelvin waves propagating through the high-resolution domains fairly unaffected by the change in resolution. In addition, both models respond to increased resolution with enhanced precipitation. Grid refinement induces zonal asymmetry in precipitation (heating), accompanied by anomalous zonal Walker-like circulations and standing Rossby wave signals. However, there are important differences between the anomalous patterns in MPAS and WRF due to differences in the grid refinement approaches and the sensitivity of model physics to grid resolution. This study highlights the need for "scale-aware" parameterizations in variable-resolution and nested regional models.

  15. Obstetric team simulation program challenges.

    PubMed

    Bullough, A S; Wagner, S; Boland, T; Waters, T P; Kim, K; Adams, W

    2016-12-01

To describe the challenges associated with the development and assessment of an obstetric emergency team simulation program. The goal was to develop a hybrid, in-situ and high-fidelity obstetric emergency team simulation program that incorporated weekly simulation sessions on the labor and delivery unit and quarterly, education-protected sessions in the simulation center. All simulation sessions were video-recorded and reviewed. Setting: labor and delivery unit and simulation center. Participants: medical staff covering labor and delivery, anesthesiology and obstetric residents, and obstetric nurses. Assessments included an online multiple-choice knowledge questionnaire about the simulation scenarios, completed prior to the initial in-situ simulation session and repeated 3 months later; the Clinical Teamwork Scale, with inter-rater reliability; participant confidence surveys; and subjective participant satisfaction. A web-based curriculum comprising modules on communication skills, team challenges, and team obstetric emergency scenarios was also developed. Over 4 months, only 6 of a possible 14 in-situ sessions were carried out on the labor and delivery unit. Four high-fidelity sessions were performed in 2 quarterly, education-protected meetings in the simulation center. Information technology difficulties led to the completion of only 18 pre/post web-based multiple-choice questionnaires. These test results showed no significant improvement in raw score performance from pre-test to post-test (P=.27). During Clinical Teamwork Scale live and video assessment, trained raters and program faculty were in agreement only 31% and 28% of the time, respectively (Kendall's W=.31, P<.001 and W=.28, P<.001). Participant confidence surveys revealed that overall confidence significantly increased (P<.05) from pre-scenario briefing to after post-scenario debriefing.
Program feedback indicates a high level of participant satisfaction and improved confidence, yet further program refinement is required. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. 40 CFR 80.1630 - Sampling and testing requirements for refiners, gasoline importers and producers and importers of...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... refiners, gasoline importers and producers and importers of certified ethanol denaturant. 80.1630 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur § 80.1630 Sampling and testing requirements for refiners, gasoline importers and producers and importers of certified ethanol denaturant. (a) Sample and...

  17. Unstructured Cartesian refinement with sharp interface immersed boundary method for 3D unsteady incompressible flows

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Chawdhary, Saurabh; Sotiropoulos, Fotis

    2016-11-01

A novel numerical method is developed for solving the 3D, unsteady, incompressible Navier-Stokes equations on locally refined fully unstructured Cartesian grids in domains with arbitrarily complex immersed boundaries. Owing to the use of the fractional step method on an unstructured Cartesian hybrid staggered/non-staggered grid layout, flux mismatch and pressure discontinuity issues are avoided and the divergence-free constraint is inherently satisfied to machine zero. Auxiliary/hanging nodes are used to facilitate the discretization of the governing equations. The second-order accuracy of the solver is ensured by using multi-dimensional Lagrange interpolation operators and appropriate differencing schemes at the interface of regions with different levels of refinement. The sharp interface immersed boundary method is augmented with local near-boundary refinement to handle arbitrarily complex boundaries. The discrete momentum equation is solved with a matrix-free Newton-Krylov method, and a Krylov-subspace method is employed to solve the Poisson equation. The second-order accuracy of the proposed method on unstructured Cartesian grids is demonstrated by solving the Poisson equation with a known analytical solution. A number of three-dimensional laminar flow simulations of increasing complexity illustrate the ability of the method to handle flows across a range of Reynolds numbers and flow regimes. Laminar steady and unsteady flows past a sphere and the oblique vortex shedding from a circular cylinder mounted between two end walls demonstrate the accuracy, the efficiency, and the smooth transition of scales and coherent structures across refinement levels. Large-eddy simulation (LES) past a miniature wind turbine rotor, parameterized using the actuator line approach, indicates the ability of the fully unstructured solver to simulate complex turbulent flows.
Finally, a geometry resolving LES of turbulent flow past a complete hydrokinetic turbine illustrates the potential of the method to simulate turbulent flows past geometrically complex bodies on locally refined meshes. In all the cases, the results are found to be in very good agreement with published data and savings in computational resources are achieved.
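The hanging-node treatment described above lends itself to a short illustration. The sketch below is hypothetical code (not taken from the solver): it shows why a quadratic (3-point) Lagrange stencil at a coarse/fine interface preserves second-order accuracy, namely because it reproduces any quadratic field exactly at the hanging node.

```python
# Sketch of the hanging-node idea: values at auxiliary ("hanging") nodes on
# a refinement interface are filled by Lagrange interpolation from coarse
# neighbors. A 3-point Lagrange stencil reproduces polynomials up to
# degree 2 exactly, which is what keeps the scheme second-order accurate
# across refinement levels.

def lagrange_interp(xs, fs, x):
    """Evaluate the Lagrange interpolant through points (xs, fs) at x."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        L = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                L *= (x - xj) / (xi - xj)
        total += fi * L
    return total

xs = [0.0, 1.0, 2.0]                        # coarse-node coordinates
f = lambda x: 3.0 - 2.0 * x + 0.5 * x**2    # an arbitrary quadratic field
fs = [f(x) for x in xs]
hang = lagrange_interp(xs, fs, 0.5)         # hanging node between coarse nodes
print(hang - f(0.5))                        # quadratic reproduced exactly
```

Any smooth field then incurs only an O(h^3) interpolation error at the hanging node, consistent with a second-order discretization.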

  18. The use of simulation and multiple environmental tracers to quantify groundwater flow in a shallow aquifer

    USGS Publications Warehouse

    Reilly, Thomas E.; Plummer, Niel; Phillips, Patrick J.; Busenberg, Eurybiades

    1994-01-01

    Measurements of the concentrations of chlorofluorocarbons (CFCs), tritium, and other environmental tracers can be used to calculate recharge ages of shallow groundwater and estimate rates of groundwater movement. Numerical simulation also provides quantitative estimates of flow rates, flow paths, and mixing properties of the groundwater system. The environmental tracer techniques and the hydraulic analyses each contribute to the understanding and quantification of the flow of shallow groundwater. However, when combined, the two methods provide feedback that improves the quantification of the flow system and provides insight into the processes that are the most uncertain. A case study near Locust Grove, Maryland, is used to investigate the utility of combining groundwater age dating, based on CFCs and tritium, and hydraulic analyses using numerical simulation techniques. The results of the feedback between an advective transport model and the estimates of groundwater ages determined by the CFCs improve a quantitative description of the system by refining the system conceptualization and estimating system parameters. The plausible system developed with this feedback between the advective flow model and the CFC ages is further tested using a solute transport simulation to reproduce the observed tritium distribution in the groundwater. The solute transport simulation corroborates the plausible system developed and also indicates that, for the system under investigation with the data obtained from 0.9-m-long (3-foot-long) well screens, the hydrodynamic dispersion is negligible. Together the two methods enable a coherent explanation of the flow paths and rates of movement while indicating weaknesses in the understanding of the system that will require future data collection and conceptual refinement of the groundwater system.
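The CFC age-dating step can be sketched in a few lines. All numbers below are hypothetical placeholders (not the Locust Grove data or a real atmospheric record): a dissolved concentration is converted to an equivalent atmospheric mixing ratio with an assumed solubility factor, and the monotonic portion of the atmospheric input history is inverted to yield an apparent recharge year.

```python
# Sketch of CFC-based apparent-age dating with made-up numbers: convert a
# dissolved CFC-12 concentration to an equivalent atmospheric mixing ratio
# via an assumed solubility (Henry's law) factor, then invert a monotonic
# atmospheric input history by linear interpolation.

# Hypothetical atmospheric CFC-12 history (year -> mixing ratio, pptv).
atm_history = {1950: 10.0, 1960: 40.0, 1970: 120.0, 1980: 280.0, 1990: 480.0}

def apparent_recharge_year(c_water_pmol_kg, solubility_pmol_kg_per_pptv):
    """Invert the (monotonic) atmospheric history by linear interpolation."""
    c_atm = c_water_pmol_kg / solubility_pmol_kg_per_pptv  # pptv equivalent
    years = sorted(atm_history)
    for y0, y1 in zip(years, years[1:]):
        a0, a1 = atm_history[y0], atm_history[y1]
        if a0 <= c_atm <= a1:
            return y0 + (y1 - y0) * (c_atm - a0) / (a1 - a0)
    raise ValueError("concentration outside tabulated history")

# Example: equivalent atmospheric ratio of 200 pptv, sampled in 1990.
year = apparent_recharge_year(c_water_pmol_kg=1.0,
                              solubility_pmol_kg_per_pptv=0.005)
age = 1990 - year   # apparent recharge age in years
```

In practice the solubility factor depends on recharge temperature and pressure, and the apparent age is what the paper compares against advective travel times from the flow model.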

  19. Resolving the Small-Scale Structure of the Circumgalactic Medium in Cosmological Simulations

    NASA Astrophysics Data System (ADS)

    Corlies, Lauren

    2017-08-01

We propose to resolve the circumgalactic medium (CGM) of L* galaxies down to 100 Msun (250 pc) in a full cosmological simulation to examine how mixing and cooling shape the physical nature of this gas on the scales expected from observations. COS has provided the best characterization of the low-z CGM to date, revealing the extent and amount of low- and high-ions and hinting at the kinematic relations between them. Yet cosmological galaxy simulations that can reproduce the stellar properties of galaxies have all struggled to reproduce these results even qualitatively. However, while the COS data imply that the low-ion absorption occurs on sub-kpc scales, such scales cannot be traced by simulations with CGM resolutions of 1-5 kpc. Our proposed simulations will, for the first time, reach the resolution required to resolve these structures in the outer halo of L* galaxies. Using the adaptive mesh refinement code enzo, we will experiment with the size, shape, and resolution of an enforced high refinement region extending from the disk into the CGM to identify the best configuration for probing the flows of gas throughout the CGM. Our test case has found that increasing the resolution alone can have dramatic consequences for the density, temperature, and kinematics along a line of sight. Coupling this technique with an independent feedback study already underway will help disentangle the roles of global and small-scale physics in setting the physical state of the CGM. Finally, we will use the MISTY pipeline to generate realistic mock spectra for direct comparison with COS data, which will be made available through MAST.

  20. Exploring the impacts of physics and resolution on aqua-planet simulations from a nonhydrostatic global variable-resolution modeling framework: IMPACTS OF PHYSICS AND RESOLUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Chun; Leung, L. Ruby; Park, Sang-Hun

Advances in computing resources are gradually moving regional and global numerical forecasting simulations towards sub-10 km resolution, but global high resolution climate simulations remain a challenge. The non-hydrostatic Model for Prediction Across Scales (MPAS) provides a global framework to achieve very high resolution using regional mesh refinement. Previous studies using the hydrostatic version of MPAS (H-MPAS) with the physics parameterizations of Community Atmosphere Model version 4 (CAM4) found notable resolution dependent behaviors. This study revisits the resolution sensitivity using the non-hydrostatic version of MPAS (NH-MPAS) with both CAM4 and CAM5 physics. A series of aqua-planet simulations at global quasi-uniform resolutions ranging from 240 km to 30 km and global variable resolution simulations with a regional mesh refinement of 30 km resolution over the tropics are analyzed, with a primary focus on the distinct characteristics of NH-MPAS in simulating precipitation, clouds, and large-scale circulation features compared to H-MPAS-CAM4. The resolution sensitivity of total precipitation and column integrated moisture in NH-MPAS is smaller than that in H-MPAS-CAM4. This contributes importantly to the reduced resolution sensitivity of large-scale circulation features such as the inter-tropical convergence zone and Hadley circulation in NH-MPAS compared to H-MPAS. In addition, NH-MPAS shows almost no resolution sensitivity in the simulated westerly jet, in contrast to the obvious poleward shift in H-MPAS with increasing resolution, which is partly explained by differences in the hyperdiffusion coefficients used in the two models that influence wave activity.
With the reduced resolution sensitivity, simulations in the refined region of the NH-MPAS global variable resolution configuration exhibit zonally symmetric features that are more comparable to the quasi-uniform high-resolution simulations than those from H-MPAS, which displays zonal asymmetry in simulations inside the refined region. Overall, NH-MPAS with CAM5 physics shows less resolution sensitivity compared to CAM4. These results provide a reference for future studies to further explore the use of NH-MPAS for high-resolution climate simulations in idealized and realistic configurations.

  1. Mesh refinement in a two-dimensional large eddy simulation of a forced shear layer

    NASA Technical Reports Server (NTRS)

    Claus, R. W.; Huang, P. G.; Macinnes, J. M.

    1989-01-01

    A series of large eddy simulations are made of a forced shear layer and compared with experimental data. Several mesh densities were examined to separate the effect of numerical inaccuracy from modeling deficiencies. The turbulence model that was used to represent small scale, 3-D motions correctly predicted some gross features of the flow field, but appears to be structurally incorrect. The main effect of mesh refinement was to act as a filter on the scale of vortices that developed from the inflow boundary conditions.

  2. Gordon Fullerton in PCA (MD-11) Simulator

    NASA Technical Reports Server (NTRS)

    1998-01-01

    NASA research pilot Gordon Fullerton 'flying' in the MD-11 simulator during the Propulsion Controlled Aircraft (PCA) project. This investigation grew out of the crash of a DC-10 airliner on July 19, 1989, following an explosion in the rear engine which caused the loss of all manual flight controls. The flight crew attempted to control the airliner using only the thrust from the two remaining engines. Although the DC-10 crashed during the landing attempt, 184 of the 296 passengers and crew aboard survived. The PCA effort at the Dryden Flight Research Center grew out of the crash, and attempted to develop a means to successfully land an aircraft using only engine thrust. After more than five years of work, on August 29, 1995, Gordon Fullerton made the first PCA touchdown aboard an MD-11 airliner (a later version of the DC-10). The concept was further refined over the years that followed this first landing. Simulators were essential ingredients of the PCA development process. The feasibility of the concept was first tested with an F-15 simulator, then the results of actual flight tests in an F-15 were incorporated back into the simulator. Additional simulations were run on the Boeing 720 airliner simulator used in the Controlled Impact Demonstration project. After the MD-11 test landings, Boeing 747 and 757 simulators tested a wide range of possible situations. Simulations even helped develop a method of landing an airliner if it lost its complete hydraulic system as well as a wing engine, by transferring fuel to shift the center of gravity toward the working engine. The most extreme procedure was undertaken in a 747 simulator. The aircraft simulated the loss of the hydraulic system at 35,000 feet and rolled upside down. Then, the PCA mode was engaged, the airliner righted itself, leveled its wings, and made an approach nearly identical to that of a normal auto landing.

  3. Toward Automatic Verification of Goal-Oriented Flow Simulations

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2014-01-01

    We demonstrate the power of adaptive mesh refinement with adjoint-based error estimates in verification of simulations governed by the steady Euler equations. The flow equations are discretized using a finite volume scheme on a Cartesian mesh with cut cells at the wall boundaries. The discretization error in selected simulation outputs is estimated using the method of adjoint-weighted residuals. Practical aspects of the implementation are emphasized, particularly in the formulation of the refinement criterion and the mesh adaptation strategy. Following a thorough code verification example, we demonstrate simulation verification of two- and three-dimensional problems. These involve an airfoil performance database, a pressure signature of a body in supersonic flow and a launch abort with strong jet interactions. The results show reliable estimates and automatic control of discretization error in all simulations at an affordable computational cost. Moreover, the approach remains effective even when theoretical assumptions, e.g., steady-state and solution smoothness, are relaxed.
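The method of adjoint-weighted residuals can be illustrated on a small linear system (a generic sketch, not the Cartesian cut-cell discretization of the paper): for A u = f with scalar output J = gᵀu, solving the adjoint system Aᵀψ = g turns the residual of any approximate solution into an estimate of the output error, and for a linear problem the estimate is exact.

```python
import numpy as np

# Adjoint-weighted residual output-error estimate on a toy linear system.
# For A u = f with output J = g.u, the adjoint solution psi (A^T psi = g)
# satisfies J(u) - J(u_H) = psi.(f - A u_H) for any approximate u_H,
# because g.(u - u_H) = (A^T psi).(u - u_H) = psi.A(u - u_H).

rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test matrix
f = rng.normal(size=n)
g = rng.normal(size=n)                        # output functional weights

u = np.linalg.solve(A, f)                     # "fine"/exact solution
u_H = u + 0.1 * rng.normal(size=n)            # stand-in coarse solution

psi = np.linalg.solve(A.T, g)                 # adjoint solution
residual = f - A @ u_H
delta_J_est = psi @ residual                  # adjoint-weighted residual

true_err = g @ u - g @ u_H
print(delta_J_est, true_err)                  # agree to round-off
```

For nonlinear equations such as the Euler system the identity holds only to leading order, which is why the paper pairs the estimate with adaptive refinement rather than using it as a one-shot correction.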

  4. Scramjet test flow reconstruction for a large-scale expansion tube, Part 2: axisymmetric CFD analysis

    NASA Astrophysics Data System (ADS)

    Gildfind, D. E.; Jacobs, P. A.; Morgan, R. G.; Chan, W. Y. K.; Gollan, R. J.

    2018-07-01

    This paper presents the second part of a study aiming to accurately characterise a Mach 10 scramjet test flow generated using a large free-piston-driven expansion tube. Part 1 described the experimental set-up, the quasi-one-dimensional simulation of the full facility, and the hybrid analysis technique used to compute the nozzle exit test flow properties. The second stage of the hybrid analysis applies the computed 1-D shock tube flow history as an inflow to a high-fidelity two-dimensional-axisymmetric analysis of the acceleration tube. The acceleration tube exit flow history is then applied as an inflow to a further refined axisymmetric nozzle model, providing the final nozzle exit test flow properties and thereby completing the analysis. This paper presents the results of the axisymmetric analyses. These simulations are shown to closely reproduce experimentally measured shock speeds and acceleration tube static pressure histories, as well as nozzle centreline static and impact pressure histories. The hybrid scheme less successfully predicts the diameter of the core test flow; however, this property is readily measured through experimental pitot surveys. In combination, the full test flow history can be accurately determined.

  5. Scramjet test flow reconstruction for a large-scale expansion tube, Part 2: axisymmetric CFD analysis

    NASA Astrophysics Data System (ADS)

    Gildfind, D. E.; Jacobs, P. A.; Morgan, R. G.; Chan, W. Y. K.; Gollan, R. J.

    2017-11-01

    This paper presents the second part of a study aiming to accurately characterise a Mach 10 scramjet test flow generated using a large free-piston-driven expansion tube. Part 1 described the experimental set-up, the quasi-one-dimensional simulation of the full facility, and the hybrid analysis technique used to compute the nozzle exit test flow properties. The second stage of the hybrid analysis applies the computed 1-D shock tube flow history as an inflow to a high-fidelity two-dimensional-axisymmetric analysis of the acceleration tube. The acceleration tube exit flow history is then applied as an inflow to a further refined axisymmetric nozzle model, providing the final nozzle exit test flow properties and thereby completing the analysis. This paper presents the results of the axisymmetric analyses. These simulations are shown to closely reproduce experimentally measured shock speeds and acceleration tube static pressure histories, as well as nozzle centreline static and impact pressure histories. The hybrid scheme less successfully predicts the diameter of the core test flow; however, this property is readily measured through experimental pitot surveys. In combination, the full test flow history can be accurately determined.

  6. The Rene 150 directionally solidified superalloy turbine blades, volume 1

    NASA Technical Reports Server (NTRS)

    Deboer, G. J.

    1981-01-01

    Turbine blade design and analysis, preliminary Rene 150 system refinement, coating adaptation and evaluation, final Rene 150 system refinement, component-test blade production and evaluation, engine-test blade production, and engine test are discussed.

  7. Constrained evolution in numerical relativity

    NASA Astrophysics Data System (ADS)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.
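The idea of constrained evolution, periodically re-solving the constraints rather than merely watching their violation grow, can be illustrated with a toy ODE (nothing Einstein-specific; the projection below stands in for the elliptic constraint solve):

```python
import math

# Toy analogue of constrained vs unconstrained evolution: explicit-Euler
# integration of circular motion x' = -y, y' = x drifts off the constraint
# surface x^2 + y^2 = 1 (each step multiplies the norm by sqrt(1 + dt^2)).
# Periodically "solving the constraint" -- here, projecting back onto the
# unit circle -- keeps the violation bounded at negligible extra cost.

def evolve(steps, dt, project_every=None):
    x, y = 1.0, 0.0
    for i in range(1, steps + 1):
        x, y = x - dt * y, y + dt * x          # unconstrained Euler step
        if project_every and i % project_every == 0:
            r = math.hypot(x, y)               # constraint "solve"
            x, y = x / r, y / r
    return abs(x * x + y * y - 1.0)            # final constraint violation

free = evolve(1000, 0.01)                      # unconstrained evolution
constrained = evolve(1000, 0.01, project_every=10)
print(free, constrained)                       # projection wins decisively
```

The analogy is loose (the Einstein constraints require solving elliptic PDEs, not a normalization), but it captures the trade-off the dissertation studies: a modest periodic cost in exchange for bounded constraint growth.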

  8. VMS-ROT: A New Module of the Virtual Multifrequency Spectrometer for Simulation, Interpretation, and Fitting of Rotational Spectra

    PubMed Central

    2017-01-01

    The Virtual Multifrequency Spectrometer (VMS) is a tool that aims at integrating a wide range of computational and experimental spectroscopic techniques with the final goal of disclosing the static and dynamic physical–chemical properties “hidden” in molecular spectra. VMS is composed of two parts, namely, VMS-Comp, which provides access to the latest developments in the field of computational spectroscopy, and VMS-Draw, which provides a powerful graphical user interface (GUI) for an intuitive interpretation of theoretical outcomes and a direct comparison to experiment. In the present work, we introduce VMS-ROT, a new module of VMS that has been specifically designed to deal with rotational spectroscopy. This module offers an integrated environment for the analysis of rotational spectra: from the assignment of spectral transitions to the refinement of spectroscopic parameters and the simulation of the spectrum. While bridging theoretical and experimental rotational spectroscopy, VMS-ROT is strongly integrated with quantum-chemical calculations, and it is composed of four independent, yet interacting units: (1) the computational engine for the calculation of the spectroscopic parameters that are employed as a starting point for guiding experiments and for the spectral interpretation, (2) the fitting-prediction engine for the refinement of the molecular parameters on the basis of the assigned transitions and the prediction of the rotational spectrum of the target molecule, (3) the GUI module that offers a powerful set of tools for a vis-à-vis comparison between experimental and simulated spectra, and (4) the new assignment tool for the assignment of experimental transitions in terms of quantum numbers upon comparison with the simulated ones. The implementation and the main features of VMS-ROT are presented, and the software is validated by means of selected test cases ranging from isolated molecules of different sizes to molecular complexes. 
VMS-ROT therefore offers an integrated environment for the analysis of the rotational spectra, with the innovative perspective of an intimate connection to quantum-chemical calculations that can be exploited at different levels of refinement, as an invaluable support and complement for experimental studies. PMID:28742339

  9. A dynamic structural model of expanded RNA CAG repeats: A refined X-ray structure and computational investigations using molecular dynamics and umbrella sampling simulations

    PubMed Central

    Yildirim, Ilyas; Park, Hajeung; Disney, Matthew D.; Schatz, George C.

    2013-01-01

One class of functionally important RNA is repeating transcripts that cause disease through various mechanisms. For example, expanded r(CAG) repeats can cause Huntington's and other diseases through translation of toxic proteins. Herein, a crystal structure of r[5ʹUUGGGC(CAG)3GUCC]2, a model of CAG-expanded transcripts, refined to 1.65 Å resolution is disclosed that shows both anti-anti and syn-anti orientations for 1×1 nucleotide AA internal loops. Molecular dynamics (MD) simulations using the Amber force field in explicit solvent were run for over 500 ns on the model systems r(5ʹGCGCAGCGC)2 (MS1) and r(5ʹCCGCAGCGG)2 (MS2). In these MD simulations, both anti-anti and syn-anti AA base pairs appear to be stable. While anti-anti AA base pairs were dynamic and sampled multiple anti-anti conformations, no syn-anti↔anti-anti transformations were observed. Umbrella sampling simulations were run on MS2, and a 2D free energy surface was created to extract transformation pathways. In addition, an explicit solvent MD simulation of over 800 ns was run on r[5ʹGGGC(CAG)3GUCC]2, which closely represents the refined crystal structure. One of the terminal AA base pairs (syn-anti conformation) transformed to the anti-anti conformation, following the pathway predicted by the umbrella sampling simulations. Further analysis showed a binding pocket near AA base pairs in syn-anti conformations. The computational results, combined with the refined crystal structure, show that the global minimum conformation of 1×1 nucleotide AA internal loops in r(CAG) repeats is anti-anti, but that syn-anti can be adopted depending on the environment. These results are important for understanding RNA dynamics-function relationships and for developing small molecules that target RNA dynamic ensembles. PMID:23441937
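The unbiasing step behind umbrella sampling can be demonstrated deterministically (a generic sketch, unrelated to the Amber simulations above): adding a known harmonic restraint w(x) to a potential U(x) changes the sampled density to p_b ∝ exp(-(U+w)), and U is recovered, up to an additive constant, as -ln p_b - w (in units of kT).

```python
import numpy as np

# Deterministic illustration of the umbrella-sampling unbiasing identity
# on a 1D double well. With a harmonic restraint w(x) added to U(x), the
# biased density p_b ~ exp(-(U + w)) still determines U up to a constant:
# U(x) = -ln p_b(x) - w(x) + C   (units of kT).
# In a real calculation p_b comes from restrained-MD histograms and
# multiple windows are stitched together (e.g. with WHAM).

x = np.linspace(-2.0, 2.0, 2001)
U = (x**2 - 1.0) ** 2                  # double-well potential, minima at +/-1
w = 0.5 * 2.0 * (x - 0.0) ** 2         # harmonic umbrella centered on barrier

p_b = np.exp(-(U + w))
p_b /= p_b.sum()                       # biased density (discrete normalization)

U_rec = -np.log(p_b) - w               # unbias: recover U up to a constant
U_rec -= U_rec.min()                   # fix the additive constant
print(np.max(np.abs(U_rec - U)))       # ~0: identity holds exactly
```

Placing the restraint on the barrier is the point of the method: the biased ensemble samples the otherwise rarely visited transition region, which is how the paper's 2D free energy surface and transformation pathways were obtained.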

  10. Global magnetosphere simulations using constrained-transport Hall-MHD with CWENO reconstruction

    NASA Astrophysics Data System (ADS)

    Lin, L.; Germaschewski, K.; Maynard, K. M.; Abbott, S.; Bhattacharjee, A.; Raeder, J.

    2013-12-01

    We present a new CWENO (Centrally-Weighted Essentially Non-Oscillatory) reconstruction based MHD solver for the OpenGGCM global magnetosphere code. The solver was built using libMRC, a library for creating efficient parallel PDE solvers on structured grids. The use of libMRC gives us access to its core functionality of providing an automated code generation framework which takes a user provided PDE right hand side in symbolic form to generate an efficient, computer architecture specific, parallel code. libMRC also supports block-structured adaptive mesh refinement and implicit-time stepping through integration with the PETSc library. We validate the new CWENO Hall-MHD solver against existing solvers both in standard test problems as well as in global magnetosphere simulations.

  11. Structure Damage Simulations Accounting for Inertial Effects and Impact and Optimization of Grid-Stiffened Non-Circular Shells

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Jaunky, Navin

    1999-01-01

The goal of this research project is to develop a modelling and analysis strategy for the penetration of aluminium plates impacted by titanium impactors. Finite element analysis is used to study this penetration problem in order to assess the effect of such uncontained engine debris impacts on aircraft-like skin panels. LS-DYNA3D is used in the simulations to model the impactor, the test fixture frame, and the target barrier plate. The effects of mesh refinement, contact modeling, and impactor initial velocity and orientation were studied. The research project also includes the development of a design tool for the optimum design of grid-stiffened non-circular shells or panels subjected to buckling.

  12. Energy Systems Test Area (ESTA) Pyrotechnic Operations: User Test Planning Guide

    NASA Technical Reports Server (NTRS)

    Hacker, Scott

    2012-01-01

The Johnson Space Center (JSC) has created and refined innovative analysis, design, development, and testing techniques that have been demonstrated in all phases of spaceflight. JSC is uniquely positioned to apply this expertise to components, systems, and vehicles that operate in remote or harsh environments. We offer a highly skilled workforce, unique facilities, flexible project management, and a proven management system. The purpose of this guide is to acquaint Test Requesters with the requirements for test, analysis, or simulation services at JSC. The guide includes facility services and capabilities, inputs required by the facility, major milestones, a roadmap of the facility's process, and roles and responsibilities of the facility and the requester. Samples of deliverables, facility interfaces, and inputs necessary to define the cost and schedule are included as appendices to the guide.

  13. A model for recovery of scrap monolithic uranium molybdenum fuel by electrorefining

    NASA Astrophysics Data System (ADS)

    Van Kleeck, Melissa A.

The goal of the Reduced Enrichment for Research and Test Reactors (RERTR) program is to reduce enrichment at research and test reactors, thereby decreasing proliferation risk at these facilities. A new fuel to accomplish this goal is being manufactured experimentally at the Y-12 National Security Complex. This new fuel will require its own waste management procedure, namely for the recovery of scrap from its manufacture. The new fuel is a monolithic uranium molybdenum alloy clad in zirconium. Feasibility tests were conducted in the Planar Electrode Electrorefiner using scrap U-8Mo fuel alloy. These tests proved that a uranium product free of molybdenum could be recovered from this scrap fuel by electrorefining. Tests were also conducted using U-10Mo Zr-clad fuel, which confirmed that product could be recovered from a clad version of this scrap fuel at an engineering scale, though analytical results are pending for the behavior of Zr in the electrorefiner. A model was constructed for the simulation of electrorefining the scrap material produced in the manufacture of this fuel. The model was implemented on two platforms, Microsoft Excel and MATLAB. Correlations used in the model, describing area-specific resistance behavior at each electrode, were developed experimentally. Experiments validating the model were conducted using scrap U-10Mo Zr-clad fuel in the Planar Electrode Electrorefiner. The results of model simulations on both platforms were compared to experimental results for the same fuel, salt, and electrorefiner compositions and dimensions for two trials. In general, the model demonstrated behavior similar to the experimental data, but additional refinements are needed to improve its accuracy. These refinements consist chiefly of replacing the approximations made for the electrode surface areas with a more accurate function of the anode and cathode areas based on charge passed.

  14. Simulation of an Isolated Tiltrotor in Hover with an Unstructured Overset-Grid RANS Solver

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, Elizabeth M.; Biedron, Robert T.

    2009-01-01

An unstructured overset-grid Reynolds Averaged Navier-Stokes (RANS) solver, FUN3D, is used to simulate an isolated tiltrotor in hover. An overview of the computational method is presented as well as the details of the overset-grid systems. Steady-state computations within a noninertial reference frame define the performance trends of the rotor across a range of the experimental collective settings. Results are presented to show the effects of off-body grid refinement and blade grid refinement. The computed performance and blade loading trends show good agreement with experimental results and previously published structured overset-grid computations. Off-body flow features indicate a significant improvement in the resolution of the first perpendicular blade vortex interaction with background grid refinement across the collective range. Considering experimental data uncertainty and effects of transition, the prediction of figure of merit on the baseline and refined grids is reasonable at the higher collective range, within 3 percent of the measured values. At the lower collective settings, the computed figure of merit is approximately 6 percent lower than the experimental data. A comparison of steady and unsteady results shows that, with temporal refinement, the dynamic results closely match the steady-state noninertial results, which gives confidence in the accuracy of the dynamic overset-grid approach.

  16. Non-Linear Harmonic flow simulations of a High-Head Francis Turbine test case

    NASA Astrophysics Data System (ADS)

    Lestriez, R.; Amet, E.; Tartinville, B.; Hirsch, C.

    2016-11-01

    This work investigates the use of the non-linear harmonic (NLH) method for a high-head Francis turbine, the Francis99 workshop test case. The NLH method relies on a Fourier decomposition of the unsteady flow components into harmonics of the Blade Passing Frequencies (BPF), which are the fundamentals of the periodic disturbances generated by the adjacent blade rows. The unsteady flow solution is obtained by marching in pseudo-time to a steady-state solution of the transport equations associated with the time-mean, the BPFs, and their harmonics. Thanks to this transposition into the frequency domain, meshing only one blade channel is sufficient, as for a steady flow simulation. Notable savings in computing cost and engineering time can therefore be obtained compared to a classical time-marching approach using sliding-grid techniques. The method has been applied to three operating points of the Francis99 workshop high-head Francis turbine. Steady and NLH flow simulations have been carried out for these configurations. The impact of grid size and near-wall refinement is analysed at all operating points for the steady simulations and at the Best Efficiency Point (BEP) for the NLH simulations. NLH results for a selected grid size are then compared at the three operating points, reproducing the tendencies observed in the experiment.
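The frequency-domain representation behind the NLH method can be sketched in a few lines: an unsteady periodic quantity is stored as a time-mean plus complex amplitudes at the BPF harmonics, and the time history is recovered as u(t) = u̅ + Σₖ Re(ûₖ e^{ikωt}). The snippet below demonstrates this round trip on a synthetic two-harmonic signal; it illustrates only the decomposition, not the NLH solver itself.

```python
import numpy as np

def harmonic_decompose(u, n_harmonics):
    """Return (time-mean, complex one-sided amplitudes) of a periodic signal."""
    n = len(u)
    spectrum = np.fft.rfft(u) / n
    mean = spectrum[0].real
    amps = 2.0 * spectrum[1:n_harmonics + 1]
    return mean, amps

def harmonic_reconstruct(mean, amps, t, period):
    """Evaluate u(t) = mean + sum_k Re(amp_k * exp(i k omega t))."""
    omega = 2.0 * np.pi / period
    u = np.full_like(t, mean)
    for k, a in enumerate(amps, start=1):
        u += (a * np.exp(1j * k * omega * t)).real
    return u

# Synthetic blade-passing signal: mean plus two harmonics
period = 1.0
t = np.linspace(0.0, period, 128, endpoint=False)
signal = 3.0 + 0.5 * np.cos(2 * np.pi * t) + 0.2 * np.sin(4 * np.pi * t)

mean, amps = harmonic_decompose(signal, n_harmonics=2)
approx = harmonic_reconstruct(mean, amps, t, period)
```

Because only the mean and a handful of amplitudes are stored per blade channel, the memory and runtime scale with the number of retained harmonics rather than with the number of physical time steps.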

  17. Effects of operator splitting and low Mach-number correction in turbulent mixing transition simulations

    DOE PAGES

    Grinstein, F. F.; Saenz, J. A.; Dolence, J. C.; ...

    2018-06-07

    In this paper, transition and turbulence decay with the Taylor–Green vortex have been effectively used to demonstrate emulation of high-Reynolds-number (Re) physical dissipation through the numerical convective effects of various non-oscillatory finite-volume algorithms for implicit large eddy simulation (ILES), e.g. using the Godunov-based Eulerian adaptive mesh refinement code xRAGE. Inverse-chevron shock tube experiment simulations have also been used to assess xRAGE-based ILES for shock-driven turbulent mixing, compared with available simulation and laboratory data. These previous assessments are extended to evaluate new directionally-unsplit high-order algorithms in xRAGE, including a correction to address the well-known issue of excessive numerical diffusion of shock-capturing (e.g., Godunov-type) schemes at low Mach numbers. The unsplit options for hydrodynamics in xRAGE are discussed in detail, followed by fundamental tests with representative shock problems. Basic issues of transition to turbulence and turbulent mixing are discussed, and results of simulations of high-Re turbulent flow and mixing in canonical test cases are reported. Finally, compared to the directional-split cases, and for each grid resolution considered, unsplit results exhibit transition to turbulence with much higher effective Re, and significantly more so with the low Mach number correction.

  18. An adaptive embedded mesh procedure for leading-edge vortex flows

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.

    1989-01-01

    A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based and a method for choosing a threshold value of that parameter. The refinement parameter is a measure of mesh convergence, constructed by comparing locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
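The threshold-selection idea described above (flagged-cell count versus threshold, with the threshold placed at the bend of that curve) can be sketched as follows. The indicator field and the use of a discrete second difference as the curvature measure are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def knee_threshold(indicator, n_candidates=50):
    """Pick the threshold where the flagged-cell-count curve bends most."""
    taus = np.linspace(indicator.min(), indicator.max(), n_candidates)
    counts = np.array([(indicator > t).sum() for t in taus], dtype=float)
    # discrete second difference as a simple curvature surrogate
    curvature = np.abs(np.diff(counts, 2))
    return taus[1 + np.argmax(curvature)]

rng = np.random.default_rng(0)
# synthetic indicator: most cells smooth (small values), a few cells
# near a vortex or shock (large values)
indicator = np.concatenate([rng.uniform(0.0, 0.1, 950),
                            rng.uniform(0.5, 1.0, 50)])

tau = knee_threshold(indicator)
flagged = indicator > tau          # cells that would be refined
```

The knee sits where the count curve flattens out, so nearly all of the strongly non-converged cells are flagged without sweeping in the smooth background.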

  19. Refining Pragmatically-Appropriate Oral Communication via Computer-Simulated Conversations

    ERIC Educational Resources Information Center

    Sydorenko, Tetyana; Daurio, Phoebe; Thorne, Steven L.

    2018-01-01

    To address the problem of limited opportunities for practicing second language speaking in interaction, especially delicate interactions requiring pragmatic competence, we describe computer simulations designed for the oral practice of extended pragmatic routines and report on the affordances of such simulations for learning pragmatically…

  20. Diffraction pattern simulation of cellulose fibrils using distributed and quantized pair distances

    DOE PAGES

    Zhang, Yan; Inouye, Hideyo; Crowley, Michael; ...

    2016-10-14

    Intensity simulation of X-ray scattering from large twisted cellulose molecular fibrils is important in understanding the impact of chemical or physical treatments on structural properties such as twisting or coiling. This paper describes a highly efficient method for the simulation of X-ray diffraction patterns from complex fibrils using atom-type-specific pair-distance quantization. Pair distances are sorted into arrays which are labelled by atom type. Histograms of pair distances in each array are computed and binned and the resulting population distributions are used to represent the whole pair-distance data set. These quantized pair-distance arrays are used with a modified and vectorized Debye formula to simulate diffraction patterns. This approach utilizes fewer pair distances in each iteration, and atomic scattering factors are moved outside the iteration since the arrays are labelled by atom type. As a result, this algorithm significantly reduces the computation time while maintaining the accuracy of diffraction pattern simulation, making possible the simulation of diffraction patterns from large twisted fibrils in a relatively short period of time, as is required for model testing and refinement.
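A minimal sketch of the quantization idea, simplified to a single atom type (the paper bins distances per atom-type pair so scattering factors can be factored out per pair of types): bin all pair distances once, then evaluate the Debye sum over bin centres weighted by bin populations instead of over every individual pair.

```python
import numpy as np

def debye_direct(q, coords, f0):
    """Reference Debye sum I(q) = f0^2 * sum_ij sin(q r_ij)/(q r_ij)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    s = np.sinc(q[:, None, None] * d[None, :, :] / np.pi)  # sin(x)/x, =1 at x=0
    return f0 ** 2 * s.sum(axis=(1, 2))

def debye_quantized(q, coords, f0, n_bins=2000):
    """Debye sum over binned pair distances (single atom type)."""
    n = len(coords)
    iu, ju = np.triu_indices(n, k=1)
    d = np.linalg.norm(coords[iu] - coords[ju], axis=-1)
    hist, edges = np.histogram(d, bins=n_bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    s = np.sinc(q[:, None] * centres[None, :] / np.pi)
    # n self terms (r = 0) plus twice the binned off-diagonal sum
    return f0 ** 2 * (n + 2.0 * s @ hist)

rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 10.0, size=(50, 3))   # toy "fibril" of 50 atoms
q = np.linspace(0.1, 2.0, 20)                   # scattering vector magnitudes

i_direct = debye_direct(q, coords, f0=1.0)
i_quant = debye_quantized(q, coords, f0=1.0)
```

For N atoms the direct sum costs O(N²) per q value, while the quantized version costs O(bins) per q value after a single O(N²) binning pass, which is where the reported speed-up comes from.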

  3. Strengthening and Improving Yield Asymmetry of Magnesium Alloys by Second Phase Particle Refinement Under the Guidance of Integrated Computational Materials Engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dongsheng; Lavender, Curt

    2015-05-08

    Improving yield strength and asymmetry is critical to expanding applications of magnesium alloys in industry for higher fuel efficiency and lower CO2 production. Grain refinement is an efficient method for strengthening low-symmetry magnesium alloys, achievable by precipitate refinement. This study provides guidance on how precipitate engineering can improve mechanical properties through grain refinement. Precipitate refinement for improving yield strengths and asymmetry is simulated quantitatively by coupling a stochastic second-phase grain refinement model and a modified polycrystalline crystal viscoplasticity φ-model. Using the stochastic second-phase grain refinement model, grain size is quantitatively determined from the precipitate size and volume fraction. Yield strengths, yield asymmetry, and deformation behavior are calculated from the modified φ-model. If the precipitate shape and size remain constant, grain size decreases with increasing precipitate volume fraction. If the precipitate volume fraction is kept constant, grain size decreases with decreasing precipitate size during precipitate refinement. Yield strengths increase and asymmetry approaches one with decreasing grain size, driven by increasing precipitate volume fraction or decreasing precipitate size.
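The trends reported above can be illustrated with the classical deterministic Zener pinning estimate d ≈ 4r/(3f) and a Hall-Petch strengthening law, as a simple stand-in for the paper's stochastic model; the Hall-Petch constants below are invented placeholders, not fitted values.

```python
import math

def zener_grain_size(r_um, f):
    """Zener-limited grain size (um) from precipitate radius r and volume fraction f."""
    return 4.0 * r_um / (3.0 * f)

def hall_petch(d_um, sigma0=50.0, k=300.0):
    """Hall-Petch yield strength (MPa); sigma0 and k are placeholder constants."""
    return sigma0 + k / math.sqrt(d_um)

# fixed precipitate size: higher volume fraction -> finer grains -> stronger
d_low_f = zener_grain_size(0.5, 0.01)
d_high_f = zener_grain_size(0.5, 0.05)

# fixed volume fraction: smaller precipitates -> finer grains
d_fine_ppt = zener_grain_size(0.1, 0.01)
```

Both trends match the abstract: grain size shrinks with increasing precipitate volume fraction at fixed size, and with decreasing precipitate size at fixed volume fraction, and yield strength rises as grain size falls.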

  4. Planned updates and refinements to the central valley hydrologic model, with an emphasis on improving the simulation of land subsidence in the San Joaquin Valley

    USGS Publications Warehouse

    Faunt, C.C.; Hanson, R.T.; Martin, P.; Schmid, W.

    2011-01-01

    California's Central Valley has been one of the most productive agricultural regions in the world for more than 50 years. To better understand the groundwater availability in the valley, the U.S. Geological Survey (USGS) developed the Central Valley hydrologic model (CVHM). Because of recent water-level declines and renewed subsidence, the CVHM is being updated to better simulate the geohydrologic system. The CVHM updates and refinements can be grouped into two general categories: (1) model code changes and (2) data updates. The CVHM updates and refinements will require that the model be recalibrated. The updated CVHM will provide a detailed transient analysis of changes in groundwater availability and flow paths in relation to climatic variability, urbanization, stream flow, and changes in irrigated agricultural practices and crops. The updated CVHM is particularly focused on more accurately simulating the locations and magnitudes of land subsidence. The intent of the updated CVHM is to help scientists better understand the availability and sustainability of water resources and the interaction of groundwater levels with land subsidence. ?? 2011 ASCE.

  5. Planned updates and refinements to the Central Valley hydrologic model with an emphasis on improving the simulation of land subsidence in the San Joaquin Valley

    USGS Publications Warehouse

    Faunt, Claudia C.; Hanson, Randall T.; Martin, Peter; Schmid, Wolfgang

    2011-01-01

    California's Central Valley has been one of the most productive agricultural regions in the world for more than 50 years. To better understand the groundwater availability in the valley, the U.S. Geological Survey (USGS) developed the Central Valley hydrologic model (CVHM). Because of recent water-level declines and renewed subsidence, the CVHM is being updated to better simulate the geohydrologic system. The CVHM updates and refinements can be grouped into two general categories: (1) model code changes and (2) data updates. The CVHM updates and refinements will require that the model be recalibrated. The updated CVHM will provide a detailed transient analysis of changes in groundwater availability and flow paths in relation to climatic variability, urbanization, stream flow, and changes in irrigated agricultural practices and crops. The updated CVHM is particularly focused on more accurately simulating the locations and magnitudes of land subsidence. The intent of the updated CVHM is to help scientists better understand the availability and sustainability of water resources and the interaction of groundwater levels with land subsidence.

  6. High-resolution multi-code implementation of unsteady Navier-Stokes flow solver based on paralleled overset adaptive mesh refinement and high-order low-dissipation hybrid schemes

    NASA Astrophysics Data System (ADS)

    Li, Gaohua; Fu, Xiang; Wang, Fuxin

    2017-10-01

    The low-dissipation high-order accurate hybrid upwind/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and flow-feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows at high spatial resolution. The overset grid assembly (OGA) process, based on collection detection theory and an implicit hole-cutting algorithm, automatically couples the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the composed multiple codes. The results of flows over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of a high-order hybrid scheme, an advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution of simulated turbulent wake eddies.

  7. Polarizable Force Field for DNA Based on the Classical Drude Oscillator: I. Refinement Using Quantum Mechanical Base Stacking and Conformational Energetics.

    PubMed

    Lemkul, Justin A; MacKerell, Alexander D

    2017-05-09

    Empirical force fields seek to relate the configuration of a set of atoms to its energy, thus yielding the forces governing its dynamics, using classical physics rather than more expensive quantum mechanical calculations that are computationally intractable for large systems. Most force fields used to simulate biomolecular systems use fixed atomic partial charges, neglecting the influence of electronic polarization, instead making use of a mean-field approximation that may not be transferable across environments. Recent hardware and software developments make polarizable simulations feasible, and to this end, polarizable force fields represent the next generation of molecular dynamics simulation technology. In this work, we describe the refinement of a polarizable force field for DNA based on the classical Drude oscillator model by targeting quantum mechanical interaction energies and conformational energy profiles of model compounds necessary to build a complete DNA force field. The parametrization strategy employed in the present work seeks to correct weak base stacking in A- and B-DNA and the unwinding of Z-DNA observed in the previous version of the force field, called Drude-2013. Refinement of base nonbonded terms and reparametrization of dihedral terms in the glycosidic linkage, deoxyribofuranose rings, and important backbone torsions resulted in improved agreement with quantum mechanical potential energy surfaces. Notably, we expand on previous efforts by explicitly including Z-DNA conformational energetics in the refinement.

  8. Phase field models for heterogeneous nucleation: Application to inoculation in alpha-solidifying Ti-Al-B alloys

    NASA Astrophysics Data System (ADS)

    Apel, M.; Eiken, J.; Hecht, U.

    2014-02-01

    This paper aims at briefly reviewing phase field models applied to the simulation of heterogeneous nucleation and subsequent growth, with special emphasis on grain refinement by inoculation. The spherical cap and free growth model (e.g., A.L. Greer et al., Acta Mater. 48, 2823 (2000)) has proven its applicability for different metallic systems, e.g. Al- or Mg-based alloys, by computing the grain refinement effect achieved by inoculation of the melt with inert seeding particles. However, recent experiments with peritectic Ti-Al-B alloys revealed that grain refinement by TiB2 is less effective than predicted by the model. Phase field simulations can be applied to validate the approximations of the spherical cap and free growth model, e.g. by computing explicitly the latent heat release associated with different nucleation and growth scenarios. Here, simulation results for point-shaped nucleation, as well as for partially and completely wetted plate-like seed particles, will be discussed with respect to recalescence and impact on grain refinement. It will be shown that, particularly for large seeding particles (up to 30 μm), the free growth morphology clearly deviates from the assumed spherical cap, and the initial growth, until the free growth barrier is reached, contributes significantly to the latent heat release and determines the recalescence temperature.

  9. Molecular dynamics force-field refinement against quasi-elastic neutron scattering data

    DOE PAGES

    Borreguero Calvo, Jose M.; Lynch, Vickie E.

    2015-11-23

    Quasi-elastic neutron scattering (QENS) is one of the experimental techniques of choice for probing dynamics at length and time scales that are also in the realm of full-atom molecular dynamics (MD) simulations. This overlap enables extension of current fitting methods that use time-independent equilibrium measurements to new methods that fit against dynamics data. We present an algorithm that fits simulation-derived incoherent dynamical structure factors against QENS data probing the diffusive dynamics of the system. We showcase the difficulties inherent to this type of fitting problem, namely the disparity between simulation and experiment environments, as well as limitations in the simulation due to incomplete sampling of phase space. We discuss a methodology to overcome these difficulties and apply it to a set of full-atom MD simulations for the purpose of refining the force-field parameter governing the activation energy of methyl rotation in the octa-methyl polyhedral oligomeric silsesquioxane molecule. Our optimal simulated activation energy agrees with the experimentally derived value to within a 5% difference, well within experimental error. We believe the method will find applicability to other types of diffusive motions and other representations of the system, such as coarse-grained models where empirical fitting is essential. In addition, the refinement method can be extended to the coherent dynamic structure factor with no additional effort.
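As a much-simplified stand-in for this refinement loop, the sketch below recovers an activation energy by fitting an Arrhenius law Γ(T) = Γ₀ exp(−Ea/(kB T)) to relaxation rates; the actual work fits simulated dynamical structure factors against QENS spectra, and all numbers here are synthetic.

```python
import numpy as np

KB = 8.617333e-5  # Boltzmann constant, eV/K

def fit_activation_energy(T, gamma):
    """Linear fit of ln(gamma) against 1/T; the slope gives -Ea/kB."""
    slope, intercept = np.polyfit(1.0 / T, np.log(gamma), 1)
    return -slope * KB, np.exp(intercept)

# synthetic rotational relaxation rates generated from a known Ea
true_ea, true_g0 = 0.12, 5.0e12          # eV and 1/s, invented values
T = np.array([200.0, 250.0, 300.0, 350.0])
gamma = true_g0 * np.exp(-true_ea / (KB * T))

ea, g0 = fit_activation_energy(T, gamma)
```

In the paper's setting, each "measurement" of the rate at a temperature would itself come from an MD simulation run with trial force-field parameters, and the fit closes the refinement loop.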

  10. Study of the adaptive refinement on an open source 2D shallow-water flow solver using quadtree grid for flash flood simulations.

    NASA Astrophysics Data System (ADS)

    Kirstetter, G.; Popinet, S.; Fullana, J. M.; Lagrée, P. Y.; Josserand, C.

    2015-12-01

    The full resolution of the shallow-water equations for modeling flash floods can have a high computational cost, so the majority of flood simulation software used for flood forecasting relies on a simplification of this model: 1D approximations, diffusive or kinematic wave approximations, or exotic models using non-physical free parameters. These kinds of approximations save a great deal of computational time, but sacrifice simulation precision in an unquantified way. To reduce drastically the cost of such 2D simulations while quantifying the loss of precision, we propose a 2D shallow-water flow solver built with the open source code Basilisk [1], which uses adaptive refinement on a quadtree grid. This solver uses a well-balanced central-upwind scheme that is second order in time and space and treats the friction and rain terms implicitly in a finite-volume approach. We demonstrate the validity of our simulation on the flood of Tewkesbury (UK) that occurred in July 2007, as shown on Fig. 1. For this case, a systematic study of the impact of the chosen criterion for adaptive refinement is performed, and the criterion with the best computational time / precision ratio is proposed. Finally, we present the power law giving the computational time with respect to the maximum resolution, and we show that this law for our 2D simulation is close to that of a 1D simulation, thanks to the fractal dimension of the topography. [1] http://basilisk.fr/
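A gradient-based refinement criterion of the kind compared in such studies can be sketched as follows: flag only the cells where the water depth varies rapidly, so the flood wave is resolved finely while the rest of the domain stays coarse. The depth field and threshold below are synthetic, and the criterion is an illustration rather than the paper's exact choice.

```python
import numpy as np

def refine_mask(h, dx, threshold):
    """Flag cells whose water-depth gradient magnitude exceeds a threshold."""
    gy, gx = np.gradient(h, dx)
    return np.hypot(gx, gy) > threshold

# synthetic depth field: a nearly flat floodplain crossed by a narrow wave
n, dx = 256, 1.0
x = np.arange(n) * dx
h = 0.1 + 1.5 * np.exp(-((x - 100.0) / 5.0) ** 2)  # 1-D wave profile
h2d = np.tile(h, (n, 1))                            # extruded to 2-D

mask = refine_mask(h2d, dx, threshold=0.01)
fraction_refined = mask.mean()   # share of cells that would be refined
```

Only a small band of cells around the wave front is flagged, which is the source of the cost savings a quadtree solver obtains over a uniformly fine grid.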

  11. Usability Testing as a Method to Refine a Health Sciences Library Website.

    PubMed

    Denton, Andrea H; Moody, David A; Bennett, Jason C

    2016-01-01

    User testing, a method of assessing website usability, can be a cost-effective and easily administered process for collecting information about a website's effectiveness. A user experience (UX) team at an academic health sciences library has employed user testing for over three years to help refine the library's home page. The test methodology had in-person testers complete tasks on the home page using the "think aloud" method. Review of the test results revealed problem areas of the design and redesign; further testing was effective in refining the page. User testing has proved to be a valuable method for engaging users and providing feedback to continually improve the library's home page.

  12. Volumetric formulation for a class of kinetic models with energy conservation.

    PubMed

    Sbragaglia, M; Sugiyama, K

    2010-10-01

    We analyze a volumetric formulation of lattice Boltzmann for compressible thermal fluid flows. The velocity set is chosen with the desired accuracy, based on the Gauss-Hermite quadrature procedure, and tested against controlled problems in bounded and unbounded fluids. The method allows the simulation of thermohydrodynamical problems without the need to preserve the exact space-filling nature of the velocity set, while still ensuring the exact conservation laws for density, momentum, and energy. Issues related to boundary condition problems and improvements based on grid refinement are also investigated.

  13. Manufacturing methods of a composite cell case for a Ni-Cd battery

    NASA Technical Reports Server (NTRS)

    Bauer, J. L.; Bogner, R. S.; Lowe, E. P.; Orlowski, E.

    1979-01-01

    Graphite epoxy material for a nickel-cadmium battery cell case has been evaluated and determined to perform in the simulated environment of the battery. The basic manufacturing method requires refinement to demonstrate production feasibility. The various facets of production scale-up, i.e., process and tooling development together with material and process control, have been integrated into a comprehensive manufacturing process that assures production reproducibility and product uniformity. Test results substantiate that a battery cell case produced from graphite epoxy pre-impregnated material, utilizing an internal pressure bag fabrication method, is feasible.

  14. Contributions to HiLiftPW-3 Using Structured, Overset Grid Methods

    NASA Technical Reports Server (NTRS)

    Coder, James G.; Pulliam, Thomas H.; Jensen, James C.

    2018-01-01

    The High-Lift Common Research Model (HL-CRM) and the JAXA Standard Model (JSM) were analyzed computationally using both the OVERFLOW and LAVA codes for the third AIAA High-Lift Prediction Workshop. Geometry descriptions and the test cases simulated are described. With the HL-CRM, the effects of surface smoothness during grid projection and the effect of partially sealing a flap gap were studied. Grid refinement studies were performed at two angles of attack using both codes. For the JSM, simulations were performed with and without the nacelle/pylon. Without the nacelle/pylon, evidence of multiple solutions was observed when a quadratic constitutive relation was used in the turbulence modeling; however, using time-accurate simulation seemed to alleviate this issue. With the nacelle/pylon, no evidence of multiple solutions was observed. Laminar-turbulent transition modeling was applied to both JSM configurations and had an overall favorable impact on the lift predictions.

  15. The next-generation ESL continuum gyrokinetic edge code

    NASA Astrophysics Data System (ADS)

    Cohen, R.; Dorr, M.; Hittinger, J.; Rognlien, T.; Colella, P.; Martin, D.

    2009-05-01

    The Edge Simulation Laboratory (ESL) project is developing continuum-based approaches to kinetic simulation of edge plasmas. A new code is being developed, based on a conservative formulation and fourth-order discretization of full-f gyrokinetic equations in parallel-velocity, magnetic-moment coordinates. The code exploits mapped multiblock grids to deal with the geometric complexities of the edge region, and utilizes a new flux limiter [P. Colella and M.D. Sekora, JCP 227, 7069 (2008)] to suppress unphysical oscillations about discontinuities while maintaining high-order accuracy elsewhere. The code is just becoming operational; we will report initial tests for neoclassical orbit calculations in closed-flux surface and limiter (closed plus open flux surfaces) geometry. It is anticipated that the algorithmic refinements in the new code will address the slow numerical instability that was observed in some long simulations with the existing TEMPEST code. We will also discuss the status and plans for physics enhancements to the new code.

  16. Assessing fitness-for-duty and predicting performance with cognitive neurophysiological measures

    NASA Astrophysics Data System (ADS)

    Smith, Michael E.; Gevins, Alan

    2005-05-01

    Progress is described in developing a novel test of neurocognitive status for fitness-for-duty testing. The Sustained Attention & Memory (SAM) test combines neurophysiologic (EEG) measures of brain activation with performance measures during a psychometric test of sustained attention and working memory, and then gauges changes in neurocognitive status relative to an individual's normative baseline. In studies of the effects of common psychoactive substances that can affect job performance, including sedating antihistamines, caffeine, alcohol, marijuana, and prescription medications, test sensitivity was greater for the combined neurophysiological and performance measures than for task performance measures by themselves. The neurocognitive effects of overnight sleep deprivation were quite evident, and such effects predicted subsequent performance impairment on a flight simulator task. Sensitivity to diurnal circadian variations was also demonstrated. With further refinement and independent validation, the SAM Test may prove useful for assessing readiness-to-perform in high-asset personnel working in demanding, high-risk situations.

  17. WEST-3 wind turbine simulator development. Volume 2: Verification

    NASA Technical Reports Server (NTRS)

    Sridhar, S.

    1985-01-01

    The details of a study to validate WEST-3, a new real-time wind turbine simulator developed by Paragon Pacific Inc., are presented in this report. For the validation, the MOD-0 wind turbine was simulated on WEST-3. The simulation results were compared with those obtained from previous MOD-0 simulations and with test data measured during MOD-0 operations. The study was successful in achieving the major objective of proving that WEST-3 yields results which can be used to support a wind turbine development process. The blade bending moments, peak and cyclic, from the WEST-3 simulation correlated reasonably well with the available MOD-0 data. The simulation was also able to predict the resonance phenomena observed during MOD-0 operations. Also presented in the report is a description and solution of a serious numerical instability problem encountered during the study. The problem was caused by the coupling of the rotor and power train models. The results of the study indicate that some parts of the existing WEST-3 simulation model may have to be refined for future work; specifically, the aerodynamics and the procedure used to couple the rotor model with the tower and power train models.

  18. Effect of elevation resolution on evapotranspiration simulations using MODFLOW.

    PubMed

    Kambhammettu, B V N P; Schmid, Wolfgang; King, James P; Creel, Bobby J

    2012-01-01

    Surface elevations represented in MODFLOW head-dependent packages are usually derived from digital elevation models (DEMs) that are available at much higher resolution. Conventional grid refinement techniques for simulating the model at DEM resolution increase computational time and input file size, and in many cases are not feasible for regional applications. This research aims at utilizing increasingly available high-resolution DEMs for effective simulation of evapotranspiration (ET) in MODFLOW as an alternative to grid refinement techniques. The source code of the evapotranspiration package is modified to account, for a fixed MODFLOW grid resolution and for different DEM resolutions, for the effect of variability in elevation data on ET estimates. The piezometric head at each DEM cell location is corrected by considering the gradient along the row and column directions. Applicability of the research is tested for the lower Rio Grande (LRG) Basin in southern New Mexico. The DEM at 10 m resolution is aggregated to resampled DEM grid resolutions that are integer multiples of the MODFLOW grid resolution. Cumulative outflows and ET rates are compared at different coarse-resolution grids. The analysis concludes that variability in depth to groundwater within a MODFLOW cell is a major contributing parameter to ET outflows in shallow groundwater regions. DEM aggregation for the LRG Basin resulted in decreased volumetric outflow due to the formation of a smoothing error, which lowered the position of the water table to a level below the extinction depth. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
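The core effect here (sub-cell DEM variability changing ET outflow) follows from the piecewise-linear ET function used by MODFLOW's evapotranspiration package: because the ramp is clipped at the ET surface and at the extinction depth, averaging the function over DEM sub-cells is not the same as evaluating it once at the cell-mean elevation. The sketch below uses invented elevations and representative-looking parameters to show the difference.

```python
import numpy as np

def et_rate(depth_to_water, et_max=5e-3, ext_depth=3.0):
    """MODFLOW-style linear ET ramp (length/time): full rate at zero depth,
    decreasing linearly to zero at the extinction depth."""
    frac = np.clip(1.0 - depth_to_water / ext_depth, 0.0, 1.0)
    return et_max * frac

head = 95.0                                   # water-table elevation, m
dem = np.array([96.0, 97.0, 99.5, 101.0])     # sub-cell land surface, m

et_subgrid = et_rate(dem - head).mean()       # average of per-sub-cell rates
et_coarse = et_rate(dem.mean() - head)        # single rate at mean elevation
```

With these numbers the cell-mean depth (3.375 m) exceeds the extinction depth, so the coarse estimate predicts no ET at all, while the two low-lying sub-cells still transpire; this is exactly the kind of outflow difference the abstract attributes to DEM smoothing.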

  19. An Instrumented Glove to Assess Manual Dexterity in Simulation-Based Neurosurgical Education

    PubMed Central

    Lemos, Juan Diego; Hernandez, Alher Mauricio; Soto-Romero, Georges

    2017-01-01

    The traditional neurosurgical apprenticeship scheme includes the assessment of trainees' manual skills carried out by experienced surgeons. However, the introduction of surgical simulation technology presents a new paradigm in which residents can refine surgical techniques on a simulator before putting them into practice in real patients. Unfortunately, in this new scheme, an experienced surgeon will not always be available to evaluate a trainee's performance. For this reason, it is necessary to develop automatic mechanisms to estimate metrics for assessing manual dexterity in a quantitative way. Authors have proposed hardware-software approaches to evaluate manual dexterity on surgical simulators. This paper presents IGlove, a wearable device that uses inertial sensors embedded in an elastic glove to capture hand movements. Metrics to assess manual dexterity are estimated from the sensor signals using data processing and information analysis algorithms. It has been designed to be used with a neurosurgical simulator called Daubara NS Trainer, but can be easily adapted to other benchtop- and manikin-based medical simulators. The system was tested with a sample of 14 volunteers who performed a test designed to simultaneously evaluate their fine motor skills and the IGlove's functionalities. Metrics obtained by each of the participants are presented as results in this work; it is also shown how these metrics are used to automatically evaluate the level of manual dexterity of each volunteer. PMID:28468268

  20. A multiprocessor operating system simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, G.M.; Campbell, R.H.

    1988-01-01

    This paper describes a multiprocessor operating system simulator that was developed by the authors in the Fall of 1987. The simulator was built in response to the need to provide students with an environment in which to build and test operating system concepts as part of the coursework of a third-year undergraduate operating systems course. Written in C++, the simulator uses the co-routine-style task package that is distributed with the AT&T C++ Translator to provide a hierarchy of classes that represents a broad range of operating system software and hardware components. The class hierarchy closely follows that of the Choices family of operating systems for loosely and tightly coupled multiprocessors. During an operating system course, these classes are refined and specialized by students in homework assignments to facilitate experimentation with different aspects of operating system design and policy decisions. The current implementation runs on the IBM RT PC under 4.3bsd UNIX.

  1. A new class of finite element variational multiscale turbulence models for incompressible magnetohydrodynamics

    DOE PAGES

    Sondak, D.; Shadid, J. N.; Oberai, A. A.; ...

    2015-04-29

    New large eddy simulation (LES) turbulence models for incompressible magnetohydrodynamics (MHD) derived from the variational multiscale (VMS) formulation for finite element simulations are introduced. The new models include the variational multiscale formulation, a residual-based eddy viscosity model, and a mixed model that combines both of these component models. Each model contains terms that are proportional to the residual of the incompressible MHD equations and is therefore numerically consistent. Moreover, each model is also dynamic, in that its effect vanishes when this residual is small. The new models are tested on the decaying MHD Taylor Green vortex at low and high Reynolds numbers. The evaluation of the models is based on comparisons with available data from direct numerical simulations (DNS) of the time evolution of energies as well as energy spectra at various discrete times. Thus a numerical study, on a sequence of meshes, is presented that demonstrates that the large eddy simulation approaches the DNS solution for these quantities with spatial mesh refinement.

  2. Cart3D Simulations for the First AIAA Sonic Boom Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2014-01-01

    Simulation results for the First AIAA Sonic Boom Prediction Workshop (LBW1) are presented using an inviscid, embedded-boundary Cartesian mesh method. The method employs adjoint-based error estimation and adaptive meshing to automatically determine the resolution requirements of the computational domain. Results are presented for both mandatory and optional test cases. These include an axisymmetric body of revolution, a 69° delta wing model, and a complete model of the Lockheed N+2 supersonic tri-jet with V-tail and flow-through nacelles. In addition to formal mesh refinement studies and examination of the adjoint-based error estimates, mesh convergence is assessed by presenting simulation results for meshes at several resolutions which are comparable in size to the unstructured grids distributed by the workshop organizers. The data provided include both the pressure signals required by the workshop and information on code performance in both memory and processing time. Various enhanced techniques offering improved simulation efficiency will be demonstrated and discussed.

  3. Adaptive-Mesh-Refinement for hyperbolic systems of conservation laws based on a posteriori stabilized high order polynomial reconstructions

    NASA Astrophysics Data System (ADS)

    Semplice, Matteo; Loubère, Raphaël

    2018-02-01

    In this paper we propose a third-order accurate finite volume scheme based on a posteriori limiting of polynomial reconstructions within an Adaptive-Mesh-Refinement (AMR) simulation code for hydrodynamics equations in 2D. The a posteriori limiting is based on the detection of problematic cells on a so-called candidate solution computed at each stage of a third-order Runge-Kutta scheme. Such detection may include different properties, derived from physics, such as positivity, from numerics, such as a non-oscillatory behavior, or from computer requirements, such as the absence of NaNs. Troubled cell values are discarded and re-computed starting again from the previous time step using a more dissipative scheme, but only locally, close to these cells. By locally decrementing the degree of the polynomial reconstructions from 2 to 0, we switch from a third-order to a first-order accurate but more stable scheme. An entropy indicator sensor is used to refine/coarsen the mesh. This sensor is also employed in an a posteriori manner: if some refinement is needed at the end of a time step, then the current time step is recomputed with the refined mesh, but only locally, close to the new cells. We show on a large set of numerical tests that this a posteriori limiting procedure, coupled with the entropy-based AMR technology, can maintain not only optimal accuracy on smooth flows but also stability on discontinuous profiles such as shock waves, contacts, and interfaces. Moreover, numerical evidence shows that this approach is at least comparable in terms of accuracy and cost to a more classical CWENO approach within the same AMR context.
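    The detect-and-recompute cycle described above can be sketched in one dimension for linear advection. This is an illustrative sketch, not the authors' scheme: a second-order Lax-Wendroff candidate stands in for their third-order reconstruction, first-order upwind stands in for their locally dissipative fallback, and all function names are hypothetical.

```python
import numpy as np

def lax_wendroff(u, c):
    """High-order candidate update for u_t + a u_x = 0 (oscillates near jumps)."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5 * c * (up - um) + 0.5 * c * c * (up - 2.0 * u + um)

def upwind(u, c):
    """Robust first-order fallback (satisfies a discrete maximum principle)."""
    return u - c * (u - np.roll(u, 1))

def troubled(cand, u, eps=1e-12):
    """A posteriori detection: NaN/Inf, negativity, or violation of the
    local discrete maximum principle (simplified admissibility checks)."""
    lo = np.minimum(np.minimum(np.roll(u, 1), np.roll(u, -1)), u) - eps
    hi = np.maximum(np.maximum(np.roll(u, 1), np.roll(u, -1)), u) + eps
    return ~np.isfinite(cand) | (cand < 0.0) | (cand < lo) | (cand > hi)

def mood_step(u, c):
    """Accept the candidate where admissible; recompute troubled cells
    (and only those) with the more dissipative scheme."""
    cand = lax_wendroff(u, c)
    return np.where(troubled(cand, u), upwind(u, c), cand)
```

    The key property, mirrored from the paper, is that the dissipative fallback is applied only locally, so the high-order scheme is retained wherever the candidate solution is admissible.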

  4. Figuring Out Gas in Galaxies In Enzo (FOGGIE): Resolving the Inner Circumgalactic Medium

    NASA Astrophysics Data System (ADS)

    Corlies, Lauren; Peeples, Molly; Tumlinson, Jason; O'Shea, Brian; Smith, Britton

    2018-01-01

    Cosmological hydrodynamical simulations using every common numerical method have struggled to reproduce the multiphase nature of the circumgalactic medium (CGM) revealed by recent observations. However, to date, resolution in these simulations has been aimed at dense regions — the galactic disk and in-falling satellites — while the diffuse CGM never reaches comparable levels of refinement. Taking advantage of the flexible grid structure of the adaptive mesh refinement code Enzo, we force refinement in a region of the CGM of a Milky Way-like galaxy to the same spatial resolution as that of the disk. In this talk, I will present how the physical and structural distributions of the circumgalactic gas change dramatically as a function of the resolution alone. I will also show the implications these changes have for the observational properties of the gas in the context of the observations.

  5. Use of Molecular Dynamics for the Refinement of an Electrostatic Model for the In Silico Design of a Polymer Antidote for the Anticoagulant Fondaparinux

    PubMed Central

    Kwok, Ezra; Gopaluni, Bhushan; Kizhakkedathu, Jayachandran N.

    2013-01-01

    Molecular dynamics (MD) simulations results are herein incorporated into an electrostatic model used to determine the structure of an effective polymer-based antidote to the anticoagulant fondaparinux. In silico data for the polymer or its cationic binding groups has not, up to now, been available, and experimental data on the structure of the polymer-fondaparinux complex is extremely limited. Consequently, the task of optimizing the polymer structure is a daunting challenge. MD simulations provided a means to gain microscopic information on the interactions of the binding groups and fondaparinux that would have otherwise been inaccessible. This was used to refine the electrostatic model and improve the quantitative model predictions of binding affinity. Once refined, the model provided guidelines to improve electrostatic forces between candidate polymers and fondaparinux in order to increase association rate constants. PMID:27006916

  6. Large Eddy simulation of compressible flows with a low-numerical dissipation patch-based adaptive mesh refinement method

    NASA Astrophysics Data System (ADS)

    Pantano, Carlos

    2005-11-01

    We describe a hybrid finite difference method for large-eddy simulation (LES) of compressible flows with a low-numerical-dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented, including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and, as such, it utilizes refinement to computational advantage. The numerical method for the resolved-scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite difference formulation is used in turbulent flow regions, while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model of Misra & Pullin (1997).

  7. Cartesian Off-Body Grid Adaption for Viscous Time-Accurate Flow Simulation

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2011-01-01

    An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
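    A minimal version of the undivided second-difference sensor mentioned above might look like the following. This is a sketch under assumptions: the threshold `tol` and the boundary handling are illustrative, not OVERFLOW's actual implementation.

```python
import numpy as np

def refine_flags(q, tol):
    """Flag cells whose undivided second difference
    |q[i-1] - 2 q[i] + q[i+1]| exceeds tol. Unlike a divided difference,
    no division by the mesh spacing is performed, so the sensor naturally
    relaxes as the grid is refined."""
    d2 = np.abs(np.roll(q, -1) - 2.0 * q + np.roll(q, 1))
    d2[[0, -1]] = 0.0            # drop the wrap-around values at the ends
    return d2 > tol
```

    Cells flagged by such a sensor would be covered by the next finer Cartesian grid level; smooth regions produce no flags and stay coarse.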

  8. Simulations of viscous and compressible gas-gas flows using high-order finite difference schemes

    NASA Astrophysics Data System (ADS)

    Capuano, M.; Bogey, C.; Spelt, P. D. M.

    2018-05-01

    A computational method for the simulation of viscous and compressible gas-gas flows is presented. It consists of solving the Navier-Stokes equations, together with a convection equation governing the motion of the interface between two gases, using high-order finite-difference schemes. A discontinuity-capturing methodology based on sensors and a spatial filter enables capturing shock waves and deformable interfaces. One-dimensional test cases are performed as validation and to justify choices in the numerical method. The results compare well with analytical solutions. Shock waves and interfaces are accurately propagated, and remain sharp. Subsequently, two-dimensional flows are considered, including viscosity and thermal conductivity. In a Richtmyer-Meshkov instability generated at an air-SF6 interface, the influence of mesh refinement on the instability shape is studied, and the temporal variations of the instability amplitude are compared with experimental data. Finally, for a plane shock wave propagating in air and impacting a cylindrical bubble filled with helium or R22, numerical Schlieren pictures obtained using different grid refinements are found to compare well with experimental shadow photographs. Mass conservation is verified from the temporal variations of the mass of the bubble. The mean velocities of the pressure waves and bubble interface are similar to those obtained experimentally.

  9. Analysis of Adaptive Mesh Refinement for IMEX Discontinuous Galerkin Solutions of the Compressible Euler Equations with Application to Atmospheric Simulations

    DTIC Science & Technology

    2013-01-01

    Let ξ_i be the Legendre-Gauss-Lobatto (LGL) points, defined as the roots of (1 − ξ²)P′_N(ξ) = 0, where P_N(ξ) is the Nth-order Legendre polynomial. The ... mesh refinement. By expanding the solution in a basis of high-order polynomials in each element, one can dynamically adjust the order of these basis ... on refining the mesh while keeping the polynomial order constant across the elements. If we choose to allow non-conforming elements, the challenge in
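    The LGL points named in this excerpt are straightforward to compute numerically: they are the endpoints ±1 together with the roots of P′_N. A minimal sketch with NumPy's Legendre utilities (the function name is ours):

```python
import numpy as np
from numpy.polynomial import legendre

def lgl_points(N):
    """Legendre-Gauss-Lobatto points: the roots of (1 - x^2) P'_N(x) = 0,
    i.e. the endpoints +/-1 plus the interior roots of P'_N."""
    cN = np.zeros(N + 1)
    cN[N] = 1.0                                  # P_N in the Legendre basis
    interior = legendre.legroots(legendre.legder(cN))
    return np.concatenate(([-1.0], np.sort(interior.real), [1.0]))
```

    For N = 4 this yields the familiar five-point set {−1, −√(3/7), 0, √(3/7), 1} used in fourth-order spectral elements.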

  10. Low-Cost, Robust, Threat-Aware Wireless Sensor Network for Assuring the Nation's Energy Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlos H. Rentel

    2007-03-31

    Eaton, in partnership with Oak Ridge National Laboratory and the Electric Power Research Institute (EPRI), has completed a project that applies a combination of wireless sensor network (WSN) technology, anticipatory theory, and a near-term value proposition based on diagnostics and process uptime to ensure the security and reliability of critical electrical power infrastructure. Representatives of several Eaton business units have been engaged to ensure a viable commercialization plan. Tennessee Valley Authority (TVA), American Electric Power (AEP), PEPCO, and Commonwealth Edison were recruited as partners to confirm and refine the requirements definition from the perspective of the utilities that actually operate the facilities to be protected. Those utilities cooperated with on-site field tests as the project proceeded. Accomplishments of this project included: (1) the design, modeling, and simulation of the anticipatory wireless sensor network (A-WSN) used to gather field information for the anticipatory application, (2) the design and implementation of hardware and software prototypes for laboratory and field experimentation, (3) stack and application integration, (4) development of an installation and test plan, and (5) refinement of the commercialization plan.

  11. Detached Eddy Simulation of the UH-60 Rotor Wake Using Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.; Ahmad, Jasim U.

    2012-01-01

    Time-dependent Navier-Stokes flow simulations have been carried out for a UH-60 rotor with simplified hub in forward flight and hover flight conditions. Flexible rotor blades and flight trim conditions are modeled and established by loosely coupling the OVERFLOW Computational Fluid Dynamics (CFD) code with the CAMRAD II helicopter comprehensive code. High order spatial differences, Adaptive Mesh Refinement (AMR), and Detached Eddy Simulation (DES) are used to obtain highly resolved vortex wakes, where the largest turbulent structures are captured. Special attention is directed towards ensuring the dual time accuracy is within the asymptotic range, and verifying the loose coupling convergence process using AMR. The AMR/DES simulation produced vortical worms for forward flight and hover conditions, similar to previous results obtained for the TRAM rotor in hover. AMR proved to be an efficient means to capture a rotor wake without a priori knowledge of the wake shape.

  12. The new program OPAL for molecular dynamics simulations and energy refinements of biological macromolecules.

    PubMed

    Luginbühl, P; Güntert, P; Billeter, M; Wüthrich, K

    1996-09-01

    A new program for molecular dynamics (MD) simulation and energy refinement of biological macromolecules, OPAL, is introduced. Combined with the supporting program TRAJEC for the analysis of MD trajectories, OPAL affords high efficiency and flexibility for work with different force fields, and offers a user-friendly interface and extensive trajectory analysis capabilities. Salient features are computational speeds of up to 1.5 GFlops on vector supercomputers such as the NEC SX-3, ellipsoidal boundaries to reduce the system size for studies in explicit solvents, and natural treatment of the hydrostatic pressure. Practical applications of OPAL are illustrated with MD simulations of pure water, energy minimization of the NMR structure of the mixed disulfide of a mutant E. coli glutaredoxin with glutathione in different solvent models, and MD simulations of a small protein, pheromone Er-2, using either instantaneous or time-averaged NMR restraints, or no restraints.

  13. Navier-Stokes Simulation of UH-60A Rotor/Wake Interaction Using Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    2017-01-01

    Time-dependent Navier-Stokes simulations have been carried out for a flexible UH-60A rotor in forward flight, where the rotor wake interacts with the rotor blades. These flow conditions involved blade vortex interaction and dynamic stall, two common conditions that occur as modern helicopter designs strive to achieve greater flight speeds and payload capacity. These numerical simulations utilized high-order spatial accuracy and delayed detached eddy simulation. Emphasis was placed on understanding how improved rotor wake resolution affects the prediction of the normal force, pitching moment, and chord force of the rotor. Adaptive mesh refinement was used to highly resolve the turbulent rotor wake in a computationally efficient manner. Moreover, blade vortex interaction was found to trigger dynamic stall. Time-dependent flow visualization was utilized to provide an improved understanding of the numerical and physical mechanisms involved with three-dimensional dynamic stall.

  14. Testing model for prediction system of 1-AU arrival times of CME-associated interplanetary shocks

    NASA Astrophysics Data System (ADS)

    Ogawa, Tomoya; den, Mitsue; Tanaka, Takashi; Sugihara, Kohta; Takei, Toshifumi; Amo, Hiroyoshi; Watari, Shinichi

    We test a model to predict arrival times of interplanetary shock waves associated with coronal mass ejections (CMEs) using a three-dimensional adaptive mesh refinement (AMR) code. The model is used for the prediction system we are developing, which has a Web-based user interface and is aimed at people who are not familiar with the operation of computers and numerical simulations, or who are not researchers. We apply the model to interplanetary CME events. We first choose coronal parameters so that the properties of the background solar wind observed by the ACE spacecraft are reproduced. Then we input CME parameters observed by SOHO/LASCO. Finally, we compare the predicted arrival times with the observed ones. We describe the results of the test and discuss tendencies of the model.

  15. Improving an Assessment of Tidal Stream Energy Resource for Anchorage, Alaska

    NASA Astrophysics Data System (ADS)

    Xu, T.; Haas, K. A.

    2016-12-01

    Increasing global energy demand is driving the pursuit of new and innovative energy sources, leading to the need for assessing and utilizing alternative, productive, and reliable energy resources. Tidal currents, characterized by periodicity and predictability, have long been explored and studied as a potential energy source, with a focus on many different locations with significant tidal ranges. However, a proper resource assessment cannot be accomplished without accurate knowledge of the spatial-temporal distribution and availability of tidal currents. Known for possessing one of the top tidal energy sources along the U.S. coastline, Cook Inlet, Alaska is the area of interest for this project. A previous regional-scale resource assessment has been completed; the present study focuses on the available power specifically near Anchorage while significantly improving the accuracy of the assessment following IEC guidelines. The Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) modeling system is configured to simulate the tidal flows with grid refinement techniques for a minimum of 32 days, encompassing an entire lunar cycle. Simulation results are validated by extracting tidal constituents with harmonic analysis and comparing the tidal components with National Oceanic and Atmospheric Administration (NOAA) observations and predictions. Model calibration includes adjustments to bottom friction coefficients and the use of different tidal databases. Differences between NOAA observations and COAWST simulations decrease after applying grid refinement, compared with results from a former study without grid refinement. Also, energy extraction is simulated at potential sites to study its impact on the tidal resources.
    This study demonstrates the enhancement of the resource assessment using grid refinement to evaluate tidal energy near Anchorage within Cook Inlet, Alaska, the productivity that energy extraction can achieve, and the changes in tidal currents that extraction causes.
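    The harmonic analysis used for validation amounts to a linear least-squares fit of tidal constituents. A minimal sketch (the function name is ours; real IEC/NOAA practice also applies nodal corrections and standard constituent lists, which are omitted here):

```python
import numpy as np

def fit_constituents(t, eta, omegas):
    """Least-squares harmonic analysis: fit
    eta(t) ~ c0 + sum_k [a_k cos(w_k t) + b_k sin(w_k t)]
    and return the amplitude sqrt(a_k^2 + b_k^2) of each constituent."""
    cols = [np.ones_like(t)]
    for w in omegas:
        cols += [np.cos(w * t), np.sin(w * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, eta, rcond=None)
    ab = coef[1:].reshape(-1, 2)
    return np.hypot(ab[:, 0], ab[:, 1])
```

    One reason a record spanning a full lunar cycle matters: with an hourly series of at least ~30 days, nearby constituents such as M2 (12.42 h) and S2 (12.00 h) complete more than one beat period and can be separated cleanly by the fit.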

  16. A Comparison of the Behaviour of AlTiB and AlTiC Grain Refiners

    NASA Astrophysics Data System (ADS)

    Schneider, W.; Kearns, M. A.; McGarry, M. J.; Whitehead, A. J.

    AlTiC master alloys present a new alternative to AlTiB grain refiners, which have enjoyed pre-eminence in cast houses for several decades. Recent investigations have shown that, under defined casting conditions, AlTiC is a more efficient grain refiner than AlTiB, is less prone to agglomeration, and is more resistant to poisoning by Zr and Cr. Moreover, it is observed that there are differences in the mechanism of grain refinement for the different alloys. This paper describes the influence of melt temperature and addition rate on the performance of both types of grain refiner in DC casting tests on different wrought alloys. Furthermore, the effects of combined additions of the grain refiners and the recycling behaviour of the treated alloys are presented. Results are compared with laboratory test data. Finally, mechanisms of grain refinement are discussed which are consistent with the observed differences in behaviour between AlTiC and AlTiB.

  17. Verification of Eulerian-Eulerian and Eulerian-Lagrangian simulations for fluid-particle flows

    NASA Astrophysics Data System (ADS)

    Kong, Bo; Patel, Ravi G.; Capecelatro, Jesse; Desjardins, Olivier; Fox, Rodney O.

    2017-11-01

    In this work, we study the performance of three simulation techniques for fluid-particle flows: (1) a volume-filtered Euler-Lagrange approach (EL), (2) a quadrature-based moment method using the anisotropic Gaussian closure (AG), and (3) a traditional two-fluid model (TFM). By simulating two problems, particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT), the convergence of the methods under grid refinement is found to depend on the simulation method and the specific problem, with CIT simulations facing fewer difficulties than HIT. Although EL converges under refinement for both HIT and CIT, its statistical results exhibit dependence on the techniques used to extract statistics for the particle phase. For HIT, converging both EE methods (TFM and AG) poses challenges, while for CIT, AG and EL produce similar results. Overall, all three methods face challenges when trying to extract converged, parameter-independent statistics due to the presence of shocks in the particle phase. Supported by the National Science Foundation and the National Energy Technology Laboratory.

  18. Verification of Eulerian-Eulerian and Eulerian-Lagrangian simulations for turbulent fluid-particle flows

    DOE PAGES

    Patel, Ravi G.; Desjardins, Olivier; Kong, Bo; ...

    2017-09-01

    Here, we present a verification study of three simulation techniques for fluid-particle flows, including an Euler-Lagrange approach (EL) inspired by Jackson's seminal work on fluidized particles, a quadrature-based moment method based on the anisotropic Gaussian closure (AG), and the traditional two-fluid model (TFM). We perform simulations of two problems: particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT). For verification, we evaluate various techniques for extracting statistics from EL and study the convergence properties of the three methods under grid refinement. The convergence is found to depend on the simulation method and on the problem, with CIT simulations posing fewer difficulties than HIT. Specifically, EL converges under refinement for both HIT and CIT, but statistics exhibit dependence on the postprocessing parameters. For CIT, AG produces similar results to EL. For HIT, converging both TFM and AG poses challenges. Overall, extracting converged, parameter-independent Eulerian statistics remains a challenge for all methods.

  20. Numerical analysis of impurity separation from waste salt by investigating the change of concentration at the interface during zone refining process

    NASA Astrophysics Data System (ADS)

    Choi, Ho-Gil; Shim, Moonsoo; Lee, Jong-Hyeon; Yi, Kyung-Woo

    2017-09-01

    The waste salt treatment process is required for the reuse of purified salts, and for the disposal of the fission products contained in waste salt during pyroprocessing. As an alternative to existing fission product separation methods, the horizontal zone refining process is used in this study for the purification of waste salt. In order to evaluate the purification ability of the process, a three-dimensional simulation is conducted, considering heat transfer, melt flow, and mass transfer. Impurity distributions and decontamination factors are calculated as a function of the heater traverse rate, by applying a subroutine and the equilibrium segregation coefficient derived from the effective segregation coefficients. For multipass cases, one-dimensional solutions and the effective segregation coefficient obtained from the three-dimensional simulation are used. Although the present study does not deal with crystal growth, the numerical technique is nearly the same, since zone refining was introduced into the treatment of waste salt from the nuclear power industry because of its simplicity and refining ability. This study can therefore show a new application of single-crystal growth techniques to other fields, taking advantage of the multipass capability of zone refining. The final goal is to achieve the same high degree of decontamination in the waste salt as in the zone freezing (or reverse Bridgman) method.
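    The single-pass impurity profile underlying such multipass calculations is Pfann's classical zone-refining solution. A minimal sketch for an equilibrium segregation coefficient k < 1 (the function name and normalization are ours; the expression is valid away from the final zone length of the bar, where normal freezing takes over):

```python
import numpy as np

def single_pass_profile(x, k, zone_len, c0=1.0):
    """Impurity concentration along the bar after one zone pass:
    C(x) = C0 * (1 - (1 - k) * exp(-k * x / l)),
    where k is the segregation coefficient and l the molten-zone length.
    For k < 1 the head of the bar is purified: C(0) = k * C0."""
    x = np.asarray(x, dtype=float)
    return c0 * (1.0 - (1.0 - k) * np.exp(-k * x / zone_len))
```

    Repeated passes purify the head end further, which is the multipass advantage exploited above; a decontamination factor then follows as C0/C(x) at the position of interest.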

  1. Efficient parallelization for AMR MHD multiphysics calculations; implementation in AstroBEAR

    NASA Astrophysics Data System (ADS)

    Carroll-Nellenback, Jonathan J.; Shroyer, Brandon; Frank, Adam; Ding, Chen

    2013-03-01

    Current adaptive mesh refinement (AMR) simulations require algorithms that are highly parallelized and manage memory efficiently. As compute engines grow larger, AMR simulations will require algorithms that achieve new levels of efficient parallelization and memory management. We have attempted to employ new techniques to achieve both of these goals. Patch- or grid-based AMR often employs ghost cells to decouple the hyperbolic advances of each grid on a given refinement level. This decoupling allows each grid to be advanced independently. In AstroBEAR we utilize this independence by threading the grid advances on each level, with preference going to the finer-level grids. This allows for global load balancing instead of level-by-level load balancing and allows for greater parallelization across both physical space and AMR level. Threading of level advances can also improve performance by interleaving communication with computation, especially in deep simulations with many levels of refinement. While we see improvements of up to 30% on deep simulations run on a few cores, the speedup is typically more modest (5-20%) for larger-scale simulations. To improve memory management we have employed a distributed tree algorithm that requires processors to only store and communicate local sections of the AMR tree structure with neighboring processors. Using this distributed approach we are able to get reasonable scaling efficiency (>80%) out to 12288 cores and up to 8 levels of AMR, independent of the use of threading.

  3. Refined BCF-type boundary conditions for mesoscale surface step dynamics

    DOE PAGES

    Zhao, Renjie; Ackerman, David M.; Evans, James W.

    2015-06-24

    Deposition on a vicinal surface with alternating rough and smooth steps is described by a solid-on-solid model with anisotropic interactions. Kinetic Monte Carlo (KMC) simulations of the model reveal step pairing in the absence of any additional step attachment barriers. We explore the description of this behavior within an analytic Burton-Cabrera-Frank (BCF)-type step dynamics treatment. Without attachment barriers, conventional kinetic coefficients for the rough and smooth steps are identical, as are the predicted step velocities for a vicinal surface with equal terrace widths. However, we determine refined kinetic coefficients from a two-dimensional discrete deposition-diffusion equation formalism which accounts for step structure. These coefficients are generally higher for rough steps than for smooth steps, reflecting a higher propensity for capture of diffusing terrace adatoms due to a higher kink density. Such refined coefficients also depend on the local environment of the step and can even become negative (corresponding to net detachment despite an excess adatom density) for a smooth step in close proximity to a rough step. Incorporation of these refined kinetic coefficients into a BCF-type step dynamics treatment recovers quantitatively the mesoscale step-pairing behavior observed in the KMC simulations.
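The role of step-dependent kinetic coefficients can be illustrated with a toy BCF-type step train (a hypothetical fast-terrace-diffusion limit, not the authors' two-dimensional formalism; the coefficient values are assumptions): adatoms deposited on each terrace attach to the two bounding steps in proportion to their kinetic coefficients, so rough steps (higher coefficient, higher kink density) advance faster than smooth ones, which is the asymmetry that drives pairing.

```python
import numpy as np

def step_velocities(widths, kappa, F=1.0):
    # widths[j]: terrace between step j and step j+1 (periodic train);
    # flux F per unit terrace width; each terrace's adatoms attach to its
    # two bounding steps in proportion to their kinetic coefficients kappa
    n = len(widths)
    v = np.zeros(n)
    for j in range(n):
        jn = (j + 1) % n
        share = kappa[j] / (kappa[j] + kappa[jn])
        v[j] += F * widths[j] * share           # captured by step j
        v[jn] += F * widths[j] * (1.0 - share)  # remainder to step j+1
    return v

# alternating rough (high kappa) and smooth (low kappa) steps, equal terraces
kappa = np.array([2.0, 0.5] * 4)
widths = np.ones(8)
v = step_velocities(widths, kappa)
```

With equal coefficients every step moves at the same speed, as the abstract notes for the conventional treatment; with the alternating coefficients above, the rough steps move faster and close in on the smooth step ahead.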

  4. Advanced Acid Gas Separation Technology for Clean Power and Syngas Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amy, Fabrice; Hufton, Jeffrey; Bhadra, Shubhra

    2015-06-30

    Air Products has developed an acid gas removal technology based on adsorption (Sour PSA) that favorably compares with incumbent AGR technologies. During this DOE-sponsored study, Air Products has been able to increase the Sour PSA technology readiness level by successfully operating a two-bed test system on coal-derived sour syngas at the NCCC, validating the lifetime and performance of the adsorbent material. Both proprietary simulation and data obtained during the testing at NCCC were used to further refine the estimate of the performance of the Sour PSA technology when expanded to a commercial scale. In-house experiments on sweet syngas combined with simulation work allowed Air Products to develop new PSA cycles that enable further reduction in capital expenditure. Finally, our techno-economic analysis of the use of the Sour PSA technology for both IGCC and coal-to-methanol applications suggests significant improvement of the unit cost of electricity and methanol compared to incumbent AGR technologies.

  5. Refined beam measurements on the SNS H- injector

    NASA Astrophysics Data System (ADS)

    Han, B. X.; Welton, R. F.; Murray, S. N.; Pennisi, T. R.; Santana, M.; Stinson, C. M.; Stockli, M. P.

    2017-08-01

    The H- injector for the SNS RFQ accelerator consists of an RF-driven, Cs-enhanced H- ion source and a compact, two-lens electrostatic LEBT. The LEBT output and the RFQ input beam current are measured by deflecting the beam onto an annular plate at the RFQ entrance. Our method and procedure have recently been refined to improve the measurement reliability and accuracy. The new measurements suggest that earlier measurements tended to underestimate the currents by 0-2 mA, but essentially confirm H- beam currents of 50-60 mA being injected into the RFQ. Emittance measurements conducted on a test stand featuring essentially the same H- injector setup show that the normalized rms emittance with 0.5% threshold (99% inclusion of the total beam) is in a range of 0.25-0.4 mm·mrad for a 50-60 mA beam. The RFQ output current is monitored with a BCM toroid. Measurements as well as simulations with the PARMTEQ code indicate that the transmission of the RFQ has been underperforming since around 2012.

  6. SNV-PPILP: refined SNV calling for tumor data using perfect phylogenies and ILP.

    PubMed

    van Rens, Karen E; Mäkinen, Veli; Tomescu, Alexandru I

    2015-04-01

    Recent studies sequenced tumor samples from the same progenitor at different development stages and showed that by taking into account the phylogeny of this development, single-nucleotide variant (SNV) calling can be improved. Accurate SNV calls can better reveal early-stage tumors, identify mechanisms of cancer progression or help in drug targeting. We present SNV-PPILP, a fast and easy-to-use tool for refining GATK's Unified Genotyper SNV calls, for multiple samples assumed to form a phylogeny. We tested SNV-PPILP on simulated data, with a varying number of samples, SNVs, read coverage and violations of the perfect phylogeny assumption. We always match or improve the accuracy of GATK, with a significant improvement on low read coverage. SNV-PPILP, available at cs.helsinki.fi/gsa/snv-ppilp/, is written in Python and requires the free ILP solver lp_solve. Supplementary data are available at Bioinformatics online.

  7. Refinement of the experimental dynamic structure factor for liquid para-hydrogen and ortho-deuterium using semi-classical quantum simulation.

    PubMed

    Smith, Kyle K G; Poulsen, Jens Aage; Cunsolo, A; Rossky, Peter J

    2014-01-21

    The dynamic structure factor of liquid para-hydrogen and ortho-deuterium in corresponding thermodynamic states (T = 20.0 K, n = 21.24 nm⁻³) and (T = 23.0 K, n = 24.61 nm⁻³), respectively, has been computed by both the Feynman-Kleinert linearized path-integral (FK-LPI) and Ring-Polymer Molecular Dynamics (RPMD) methods and compared with inelastic X-ray scattering spectra. The combined use of computational and experimental methods enabled us to reduce experimental uncertainties in the determination of the true sample spectrum. Furthermore, the refined experimental spectrum of para-hydrogen and ortho-deuterium is consistently reproduced by both FK-LPI and RPMD results at momentum transfers lower than 12.8 nm⁻¹. At larger momentum transfers the FK-LPI results agree with experiment much better for ortho-deuterium than for para-hydrogen. More specifically, we found that for k ∼ 20.0 nm⁻¹ para-hydrogen provides a test case for improved approximations to quantum dynamics.

  8. Mimicking the action of folding chaperones by Hamiltonian replica-exchange molecular dynamics simulations: application in the refinement of de novo models.

    PubMed

    Fan, Hao; Periole, Xavier; Mark, Alan E

    2012-07-01

    The efficiency of using a variant of Hamiltonian replica-exchange molecular dynamics (Chaperone H-replica-exchange molecular dynamics [CH-REMD]) for the refinement of protein structural models generated de novo is investigated. In CH-REMD, the interaction between the protein and its environment, specifically, the electrostatic interaction between the protein and the solvating water, is varied leading to cycles of partial unfolding and refolding mimicking some aspects of folding chaperones. In 10 of the 15 cases examined, the CH-REMD approach sampled structures in which the root-mean-square deviation (RMSD) of secondary structure elements (SSE-RMSD) with respect to the experimental structure was more than 1.0 Å lower than the initial de novo model. In 14 of the 15 cases, the improvement was more than 0.5 Å. The ability of three different statistical potentials to identify near-native conformations was also examined. Little correlation between the SSE-RMSD of the sampled structures with respect to the experimental structure and any of the scoring functions tested was found. The most effective scoring function tested was the DFIRE potential. Using the DFIRE potential, the SSE-RMSD of the best scoring structures was on average 0.3 Å lower than the initial model. Overall the work demonstrates that targeted enhanced-sampling techniques such as CH-REMD can lead to the systematic refinement of protein structural models generated de novo but that improved potentials for the identification of near-native structures are still needed.
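The Hamiltonian replica-exchange idea can be sketched on a toy one-dimensional double-well (the potential, the ladder of scaling factors, and the sweep lengths are illustrative assumptions, not the CH-REMD implementation): each replica samples a differently softened Hamiltonian, mimicking the scaled protein-water electrostatics, and neighbouring replicas exchange configurations under the standard Metropolis swap criterion.

```python
import math, random

random.seed(1)

def U(x, lam):
    # toy double-well; lam = 1 is the full Hamiltonian, smaller lam is a
    # softened, "chaperone-like" variant with a lower barrier
    return lam * (x**2 - 1.0)**2

def metropolis(x, lam, beta, nsteps=50, dx=0.3):
    # plain Metropolis sampling within a single replica
    for _ in range(nsteps):
        xn = x + random.uniform(-dx, dx)
        dU = U(xn, lam) - U(x, lam)
        if dU <= 0.0 or random.random() < math.exp(-beta * dU):
            x = xn
    return x

def swap_accepted(xi, xj, li, lj, beta):
    # Hamiltonian-exchange criterion at a common temperature:
    # accept with probability min(1, exp(-beta * delta))
    delta = beta * (U(xj, li) + U(xi, lj) - U(xi, li) - U(xj, lj))
    return delta <= 0.0 or random.random() < math.exp(-delta)

lams = [1.0, 0.6, 0.3]        # ladder from full to softened Hamiltonian
beta = 2.0
xs = [1.0] * len(lams)
for _ in range(200):
    xs = [metropolis(x, lam, beta) for x, lam in zip(xs, lams)]
    for i in range(len(lams) - 1):           # attempt neighbour swaps
        if swap_accepted(xs[i], xs[i + 1], lams[i], lams[i + 1], beta):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
```

The softened replicas cross the barrier easily and feed decorrelated configurations down to the full Hamiltonian, which is the "partial unfolding and refolding" cycle the abstract describes in miniature.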

  9. 40 CFR 80.330 - What are the sampling and testing requirements for refiners and importers?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... an exemption under this section, upon notification from EPA, the refiner's exemption will be void ab initio. (b) Sampling methods. For purposes of paragraph (a) of this section, refiners and importers shall...

  10. 40 CFR 80.330 - What are the sampling and testing requirements for refiners and importers?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... an exemption under this section, upon notification from EPA, the refiner's exemption will be void ab initio. (b) Sampling methods. For purposes of paragraph (a) of this section, refiners and importers shall...

  11. 40 CFR 80.330 - What are the sampling and testing requirements for refiners and importers?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... an exemption under this section, upon notification from EPA, the refiner's exemption will be void ab initio. (b) Sampling methods. For purposes of paragraph (a) of this section, refiners and importers shall...

  12. 40 CFR 80.330 - What are the sampling and testing requirements for refiners and importers?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... an exemption under this section, upon notification from EPA, the refiner's exemption will be void ab initio. (b) Sampling methods. For purposes of paragraph (a) of this section, refiners and importers shall...

  13. 40 CFR 80.330 - What are the sampling and testing requirements for refiners and importers?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... an exemption under this section, upon notification from EPA, the refiner's exemption will be void ab initio. (b) Sampling methods. For purposes of paragraph (a) of this section, refiners and importers shall...

  14. A cellular automaton - finite volume method for the simulation of dendritic and eutectic growth in binary alloys using an adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Dobravec, Tadej; Mavrič, Boštjan; Šarler, Božidar

    2017-11-01

    A two-dimensional model to simulate the dendritic and eutectic growth in binary alloys is developed. A cellular automaton method is adopted to track the movement of the solid-liquid interface. The diffusion equation is solved in the solid and liquid phases by using an explicit finite volume method. The computational domain is divided into square cells that can be hierarchically refined or coarsened using an adaptive mesh based on the quadtree algorithm. Such a mesh refines the regions of the domain near the solid-liquid interface, where the highest concentration gradients are observed. In the regions where the lowest concentration gradients are observed the cells are coarsened. The originality of the work is in the novel, adaptive approach to the efficient and accurate solution of the posed multiscale problem. The model is verified and assessed by comparison with the analytical results of the Lipton-Glicksman-Kurz model for the steady growth of a dendrite tip and the Jackson-Hunt model for regular eutectic growth. Several examples of typical microstructures are simulated and the features of the method as well as further developments are discussed.
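The quadtree refine/coarsen logic can be sketched as follows (a generic sketch, not the authors' code; the circular front and the maximum level are hypothetical stand-ins for the solid-liquid interface and its resolution requirement): cells crossed by the interface are recursively split into four quadrants, while cells away from it stay coarse.

```python
import math

class Cell:
    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = None

    def refine(self, needs_refining, max_level):
        # split into four quadrants wherever the indicator fires,
        # recursing down to max_level (quadtree adaptive refinement)
        if self.level < max_level and needs_refining(self):
            h = self.size / 2.0
            self.children = [Cell(self.x + i * h, self.y + j * h, h, self.level + 1)
                             for i in (0, 1) for j in (0, 1)]
            for c in self.children:
                c.refine(needs_refining, max_level)

    def leaves(self):
        if self.children is None:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

def crosses_interface(cell, cx=0.5, cy=0.5, r=0.3):
    # hypothetical indicator: the cell straddles a circular front, standing
    # in for the high concentration gradients at the solid-liquid interface
    nx = min(max(cx, cell.x), cell.x + cell.size)   # nearest point of cell
    ny = min(max(cy, cell.y), cell.y + cell.size)
    dmin = math.hypot(nx - cx, ny - cy)
    dmax = max(math.hypot(cell.x + i * cell.size - cx,
                          cell.y + j * cell.size - cy)
               for i in (0, 1) for j in (0, 1))     # farthest corner
    return dmin < r < dmax

root = Cell(0.0, 0.0, 1.0)
root.refine(crosses_interface, max_level=5)
fine = [c for c in root.leaves() if c.level == 5]
```

As the interface moves during a simulation, re-running the indicator refines newly crossed cells and lets previously refined regions be coarsened again, which is the adaptivity the abstract describes.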

  15. Meshfree truncated hierarchical refinement for isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Atri, H. R.; Shojaee, S.

    2018-05-01

    In this paper truncated hierarchical B-spline (THB-spline) is coupled with the reproducing kernel particle method (RKPM) to blend advantages of the isogeometric analysis and meshfree methods. Since under certain conditions, the isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, the recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can be easily defined, which provides a genuinely meshfree approach to local model refinement in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method can provide efficient approximation schemes for numerical simulations and performs promisingly in the adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach to adaptive local refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.
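The local-refinement idea behind hierarchical B-splines can be illustrated with the Cox-de Boor recursion (a generic sketch, not the THB-RKPM construction; the knot vectors are illustrative): knots are inserted only where more resolution is wanted, and the basis still forms a partition of unity everywhere.

```python
def bspline_basis(i, p, t, knots):
    # Cox-de Boor recursion for the i-th degree-p B-spline basis function
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    val = 0.0
    if knots[i + p] > knots[i]:
        val += (t - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, t, knots)
    if knots[i + p + 1] > knots[i + 1]:
        val += (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) \
               * bspline_basis(i + 1, p - 1, t, knots)
    return val

def basis_sum(t, p, knots):
    # sum over all basis functions defined by the knot vector
    return sum(bspline_basis(i, p, t, knots)
               for i in range(len(knots) - p - 1))

p = 2  # quadratic basis on clamped (open) knot vectors over [0, 1]
coarse = [0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1]
# knots added only in [0.5, 1]: the kind of local enrichment a
# hierarchical scheme performs where the solution needs resolution
fine = [0, 0, 0, 0.25, 0.5, 0.625, 0.75, 0.875, 1, 1, 1]
```

Truncation in THB-splines additionally modifies the coarse functions that overlap the refined region to restore linear independence; the sketch above only shows the local knot enrichment itself.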

  16. Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.; Van Meter, James R.

    2005-01-01

    A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.

  17. Refinement of the probability density function model for preferential concentration of aerosol particles in isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Zaichik, Leonid I.; Alipchenkov, Vladimir M.

    2007-11-01

    The purposes of the paper are threefold: (i) to refine the statistical model of preferential particle concentration in isotropic turbulence that was previously proposed by Zaichik and Alipchenkov [Phys. Fluids 15, 1776 (2003)], (ii) to investigate the effect of clustering of low-inertia particles using the refined model, and (iii) to advance a simple model for predicting the collision rate of aerosol particles. The model developed is based on a kinetic equation for the two-point probability density function of the relative velocity distribution of particle pairs. Improvements in predicting the preferential concentration of low-inertia particles are attained due to refining the description of the turbulent velocity field of the carrier fluid by including a difference between the time scales of the strain-rate and rotation-rate correlations. The refined model results in a better agreement with direct numerical simulations for aerosol particles.

  18. Development of a fuel cell plug-in hybrid electric vehicle and vehicle simulator for energy management assessment

    NASA Astrophysics Data System (ADS)

    Meintz, Andrew Lee

    This dissertation offers a description of the development of a fuel cell plug-in hybrid electric vehicle focusing on the propulsion architecture selection, propulsion system control, and high-level energy management. Two energy management techniques have been developed and implemented for real-time control of the vehicle. The first method is a heuristic method that relies on a short-term moving average of the vehicle power requirements. The second method utilizes an affine function of the short-term and long-term moving average vehicle power requirements. The development process of these methods has required the creation of a vehicle simulator capable of estimating the effect of changes to the energy management control techniques on the overall vehicle energy efficiency. Furthermore, the simulator has allowed for the refinement of the energy management methods and for the stability of the method to be analyzed prior to on-road testing. This simulator has been verified through on-road testing of a constructed prototype vehicle under both highway and city driving schedules for each energy management method. The results of the finalized vehicle control strategies are compared with the simulator predictions and an assessment of the effectiveness of both strategies is discussed. The methods have been evaluated for energy consumption in the form of both hydrogen fuel and stored electricity from grid charging.
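The moving-average energy-management idea can be sketched as follows (the window lengths, blend weights, and fuel-cell power limits are hypothetical, not the dissertation's calibrated values): the fuel cell tracks a blend of the short-term and long-term average power demand, and the battery supplies or absorbs the remainder.

```python
from collections import deque

class MovingAverageEMS:
    """Toy energy-management split: the fuel cell follows an affine blend
    of short- and long-term moving averages of the demanded power, and
    the battery covers the difference. All parameters are illustrative."""

    def __init__(self, short_n=10, long_n=120, w_short=0.4, w_long=0.6,
                 fc_min=0.0, fc_max=60.0):
        self.short = deque(maxlen=short_n)   # short-term demand history
        self.long = deque(maxlen=long_n)     # long-term demand history
        self.w_short, self.w_long = w_short, w_long
        self.fc_min, self.fc_max = fc_min, fc_max

    def split(self, demand_kw):
        self.short.append(demand_kw)
        self.long.append(demand_kw)
        target = (self.w_short * sum(self.short) / len(self.short)
                  + self.w_long * sum(self.long) / len(self.long))
        fc = min(max(target, self.fc_min), self.fc_max)  # fuel-cell setpoint
        battery = demand_kw - fc                         # battery covers the rest
        return fc, battery
```

Smoothing the fuel-cell setpoint this way shields the stack from power transients, at the cost of cycling the battery, which is exactly the trade-off the simulator in the dissertation is used to evaluate before on-road testing.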

  19. Equipment. [for testing human space perception]

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A discussion is presented on the problems encountered in designing and constructing a simulator to determine human vestibular response to a range of linear accelerations from 0 to 0.3 g's. Starting with a set of initial performance specifications, the designers combined an array of commercially available components into a system which, although requiring further refinement before completion, shows considerable promise of fulfilling the initial requirements. The resulting system consists of a wheeled vehicle driven by a cable and drum arrangement, powered by a hydraulic-electric servo-valve. Technical design considerations are presented along with a discussion of the trade-offs between various component options. A description of the system characteristics as well as an analysis of preliminary test results and recommendations for future system improvements are included.

  20. Investigation of cloud properties and atmospheric stability with MODIS

    NASA Technical Reports Server (NTRS)

    Menzel, P.; Ackerman, S.; Moeller, C.; Gumley, L.; Strabala, K.; Frey, R.; Prins, E.; LaPorte, D.; Lynch, M.

    1996-01-01

    The last half year was spent in preparing Version 1 software for delivery, and culminated in transfer of the Level 2 cloud mask production software to the SDST in April. A simulated MODIS test data set with good radiometric integrity was produced using MAS data for a clear ocean scene. ER-2 flight support and MAS data processing were provided by CIMSS personnel during the Apr-May 96 SUCCESS field campaign in Salina, Kansas. Improvements have been made in the absolute calibration of the MAS, including better characterization of the spectral response for all 50 channels. Plans were laid out for validating and testing the MODIS calibration techniques; these plans were further refined during a UW calibration meeting with MCST.

  1. Clinical Outcome Metrics for Optimization of Robust Training

    NASA Technical Reports Server (NTRS)

    Ebert, D.; Byrne, V. E.; McGuire, K. M.; Hurst, V. W., IV; Kerstman, E. L.; Cole, R. W.; Sargsyan, A. E.; Garcia, K. M.; Reyes, D.; Young, M.

    2016-01-01

    Introduction: The emphasis of this research is on the Human Research Program (HRP) Exploration Medical Capability's (ExMC) "Risk of Unacceptable Health and Mission Outcomes Due to Limitations of In-Flight Medical Capabilities." Specifically, this project aims to contribute to the closure of gap ExMC 2.02: We do not know how the inclusion of a physician crew medical officer quantitatively impacts clinical outcomes during exploration missions. The experiments are specifically designed to address clinical outcome differences between physician and non-physician cohorts in both near-term and longer-term (mission impacting) outcomes. Methods: Medical simulations will systematically compare success of individual diagnostic and therapeutic procedure simulations performed by physician and non-physician crew medical officer (CMO) analogs using clearly defined short-term (individual procedure) outcome metrics. In the subsequent step of the project, the procedure simulation outcomes will be used as input to a modified version of the NASA Integrated Medical Model (IMM) to analyze the effect of the outcome (degree of success) of individual procedures (including successful, imperfectly performed, and failed procedures) on overall long-term clinical outcomes and the consequent mission impacts. The procedures to be simulated are endotracheal intubation, fundoscopic examination, kidney/urinary ultrasound, ultrasound-guided intravenous catheter insertion, and a differential diagnosis exercise. Multiple assessment techniques will be used, centered on medical procedure simulation studies occurring at 3, 6, and 12 months after initial training (as depicted in the following flow diagram of the experiment design). 
Discussion: Analysis of procedure outcomes in the physician and non-physician groups and their subsets (tested at different elapsed times post training) will allow the team to 1) define differences between physician and non-physician CMOs in terms of both procedure performance (pre-IMM analysis) and overall mitigation of the mission medical impact (IMM analysis); 2) refine the procedure outcome and clinical outcome metrics themselves; 3) refine or develop innovative medical training products and solutions to maximize CMO performance; and 4) validate the methods and products of this experiment for operational use in the planning, execution, and quality assurance of the CMO training process. The team has finalized training protocols and developed a software training/testing tool in collaboration with Butler Graphics (Detroit, MI). In addition to the "hands on" medical procedure modules, the software includes a differential diagnosis exercise (limited clinical decision support tool) to evaluate the diagnostic skills of participants. Human subject testing will occur over the next year.

  2. Testing Refinement Criteria in Adaptive Discontinuous Galerkin Simulations of Dry Atmospheric Convection

    DTIC Science & Technology

    2011-12-22

    matrix M_ik = ∫_Ωe ψ_i ψ_k dΩ; for the sake of simplicity, we did not write the dependence on x of the basis functions, although it should be understood that the ... polynomial order N throughout all the elements Ω_e in the domain Ω = ∪_{e=1..Ne} Ω_e, and if we insist that the elements have straight edges, then the matrix M^(-1) ... µ_lim to change between different elements. The total viscosity parameter for each element e is given by µ_e = max(µ_tc, µ_lim,e), (25) where µ_tc is ...

  3. Design principles for simulation games for learning clinical reasoning: A design-based research approach.

    PubMed

    Koivisto, J-M; Haavisto, E; Niemi, H; Haho, P; Nylund, S; Multisilta, J

    2018-01-01

    Nurses sometimes lack the competence needed for recognising deterioration in patient conditions and this is often due to poor clinical reasoning. There is a need to develop new possibilities for learning this crucial competence area. In addition, educators need to be future oriented; they need to be able to design and adopt new pedagogical innovations. The purpose of the study is to describe the development process and to generate principles for the design of nursing simulation games. A design-based research methodology is applied in this study. Iterative cycles of analysis, design, development, testing and refinement were conducted via collaboration among researchers, educators, students, and game designers. The study facilitated the generation of reusable design principles for simulation games to guide future designers when designing and developing simulation games for learning clinical reasoning. This study makes a major contribution to research on simulation game development in the field of nursing education. The results of this study provide important insights into the significance of involving nurse educators in the design and development process of educational simulation games for the purpose of nursing education.

  4. Coupled field effects in BWR stability simulations using SIMULATE-3K

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borkowski, J.; Smith, K.; Hagrman, D.

    1996-12-31

    The SIMULATE-3K code is the transient analysis version of the Studsvik advanced nodal reactor analysis code, SIMULATE-3. Recent developments have focused on further broadening the range of transient applications by refinement of core thermal-hydraulic models and on comparison with boiling water reactor (BWR) stability measurements performed at Ringhals unit 1, during the startups of cycles 14 through 17.

  5. SU-E-T-13: A Feasibility Study of the Use of Hybrid Computational Phantoms for Improved Historical Dose Reconstruction in the Study of Late Radiation Effects for Hodgkin's Lymphoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petroccia, H; O'Reilly, S; Bolch, W

    Purpose: Radiation-induced cancer effects are well-documented following radiotherapy. Further investigation is needed to more accurately determine a dose-response relationship for late radiation effects. Recent dosimetry studies tend to use representative patients (Taylor 2009) or anthropomorphic phantoms (Wirth 2008) for estimating organ mean doses. In this study, we compare hybrid computational phantoms to patient-specific voxel phantoms to test the accuracy of the University of Florida Hybrid Phantom Library (UFHP Library) for historical dose reconstructions. Methods: A cohort of 10 patients with CT images was used to reproduce the data that was collected historically for Hodgkin's lymphoma patients (i.e. caliper measurements and photographs). Four types of phantoms were generated to show a range of refinement from reference hybrid-computational phantom to patient-specific phantoms. Each patient is matched to a reference phantom from the UFHP Library based on height and weight. The reference phantom is refined in the anterior/posterior direction to create a ‘caliper-scaled phantom’. A photograph is simulated using a surface rendering from segmented CT images. Further refinement in the lateral direction is performed using ratios from a simulated photograph to create a ‘photograph and caliper-scaled phantom’; breast size and position is visually adjusted. Patient-specific hybrid phantoms, with matched organ volumes, are generated and show the capabilities of the UFHP Library. Reference, caliper-scaled, photograph and caliper-scaled, and patient-specific hybrid phantoms are compared with patient-specific voxel phantoms to determine the accuracy of the study. Results: Progression from reference phantom to patient-specific hybrid phantoms shows good agreement with the patient-specific voxel phantoms.
Each stage of refinement shows an overall trend of improvement in dose accuracy within the study, which suggests that computational phantoms can show improved accuracy in historical dose estimates. Conclusion: Computational hybrid phantoms show promise for improved accuracy within retrospective studies when CTs and other x-ray images are not available.

  6. Updating source term and atmospheric dispersion simulations for the dose reconstruction in Fukushima Daiichi Nuclear Power Station Accident

    NASA Astrophysics Data System (ADS)

    Nagai, Haruyasu; Terada, Hiroaki; Tsuduki, Katsunori; Katata, Genki; Ota, Masakazu; Furuno, Akiko; Akari, Shusaku

    2017-09-01

    In order to assess the radiological dose to the public resulting from the Fukushima Daiichi Nuclear Power Station (FDNPS) accident in Japan, especially for the early phase of the accident when no measured data are available for that purpose, the spatial and temporal distribution of radioactive materials in the environment are reconstructed by computer simulations. In this study, by refining the source term of radioactive materials discharged into the atmosphere and modifying the atmospheric transport, dispersion and deposition model (ATDM), the atmospheric dispersion simulation of radioactive materials is improved. Then, a database of the spatiotemporal distribution of radioactive materials in the air and on the ground surface is developed from the output of the simulation. This database is used in other studies for the dose assessment by coupling with the behavioral pattern of evacuees from the FDNPS accident. By improving the ATDM simulation to use a new meteorological model and a sophisticated deposition scheme, the ATDM simulations reproduced the ¹³⁷Cs and ¹³¹I deposition patterns well. For better reproducibility of dispersion processes, further refinement of the source term was carried out by optimizing it to the improved ATDM simulation using new monitoring data.

  7. Applied high-speed imaging for the icing research program at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Slater, Howard; Owens, Jay; Shin, Jaiwon

    1992-01-01

    The Icing Research Tunnel at NASA Lewis Research Center provides scientists a scaled, controlled environment to simulate natural icing events. The closed-loop, low speed, refrigerated wind tunnel offers the experimental capability to test for icing certification requirements, analytical model validation and calibration techniques, cloud physics instrumentation refinement, advanced ice protection systems, and rotorcraft icing methodology development. The test procedures for these objectives all require a high degree of visual documentation, both in real-time data acquisition and post-test image processing. Information is provided to scientific, technical, and industrial imaging specialists as well as to research personnel about the high-speed and conventional imaging systems used in the recent ice protection technology program. Various imaging examples for some of the tests are presented. Additional imaging examples are available from the NASA Lewis Research Center's Photographic and Printing Branch.

  8. Applied high-speed imaging for the icing research program at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Slater, Howard; Owens, Jay; Shin, Jaiwon

    1991-01-01

    The Icing Research Tunnel at NASA Lewis Research Center provides scientists a scaled, controlled environment to simulate natural icing events. The closed-loop, low speed, refrigerated wind tunnel offers the experimental capability to test for icing certification requirements, analytical model validation and calibration techniques, cloud physics instrumentation refinement, advanced ice protection systems, and rotorcraft icing methodology development. The test procedures for these objectives all require a high degree of visual documentation, both in real-time data acquisition and post-test image processing. Information is provided to scientific, technical, and industrial imaging specialists as well as to research personnel about the high-speed and conventional imaging systems used in the recent ice protection technology program. Various imaging examples for some of the tests are presented. Additional imaging examples are available from the NASA Lewis Research Center's Photographic and Printing Branch.

  9. Refinements to the Graves and Pitarka (2010) Broadband Ground Motion Simulation Method

    USGS Publications Warehouse

    Graves, Robert; Pitarka, Arben

    2015-01-01

    This brief article describes refinements to the Graves and Pitarka (2010) broadband ground motion simulation methodology (GP2010 hereafter) that have been implemented in version 14.3 of the SCEC Broadband Platform (BBP). The updated version of our method on the current SCEC BBP is referred to as GP14.3. Our simulation technique is a hybrid approach that combines low-frequency and high-frequency motions computed with different methods into a single broadband response. The separate low- and high-frequency components have traditionally been called “deterministic” and “stochastic”, respectively; however, this nomenclature is an oversimplification. In reality, the low-frequency approach includes many stochastic elements, and likewise, the high-frequency approach includes many deterministic elements (e.g., Pulido and Kubo, 2004; Hartzell et al., 2005; Liu et al., 2006; Frankel, 2009; Graves and Pitarka, 2010; Mai et al., 2010). While the traditional terminology will likely remain in use by the broader modeling community, in this paper we will refer to these using the generic terminology “low-frequency” and “high-frequency” approaches. Furthermore, one of the primary goals in refining our methodology is to provide a smoother and more consistent transition between the low- and high-frequency calculations, with the ultimate objective being the development of a single unified modeling approach that can be applied over a broad frequency band. GP2010 was validated by modeling recorded strong motions from four California earthquakes. While the method performed well overall, several issues were identified, including the tendency to over-predict the level of longer period (2-5 sec) motions and the effects of rupture directivity. The refinements incorporated in GP14.3 are aimed at addressing these issues with application to the simulation of earthquakes in the Western US (WUS). 
These refinements include the addition of a deep weak zone (details in the following section) to the rupture characterization and the allowance of perturbations in the correlation of rise time and rupture speed with the specified slip distribution. Additionally, we have extended the parameterization of GP14.3 so that it is also applicable to simulating Eastern North America (ENA) earthquakes. This work has been guided by the comprehensive set of validation studies described in Goulet and Abrahamson (2014) and Dreger et al. (2014). The GP14.3 method shows improved performance relative to GP2010, and we direct the interested reader to Dreger et al. (2014) for a detailed assessment of the current methodology. In this paper, we concentrate on describing the modifications in more detail and on discussing additional refinements that are currently being developed.
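The crossover between separately computed low- and high-frequency motions can be illustrated with complementary spectral weights. The sketch below assumes a Butterworth-style magnitude response and a 1 Hz crossover; the actual GP14.3 matched filters are more elaborate, and `fc` and `order` here are illustrative assumptions, not the method's parameters.

```python
def crossover_weights(freqs, fc=1.0, order=4):
    """Complementary low/high weights with a Butterworth-style magnitude.

    fc (crossover frequency, Hz) and order are assumed values for
    illustration; they are not taken from the GP14.3 method itself.
    """
    w_lo = [1.0 / (1.0 + (f / fc) ** (2 * order)) for f in freqs]
    w_hi = [1.0 - w for w in w_lo]  # weights sum to 1 at every frequency
    return w_lo, w_hi

def combine(low_spec, high_spec, freqs, fc=1.0):
    # Weighted sum of the separately computed spectra, bin by bin.
    w_lo, w_hi = crossover_weights(freqs, fc)
    return [wl * lo + wh * hi
            for wl, wh, lo, hi in zip(w_lo, w_hi, low_spec, high_spec)]

freqs = [0.1, 0.5, 1.0, 2.0, 10.0]
w_lo, w_hi = crossover_weights(freqs)
blended = combine([1.0] * 5, [0.0] * 5, freqs)
```

At the crossover frequency both components contribute equally; well below it the low-frequency result dominates, and well above it the high-frequency result does.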

  10. Numerical Zooming Between a NPSS Engine System Simulation and a One-Dimensional High Compressor Analysis Code

    NASA Technical Reports Server (NTRS)

    Follen, Gregory; auBuchon, M.

    2000-01-01

Within NASA's High Performance Computing and Communication (HPCC) program, NASA Glenn Research Center is developing an environment for the analysis/design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer, along with the concept of numerical zooming between zero-dimensional and one-, two-, and three-dimensional component engine codes. In addition, the NPSS is refining the computing and communication technologies necessary to capture complex physical processes in a timely and cost-effective manner. The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Of the different technology areas that contribute to the development of the NPSS environment, the subject of this paper is numerical zooming between an NPSS engine simulation and higher-fidelity representations of the engine components (fan, compressor, burner, turbines, etc.). What follows is a description of successfully zooming one-dimensional (row-by-row) high-pressure compressor analysis results back to a zero-dimensional NPSS engine simulation, and a discussion of the results illustrated using an advanced data visualization tool. This type of high-fidelity system-level analysis, made possible by the zooming capability of the NPSS, will greatly improve the capability of the engine system simulation and increase the level of virtual testing conducted prior to committing the design to hardware.

  11. Data-directed RNA secondary structure prediction using probabilistic modeling

    PubMed Central

    Deng, Fei; Ledda, Mirko; Vaziri, Sana; Aviran, Sharon

    2016-01-01

    Structure dictates the function of many RNAs, but secondary RNA structure analysis is either labor intensive and costly or relies on computational predictions that are often inaccurate. These limitations are alleviated by integration of structure probing data into prediction algorithms. However, existing algorithms are optimized for a specific type of probing data. Recently, new chemistries combined with advances in sequencing have facilitated structure probing at unprecedented scale and sensitivity. These novel technologies and anticipated wealth of data highlight a need for algorithms that readily accommodate more complex and diverse input sources. We implemented and investigated a recently outlined probabilistic framework for RNA secondary structure prediction and extended it to accommodate further refinement of structural information. This framework utilizes direct likelihood-based calculations of pseudo-energy terms per considered structural context and can readily accommodate diverse data types and complex data dependencies. We use real data in conjunction with simulations to evaluate performances of several implementations and to show that proper integration of structural contexts can lead to improvements. Our tests also reveal discrepancies between real data and simulations, which we show can be alleviated by refined modeling. We then propose statistical preprocessing approaches to standardize data interpretation and integration into such a generic framework. We further systematically quantify the information content of data subsets, demonstrating that high reactivities are major drivers of SHAPE-directed predictions and that better understanding of less informative reactivities is key to further improvements. Finally, we provide evidence for the adaptive capability of our framework using mock probe simulations. PMID:27251549
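The pseudo-energy idea can be made concrete with the widely used log-linear SHAPE conversion of Deigan et al.; note that the framework described above computes likelihood-based terms instead, so this is shown only as standard context. The slope and intercept are commonly cited default values, assumed here rather than taken from this paper.

```python
import math

def shape_pseudo_energy(reactivities, m=2.6, b=-0.8):
    """Deigan-style SHAPE pseudo-energy (kcal/mol) per nucleotide.

    dG(i) = m * ln(r_i + 1) + b. m and b are the commonly used
    published defaults, not parameters fit by the paper above.
    Negative or missing reactivities are treated as 'no data'.
    """
    out = []
    for r in reactivities:
        if r is None or r < 0:
            out.append(0.0)          # no information -> no penalty
        else:
            out.append(m * math.log(r + 1.0) + b)
    return out

energies = shape_pseudo_energy([0.0, 0.2, 1.5, None])
```

High reactivities (flexible, likely unpaired nucleotides) map to large positive pairing penalties, consistent with the abstract's observation that they are the major drivers of SHAPE-directed predictions.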

  12. Refinement and Pattern Formation in Neural Circuits by the Interaction of Traveling Waves with Spike-Timing Dependent Plasticity

    PubMed Central

    Bennett, James E. M.; Bair, Wyeth

    2015-01-01

    Traveling waves in the developing brain are a prominent source of highly correlated spiking activity that may instruct the refinement of neural circuits. A candidate mechanism for mediating such refinement is spike-timing dependent plasticity (STDP), which translates correlated activity patterns into changes in synaptic strength. To assess the potential of these phenomena to build useful structure in developing neural circuits, we examined the interaction of wave activity with STDP rules in simple, biologically plausible models of spiking neurons. We derive an expression for the synaptic strength dynamics showing that, by mapping the time dependence of STDP into spatial interactions, traveling waves can build periodic synaptic connectivity patterns into feedforward circuits with a broad class of experimentally observed STDP rules. The spatial scale of the connectivity patterns increases with wave speed and STDP time constants. We verify these results with simulations and demonstrate their robustness to likely sources of noise. We show how this pattern formation ability, which is analogous to solutions of reaction-diffusion systems that have been widely applied to biological pattern formation, can be harnessed to instruct the refinement of postsynaptic receptive fields. Our results hold for rich, complex wave patterns in two dimensions and over several orders of magnitude in wave speeds and STDP time constants, and they provide predictions that can be tested under existing experimental paradigms. Our model generalizes across brain areas and STDP rules, allowing broad application to the ubiquitous occurrence of traveling waves and to wave-like activity patterns induced by moving stimuli. PMID:26308406
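The STDP mechanism referred to above can be sketched with a standard exponential pair-based rule; the amplitudes and the 20 ms time constant below are generic textbook values, not the paper's fitted parameters.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one spike pair; dt = t_post - t_pre (ms).

    Minimal exponential STDP window for illustration only; the
    amplitudes and time constant are assumed generic values.
    """
    if dt > 0:    # pre before post -> potentiation
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # post before pre -> depression
        return -a_minus * math.exp(dt / tau)
    return 0.0

def update_weight(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    # Sum all-pairs contributions, then clip to the allowed range.
    dw = sum(stdp_dw(tp - tq) for tq in pre_spikes for tp in post_spikes)
    return min(w_max, max(w_min, w + dw))

# A pre spike 5 ms before a post spike strengthens the synapse.
w = update_weight(0.5, pre_spikes=[10.0], post_spikes=[15.0])
```

Under wave activity, neighboring presynaptic cells fire in a fixed temporal order, so a rule like this converts the wave's timing structure into the spatially periodic weight patterns analyzed in the paper.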

  13. Refinement and Pattern Formation in Neural Circuits by the Interaction of Traveling Waves with Spike-Timing Dependent Plasticity.

    PubMed

    Bennett, James E M; Bair, Wyeth

    2015-08-01

    Traveling waves in the developing brain are a prominent source of highly correlated spiking activity that may instruct the refinement of neural circuits. A candidate mechanism for mediating such refinement is spike-timing dependent plasticity (STDP), which translates correlated activity patterns into changes in synaptic strength. To assess the potential of these phenomena to build useful structure in developing neural circuits, we examined the interaction of wave activity with STDP rules in simple, biologically plausible models of spiking neurons. We derive an expression for the synaptic strength dynamics showing that, by mapping the time dependence of STDP into spatial interactions, traveling waves can build periodic synaptic connectivity patterns into feedforward circuits with a broad class of experimentally observed STDP rules. The spatial scale of the connectivity patterns increases with wave speed and STDP time constants. We verify these results with simulations and demonstrate their robustness to likely sources of noise. We show how this pattern formation ability, which is analogous to solutions of reaction-diffusion systems that have been widely applied to biological pattern formation, can be harnessed to instruct the refinement of postsynaptic receptive fields. Our results hold for rich, complex wave patterns in two dimensions and over several orders of magnitude in wave speeds and STDP time constants, and they provide predictions that can be tested under existing experimental paradigms. Our model generalizes across brain areas and STDP rules, allowing broad application to the ubiquitous occurrence of traveling waves and to wave-like activity patterns induced by moving stimuli.

  14. Toward Improving Predictability of Extreme Hydrometeorological Events: the Use of Multi-scale Climate Modeling in the Northern High Plains

    NASA Astrophysics Data System (ADS)

    Munoz-Arriola, F.; Torres-Alavez, J.; Mohamad Abadi, A.; Walko, R. L.

    2014-12-01

Our goal is to investigate possible sources of predictability of hydrometeorological extreme events in the Northern High Plains (NHP). Hydrometeorological extreme events are considered the most costly natural phenomena. Water deficits and surpluses highlight how the water-climate interdependence becomes crucial in areas where single activities, such as agriculture in the NHP, drive economies. Although we recognize the water-climate interdependence and the regulatory role that human activities play, we still grapple to identify what sources of predictability could be added to flood and drought forecasts. To identify the benefit of multi-scale climate modeling and the role of initial conditions in flood and drought predictability in the NHP, we use the Ocean Land Atmospheric Model (OLAM). OLAM is characterized by a dynamic core with a global geodesic grid with hexagonal (and variably refined) mesh cells, a finite volume discretization of the full compressible Navier-Stokes equations, and a cut-grid cell method for topography (which reduces errors in gradient computation and anomalous vertical dispersion). Our hypothesis is that wet conditions will drive OLAM's simulations of precipitation toward wetter conditions, affecting both flood and drought forecasts. To test this hypothesis we simulate precipitation during identified historical flood events followed by drought events in the NHP (i.e., the 2011-2012 years). We initialized OLAM with CFS data 1-10 days prior to a flooding event (as initial conditions) to explore (1) short-term, high-resolution and (2) long-term, coarse-resolution simulations of flood and drought events, respectively. While floods are assessed during a maximum of 15 days of refined-mesh simulations, drought is evaluated during the following 15 months. 
Simulated precipitation will be compared with the Sub-continental Observation Dataset, gridded 1/16th-degree-resolution data obtained from climatological stations in Canada, the US, and Mexico. This in-progress research will ultimately contribute to integrating the OLAM and VIC models and to improving the predictability of extreme hydrometeorological events.

  15. Hybrid Multiscale Finite Volume method for multiresolution simulations of flow and reactive transport in porous media

    NASA Astrophysics Data System (ADS)

    Barajas-Solano, D. A.; Tartakovsky, A. M.

    2017-12-01

We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine-scale) flow and transport models with lower-resolution (coarse) models to locally refine both the spatial resolution and the transport models. The fine-scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), owing to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques and its capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.

  16. Modelling and scale-up of chemical flooding: First annual report for the period October 1985-September 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G.A.; Lake, L.W.; Sepehrnoori, K.

    1987-07-01

This report consists of three parts. Part A describes the development of our chemical flood simulator UTCHEM during the past year, simulation studies, and physical property modelling and experiments. Part B is a report on the optimization and vectorization of UTCHEM on our Cray supercomputer to speed it up. Part C describes our use of UTCHEM to investigate the use of tracers for interwell reservoir tests. Part A of this Annual Report consists of five sections. In the first section, we give a general description of the simulator and recent changes in it along with a test case for a slightly compressible fluid. In the second section, we describe the major changes which were needed to add gel and alkaline reactions and give preliminary simulation results for these processes. In the third section, comparisons with a surfactant pilot field test are given. In the fourth section, process scaleup and design simulations are given and also our recent mesh refinement results. In the fifth section, experimental results and associated physical property modelling studies are reported. Part B gives our results on the speedup of UTCHEM on a Cray supercomputer. Depending on the size of the problem, this speedup factor was at least tenfold and resulted from a combination of a faster solver, vectorization, and code optimization. Part C describes our use of UTCHEM for field tracer studies and gives the results of a comparison with field tracer data on the same field (Big Muddy) as was simulated and compared with the surfactant pilot reported in section 3 of Part A. 120 figs., 37 tabs.

  17. Fuel Combustion and Engine Performance | Transportation Research | NREL

    Science.gov Websites

Through modeling, simulation, and experimental validation, researchers examine what happens to fuel inside an engine. Combustion and engine research activities include: developing experimental and simulation research platforms; developing and refining accurate, efficient kinetic mechanisms for fuel ignition; and investigating low-speed pre-ignition.

  18. Large-eddy simulation of the passage of a shock wave through homogeneous turbulence

    NASA Astrophysics Data System (ADS)

    Braun, N. O.; Pullin, D. I.; Meiron, D. I.

    2017-11-01

    The passage of a nominally plane shockwave through homogeneous, compressible turbulence is a canonical problem representative of flows seen in supernovae, supersonic combustion engines, and inertial confinement fusion. The interaction of isotropic turbulence with a stationary normal shockwave is considered at inertial range Taylor Reynolds numbers, Reλ = 100 - 2500 , using Large Eddy Simulation (LES). The unresolved, subgrid terms are approximated by the stretched-vortex model (Kosovic et al., 2002), which allows self-consistent reconstruction of the subgrid contributions to the turbulent statistics of interest. The mesh is adaptively refined in the vicinity of the shock to resolve small amplitude shock oscillations, and the implications of mesh refinement on the subgrid modeling are considered. Simulations are performed at a range of shock Mach numbers, Ms = 1.2 - 3.0 , and turbulent Mach numbers, Mt = 0.06 - 0.18 , to explore the parameter space of the interaction at high Reynolds number. The LES shows reasonable agreement with linear analysis and lower Reynolds number direct numerical simulations. LANL Subcontract 305963.

  19. Discrete Molecular Dynamics Approach to the Study of Disordered and Aggregating Proteins.

    PubMed

    Emperador, Agustí; Orozco, Modesto

    2017-03-14

We present a refinement of the coarse-grained PACSAB force field for Discrete Molecular Dynamics (DMD) simulations of proteins in aqueous conditions. Like the original version, the refined method provides a good representation of the structure and dynamics of folded proteins, but it provides much better representations of a variety of unfolded proteins, including some very large ones that are impossible to analyze by atomistic simulation methods. The PACSAB/DMD method also accurately reproduces aggregation properties, providing good pictures of the structural ensembles of proteins showing a folded core and an intrinsically disordered region. The combination of accuracy and speed makes the method presented here a good alternative for the exploration of unstructured protein systems.

  20. Simulations of inspiraling and merging double neutron stars using the Spectral Einstein Code

    NASA Astrophysics Data System (ADS)

    Haas, Roland; Ott, Christian D.; Szilagyi, Bela; Kaplan, Jeffrey D.; Lippuner, Jonas; Scheel, Mark A.; Barkett, Kevin; Muhlberger, Curran D.; Dietrich, Tim; Duez, Matthew D.; Foucart, Francois; Pfeiffer, Harald P.; Kidder, Lawrence E.; Teukolsky, Saul A.

    2016-06-01

We present results on the inspiral, merger, and postmerger evolution of a neutron star-neutron star (NSNS) system. Our results are obtained using the hybrid pseudospectral-finite volume Spectral Einstein Code (SpEC). To test our numerical methods, we evolve an equal-mass system for ≈22 orbits before merger. This waveform is the longest waveform obtained from fully general-relativistic simulations for NSNSs to date. Such long (and accurate) numerical waveforms are required to further improve semianalytical models used in gravitational wave data analysis, for example, the effective one body models. We discuss in detail the improvements to SpEC's ability to simulate NSNS mergers, in particular mesh-refined grids to better resolve the merger and postmerger phases. We provide a set of consistency checks and compare our results to NSNS merger simulations with the independent bam code. We find agreement between them, which increases confidence in results obtained with either code. This work paves the way for future studies using long waveforms and more complex microphysical descriptions of neutron star matter in SpEC.

  1. Evaluation of processes affecting 1,2-dibromo-3-chloropropane (DBCP) concentrations in ground water in the eastern San Joaquin Valley, California : analysis of chemical data and ground-water flow and transport simulations

    USGS Publications Warehouse

    Burow, Karen R.; Panshin, Sandra Y.; Dubrovsky, Neil H.; Vanbrocklin, David; Fogg, Graham E.

    1999-01-01

A conceptual two-dimensional numerical flow and transport modeling approach was used to test hypotheses addressing dispersion, transformation rate, and, in a relative sense, the effects of ground-water pumping and reapplication of irrigation water on DBCP concentrations in the aquifer. The flow and transport simulations, which represent hypothetical steady-state flow conditions in the aquifer, were used to refine the conceptual understanding of the aquifer system rather than to predict future concentrations of DBCP. Results indicate that dispersion reduces peak concentrations, but this process alone does not account for the apparent decrease in DBCP concentrations in ground water in the eastern San Joaquin Valley. Ground-water pumping and reapplication of irrigation water may affect DBCP concentrations to the extent that this process can be simulated indirectly using first-order decay. Transport simulation results indicate that the in situ 'effective' half-life of DBCP caused by processes other than dispersion and transformation to BAA could be on the order of 6 years.
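The 'effective half-life' framing above corresponds to simple first-order decay, which can be sketched as follows; the 6-year value is the study's in situ estimate for DBCP, while the initial concentration below is an arbitrary illustrative number.

```python
import math

def first_order_decay(c0, t, half_life):
    """Concentration after time t under first-order decay.

    The decay rate constant k follows from the half-life via
    k = ln(2) / t_half, so c(t) = c0 * exp(-k * t).
    """
    k = math.log(2.0) / half_life
    return c0 * math.exp(-k * t)

# With a 6-year effective half-life, any starting concentration
# halves every 6 years (times and half-life in the same units).
c6 = first_order_decay(1.0, 6.0, half_life=6.0)
c12 = first_order_decay(1.0, 12.0, half_life=6.0)
```

This is how a lumped first-order term can stand in for several unresolved removal processes at once, as the simulations above do for pumping and reapplication.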

  2. Homology Modeling of Dopamine D2 and D3 Receptors: Molecular Dynamics Refinement and Docking Evaluation

    PubMed Central

    Platania, Chiara Bianca Maria; Salomone, Salvatore; Leggio, Gian Marco; Drago, Filippo; Bucolo, Claudio

    2012-01-01

    Dopamine (DA) receptors, a class of G-protein coupled receptors (GPCRs), have been targeted for drug development for the treatment of neurological, psychiatric and ocular disorders. The lack of structural information about GPCRs and their ligand complexes has prompted the development of homology models of these proteins aimed at structure-based drug design. Crystal structure of human dopamine D3 (hD3) receptor has been recently solved. Based on the hD3 receptor crystal structure we generated dopamine D2 and D3 receptor models and refined them with molecular dynamics (MD) protocol. Refined structures, obtained from the MD simulations in membrane environment, were subsequently used in molecular docking studies in order to investigate potential sites of interaction. The structure of hD3 and hD2L receptors was differentiated by means of MD simulations and D3 selective ligands were discriminated, in terms of binding energy, by docking calculation. Robust correlation of computed and experimental Ki was obtained for hD3 and hD2L receptor ligands. In conclusion, the present computational approach seems suitable to build and refine structure models of homologous dopamine receptors that may be of value for structure-based drug discovery of selective dopaminergic ligands. PMID:22970199

  3. Performance verification and system parameter identification of spacecraft tape recorder control servo

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, A. K.

    1979-01-01

    Design adequacy of the lead-lag compensator of the frequency loop, accuracy checking of the analytical expression for the electrical motor transfer function, and performance evaluation of the speed control servo of the digital tape recorder used on-board the 1976 Viking Mars Orbiters and Voyager 1977 Jupiter-Saturn flyby spacecraft are analyzed. The transfer functions of the most important parts of a simplified frequency loop used for test simulation are described and ten simulation cases are reported. The first four of these cases illustrate the method of selecting the most suitable transfer function for the hysteresis synchronous motor, while the rest verify and determine the servo performance parameters and alternative servo compensation schemes. It is concluded that the linear methods provide a starting point for the final verification/refinement of servo design by nonlinear time response simulation and that the variation of the parameters of the static/dynamic Coulomb friction is as expected in a long-life space mission environment.

  4. Proof-of-Concept Study for Uncertainty Quantification and Sensitivity Analysis using the BRL Shaped-Charge Example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Justin Matthew

These are the slides for a graduate presentation at Mississippi State University. It covers the following: the BRL Shaped-Charge Geometry in PAGOSA, mesh refinement study, surrogate modeling using a radial basis function network (RBFN), ruling out parameters using sensitivity analysis (equation of state study), uncertainty quantification (UQ) methodology, and sensitivity analysis (SA) methodology. In summary, a mesh convergence study was used to ensure that solutions were numerically stable by comparing PDV data between simulations. A Design of Experiments (DOE) method was used to reduce the simulation space to study the effects of the Jones-Wilkins-Lee (JWL) Parameters for the Composition B main charge. Uncertainty was quantified by computing the 95% data range about the median of simulation output using a brute force Monte Carlo (MC) random sampling method. Parameter sensitivities were quantified using the Fourier Amplitude Sensitivity Test (FAST) spectral analysis method where it was determined that detonation velocity, initial density, C1, and B1 controlled jet tip velocity.
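The brute-force Monte Carlo step described above (95% data range about the median of the simulation output) can be sketched generically; the quadratic model and uniform input distribution below are stand-ins for illustration, not the PAGOSA shaped-charge setup.

```python
import random
import statistics

def mc_uncertainty(model, sampler, n=20000, seed=1):
    """Brute-force Monte Carlo UQ: 95% data range about the median.

    'model' maps one sampled input to one scalar output; 'sampler'
    draws one input from its uncertainty distribution. Both are
    placeholders here, not the actual simulation pipeline.
    """
    rng = random.Random(seed)
    outputs = sorted(model(sampler(rng)) for _ in range(n))
    lo = outputs[int(0.025 * (n - 1))]   # ~2.5th percentile
    med = statistics.median(outputs)
    hi = outputs[int(0.975 * (n - 1))]   # ~97.5th percentile
    return lo, med, hi

lo, med, hi = mc_uncertainty(lambda x: x * x,
                             lambda rng: rng.uniform(0.0, 1.0))
```

For a real study, `model` would wrap a full simulation run, which is why surrogate models such as the RBFN mentioned above are used to make the sampling affordable.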

  5. Tribocharging Lunar Soil for Electrostatic Beneficiation

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Future human lunar habitation requires using in situ materials for both structural components and oxygen production. Lunar bases must be constructed from thermal-and radiation-shielding materials that will provide significant protection from the harmful cosmic energy which normally bombards the lunar surface. In addition, shipping oxygen from Earth is weight-prohibitive, and therefore investigating the production of breathable oxygen from oxidized mineral components is a major ongoing NASA research initiative. Lunar regolith may meet the needs for both structural protection and oxygen production. Already a number of oxygen production technologies are being tested, and full-scale bricks made of lunar simulant have been sintered. The beneficiation, or separation, of lunar minerals into a refined industrial feedstock could make production processes more efficient, requiring less energy to operate and maintain and producing higher-performance end products. The method of electrostatic beneficiation used in this research charges mineral powders (lunar simulant) by contact with materials of a different composition. The simulant acquires either a positive or negative charge depending upon its composition relative to the charging material.

  6. Automatising the analysis of stochastic biochemical time-series

    PubMed Central

    2015-01-01

    Background Mathematical and computational modelling of biochemical systems has seen a lot of effort devoted to the definition and implementation of high-performance mechanistic simulation frameworks. Within these frameworks it is possible to analyse complex models under a variety of configurations, eventually selecting the best setting of, e.g., parameters for a target system. Motivation This operational pipeline relies on the ability to interpret the predictions of a model, often represented as simulation time-series. Thus, an efficient data analysis pipeline is crucial to automatise time-series analyses, bearing in mind that errors in this phase might mislead the modeller's conclusions. Results For this reason we have developed an intuitive framework-independent Python tool to automate analyses common to a variety of modelling approaches. These include assessment of useful non-trivial statistics for simulation ensembles, e.g., estimation of master equations. Intuitive and domain-independent batch scripts will allow the researcher to automatically prepare reports, thus speeding up the usual model-definition, testing and refinement pipeline. PMID:26051821

  7. Exploring the neural bases of goal-directed motor behavior using fully resolved simulations

    NASA Astrophysics Data System (ADS)

    Patel, Namu; Patankar, Neelesh A.

    2016-11-01

Undulatory swimming is an ideal problem for understanding the neural architecture for motor control and movement; a vertebrate's robust morphology and adaptive locomotive gait allow the swimmer to navigate complex environments. Simple mathematical models for neurally activated muscle contractions have been incorporated into a swimmer immersed in fluid. Muscle contractions produce bending moments which determine the swimming kinematics. The neurobiology of goal-directed locomotion is explored using fast, efficient, and fully resolved constraint-based immersed boundary simulations. Hierarchical control systems tune the strength, frequency, and duty cycle of neural activation waves to produce multifarious swimming gaits or synergies. Simulation results are used to investigate why the basal ganglia and other control systems may command a particular neural pattern to accomplish a task. Using simple neural models, the effect of proprioceptive feedback on refining the body motion is demonstrated. Lastly, the ability of a learned swimmer to successfully navigate a complex environment is tested. This work is supported by NSF CBET 1066575 and NSF CMMI 0941674.

  8. Objective Motion Cueing Criteria Investigation Based on Three Flight Tasks

    NASA Technical Reports Server (NTRS)

    Zaal, Petrus M. T.; Schroeder, Jeffery A.; Chung, William W.

    2015-01-01

This paper intends to help establish fidelity criteria to accompany the simulator motion system diagnostic test specified by the International Civil Aviation Organization. Twelve airline transport pilots flew three tasks in the NASA Vertical Motion Simulator under four different motion conditions. The experiment used three different hexapod motion configurations, each with a different tradeoff between motion filter gain and break frequency, and one large motion configuration that utilized as much of the simulator's motion space as possible. The motion condition significantly affected: 1) pilot motion fidelity ratings, and sink rate and lateral deviation at touchdown for the approach and landing task, 2) pilot motion fidelity ratings, roll deviations, maximum pitch rate, and number of stick shaker activations in the stall task, and 3) heading deviation after an engine failure in the takeoff task. Significant differences in pilot-vehicle performance were used to define initial objective motion cueing criteria boundaries. These initial fidelity boundaries show promise but need refinement.

  9. Statistical Analysis of CFD Solutions from the Third AIAA Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Morrison, Joseph H.; Hemsch, Michael J.

    2007-01-01

    The first AIAA Drag Prediction Workshop, held in June 2001, evaluated the results from an extensive N-version test of a collection of Reynolds-Averaged Navier-Stokes CFD codes. The code-to-code scatter was more than an order of magnitude larger than desired for design and experimental validation of cruise conditions for a subsonic transport configuration. The second AIAA Drag Prediction Workshop, held in June 2003, emphasized the determination of installed pylon-nacelle drag increments and grid refinement studies. The code-to-code scatter was significantly reduced compared to the first DPW, but still larger than desired. However, grid refinement studies showed no significant improvement in code-to-code scatter with increasing grid refinement. The third Drag Prediction Workshop focused on the determination of installed side-of-body fairing drag increments and grid refinement studies for clean attached flow on wing alone configurations and for separated flow on the DLR-F6 subsonic transport model. This work evaluated the effect of grid refinement on the code-to-code scatter for the clean attached flow test cases and the separated flow test cases.
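Grid refinement studies like those described above typically quantify convergence via an observed order of accuracy estimated from three systematically refined grids. A minimal sketch with manufactured (hypothetical) drag values, not DPW data:

```python
import math

def observed_order(f_fine, f_med, f_coarse, r=2.0):
    """Observed order of convergence from three uniformly refined grids.

    Standard Richardson-style estimate; r is the constant refinement
    ratio between successive grid spacings.
    """
    return math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)

# Manufactured values behaving like f(h) = f_exact + C * h^2,
# so the recovered order should be 2 (second-order scheme).
f = lambda h: 0.0250 + 0.004 * h * h   # hypothetical drag, not workshop data
p = observed_order(f(0.25), f(0.5), f(1.0))
```

When the computed order matches the scheme's formal order, the solutions are in the asymptotic range; the workshop finding that scatter did not shrink with refinement suggests many submissions were not.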

  10. Estimation of glacier surface motion by robust phase correlation and point like features of SAR intensity images

    NASA Astrophysics Data System (ADS)

    Fang, Li; Xu, Yusheng; Yao, Wei; Stilla, Uwe

    2016-11-01

For monitoring of glacier surface motion in polar and alpine areas, radar remote sensing is becoming a popular technology owing to its specific advantages of being independent of weather conditions and sunlight. In this paper we propose a method for glacier surface motion monitoring using phase correlation (PC) based on point-like features (PLF). We carry out experiments using repeat-pass TerraSAR X-band (TSX) and Sentinel-1 C-band (S1C) intensity images of the Taku glacier in the Juneau icefield located in southeast Alaska. The intensity imagery is first filtered by an improved adaptive refined Lee filter, while the effect of topographic relief is removed via the SRTM-X DEM. Then, a robust phase correlation algorithm based on singular value decomposition (SVD) and an improved random sample consensus (RANSAC) algorithm is applied to sequential PLF pairs generated by correlation using a 2D sinc function template. The approaches for glacier monitoring are validated by both simulated SAR data and real SAR data from two satellites. The results obtained from these three test datasets confirm the superiority of the proposed approach compared to standard correlation-like methods. By the use of the proposed adaptive refined Lee filter, we achieve a good balance between the suppression of noise and the preservation of local image textures. The presented phase correlation algorithm shows an accuracy better than 0.25 pixels when conducting matching tests using simulated SAR intensity images with strong noise. Quantitative 3D motions and velocities of the investigated Taku glacier during a repeat-pass period are obtained, which allows a comprehensive and reliable analysis for the investigation of large-scale glacier surface dynamics.
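The core phase correlation step can be sketched in one dimension: normalize the cross-power spectrum and invert it, and the peak of the result gives the (circular) shift. This toy uses a naive DFT and synthetic signals; the paper's implementation is 2-D with SVD-based subpixel estimation and RANSAC outlier rejection, which are not shown here.

```python
import cmath

def dft(x, inverse=False):
    """Naive O(N^2) DFT; adequate for this tiny 1-D demonstration."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def phase_correlation_shift(a, b):
    """Recover the circular shift taking signal a onto signal b.

    The normalized cross-power spectrum inverts to (ideally) a
    delta function located at the shift.
    """
    fa, fb = dft(a), dft(b)
    cross = []
    for u, v in zip(fa, fb):
        p = u.conjugate() * v
        cross.append(p / abs(p) if abs(p) > 1e-12 else 0j)
    corr = dft(cross, inverse=True)
    mags = [abs(c) for c in corr]
    return mags.index(max(mags))

a = [0.0, 1.0, 3.0, 1.0, 0.0, 0.0, 0.0, 0.0]
b = a[-3:] + a[:-3]   # a rotated right by 3 samples
shift = phase_correlation_shift(a, b)
```

Because only the spectral phase is kept, the estimate is largely insensitive to intensity differences between acquisitions, which is why the method works well on repeat-pass SAR intensity pairs.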

  11. Structural refinement of the hERG1 pore and voltage-sensing domains with ROSETTA-membrane and molecular dynamics simulations.

    PubMed

    Subbotina, Julia; Yarov-Yarovoy, Vladimir; Lees-Miller, James; Durdagi, Serdar; Guo, Jiqing; Duff, Henry J; Noskov, Sergei Yu

    2010-11-01

    The hERG1 gene (Kv11.1) encodes a voltage-gated potassium channel. Mutations in this gene lead to one form of the Long QT Syndrome (LQTS) in humans. Promiscuous binding of drugs to hERG1 is known to alter the structure/function of the channel, leading to an acquired form of the LQTS. Accordingly, the creation and validation of a reliable 3D model of the channel have been a key target in molecular cardiology and pharmacology for the last decade. Although many models have been built, they were all limited to the pore domain. In this work, a full model of the hERG1 channel is developed which includes all transmembrane segments. We tested a template-driven de-novo design with ROSETTA-membrane modeling using side-chain placements optimized by subsequent molecular dynamics (MD) simulations. Although backbone templates for the homology-modeled parts of the pore and voltage sensors were based on the available structures of the KvAP, Kv1.2 and Kv1.2-Kv2.1 chimera channels, the missing parts were modeled de-novo. The impact of several alignments on the structure of the S4 helix in the voltage-sensing domain was also tested. The final models are evaluated for consistency with reported structural elements discovered mainly on the basis of mutagenesis and electrophysiology. These structural elements include salt bridges and close contacts in the voltage-sensor domain, and the topology of the extracellular S5-pore linker compared with that established by toxin foot-printing and nuclear magnetic resonance studies. Implications of the refined hERG1 model for the binding of blockers and channel activators (potent new ligands for channel activation) are discussed. © 2010 Wiley-Liss, Inc.

  12. Numerical modelling of surface waves generated by low frequency electromagnetic field for silicon refinement process

    NASA Astrophysics Data System (ADS)

    Geža, V.; Venčels, J.; Zāģeris, Ģ.; Pavlovs, S.

    2018-05-01

    One of the most promising methods to produce solar-grade silicon (SoG-Si) is refinement via the metallurgical route. The most critical part of this route is removal of boron and phosphorus; the approach under development addresses this problem. Creating surface waves on the silicon melt surface is proposed in order to enlarge the surface area and thereby accelerate removal of boron via chemical reactions and evaporation of phosphorus. A two-dimensional numerical model is created which couples electromagnetic and fluid-dynamic simulations with free-surface dynamics. First results show behaviour similar to experimental results from the literature.

  13. Linkage of mike she to wetland-dndc for carbon budgeting and anaerobic biogeochemistry simulation

    Treesearch

    Jianbo Cui; Changsheng Li; Ge Sun; Carl Trettin

    2005-01-01

    This study reports the linkage between MIKE SHE and Wetland-DNDC for simulation of carbon dynamics and greenhouse gas (GHG) emissions in forested wetlands. Wetland-DNDC was modified by parameterizing management measures and refining anaerobic biogeochemical processes, and was then linked to the hydrological model MIKE SHE. As a preliminary application, we simulated the effect...

  14. Study of the Polarization Properties of the Crab Nebula and Pulsar with BATSE

    NASA Technical Reports Server (NTRS)

    Forrest, David J.; Vestrand, W. T.; McConnell, Mark

    1997-01-01

    Activities carried out under this proposal included: 1) development and refinements of Monte Carlo simulations of the atmospheric reflected albedo hard x-ray emissions, both unpolarized and polarized, 2) modeling and simulations of the off-axis response of the BATSE LAD detectors, and 3) comparison of our simulation results with numerous BATSE flare and cosmic burst data sets.

  15. New algorithms for field-theoretic block copolymer simulations: Progress on using adaptive-mesh refinement and sparse matrix solvers in SCFT calculations

    NASA Astrophysics Data System (ADS)

    Sides, Scott; Jamroz, Ben; Crockett, Robert; Pletzer, Alexander

    2012-02-01

    Self-consistent field theory (SCFT) for dense polymer melts has been highly successful in describing complex morphologies in block copolymers. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. The modified diffusion equations that arise as a consequence of the coarse-graining procedure in the SCF theory can be efficiently solved with a pseudo-spectral (PS) method that uses fast Fourier transforms on uniform Cartesian grids. However, PS methods can be difficult to apply in many block copolymer SCFT simulations (e.g., confinement, interface adsorption) in which small spatial regions might require finer resolution than most of the simulation grid. Progress on using new solver algorithms to address these problems will be presented. The Tech-X Chompst project aims at marrying the best of adaptive mesh refinement with linear matrix solver algorithms. The Tech-X code PolySwift++ is an SCFT simulation platform that leverages ongoing development in coupling Chombo, a package for solving PDEs via block-structured AMR calculations and embedded boundaries, with PETSc, a toolkit that includes a large assortment of sparse linear solvers.
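    The pseudo-spectral method mentioned above can be sketched in one dimension: the modified diffusion equation ∂q/∂s = ∇²q − w(x)q is advanced along the chain contour by Strang splitting, applying the field factor in real space and the diffusion factor in Fourier space. This is a generic textbook sketch, not the PolySwift++/Chombo implementation, and the grid and contour discretizations are illustrative:

```python
import numpy as np

def propagate_chain(w, ns=100, L=2 * np.pi):
    """Solve dq/ds = q_xx - w(x) q for s in [0, 1] on a periodic 1D grid
    using operator splitting: half-step of exp(-w ds/2) in real space,
    full diffusion step exp(-k^2 ds) in Fourier space, half-step again."""
    n = w.size
    ds = 1.0 / ns
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # spectral wavenumbers
    exp_w = np.exp(-0.5 * ds * w)                # real-space half-step factor
    exp_k = np.exp(-ds * k**2)                   # Fourier-space diffusion factor
    q = np.ones(n)                               # initial condition q(x, 0) = 1
    for _ in range(ns):
        q = exp_w * q
        q = np.fft.ifft(exp_k * np.fft.fft(q)).real
        q = exp_w * q
    return q
```

    The reliance on a global FFT over a uniform grid is exactly what makes local refinement awkward for PS methods, motivating the AMR plus sparse-solver approach described in the abstract.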

  16. An adaptive discontinuous Galerkin solver for aerodynamic flows

    NASA Astrophysics Data System (ADS)

    Burgess, Nicholas K.

    This work considers the accuracy, efficiency, and robustness of an unstructured high-order accurate discontinuous Galerkin (DG) solver for computational fluid dynamics (CFD). Recently, there has been a drive to reduce the discretization error of CFD simulations using high-order methods on unstructured grids. However, high-order methods are often criticized for lacking robustness and having high computational cost. The goal of this work is to investigate methods that enhance the robustness of high-order discontinuous Galerkin (DG) methods on unstructured meshes, while maintaining low computational cost and high accuracy of the numerical solutions. This work investigates robustness enhancement of high-order methods by examining effective non-linear solvers, shock capturing methods, turbulence model discretizations and adaptive refinement techniques. The goal is to develop an all-encompassing solver that can simulate a large range of physical phenomena, where all aspects of the solver work together to achieve a robust, efficient and accurate solution strategy. The components and framework for a robust high-order accurate solver that is capable of solving viscous, Reynolds-Averaged Navier-Stokes (RANS) and shocked flows are presented. In particular, this work discusses robust discretizations of the turbulence model equation used to close the RANS equations, as well as stable shock capturing strategies that are applicable across a wide range of discretization orders and to very strong shock waves. Furthermore, refinement techniques are considered as both efficiency and robustness enhancement strategies. Additionally, efficient non-linear solvers based on multigrid and Krylov subspace methods are presented. The accuracy, efficiency, and robustness of the solver are demonstrated using a variety of challenging aerodynamic test problems, which include turbulent high-lift and viscous hypersonic flows.
Adaptive mesh refinement was found to play a critical role in obtaining a robust and efficient high-order accurate flow solver. A goal-oriented error estimation technique has been developed to estimate the discretization error of simulation outputs. For high-order discretizations, it is shown that functional output error super-convergence can be obtained, provided the discretization satisfies a property known as dual consistency. The dual consistency of the DG methods developed in this work is shown via mathematical analysis and numerical experimentation. Goal-oriented error estimation is also used to drive an hp-adaptive mesh refinement strategy, where a combination of mesh or h-refinement, and order or p-enrichment, is employed based on the smoothness of the solution. The results demonstrate that the combination of goal-oriented error estimation and hp-adaptation yields superior accuracy, as well as enhanced robustness and efficiency, for a variety of aerodynamic flows including flows with strong shock waves. This work demonstrates that DG discretizations can be the basis of an accurate, efficient, and robust CFD solver. Furthermore, enhancing the robustness of DG methods does not adversely impact the accuracy or efficiency of the solver for challenging and complex flow problems. In particular, when considering the computation of shocked flows, this work demonstrates that the available shock capturing techniques are sufficiently accurate and robust, particularly when used in conjunction with adaptive mesh refinement. This work also demonstrates that robust solutions of the Reynolds-Averaged Navier-Stokes (RANS) and turbulence model equations can be obtained for complex and challenging aerodynamic flows. In this context, the most robust strategy was determined to be a low-order turbulence model discretization coupled to a high-order discretization of the RANS equations.
Although RANS solutions using high-order accurate discretizations of the turbulence model were obtained, the behavior of current-day RANS turbulence models discretized to high order was found to be problematic, leading to solver robustness issues. This suggests that future work is warranted in the area of turbulence model formulation for use with high-order discretizations. Alternatively, the use of Large-Eddy Simulation (LES) subgrid scale models with high-order DG methods offers the potential to leverage the high accuracy of these methods for very high fidelity turbulent simulations. This thesis has developed the algorithmic improvements that will lay the foundation for the development of a three-dimensional high-order flow solution strategy that can be used as the basis for future LES simulations.
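    The hp-adaptation logic described above, choosing between h-refinement and p-enrichment from the local smoothness of the solution, can be sketched per element. The modal-decay smoothness indicator and the slope threshold of -2 below are illustrative choices, not the thesis's specific estimator:

```python
import numpy as np

def hp_decision(modal_coeffs, error, tol):
    """Choose between p-enrichment and h-refinement for one element.
    Smoothness is estimated from the decay rate of the modal coefficient
    magnitudes: fast decay suggests a smooth solution (raise the order),
    slow decay (e.g. near a shock) suggests subdividing the element."""
    if error <= tol:
        return "keep"                         # element already accurate enough
    k = np.arange(1, len(modal_coeffs))
    mags = np.abs(modal_coeffs[1:]) + 1e-30   # guard against log(0)
    # fit log|c_k| ~ slope * log k; the slope is the algebraic decay rate
    slope, _ = np.polyfit(np.log(k), np.log(mags), 1)
    return "p-refine" if slope < -2.0 else "h-refine"
```

    In a goal-oriented setting, `error` would come from the adjoint-weighted residual for each element rather than a generic estimator.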

  17. 40 CFR 80.340 - What standards and requirements apply to refiners producing gasoline by blending blendstocks into...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... to refiners producing gasoline by blending blendstocks into previously certified gasoline (PCG)? 80... (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Sampling, Testing and Retention... gasoline by blending blendstocks into previously certified gasoline (PCG)? (a) Any refiner who produces...

  18. 40 CFR 80.340 - What standards and requirements apply to refiners producing gasoline by blending blendstocks into...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... to refiners producing gasoline by blending blendstocks into previously certified gasoline (PCG)? 80... (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Sampling, Testing and Retention... gasoline by blending blendstocks into previously certified gasoline (PCG)? (a) Any refiner who produces...

  19. 40 CFR 80.340 - What standards and requirements apply to refiners producing gasoline by blending blendstocks into...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... to refiners producing gasoline by blending blendstocks into previously certified gasoline (PCG)? 80... (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Sampling, Testing and Retention... gasoline by blending blendstocks into previously certified gasoline (PCG)? (a) Any refiner who produces...

  20. 40 CFR 80.340 - What standards and requirements apply to refiners producing gasoline by blending blendstocks into...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... to refiners producing gasoline by blending blendstocks into previously certified gasoline (PCG)? 80... (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Sampling, Testing and Retention... gasoline by blending blendstocks into previously certified gasoline (PCG)? (a) Any refiner who produces...

  1. 40 CFR 80.340 - What standards and requirements apply to refiners producing gasoline by blending blendstocks into...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... to refiners producing gasoline by blending blendstocks into previously certified gasoline (PCG)? 80... (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Sampling, Testing and Retention... gasoline by blending blendstocks into previously certified gasoline (PCG)? (a) Any refiner who produces...

  2. Combining Heterogeneous Correlation Matrices: Simulation Analysis of Fixed-Effects Methods

    ERIC Educational Resources Information Center

    Hafdahl, Adam R.

    2008-01-01

    Monte Carlo studies of several fixed-effects methods for combining and comparing correlation matrices have shown that two refinements improve estimation and inference substantially. With rare exception, however, these simulations have involved homogeneous data analyzed using conditional meta-analytic procedures. The present study builds on…

  3. Space station Simulation Computer System (SCS) study for NASA/MSFC. Volume 3: Refined conceptual design report

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The results of the refined conceptual design phase (task 5) of the Simulation Computer System (SCS) study are reported. The SCS is the computational portion of the Payload Training Complex (PTC), providing simulation-based training on payload operations of the Space Station Freedom (SSF). In task 4 of the SCS study, the range of architectures suitable for the SCS was explored. Identified system architectures, along with their relative advantages and disadvantages for SCS, were presented in the Conceptual Design Report. Six integrated designs, combining the most promising features from the architectural formulations, were additionally identified in the report. The six integrated designs were evaluated further to distinguish the more viable designs to be refined as conceptual designs. The three designs that were selected represent distinct approaches to achieving a capable and cost-effective SCS configuration for the PTC. Here, the results of task 4 (input to this task) are briefly reviewed. Then, prior to describing individual conceptual designs, the PTC facility configuration and the SSF systems architecture that must be supported by the SCS are reviewed. Next, basic features of SCS implementation that have been incorporated into all selected SCS designs are considered. The details of the individual SCS designs are then presented before making a final comparison of the three designs.

  4. 3D Numerical Prediction of Gas-Solid Flow Behavior in CFB Risers for Geldart A and B Particles

    NASA Astrophysics Data System (ADS)

    Özel, A.; Fede, P.; Simonin, O.

    In this study, mono-disperse flows in square risers with Geldart A- and B-type particles were simulated by an Eulerian n-fluid 3D unsteady code. Two transport equations developed in the frame of the kinetic theory of granular media, supplemented by the interstitial fluid effect and the interaction with turbulence (Balzer et al., 1996), are solved to model the effect of velocity fluctuations and inter-particle collisions on the dispersed-phase hydrodynamics. The studied flow geometries are three-dimensional vertical cold channels excluding the cyclone, buffer and return pipe of a typical circulating fluidized bed. For both types of particles, parametric studies were carried out to determine the influence of boundary conditions, physical parameters and turbulence modeling. Grid dependency was analyzed with mesh refinement in the horizontal and axial directions. For B-type particles, the results are in good qualitative agreement with the experiments, and the numerical predictions are slightly improved by mesh refinement. On the contrary, the simulations with A-type particles show less satisfactory agreement with available measurements and are highly sensitive to mesh refinement. Further studies are carried out to improve the A-type particle predictions by modeling subgrid-scale effects in the frame of the large-eddy simulation approach.

  5. Coarse Grained Model for Biological Simulations: Recent Refinements and Validation

    PubMed Central

    Vicatos, Spyridon; Rychkova, Anna; Mukherjee, Shayantani; Warshel, Arieh

    2014-01-01

    Exploring the free energy landscape of proteins and modeling the corresponding functional aspects presents a major challenge for computer simulation approaches. This challenge is due to the complexity of the landscape and the enormous computer time needed for converging simulations. The use of various simplified coarse-grained (CG) models offers an effective way of sampling the landscape, but most current models are not expected to give a reliable description of protein stability and functional aspects. The main problem is associated with insufficient focus on the electrostatic features of the model. In this respect our recent CG model offers a significant advantage, as it has been refined while focusing on its electrostatic free energy. Here we review the current state of our model, describing recent refinements, extensions and validation studies while focusing on demonstrating key applications. These include studies of protein stability, extension of the model to include membranes, electrolytes and electrodes, and studies of voltage-activated proteins, protein insertion through the translocon, the action of molecular motors, and even the coupling of the stalled ribosome and the translocon. These examples illustrate the general potential of our approach in overcoming major challenges in studies of structure-function correlation in proteins and large macromolecular complexes. PMID:25050439

  6. Processing of Lunar Soil Simulant for Space Exploration Applications

    NASA Technical Reports Server (NTRS)

    Sen, Subhayu; Ray, Chandra S.; Reddy, Ramana

    2005-01-01

    NASA's long-term vision for space exploration includes developing human habitats and conducting scientific investigations on planetary bodies, especially on the Moon and Mars. To reduce the level of up-mass, processing and utilization of planetary in-situ resources is recognized as an important element of this vision. Within this scope and context, we have undertaken a general effort aimed primarily at extracting and refining metals and developing glass, glass-ceramic, or traditional ceramic type materials using lunar soil simulants. In this paper we will present preliminary results of our effort on carbothermal reduction of oxides for elemental extraction and on zone refining for obtaining high-purity metals. In addition, we will demonstrate the possibility of developing glasses from lunar soil simulant for fixing nuclear waste from potential nuclear power generators on planetary bodies. Compositional analysis, X-ray diffraction patterns, and differential thermal analysis of processed samples will be presented.

  7. Software Prototyping: A Case Report of Refining User Requirements for a Health Information Exchange Dashboard.

    PubMed

    Nelson, Scott D; Del Fiol, Guilherme; Hanseler, Haley; Crouch, Barbara Insley; Cummins, Mollie R

    2016-01-01

    Health information exchange (HIE) between Poison Control Centers (PCCs) and Emergency Departments (EDs) could improve care of poisoned patients. However, PCC information systems are not designed to facilitate HIE with EDs; therefore, we are developing specialized software to support HIE within the normal workflow of the PCC using user-centered design and rapid prototyping. Our objective is to describe the design of an HIE dashboard and the refinement of user requirements through rapid prototyping. Using previously elicited user requirements, we designed low-fidelity sketches of designs on paper with iterative refinement. Next, we designed an interactive high-fidelity prototype and conducted scenario-based usability tests with end users, who were asked to think aloud while accomplishing tasks related to a case vignette. After testing, the users provided feedback and evaluated the prototype using the System Usability Scale (SUS). Survey results from three users provided useful feedback that was then incorporated into the design. After achieving a stable design, we used the prototype itself as the specification for development of the actual software. Benefits of prototyping included: 1) having subject-matter experts heavily involved with the design; 2) flexibility to make rapid changes; 3) the ability to minimize software development efforts early in the design stage; 4) rapid finalization of requirements; 5) early visualization of designs; and 6) a powerful vehicle for communication of the design to the programmers. Challenges included: 1) the time and effort needed to develop the prototypes and case scenarios; 2) no simulation of system performance; 3) not having all proposed functionality available in the final product; and 4) missing needed data elements in the PCC information system.

  8. The effects of elevated hearing thresholds on performance in a paintball simulation of individual dismounted combat.

    PubMed

    Sheffield, Benjamin; Brungart, Douglas; Tufts, Jennifer; Ness, James

    2017-01-01

    To examine the relationship between hearing acuity and operational performance in simulated dismounted combat. Individuals wearing hearing loss simulation systems competed in a paintball-based exercise where the objective was to be the last player remaining. Four hearing loss profiles were tested in each round (no hearing loss, mild, moderate and severe) and four rounds were played to make up a match. This allowed counterbalancing of simulated hearing loss across participants. Forty-three participants across two data collection sites (Fort Detrick, Maryland and the United States Military Academy, New York). All participants self-reported normal hearing except for two who reported mild hearing loss. Impaired hearing had a greater impact on the offensive capabilities of participants than it did on their "survival", likely due to the tendency for individuals with simulated impairment to adopt a more conservative behavioural strategy than those with normal hearing. These preliminary results provide valuable insights into the impact of impaired hearing on combat effectiveness, with implications for the development of improved auditory fitness-for-duty standards, the establishment of performance requirements for hearing protection technologies, and the refinement of strategies to train military personnel on how to use hearing protection in combat environments.

  9. Modeling, simulation, and analysis at Sandia National Laboratories for health care systems

    NASA Astrophysics Data System (ADS)

    Polito, Joseph

    1994-12-01

    Modeling, Simulation, and Analysis are special competencies of the Department of Energy (DOE) National Laboratories which have been developed and refined through years of national defense work. Today, many of these skills are being applied to the problem of understanding the performance of medical devices and treatments. At Sandia National Laboratories we are developing models at all three levels of health care delivery: (1) phenomenology models for Observation and Test, (2) model-based outcomes simulations for Diagnosis and Prescription, and (3) model-based design and control simulations for the Administration of Treatment. A sampling of specific applications includes non-invasive sensors for blood glucose, ultrasonic scanning for development of prosthetics, automated breast cancer diagnosis, laser burn debridement, surgical staple deformation, minimally invasive control for administration of a photodynamic drug, and human-friendly decision support aids for computer-aided diagnosis. These and other projects are being performed at Sandia with support from the DOE and in cooperation with medical research centers and private companies. Our objective is to leverage government engineering, modeling, and simulation skills with the biotechnical expertise of the health care community to create a more knowledge-rich environment for decision making and treatment.

  10. Ventilation tube insertion simulation: a literature review and validity assessment of five training models.

    PubMed

    Mahalingam, S; Awad, Z; Tolley, N S; Khemani, S

    2016-08-01

    The objective of this study was to identify and investigate the face and content validity of ventilation tube insertion (VTI) training models described in the literature. A review of the literature was carried out to identify articles describing VTI simulators. Feasible models were replicated and assessed by a group of experts at a postgraduate simulation centre. Experts were defined as surgeons who had performed at least 100 VTIs on patients. Seventeen experts participated, ensuring sufficient statistical power for the analysis. A standardised 18-item Likert-scale questionnaire was used, addressing face validity (realism), global and task-specific content (suitability of the model for teaching), and curriculum recommendation. The search revealed eleven models, of which only five had associated validity data. Five models were found to be feasible to replicate. None of the tested models achieved face or global content validity. Only one model achieved task-specific validity, and hence there was no agreement on curriculum recommendation. The quality of available simulation models is moderate and there is room for improvement. There is a need for new models to be developed, or existing ones refined, in order to construct a more realistic training platform for VTI simulation. © 2015 John Wiley & Sons Ltd.

  11. Numerical relativity simulations of neutron star merger remnants using conservative mesh refinement

    NASA Astrophysics Data System (ADS)

    Dietrich, Tim; Bernuzzi, Sebastiano; Ujevic, Maximiliano; Brügmann, Bernd

    2015-06-01

    We study equal- and unequal-mass neutron star mergers by means of new numerical relativity simulations in which the general relativistic hydrodynamics solver employs an algorithm that guarantees mass conservation across the refinement levels of the computational mesh. We consider eight binary configurations with total mass M = 2.7 M⊙, mass ratios q = 1 and q = 1.16, four different equations of state (EOSs), and one configuration with a stiff EOS, M = 2.5 M⊙ and q = 1.5, which is one of the largest mass ratios simulated in numerical relativity to date. We focus on the postmerger dynamics and study the merger remnant, the dynamical ejecta, and the postmerger gravitational wave spectrum. Although most of the merger remnants are a hypermassive neutron star collapsing to a black hole + disk system on dynamical time scales, stiff EOSs can eventually produce a stable massive neutron star. During the merger process and on very short time scales, about ~10⁻³ to 10⁻² M⊙ of material becomes unbound with kinetic energies of ~10⁵⁰ erg. Ejecta are mostly emitted around the orbital plane and favored by large mass ratios and softer EOSs. The postmerger wave spectrum is mainly characterized by the nonaxisymmetric oscillations of the remnant neutron star. The stiff EOS configuration consisting of a 1.5 M⊙ and a 1.0 M⊙ neutron star, simulated here for the first time, shows rather peculiar dynamics. During merger the companion star is strongly deformed; about ~0.03 M⊙ of the rest mass becomes unbound from the tidal tail due to the torque generated by the two-core inner structure. The merger remnant is a stable neutron star surrounded by a massive accretion disk of rest mass ~0.3 M⊙. This and similar configurations might be particularly interesting for electromagnetic counterparts.
Comparing results obtained with and without the conservative mesh refinement algorithm, we find that postmerger simulations can be affected by systematic errors if mass conservation is not enforced in the mesh refinement strategy. However, mass conservation also depends on grid details and on the artificial atmosphere setup; the latter are particularly significant in the computation of the dynamical ejecta.
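    The mass-conservation constraint across refinement levels can be illustrated with a conservative restriction operator: on a uniform grid with refinement ratio r, averaging each r x r patch of fine cells into its coarse parent keeps the volume-integrated mass identical on both levels. This is a generic sketch of the principle, not the production algorithm used in these relativistic simulations:

```python
import numpy as np

def restrict_conservative(fine, r=2):
    """Average each r x r block of fine cells into one coarse cell.
    Because each coarse cell has r**2 times the volume of a fine cell,
    the volume-weighted total (mass) is identical on both levels."""
    ny, nx = fine.shape
    return fine.reshape(ny // r, r, nx // r, r).mean(axis=(1, 3))
```

    A non-conservative interpolation at the level boundaries would let mass leak between levels, which is exactly the systematic error the paper reports when conservation is not enforced.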

  12. Computations of Unsteady Viscous Compressible Flows Using Adaptive Mesh Refinement in Curvilinear Body-fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Modiano, David; Colella, Phillip

    1994-01-01

    A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.
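    The first step of the Berger and Colella AMR algorithm, flagging cells that need refinement and clustering them into rectangular patches, can be sketched minimally. Here an undivided-gradient criterion flags cells and a single bounding box stands in for the full clustering into topologically rectangular blocks (assumed threshold and criterion are illustrative):

```python
import numpy as np

def flag_and_box(u, tol):
    """Flag cells whose undivided gradient exceeds tol, then return the
    bounding box of the flagged region as one rectangular refinement
    patch (a minimal stand-in for the Berger-Colella clustering step)."""
    gx = np.abs(np.diff(u, axis=1))     # jumps between x-neighbors
    gy = np.abs(np.diff(u, axis=0))     # jumps between y-neighbors
    flags = np.zeros(u.shape, dtype=bool)
    flags[:, :-1] |= gx > tol
    flags[:-1, :] |= gy > tol
    iy, ix = np.nonzero(flags)
    return (iy.min(), iy.max()), (ix.min(), ix.max())
```

    In the real algorithm the flagged cells are split into many efficient rectangular blocks, each holding thousands of cells, which is what keeps the data-management overhead small on vector machines.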

  13. Geographic profiling applied to testing models of bumble-bee foraging.

    PubMed

    Raine, Nigel E; Rossmo, D Kim; Le Comber, Steven C

    2009-03-06

    Geographic profiling (GP) was originally developed as a statistical tool to help police forces prioritize lists of suspects in investigations of serial crimes. GP uses the location of related crime sites to make inferences about where the offender is most likely to live, and has been extremely successful in criminology. Here, we show how GP is applicable to experimental studies of animal foraging, using the bumble-bee Bombus terrestris. GP techniques enable us to simplify complex patterns of spatial data down to a small number of parameters (2-3) for rigorous hypothesis testing. Combining computer model simulations and experimental observation of foraging bumble-bees, we demonstrate that GP can be used to discriminate between foraging patterns resulting from (i) different hypothetical foraging algorithms and (ii) different food item (flower) densities. We also demonstrate that combining experimental and simulated data can be used to elucidate animal foraging strategies: specifically that the foraging patterns of real bumble-bees can be reliably discriminated from three out of nine hypothetical foraging algorithms. We suggest that experimental systems, like foraging bees, could be used to test and refine GP model predictions, and that GP offers a useful technique to analyse spatial animal behaviour data in both the laboratory and field.
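    The GP scoring idea can be sketched with a Rossmo-style distance-decay surface: every grid cell is scored as a candidate anchor point (home or nest) from the observed site locations, with probability falling off with distance beyond a "buffer zone" and suppressed inside it. The parameter values B, f, g below are illustrative defaults, not the calibrated 2-3 parameters fitted in the study:

```python
import numpy as np

def rossmo_score(sites, grid_n=50, B=3.0, f=1.2, g=1.2):
    """Score each cell of a grid_n x grid_n grid as a candidate anchor
    point given observed site coordinates, using a two-part kernel:
    power-law decay beyond the buffer radius B, suppression within it."""
    score = np.zeros((grid_n, grid_n))
    for yi in range(grid_n):
        for xi in range(grid_n):
            for sy, sx in sites:
                d = abs(yi - sy) + abs(xi - sx)   # Manhattan distance
                if d > B:
                    score[yi, xi] += 1.0 / d ** f                      # decay
                else:
                    score[yi, xi] += B ** (g - f) / max(2 * B - d, 1.0) ** g
    return score
```

    The cell with the highest score is the model's best guess for the anchor point; comparing such surfaces against simulated foraging algorithms is the kind of hypothesis test the paper performs.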

  14. A Facility and Architecture for Autonomy Research

    NASA Technical Reports Server (NTRS)

    Pisanich, Greg; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Autonomy is a key enabling factor in the advancement of remote robotic exploration. There is currently a large gap between autonomy software at the research level and software that is ready for insertion into near-term space missions. The Mission Simulation Facility (MSF) will bridge this gap by providing a simulation framework and a suite of simulation tools to support research in autonomy for remote exploration. This system will allow developers of autonomy software to test their models in a high-fidelity simulation and evaluate their system's performance against a set of integrated, standardized simulations. The Mission Simulation ToolKit (MST) uses a distributed architecture with a communication layer built on top of the standardized High Level Architecture (HLA). This architecture enables the use of existing high-fidelity models, allows mixing of simulation components from various computing platforms, and enforces the use of a standardized high-level interface among components. The components needed to achieve a realistic simulation can be grouped into four categories: environment generation (terrain, environmental features), robotic platform behavior (robot dynamics), instrument models (camera/spectrometer/etc.), and data analysis. The MST will provide basic components in these areas but allows users to easily plug in any refined model by means of a communication protocol. Finally, a description file defines the robot and environment parameters for easy configuration and ensures that all the simulation models share the same information.

  15. Application of the Local Grid Refinement package to an inset model simulating the interactions of lakes, wells, and shallow groundwater, northwestern Waukesha County, Wisconsin

    USGS Publications Warehouse

    Feinstein, D.T.; Dunning, C.P.; Juckem, P.F.; Hunt, R.J.

    2010-01-01

    Groundwater use from shallow, high-capacity wells is expected to increase across southeastern Wisconsin in the next decade (2010-2020), owing to residential and business growth and the need for shallow water to be blended with deeper water of lesser quality, containing, for example, excessive levels of radium. However, this increased pumping has the potential to affect surface-water features. A previously developed regional groundwater-flow model for southeastern Wisconsin was used as the starting point for a new model to characterize the hydrology of part of northwestern Waukesha County, with a particular focus on the relation between the shallow aquifer and several area lakes. An inset MODFLOW model was embedded in an updated version of the original regional model. Modifications made within the inset model domain include finer grid resolution; representation of Beaver, Pine, and North Lakes by use of the LAK3 package in MODFLOW; and representation of selected stream reaches with the SFR package. Additionally, the inset model is actively linked to the regional model by use of the recently released Local Grid Refinement package for MODFLOW-2005, which allows changes at the regional scale to propagate to the local scale and vice versa. The calibrated inset model was used to simulate the hydrologic system in the Chenequa area under various weather and pumping conditions. The simulated model results for base conditions show that groundwater is the largest inflow component for Beaver Lake (equal to 59 percent of total inflow). For Pine and North Lakes, it is still an important component (equal, respectively, to 16 and 5 percent of total inflow), but for both lakes it is less than the contribution from precipitation and surface water. Severe drought conditions (simulated in a rough way by reducing both precipitation and recharge rates for 5 years to two-thirds of base values) cause correspondingly severe reductions in lake stage and flows. 
The addition of a test well south of Chenequa at a pumping rate of 47 gal/min from a horizon approximately 200 feet below land surface has little effect on lake stages or flows even after 5 years of pumping. In these scenarios, the stage and the surface-water outflow from Pine Lake are simulated to decrease by only 0.03 feet and 3 percent, respectively, relative to base conditions. Likely explanations for these limited effects are the modest pumping rate simulated, the depth of the test well, and the large transmissivity of the unconsolidated aquifer, which allows the well to draw water from upstream along the bedrock valley and to capture inflow from the Bark River. However, if the pumping rate of the test well is assumed to increase to 200 gal/min, the decrease in simulated Pine Lake outflow is appreciably larger, dropping by 14 percent relative to base-flow conditions.

  16. Thoughts on the chimera method of simulation of three-dimensional viscous flow

    NASA Technical Reports Server (NTRS)

    Steger, Joseph L.

    1991-01-01

    The chimera overset grid is reviewed and discussed relative to other procedures for simulating flow about complex configurations. It is argued that while more refinement of the technique is needed, current schemes are competitive to unstructured grid schemes and should ultimately prove more useful.

  17. Theoretical studies of solar lasers and converters

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.

    1990-01-01

    The research described consisted of developing and refining the continuous flow laser model program including the creation of a working model. The mathematical development of a two pass amplifier for an iodine laser is summarized. A computer program for the amplifier's simulation is included with output from the simulation model.

  18. Verification of an analytic modeler for capillary pump loop thermal control systems

    NASA Technical Reports Server (NTRS)

    Schweickart, R. B.; Neiswanger, L.; Ku, J.

    1987-01-01

    A number of computer programs have been written to model two-phase heat transfer systems for space use. These programs support the design of thermal control systems and provide a method of predicting their performance in the wide range of thermal environments of space. Predicting the performance of one such system known as the capillary pump loop (CPL) is the intent of the CPL Modeler. By modeling two developed CPL systems and comparing the results with actual test data, the CPL Modeler has proven useful in simulating CPL operation. Results of the modeling effort are discussed, together with plans for refinements to the modeler.

  19. Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations

    NASA Astrophysics Data System (ADS)

    Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.

    2012-09-01

    Computer simulations are important in current cosmological research. These simulations run in parallel on thousands of processors and produce huge amounts of data. Adaptive mesh refinement is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses octree adaptive mesh refinement. Compared to grid-based AMR, octree AMR has the advantage of fitting the adaptive resolution of the grid very precisely to the local problem complexity. However, this specific octree data type needs dedicated software to be visualized, as generic visualization tools work on Cartesian grid data. This is why our team has also developed the PYMSES software. It relies on the Python scripting language to ensure modular and easy access for exploring these specific data. In order to take advantage of the high-performance computer that runs the RAMSES simulation, it also uses MPI and multiprocessing to run parallel code. We present our PYMSES software in more detail, together with performance benchmarks. PYMSES currently has two visualization techniques that work directly on the AMR: the first is a splatting technique, and the second is a custom ray-tracing technique; each has its own advantages and drawbacks. We have also compared two parallel programming techniques, the Python multiprocessing library versus an MPI run. The load-balancing strategy has to be defined carefully in order to achieve a good speed-up in the computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.
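
    The octree principle - cells subdivide only where a refinement criterion fires, so resolution tracks local complexity - can be sketched in a few lines of Python. The criterion and geometry here are invented toys; RAMSES uses physically motivated refinement criteria:

```python
def refine(center, size, level, max_level, needs_refinement):
    # Recursively build an octree: a cell splits into 8 children when the
    # criterion fires, otherwise it stays a leaf.
    if level == max_level or not needs_refinement(center, size):
        return [(center, size)]
    leaves, h = [], size / 4.0
    for dx in (-h, h):
        for dy in (-h, h):
            for dz in (-h, h):
                child = (center[0] + dx, center[1] + dy, center[2] + dz)
                leaves += refine(child, size / 2.0, level + 1,
                                 max_level, needs_refinement)
    return leaves

# Toy criterion: refine cells whose center lies close to a density clump
# at the origin (closer than one cell size).
clumpy = lambda c, s: sum(x * x for x in c) ** 0.5 < s
leaves = refine((0.5, 0.5, 0.5), 1.0, 0, 4, clumpy)
```

    The leaf list ends up mostly coarse cells plus a chain of fine cells around the clump, which is exactly the irregular data layout that makes generic Cartesian-grid viewers unsuitable.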

  20. Auto-DR and Pre-cooling of Buildings at Tri-City Corporate Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yin, Rongxin; Xu, Peng; Kiliccote, Sila

    2008-11-01

    Over the past several years, Lawrence Berkeley National Laboratory (LBNL) has conducted field tests of different pre-cooling strategies in commercial buildings within California. The test results indicated that pre-cooling strategies were effective in reducing electric demand in these buildings during peak periods. This project studied how to optimize pre-cooling strategies for eleven buildings in the Tri-City Corporate Center, San Bernardino, California, with the assistance of a building energy simulation tool, the Demand Response Quick Assessment Tool (DRQAT), developed by LBNL's Demand Response Research Center and funded by the California Energy Commission's Public Interest Energy Research (PIER) Program. From the simulation results for these eleven buildings, optimal pre-cooling and temperature-reset strategies were developed. The study shows that after refining and calibrating initial models with measured data, the accuracy of the models can be greatly improved, and the models can be used to predict load reductions for automated demand response (Auto-DR) events. This study summarizes the optimization experience and the procedure used to develop and calibrate building models in DRQAT. In order to confirm the actual effect of demand response strategies, the simulation results were compared to the field test data. The results indicated that the optimal demand response strategies worked well for all buildings in the Tri-City Corporate Center. This study also compares DRQAT with other building energy simulation tools (eQUEST and BEST). The comparison indicates that eQUEST and BEST underestimate the actual demand shed of the pre-cooling strategies due to a flaw in the DOE-2 simulation engine's treatment of wall thermal mass. DRQAT is a more accurate tool for predicting thermal-mass effects during DR events.

  1. Refinement of Ferrite Grain Size near the Ultrafine Range by Multipass, Thermomechanical Compression

    NASA Astrophysics Data System (ADS)

    Patra, S.; Neogy, S.; Kumar, Vinod; Chakrabarti, D.; Haldar, A.

    2012-11-01

    Plane-strain compression testing was carried out on a Nb-Ti-V microalloyed steel in a Gleeble 3500 simulator using different amounts of roughing, intermediate, and finishing deformation over the temperature range of 1373 K to 1073 K (1100 °C to 800 °C). A decrease in soaking temperature from 1473 K to 1273 K (1200 °C to 1000 °C) offered marginal refinement in the ferrite (α) grain size, from 7.8 to 6.6 μm. Heavy deformation using multiple passes between Ae3 and Ar3 with a true strain of 0.8 to 1.2 effectively refined the α grain size (4.1 to 3.2 μm) close to the ultrafine size by dynamic-strain-induced austenite (γ) → ferrite (α) transformation (DSIT). The intensity of microstructural banding, the pearlite fraction in the microstructure (13 pct), and the fraction of the harmful "cube" texture component (5 pct) were reduced with the increase in finishing deformation. Simultaneously, the fractions of high-angle (>15 deg misorientation) boundaries (75 to 80 pct) and of beneficial gamma-fiber (ND//<111>) texture components, along with the {332}<133> and {554}<225> components, were increased. Grain refinement and the formation of small Fe3C particles (50- to 600-nm size) increased the hardness of the deformed samples (184 to 192 HV). For the same deformation temperature [1103 K (830 °C)], the difference in α-grain sizes obtained after single-pass (2.7 μm) and multipass compression (3.2 μm) can be explained in view of the static- and dynamic-strain-induced γ → α transformation, strain partitioning between γ and α, dynamic recovery and dynamic recrystallization of the deformed α, and α-grain growth during interpass intervals.

  2. Divide and conquer approach to quantum Hamiltonian simulation

    NASA Astrophysics Data System (ADS)

    Hadfield, Stuart; Papageorgiou, Anargyros

    2018-04-01

    We show a divide and conquer approach for simulating quantum mechanical systems on quantum computers. We can obtain fast simulation algorithms using Hamiltonian structure. Considering a sum of Hamiltonians we split them into groups, simulate each group separately, and combine the partial results. Simulation is customized to take advantage of the properties of each group, and hence yield refined bounds to the overall simulation cost. We illustrate our results using the electronic structure problem of quantum chemistry, where we obtain significantly improved cost estimates under very mild assumptions.
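
    The splitting mechanism can be illustrated with a minimal first-order product formula: evolve each group of Hamiltonian terms separately for short time slices and interleave the results. This 2x2 NumPy toy (Pauli X and Z as the two "groups") shows only the classical analogue of the idea, not the paper's algorithm or its cost analysis:

```python
import numpy as np

def evolve(H, t):
    # Exact unitary exp(-iHt) for a Hermitian H via eigendecomposition.
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

def split_evolve(groups, t, steps):
    # First-order splitting: alternate short evolutions under each group.
    U = np.eye(groups[0].shape[0], dtype=complex)
    for _ in range(steps):
        for H in groups:
            U = evolve(H, t / steps) @ U
    return U

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
U_exact = evolve(X + Z, 0.5)
U_split = split_evolve([X, Z], 0.5, steps=200)
err = np.linalg.norm(U_exact - U_split)   # shrinks as steps grows
```

    Customizing the simulation of each group to its structure, rather than evolving the full sum uniformly, is what yields the refined cost bounds the abstract describes.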

  3. Refinement of the experimental dynamic structure factor for liquid para-hydrogen and ortho-deuterium using semi-classical quantum simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kyle K. G., E-mail: kylesmith@utexas.edu; Rossky, Peter J., E-mail: peter.rossky@austin.utexas.edu; Poulsen, Jens Aage, E-mail: jens72@chem.gu.se

    The dynamic structure factor of liquid para-hydrogen and ortho-deuterium in corresponding thermodynamic states (T = 20.0 K, n = 21.24 nm⁻³) and (T = 23.0 K, n = 24.61 nm⁻³), respectively, has been computed by both the Feynman-Kleinert linearized path-integral (FK-LPI) and Ring-Polymer Molecular Dynamics (RPMD) methods and compared with inelastic X-ray scattering spectra. The combined use of computational and experimental methods enabled us to reduce experimental uncertainties in the determination of the true sample spectrum. Furthermore, the refined experimental spectrum of para-hydrogen and ortho-deuterium is consistently reproduced by both FK-LPI and RPMD results at momentum transfers lower than 12.8 nm⁻¹. At larger momentum transfers the FK-LPI results agree with experiment much better for ortho-deuterium than for para-hydrogen. More specifically, we found that for k ∼ 20.0 nm⁻¹ para-hydrogen provides a test case for improved approximations to quantum dynamics.

  4. A sensitivity study of the ground hydrologic model using data generated by an atmospheric general circulation model. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Sun, S. F.

    1985-01-01

    The Ground Hydrologic Model (GHM) developed for use in an atmospheric general circulation model (GCM) has been refined. A series of sensitivity studies of the new version of the GHM was conducted for the purpose of understanding the role played by various physical parameters in the GHM. The following refinements have been made: (1) the GHM is coupled directly with the planetary boundary layer (PBL); (2) a bulk vegetation layer is added with a more realistic large-scale parameterization; and (3) the infiltration rate is modified. This version of the GHM has been tested using input data derived from a GCM simulation run for eight North American regions for 45 days. The results are compared with those of the resident GHM in the GCM. The daily averages of grid surface temperatures from both models agree reasonably well in phase and magnitude. However, large differences exist in one or two regions on some days. The daily average evapotranspiration is in general 10 to 30% less than the corresponding value given by the resident GHM.

  5. Properties of Base Stocks Obtained from Used Engine Oils by Acid/Clay Re-refining (Proprietes des Stocks de Base Obtenus par Regeneration des Huiles a Moteur Usees par le Procede de Traitement a l’Acide et a la Terre),

    DTIC Science & Technology

    1980-09-01

    National Research Council Canada (Conseil national de recherches Canada), Mechanical Engineering Report MP75, NRC No. 18719: Properties of Base Stocks Obtained from Used Engine Oils by Acid/Clay Re-refining. The available record consists of scanned front matter and table-of-contents fragments, including physical test data of acid/clay-process re-refined base stock oils.

  6. Perceptually relevant parameters for virtual listening simulation of small room acoustics

    PubMed Central

    Zahorik, Pavel

    2009-01-01

    Various physical aspects of room-acoustic simulation techniques have been extensively studied and refined, yet the perceptual attributes of the simulations have received relatively little attention. Here a method of evaluating the perceptual similarity between rooms is described and tested using 15 small-room simulations based on binaural room impulse responses (BRIRs) either measured from a real room or estimated using simple geometrical acoustic modeling techniques. Room size and surface absorption properties were varied, along with aspects of the virtual simulation including the use of individualized head-related transfer function (HRTF) measurements for spatial rendering. Although differences between BRIRs were evident in a variety of physical parameters, a multidimensional scaling analysis revealed that when at-the-ear signal levels were held constant, the rooms differed along just two perceptual dimensions: one related to reverberation time (T60) and one related to interaural coherence (IACC). Modeled rooms were found to differ from measured rooms in this perceptual space, but the differences were relatively small and should be easily correctable through adjustment of T60 and IACC in the model outputs. Results further suggest that spatial rendering using individualized HRTFs offers little benefit over nonindividualized HRTF rendering for room simulation applications where source direction is fixed. PMID:19640043
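
    The dimensional analysis step can be sketched with classical (Torgerson) MDS, which embeds items from their pairwise dissimilarities. The "rooms" below are hypothetical points, and the study's actual scaling procedure may differ in detail:

```python
import numpy as np

def classical_mds(D, k=2):
    # Double-center the squared distance matrix, then embed using the
    # top-k eigenvectors of the resulting Gram matrix.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Hypothetical "rooms" laid out in a 2-D perceptual space:
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
emb = classical_mds(D, k=2)
D_rec = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
```

    With exact two-dimensional input the embedding reproduces the dissimilarities, mirroring the finding that two dimensions (one T60-like, one IACC-like) sufficed to describe the rooms.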

  7. JT9D ceramic outer air seal system refinement program

    NASA Technical Reports Server (NTRS)

    Gaffin, W. O.

    1982-01-01

    The abradability and durability characteristics of the plasma sprayed system were improved by refinement and optimization of the plasma spray process and the metal substrate design. The acceptability of the final seal system for engine testing was demonstrated by an extensive rig test program which included thermal shock tolerance, thermal gradient, thermal cycle, erosion, and abradability tests. An interim seal system design was also subjected to 2500 endurance test cycles in a JT9D-7 engine.

  8. Virtual reality: emerging role of simulation training in vascular access.

    PubMed

    Davidson, Ingemar J A; Lok, Charmaine; Dolmatch, Bart; Gallieni, Maurizio; Nolen, Billy; Pittiruti, Mauro; Ross, John; Slakey, Douglas

    2012-11-01

    Evolving new technologies in vascular access mandate increased attention to patient safety; an often overlooked yet valuable training tool is simulation. For the end-stage renal disease patient, simulation tools are effective for all aspects of creating access for peritoneal dialysis and hemodialysis. Based on aviation principles known as crew resource management, we place equal emphasis on team training and individual training to improve interactions between team members and systems, culminating in improved safety. Simulation allows for environmental control and standardized procedures, letting the trainee practice and correct mistakes without harm to patients, compared with traditional patient-based training. Vascular access simulators range from suture devices, to pressurized tunneled conduits for needle cannulation, to computer-based interventional simulators. Simulation training includes simulated case learning, root cause analysis of adverse outcomes, and continual update and refinement of concepts. Implementation of effective human interaction with complex systems in the care of end-stage renal disease patients involves a change in institutional culture. Three concepts discussed in this article are as follows: (1) the need for user-friendly systems and technology to enhance performance, (2) the necessity for members to both train and work together as a team, and (3) the team assigned to use the system must test and practice it to a proficient level before safely using the system on patients. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. A Taxonomy of Delivery and Documentation Deviations During Delivery of High-Fidelity Simulations.

    PubMed

    McIvor, William R; Banerjee, Arna; Boulet, John R; Bekhuis, Tanja; Tseytlin, Eugene; Torsher, Laurence; DeMaria, Samuel; Rask, John P; Shotwell, Matthew S; Burden, Amanda; Cooper, Jeffrey B; Gaba, David M; Levine, Adam; Park, Christine; Sinz, Elizabeth; Steadman, Randolph H; Weinger, Matthew B

    2017-02-01

    We developed a taxonomy of simulation delivery and documentation deviations noted during a multicenter, high-fidelity simulation trial that was conducted to assess practicing physicians' performance. Eight simulation centers sought to implement standardized scenarios over 2 years. Rules, guidelines, and detailed scenario scripts were established to facilitate reproducible scenario delivery; however, pilot trials revealed deviations from those rubrics. A taxonomy with hierarchically arranged terms that define a lack of standardization of simulation scenario delivery was then created to aid educators and researchers in assessing and describing their ability to reproducibly conduct simulations. Thirty-six types of delivery or documentation deviations were identified from the scenario scripts and study rules. Using a Delphi technique and open card sorting, simulation experts formulated a taxonomy of high-fidelity simulation execution and documentation deviations. The taxonomy was iteratively refined and then tested by 2 investigators not involved with its development. The taxonomy has 2 main classes, simulation center deviation and participant deviation, which are further subdivided into as many as 6 subclasses. Inter-rater classification agreement using the taxonomy was 74% or greater for each of the 7 levels of its hierarchy. Cohen kappa calculations confirmed substantial agreement beyond that expected by chance. All deviations were classified within the taxonomy. This is a useful taxonomy that standardizes terms for simulation delivery and documentation deviations, facilitates quality assurance in scenario delivery, and enables quantification of the impact of deviations upon simulation-based performance assessment.
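
    The chance-corrected agreement statistic reported above can be computed directly. A minimal sketch with invented rating labels (the study's classes and data are not reproduced here):

```python
def cohen_kappa(r1, r2):
    # Observed agreement corrected for the agreement expected by chance
    # from each rater's marginal label frequencies.
    n = len(r1)
    labels = set(r1) | set(r2)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Two hypothetical raters classifying five deviations:
r1 = ["center", "center", "participant", "center", "participant"]
r2 = ["center", "participant", "participant", "center", "participant"]
kappa = cohen_kappa(r1, r2)
```

    Values well above zero, as in the taxonomy study, indicate agreement substantially beyond what the raters' marginal frequencies alone would produce.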

  10. Cognitive diagnosis modelling incorporating item response times.

    PubMed

    Zhan, Peida; Jiao, Hong; Liao, Dandan

    2018-05-01

    To provide more refined diagnostic feedback with collateral information in item response times (RTs), this study proposed joint modelling of attributes and response speed using item responses and RTs simultaneously for cognitive diagnosis. For illustration, an extended deterministic input, noisy 'and' gate (DINA) model was proposed for joint modelling of responses and RTs. Model parameter estimation was explored using the Bayesian Markov chain Monte Carlo (MCMC) method. The PISA 2012 computer-based mathematics data were analysed first. These real data estimates were treated as true values in a subsequent simulation study. A follow-up simulation study with ideal testing conditions was conducted as well to further evaluate model parameter recovery. The results indicated that model parameters could be well recovered using the MCMC approach. Further, incorporating RTs into the DINA model would improve attribute and profile correct classification rates and result in more accurate and precise estimation of the model parameters. © 2017 The British Psychological Society.
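
    The DINA item response function at the heart of the model is compact enough to sketch. The Q-matrix row and slip/guess values below are invented; the paper's extension additionally models response times, which is omitted here:

```python
def dina_prob(alpha, q_row, slip, guess):
    # A respondent answers correctly with probability (1 - slip) when they
    # master every attribute the item requires (its Q-matrix row), and with
    # the guessing probability otherwise.
    eta = all(a >= q for a, q in zip(alpha, q_row))
    return (1 - slip) if eta else guess

q = [1, 0, 1]          # hypothetical item requiring attributes 1 and 3
p_master = dina_prob([1, 0, 1], q, slip=0.1, guess=0.2)
p_nonmaster = dina_prob([1, 0, 0], q, slip=0.1, guess=0.2)
```

    MCMC estimation then treats the attribute profiles and the slip/guess parameters as unknowns, with RTs contributing likelihood through a separate speed model.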

  11. Face to face: blocking facial mimicry can selectively impair recognition of emotional expressions.

    PubMed

    Oberman, Lindsay M; Winkielman, Piotr; Ramachandran, Vilayanur S

    2007-01-01

    People spontaneously mimic a variety of behaviors, including emotional facial expressions. Embodied cognition theories suggest that mimicry reflects internal simulation of perceived emotion in order to facilitate its understanding. If so, blocking facial mimicry should impair recognition of expressions, especially of emotions that are simulated using facial musculature. The current research tested this hypothesis using four expressions (happy, disgust, fear, and sad) and two mimicry-interfering manipulations: (1) biting on a pen and (2) chewing gum, as well as two control conditions. Experiment 1 used electromyography over cheek, mouth, and nose regions. The bite manipulation consistently activated the assessed muscles, whereas the chew manipulation activated them only intermittently. Further, expressing happiness generated the most facial action. Experiment 2 found that the bite manipulation interfered most with recognition of happiness. These findings suggest that facial mimicry differentially contributes to recognition of specific facial expressions, thus allowing for more refined predictions from embodied cognition theories.

  12. Deorbitalization strategies for meta-generalized-gradient-approximation exchange-correlation functionals

    NASA Astrophysics Data System (ADS)

    Mejia-Rodriguez, Daniel; Trickey, S. B.

    2017-11-01

    We explore the simplification of widely used meta-generalized-gradient approximation (mGGA) exchange-correlation functionals to the Laplacian level of refinement by use of approximate kinetic-energy density functionals (KEDFs). Such deorbitalization is motivated by the prospect of reducing computational cost while recovering a strictly Kohn-Sham local potential framework (rather than the usual generalized Kohn-Sham treatment of mGGAs). A KEDF that has been rather successful in solid simulations proves to be inadequate for deorbitalization, but we produce other forms which, with parametrization to Kohn-Sham results (not experimental data) on a small training set, yield rather good results on standard molecular test sets when used to deorbitalize the meta-GGA made very simple, Tao-Perdew-Staroverov-Scuseria, and strongly constrained and appropriately normed functionals. We also study the difference between high-fidelity and best-performing deorbitalizations and discuss possible implications for use in ab initio molecular dynamics simulations of complicated condensed phase systems.

  13. Quasi steady-state aerodynamic model development for race vehicle simulations

    NASA Astrophysics Data System (ADS)

    Mohrfeld-Halterman, J. A.; Uddin, M.

    2016-01-01

    Presented in this paper is a procedure to develop a high fidelity quasi steady-state aerodynamic model for use in race car vehicle dynamic simulations. Developed to fit quasi steady-state wind tunnel data, the aerodynamic model is regressed against three independent variables: front ground clearance, rear ride height, and yaw angle. An initial dual range model is presented and then further refined to reduce the model complexity while maintaining a high level of predictive accuracy. The model complexity reduction decreases the required amount of wind tunnel data thereby reducing wind tunnel testing time and cost. The quasi steady-state aerodynamic model for the pitch moment degree of freedom is systematically developed in this paper. This same procedure can be extended to the other five aerodynamic degrees of freedom to develop a complete six degree of freedom quasi steady-state aerodynamic model for any vehicle.
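
    A regression of this kind can be sketched as an ordinary least-squares fit of a quadratic response surface in the three independent variables. The model terms, variable ranges, and coefficients below are assumptions for illustration, not the paper's fitted model:

```python
import numpy as np

def design_matrix(front, rear, yaw):
    # Quadratic surface with pairwise interactions in front ground
    # clearance, rear ride height, and yaw angle (assumed form).
    return np.column_stack([
        np.ones_like(front), front, rear, yaw,
        front**2, rear**2, yaw**2,
        front * rear, front * yaw, rear * yaw,
    ])

rng = np.random.default_rng(0)
front = rng.uniform(20, 60, 200)   # hypothetical mm
rear = rng.uniform(40, 90, 200)    # hypothetical mm
yaw = rng.uniform(-5, 5, 200)      # hypothetical deg

beta_true = np.array([1.2, -0.01, 0.02, 0.05, 1e-4,
                      -2e-4, 3e-3, 5e-5, -4e-4, 2e-4])
cm = design_matrix(front, rear, yaw) @ beta_true   # synthetic pitch moment

beta_fit, *_ = np.linalg.lstsq(design_matrix(front, rear, yaw), cm, rcond=None)
```

    Dropping terms whose fitted coefficients prove negligible is the kind of complexity reduction the paper describes: fewer model terms require fewer wind tunnel runs to identify.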

  14. Neoclassical orbit calculations with a full-f code for tokamak edge plasmas

    NASA Astrophysics Data System (ADS)

    Rognlien, T. D.; Cohen, R. H.; Dorr, M.; Hittinger, J.; Xu, X. Q.; Colella, P.; Martin, D.

    2008-11-01

    Ion distribution function modifications are considered for the case of neoclassical orbit widths comparable to plasma radial-gradient scale-lengths. Implementation of proper boundary conditions at divertor plates in the continuum TEMPEST code, including the effect of drifts in determining the direction of total flow, enables such calculations in single-null divertor geometry, with and without an electrostatic potential. The resultant poloidal asymmetries in densities, temperatures, and flows are discussed. For long-time simulations, a slow numerical instability develops, even in simplified (circular) geometry with no endloss, which aids identification of the mixed treatment of parallel and radial convection terms as the cause. The new Edge Simulation Laboratory code, expected to be operational, has algorithmic refinements that should address the instability. We will present any available results from the new code on this problem as well as geodesic acoustic mode tests.

  15. Formation mechanism and control of MgO·Al2O3 inclusions in non-oriented silicon steel

    NASA Astrophysics Data System (ADS)

    Sun, Yan-hui; Zeng, Ya-nan; Xu, Rui; Cai, Kai-ke

    2014-11-01

    On the basis of the practical production of non-oriented silicon steel, the formation of MgO·Al2O3 inclusions was analyzed in the process of "basic oxygen furnace (BOF) → RH → compact strip production (CSP)". The thermodynamic and kinetic conditions of the formation of MgO·Al2O3 inclusions were discussed, and the behavior of slag entrapment in molten steel during RH refining was simulated by computational fluid dynamics (CFD) software. The results showed that the MgO/Al2O3 mass ratio was in the range from 0.005 to 0.017 and that MgO·Al2O3 inclusions were not observed before the RH refining process. In contrast, the MgO/Al2O3 mass ratio was in the range from 0.30 to 0.50, and the percentage of MgO·Al2O3 spinel inclusions reached 58.4% of the total inclusions after the RH refining process. The compositions of the slag were similar to those of the inclusions; furthermore, the critical velocity of slag entrapment was calculated to be 0.45 m·s⁻¹ at an argon flow rate of 698 L·min⁻¹, as simulated using CFD software. When the test steel was in equilibrium with the slag, [Mg] was 0.00024 wt%-0.00028 wt% and [Al]s was 0.31 wt%-0.37 wt%; these concentrations were theoretically calculated to fall within the MgO·Al2O3 formation zone, thereby leading to the formation of MgO·Al2O3 inclusions in the steel. Thus, the formation of MgO·Al2O3 inclusions would be inhibited by reducing the quantity of slag entrapment, controlling the roughing slag during casting, and controlling the composition of the slag and the MgO content in the ladle refractory.

  16. Resolution convergence in cosmological hydrodynamical simulations using adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Snaith, Owain N.; Park, Changbom; Kim, Juhan; Rosdahl, Joakim

    2018-06-01

    We have explored the evolution of gas distributions from cosmological simulations carried out using the RAMSES adaptive mesh refinement (AMR) code, to explore the effects of resolution on cosmological hydrodynamical simulations. It is vital to understand the effect of both the resolution of the initial conditions (ICs) and the final resolution of the simulation. Lower initial-resolution simulations tend to produce smaller numbers of low-mass structures. This strongly affects the assembly history of objects and has the same effect as simulating different cosmologies. The resolution of the ICs is an important factor in simulations, even with a fixed maximum spatial resolution. The power spectrum of gas in simulations using AMR diverges strongly from the fixed-grid approach - with more power on small scales in the AMR simulations - even at fixed physical resolution, and also produces offsets in the star formation at specific epochs. This is because before certain times the upper grid levels are held back to maintain approximately fixed physical resolution and to mimic the natural evolution of dark-matter-only simulations. Although the impact of hold-back falls with increasing spatial and IC resolutions, the offsets in the star formation remain down to a spatial resolution of 1 kpc. These offsets are of the order of 10-20 per cent, which is below the uncertainty in the implemented physics but is expected to affect the detailed properties of galaxies. We have implemented a new grid-hold-back approach to minimize the impact of hold-back on the star formation rate.
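
    The power-spectrum comparison can be illustrated with a shell-averaged P(k) on a periodic cubic grid. This is a generic diagnostic sketch (simple nearest-integer shell binning on a unit box), not the paper's analysis pipeline:

```python
import numpy as np

def shell_averaged_power(field):
    # FFT the field, then average |delta_k|^2 over shells of integer |k|.
    n = field.shape[0]
    fk = np.fft.fftn(field) / field.size
    power = np.abs(fk) ** 2
    freqs = np.fft.fftfreq(n) * n                 # integer wavenumbers
    grids = np.meshgrid(*[freqs] * field.ndim, indexing="ij")
    kbin = np.rint(np.sqrt(sum(g ** 2 for g in grids))).astype(int)
    pk = np.bincount(kbin.ravel(), weights=power.ravel(), minlength=n)
    counts = np.bincount(kbin.ravel(), minlength=n)
    return (pk / np.maximum(counts, 1))[: n // 2]

# A single cosine mode at wavenumber k = 4 should dominate the spectrum.
n = 32
x = np.arange(n)
field = np.cos(2 * np.pi * 4 * x / n)[:, None, None] * np.ones((1, n, n))
pk = shell_averaged_power(field)
```

    Comparing such spectra between AMR and fixed-grid runs at matched physical resolution is what exposes the excess small-scale power described above.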

  17. Effects of crystal refining on wear behaviors and mechanical properties of lithium disilicate glass-ceramics.

    PubMed

    Zhang, Zhenzhen; Guo, Jiawen; Sun, Yali; Tian, Beimin; Zheng, Xiaojuan; Zhou, Ming; He, Lin; Zhang, Shaofeng

    2018-05-01

    The purpose of this study is to improve wear resistance and mechanical properties of lithium disilicate glass-ceramics by refining their crystal sizes. After lithium disilicate glass-ceramics (LD) were melted to form precursory glass blocks, bar (N = 40, n = 10) and plate (N = 32, n = 8) specimens were prepared. According to the differential scanning calorimetry (DSC) of precursory glass, specimens G1-G4 were designed to form lithium disilicate glass-ceramics with different crystal sizes using a two-step thermal treatment. In the meantime, heat-pressed lithium disilicate glass-ceramics (GC-P) and original ingots (GC-O) were used as control groups. Glass-ceramics were characterized using X-ray diffraction (XRD) and were tested using flexural strength test, nanoindentation test and toughness measurements. The plate specimens were dynamically loaded in a chewing simulator with 350 N up to 2.4 × 10⁶ loading cycles. The wear analysis of glass-ceramics was performed using a 3D profilometer after every 300,000 wear cycles. Wear morphologies and microstructures were analyzed by scanning electron microscopy (SEM). One-way analysis of variance (ANOVA) was used to analyze the data. Multiple pairwise comparisons of means were performed by Tukey's post-hoc test. Materials with different crystal sizes (p < 0.05) exhibited different properties. Specifically, G3 with medium-sized crystals presented the highest flexural strength, hardness, elastic modulus and fracture toughness. G1 and G2 with small-sized crystals showed lower flexural strength, whereas G4, GC-P, and GC-O with large-sized crystals exhibited lower hardness and elastic modulus. The wear behaviors of all six groups showed running-in wear stage and steady wear stage. G3 showed the best wear resistance while GC-P and GC-O exhibited the highest wear volume loss. After crystal refining, lithium disilicate glass-ceramic with medium-sized crystals showed the highest wear resistance and mechanical properties.
Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Experimental Test Rig for Optimal Control of Flexible Space Robotic Arms

    DTIC Science & Technology

    2016-12-01

    …was used to refine the test bed design and the experimental workflow. Three concepts incorporated various strategies to design a robust flexible link…designed to perform the experimentation. The first and second concepts use traditional elastic springs in varying configurations while a third uses a…

  19. Development and Refinement of Reading and Mathematics Tests for Grades 2 and 5. Beginning Teacher Evaluation Study. Technical Report Series. Technical Report III-1. Continuation of Phase III A.

    ERIC Educational Resources Information Center

    Filby, Nikola N.; Dishaw, Marilyn

    Achievement tests that are maximally sensitive to effective instruction in reading and mathematics for grades 2 and 5 were developed and refined. Important considerations regarding the tests' validity were: their coverage of instructional content (opportunity to learn), and their reactivity to instruction. Student ability must be minimally related to…

  20. Materials used to simulate physical properties of human skin.

    PubMed

    Dąbrowska, A K; Rotaru, G-M; Derler, S; Spano, F; Camenzind, M; Annaheim, S; Stämpfli, R; Schmid, M; Rossi, R M

    2016-02-01

    For many applications in research, material development and testing, physical skin models are preferable to the use of human skin, because more reliable and reproducible results can be obtained. This article gives an overview of materials applied to model physical properties of human skin to encourage multidisciplinary approaches for more realistic testing and improved understanding of skin-material interactions. The literature databases Web of Science, PubMed and Google Scholar were searched using the terms 'skin model', 'skin phantom', 'skin equivalent', 'synthetic skin', 'skin substitute', 'artificial skin', 'skin replica', and 'skin model substrate.' Articles addressing material developments or measurements that include the replication of skin properties or behaviour were analysed. It was found that the most common materials used to simulate skin are liquid suspensions, gelatinous substances, elastomers, epoxy resins, metals and textiles. Nano- and micro-fillers can be incorporated in the skin models to tune their physical properties. While numerous physical skin models have been reported, most developments are research field-specific and based on trial-and-error methods. As the complexity of advanced measurement techniques increases, new interdisciplinary approaches are needed in future to achieve refined models which realistically simulate multiple properties of human skin. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Evolving a Puncture Black Hole with Fixed Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Imbiriba, Breno; Baker, John; Choi, Dae-Il; Centrella, Joan; Fiske, David R.; Brown, J. David; vanMeter, James R.; Olson, Kevin

    2004-01-01

    We present a detailed study of the effects of mesh refinement boundaries on the convergence and stability of simulations of black hole spacetimes. We find no technical problems. In our applications of this technique to the evolution of puncture initial data, we demonstrate that it is possible to simultaneously maintain second order convergence near the puncture and extend the outer boundary beyond 100M, thereby approaching the asymptotically flat region in which boundary condition problems are less difficult.
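    The second-order convergence claimed for the puncture evolution can be checked with a standard Richardson-style estimate: halving the grid spacing should reduce the error by a factor of four. A minimal sketch, with purely illustrative error values (not from the paper):

```python
import math

# Observed convergence order p from errors on two grids related by a
# refinement ratio r: e_coarse / e_fine = r**p. Error values below are
# hypothetical, chosen to mimic a second-order scheme.
def convergence_order(e_coarse, e_fine, refinement_ratio=2.0):
    return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

errors = [4.0e-3, 1.0e-3, 2.5e-4]  # errors at spacings h, h/2, h/4
orders = [convergence_order(errors[i], errors[i + 1]) for i in range(2)]
print(orders)  # both close to 2 for a second-order-convergent scheme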

  2. Progress in Computational Simulation of Earthquakes

    NASA Technical Reports Server (NTRS)

    Donnellan, Andrea; Parker, Jay; Lyzenga, Gregory; Judd, Michele; Li, P. Peggy; Norton, Charles; Tisdale, Edwin; Granat, Robert

    2006-01-01

    GeoFEST(P) is a computer program written for use in the QuakeSim project, which is devoted to development and improvement of means of computational simulation of earthquakes. GeoFEST(P) models interacting earthquake fault systems from the fault-nucleation to the tectonic scale. The development of GeoFEST(P) has involved coupling of two programs: GeoFEST and the Pyramid Adaptive Mesh Refinement Library. GeoFEST is a message-passing-interface-parallel code that utilizes a finite-element technique to simulate evolution of stress, fault slip, and plastic/elastic deformation in realistic materials like those of faulted regions of the crust of the Earth. The products of such simulations are synthetic observable time-dependent surface deformations on time scales from days to decades. Pyramid Adaptive Mesh Refinement Library is a software library that facilitates the generation of computational meshes for solving physical problems. In an application of GeoFEST(P), a computational grid can be dynamically adapted as stress grows on a fault. Simulations on workstations using a few tens of thousands of stress and displacement finite elements can now be expanded to multiple millions of elements with greater than 98-percent scaled efficiency on many hundreds of parallel processors (see figure).

  3. High performance ultrasonic field simulation on complex geometries

    NASA Astrophysics Data System (ADS)

    Chouh, H.; Rougeron, G.; Chatillon, S.; Iehl, J. C.; Farrugia, J. P.; Ostromoukhov, V.

    2016-02-01

    Ultrasonic field simulation is a key ingredient for the design of new testing methods as well as a crucial step for NDT inspection simulation. As presented in a previous paper [1], CEA-LIST has worked on the acceleration of these simulations focusing on simple geometries (planar interfaces, isotropic materials). In this context, significant accelerations were achieved on multicore processors and GPUs (Graphics Processing Units), bringing the execution time of realistic computations into the 0.1 s range. In this paper, we present recent work that aims at similar performance on a wider range of configurations. We adapted the physical model used by the CIVA platform to design and implement a new algorithm providing a fast ultrasonic field simulation that yields nearly interactive results for complex cases. The improvements over the CIVA pencil-tracing method include adaptive strategies for pencil subdivisions to achieve a good refinement of the sensor geometry while keeping a reasonable number of ray-tracing operations. Also, interpolation of the times of flight was used to avoid time-consuming computations in the impulse response reconstruction stage. To achieve the best performance, our algorithm runs on multi-core superscalar CPUs and uses high performance specialized libraries such as Intel Embree for ray-tracing, Intel MKL for signal processing and Intel TBB for parallelization. We validated the simulation results by comparing them to the ones produced by CIVA on identical test configurations including mono-element and multiple-element transducers, homogeneous, meshed 3D CAD specimens, isotropic and anisotropic materials, and wave paths that can involve several interactions with interfaces. We show performance results on complete simulations that achieve computation times in the 1 s range.

  4. Gamma-Ray Burst Dynamics and Afterglow Radiation from Adaptive Mesh Refinement, Special Relativistic Hydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    De Colle, Fabio; Granot, Jonathan; López-Cámara, Diego; Ramirez-Ruiz, Enrico

    2012-02-01

    We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^(-k), bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.
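    The Sedov length that sets the deceleration scale discussed above is, for a uniform medium, commonly written as l_S = (3E / (4π n m_p c²))^(1/3). A back-of-the-envelope sketch with illustrative inputs (the energy and density values are not taken from the paper):

```python
import math

M_P = 1.6726e-24   # proton mass [g]
C = 2.9979e10      # speed of light [cm/s]

# Sedov length for a blast wave of energy E (erg) in a uniform medium of
# number density n (cm^-3): the radius at which the swept-up rest-mass
# energy is comparable to the blast energy.
def sedov_length(E_erg, n_cm3):
    return (3.0 * E_erg / (4.0 * math.pi * n_cm3 * M_P * C**2)) ** (1.0 / 3.0)

l_s = sedov_length(E_erg=1e53, n_cm3=1.0)   # typical GRB-scale numbers
print(f"{l_s:.2e} cm")                       # of order 10^18 cm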

  5. Models, Simulations, and Games: A Survey.

    ERIC Educational Resources Information Center

    Shubik, Martin; Brewer, Garry D.

    A Rand evaluation of activity and products of gaming, model-building, and simulation carried out under the auspices of the Defense Advanced Research Projects Agency aimed not only to assess the usefulness of gaming in military-political policymaking, but also to contribute to the definition of common standards and the refinement of objectives for…

  6. Specifying and Refining a Measurement Model for a Simulation-Based Assessment. CSE Report 619.

    ERIC Educational Resources Information Center

    Levy, Roy; Mislevy, Robert J.

    2004-01-01

    The challenges of modeling students' performance in simulation-based assessments include accounting for multiple aspects of knowledge and skill that arise in different situations and the conditional dependencies among multiple aspects of performance in a complex assessment. This paper describes a Bayesian approach to modeling and estimating…

  7. Set-free Markov state model building

    NASA Astrophysics Data System (ADS)

    Weber, Marcus; Fackeldey, Konstantin; Schütte, Christof

    2017-03-01

    Molecular dynamics (MD) simulations face challenging problems since the time scales of interest often are much longer than what is possible to simulate; and even if sufficiently long simulations are possible the complex nature of the resulting simulation data makes interpretation difficult. Markov State Models (MSMs) help to overcome these problems by making experimentally relevant time scales accessible via coarse grained representations that also allow for convenient interpretation. However, standard set-based MSMs exhibit some caveats limiting their approximation quality and statistical significance. One of the main caveats results from the fact that typical MD trajectories repeatedly re-cross the boundary between the sets used to build the MSM which causes statistical bias in estimating the transition probabilities between these sets. In this article, we present a set-free approach to MSM building utilizing smooth overlapping ansatz functions instead of sets and an adaptive refinement approach. This kind of meshless discretization helps to overcome the recrossing problem and yields an adaptive refinement procedure that allows us to improve the quality of the model while exploring state space and inserting new ansatz functions into the MSM.
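    The set-free idea above replaces crisp indicator sets with smooth overlapping membership functions. A minimal toy sketch on a synthetic 1D trajectory, assuming Gaussian ansatz functions normalized to a partition of unity (the centers, width, and lag time are illustrative choices, not values from the paper):

```python
import numpy as np

# Soft ("set-free") Markov state model on a toy 1D random-walk trajectory.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(0.0, 0.1, 5000))   # synthetic trajectory

centers = np.array([-1.0, 0.0, 1.0])           # ansatz-function centers
width = 0.7

def membership(x):
    # Overlapping Gaussian memberships, normalized so rows sum to one.
    w = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)
    return w / w.sum(axis=1, keepdims=True)

tau = 10                                       # lag time in steps
chi_t = membership(traj[:-tau])                # memberships at time t
chi_tau = membership(traj[tau:])               # memberships at time t + tau
counts = chi_t.T @ chi_tau                     # soft transition counts
T = counts / counts.sum(axis=1, keepdims=True) # row-stochastic soft MSM
print(T.round(3))
```

    Because every frame contributes fractionally to several ansatz functions, boundary recrossings no longer flip a frame between discrete states, which is the statistical-bias point the abstract makes.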

  8. Numerical Study of Richtmyer-Meshkov Instability with Re-Shock

    NASA Astrophysics Data System (ADS)

    Wong, Man Long; Livescu, Daniel; Lele, Sanjiva

    2017-11-01

    The interaction of a Mach 1.45 shock wave with a perturbed planar interface between two gases with an Atwood number of 0.68 is studied through 2D and 3D shock-capturing adaptive mesh refinement (AMR) simulations with physical diffusive and viscous terms. The simulations have initial conditions similar to those in the actual experiment conducted by Poggi et al. [1998]. The development of the flow and evolution of mixing due to the interactions with the first shock and the re-shock are studied together with the sensitivity of various global parameters to the properties of the initial perturbation. Grid resolutions needed for fully resolved 2D and 3D simulations are also evaluated. Simulations are conducted with an in-house AMR solver HAMeRS built on the SAMRAI library. The code utilizes the high-order localized dissipation weighted compact nonlinear scheme [Wong and Lele, 2017] for shock-capturing and different sensors including the wavelet sensor [Wong and Lele, 2016] to identify regions for grid refinement. The first and third authors acknowledge the project sponsor LANL.

  9. PDB_REDO: constructive validation, more than just looking for errors.

    PubMed

    Joosten, Robbie P; Joosten, Krista; Murshudov, Garib N; Perrakis, Anastassis

    2012-04-01

    Developments of the PDB_REDO procedure that combine re-refinement and rebuilding within a unique decision-making framework to improve structures in the PDB are presented. PDB_REDO uses a variety of existing and custom-built software modules to choose an optimal refinement protocol (e.g. anisotropic, isotropic or overall B-factor refinement, TLS model) and to optimize the geometry versus data-refinement weights. Next, it proceeds to rebuild side chains and peptide planes before a final optimization round. PDB_REDO works fully automatically without the need for intervention by a crystallographic expert. The pipeline was tested on 12 000 PDB entries and the great majority of the test cases improved both in terms of crystallographic criteria such as R(free) and in terms of widely accepted geometric validation criteria. It is concluded that PDB_REDO is useful to update the otherwise 'static' structures in the PDB to modern crystallographic standards. The publicly available PDB_REDO database provides better model statistics and contributes to better refinement and validation targets.

  10. PDB_REDO: constructive validation, more than just looking for errors

    PubMed Central

    Joosten, Robbie P.; Joosten, Krista; Murshudov, Garib N.; Perrakis, Anastassis

    2012-01-01

    Developments of the PDB_REDO procedure that combine re-refinement and rebuilding within a unique decision-making framework to improve structures in the PDB are presented. PDB_REDO uses a variety of existing and custom-built software modules to choose an optimal refinement protocol (e.g. anisotropic, isotropic or overall B-factor refinement, TLS model) and to optimize the geometry versus data-refinement weights. Next, it proceeds to rebuild side chains and peptide planes before a final optimization round. PDB_REDO works fully automatically without the need for intervention by a crystallographic expert. The pipeline was tested on 12 000 PDB entries and the great majority of the test cases improved both in terms of crystallographic criteria such as R(free) and in terms of widely accepted geometric validation criteria. It is concluded that PDB_REDO is useful to update the otherwise ‘static’ structures in the PDB to modern crystallographic standards. The publicly available PDB_REDO database provides better model statistics and contributes to better refinement and validation targets. PMID:22505269

  11. Ground motion simulations in Marmara (Turkey) region from 3D finite difference method

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo; Ulrich, Thomas; Douglas, John

    2016-04-01

    In the framework of the European project MARSite (2012-2016), one of the main contributions from our research team was to provide ground-motion simulations for the Marmara region from various earthquake source scenarios. We adopted a 3D finite difference code, taking into account the 3D structure around the Sea of Marmara (including the bathymetry) and the sea layer. We simulated two moderate earthquakes (about Mw4.5) and found that the 3D structure significantly improves the waveforms compared to the 1D layer model. Simulations were carried out for different earthquakes (moderate point sources and large finite sources) in order to provide shake maps (Aochi and Ulrich, BSSA, 2015), to study the variability of ground-motion parameters (Douglas & Aochi, BSSA, 2016), and to provide synthetic seismograms for the blind inversion tests (Diao et al., GJI, 2016). The results are also planned to be integrated in broadband ground-motion simulations, tsunami generation, and simulations of triggered landslides (in progress by different partners). The simulations are freely shared among the partners via the internet and visualization of the results is published on the project's homepage. All these simulations should be seen as a reference for this region, as they are based on the latest knowledge obtained during the MARSite project, although refinement and validation of the model parameters and simulations remain a continuing research task relying on ongoing observations. The numerical code used, the models, and the simulations are available on demand.

  12. Analytical Methodology Used To Assess/Refine Observatory Thermal Vacuum Test Conditions For the Landsat 8 Data Continuity Mission

    NASA Technical Reports Server (NTRS)

    Fantano, Louis

    2015-01-01

    Thermal and Fluids Analysis Workshop Silver Spring, MD NCTS 21070-15 The Landsat 8 Data Continuity Mission, which is part of the United States Geologic Survey (USGS), launched February 11, 2013. A Landsat environmental test requirement mandated that test conditions bound worst-case flight thermal environments. This paper describes a rigorous analytical methodology applied to assess and refine proposed thermal vacuum test conditions and the issues encountered attempting to satisfy this requirement.

  13. Simulation of the shallow groundwater-flow system near the Hayward Airport, Sawyer County, Wisconsin

    USGS Publications Warehouse

    Hunt, Randall J.; Juckem, Paul F.; Dunning, Charles P.

    2010-01-01

    There are concerns that removal and trimming of vegetation during expansion of the Hayward Airport in Sawyer County, Wisconsin, could appreciably change the character of a nearby cold-water stream and its adjacent environs. In cooperation with the Wisconsin Department of Transportation, a two-dimensional, steady-state groundwater-flow model of the shallow groundwater-flow system near the Hayward Airport was refined from a regional model of the area. The parameter-estimation code PEST was used to obtain a best fit of the model to additional field data collected in February 2007 as part of this study. The additional data were collected during an extended period of low runoff and consisted of water levels and streamflows near the Hayward Airport. Refinements to the regional model included one additional hydraulic-conductivity zone for the airport area, and three additional parameters for streambed resistance in a northern tributary to the Namekagon River and in the main stem of the Namekagon River. In the refined Hayward Airport area model, the calibrated hydraulic conductivity was 11.2 feet per day, which is within the 7.9 to 58.2 feet per day range reported for the regional glacial and sandstone aquifer, and is consistent with a silty soil texture for the area. The calibrated refined model had a best fit of 8.6 days for the streambed resistance of the Namekagon River and between 0.6 and 1.6 days for the northern tributary stream. The previously reported regional groundwater-recharge rate of 10.1 inches per year was adjusted during calibration of the refined model in order to match streamflows measured during the period of extended low runoff; this resulted in an optimal groundwater-recharge rate of 7.1 inches per year during this period. The refined model was then used to simulate the capture zone of the northern tributary to the Namekagon River.

  14. Prediction of protein loop conformations using multiscale modeling methods with physical energy scoring functions.

    PubMed

    Olson, Mark A; Feig, Michael; Brooks, Charles L

    2008-04-15

    This article examines ab initio methods for the prediction of protein loops by a computational strategy of multiscale conformational sampling and physical energy scoring functions. Our approach consists of initial sampling of loop conformations from lattice-based low-resolution models followed by refinement using all-atom simulations. To allow enhanced conformational sampling, the replica exchange method was implemented. Physical energy functions based on CHARMM19 and CHARMM22 parameterizations with generalized Born (GB) solvent models were applied in scoring loop conformations extracted from the lattice simulations and, in the case of all-atom simulations, the ensemble of conformations was generated and scored with these models. Predictions are reported for 25 loop segments, each eight residues long and taken from a diverse set of 22 protein structures. We find that the simulations generally sampled conformations with low global root-mean-square deviation (RMSD) for loop backbone coordinates from the known structures, whereas clustering conformations in RMSD space and scoring detected less favorable loop structures. Specifically, the lattice simulations sampled basins that exhibited an average global RMSD of 2.21 ± 1.42 Å, whereas clustering and scoring the loop conformations determined an RMSD of 3.72 ± 1.91 Å. Using CHARMM19/GB to refine the lattice conformations improved the sampling RMSD to 1.57 ± 0.98 Å and detection to 2.58 ± 1.48 Å. We found that further improvement could be gained from extending the upper temperature in the all-atom refinement from 400 to 800 K, where the results typically yield a reduction of approximately 1 Å or greater in the RMSD of the detected loop. Overall, CHARMM19 with a simple pairwise GB solvent model is more efficient at sampling low-RMSD loop basins than CHARMM22 with a higher-resolution modified analytical GB model; however, the latter simulation method provides a more accurate description of the all-atom energy surface, yet demands a much greater computational cost. (c) 2007 Wiley Periodicals, Inc.
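    The RMSD figures quoted above are computed after optimal superposition of predicted and native backbone coordinates. A minimal sketch using the standard Kabsch (SVD) superposition on synthetic coordinates (the 8-residue "loop" below is random, purely for illustration):

```python
import numpy as np

# RMSD after optimal rigid-body superposition (Kabsch algorithm).
def kabsch_rmsd(P, Q):
    P = P - P.mean(axis=0)                     # center both point sets
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)          # covariance SVD
    d = np.sign(np.linalg.det(V @ Wt))         # guard against reflection
    R = V @ np.diag([1.0, 1.0, d]) @ Wt        # optimal rotation
    diff = P @ R - Q
    return np.sqrt((diff ** 2).sum() / len(P))

rng = np.random.default_rng(1)
native = rng.normal(size=(8, 3))               # synthetic 8-residue backbone
model = native + rng.normal(0.0, 0.1, (8, 3))  # perturbed "prediction"
print(round(kabsch_rmsd(model, native), 2))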

  15. The Mobile Agents Integrated Field Test: Mars Desert Research Station April 2003

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Alena, Rick; Crawford, Sekou; Dowding, John; Graham, Jeff; Kaskiris, Charis; Tyree, Kim S.; vanHoof, Ron

    2003-01-01

    The Mobile Agents model-based, distributed architecture, which integrates diverse components in a system for lunar and planetary surface operations, was extensively tested in a two-week field "technology retreat" at the Mars Society's Desert Research Station (MDRS) during April 2003. More than twenty scientists and engineers from three NASA centers and two universities refined and tested the system through a series of incremental scenarios. Agent software, implemented in runtime Brahms, processed GPS, health data, and voice commands, monitoring, controlling, and logging science data throughout simulated EVAs with two geologists. Predefined EVA plans, modified on the fly by voice command, enabled the Mobile Agents system to provide navigation and timing advice. Communications were maintained over five wireless nodes distributed over hills and into canyons for 5 km; data, including photographs and status, were transmitted automatically to the desktop at mission control in Houston. This paper describes the system configurations, communication protocols, scenarios, and test results.

  16. Simulation in Metallurgical Processing: Recent Developments and Future Perspectives

    NASA Astrophysics Data System (ADS)

    Ludwig, Andreas; Wu, Menghuai; Kharicha, Abdellah

    2016-08-01

    This article briefly addresses the most important topics concerning numerical simulation of metallurgical processes, namely, multiphase issues (particle and bubble motion and flotation/sedimentation of equiaxed crystals during solidification), multiphysics issues (electromagnetic stirring, electro-slag remelting, Cu-electro-refining, fluid-structure interaction, and mushy zone deformation), process simulations on graphical processing units, integrated computational materials engineering, and automatic optimization via simulation. The present state-of-the-art as well as requirements for future developments are presented and briefly discussed.

  17. Implementation of Hydrodynamic Simulation Code in Shock Experiment Design for Alkali Metals

    NASA Astrophysics Data System (ADS)

    Coleman, A. L.; Briggs, R.; Gorman, M. G.; Ali, S.; Lazicki, A.; Swift, D. C.; Stubley, P. G.; McBride, E. E.; Collins, G.; Wark, J. S.; McMahon, M. I.

    2017-10-01

    Shock compression techniques enable the investigation of extreme P-T states. In order to probe off-Hugoniot regions of P-T space, target makeup and laser pulse parameters must be carefully designed. HYADES is a hydrodynamic simulation code which has been successfully utilised to simulate shock compression events and refine the experimental parameters required in order to explore new P-T states in alkali metals. Here we describe simulations and experiments on potassium, along with the techniques required to access off-Hugoniot states.

  18. Grid refinement in Cartesian coordinates for groundwater flow models using the divergence theorem and Taylor's series.

    PubMed

    Mansour, M M; Spink, A E F

    2013-01-01

    Grid refinement is introduced in a numerical groundwater model to increase the accuracy of the solution over local areas without compromising the run time of the model. Numerical methods previously developed for grid refinement suffer certain drawbacks, for example deficiencies in the implemented interpolation technique; non-reciprocity in head or flow calculations; lack of accuracy resulting from high truncation errors; and numerical problems resulting from the construction of elongated meshes. A refinement scheme based on the divergence theorem and Taylor's expansions is presented in this article. This scheme is based on the work of De Marsily (1986) but includes more terms of the Taylor's series to improve the numerical solution. In this scheme, flow reciprocity is maintained and a high order of refinement is achievable. The new numerical method is applied to simulate groundwater flow in homogeneous and heterogeneous confined aquifers. It produced results with acceptable degrees of accuracy. This method shows the potential for its application to solving groundwater heads over nested meshes with irregular shapes. © 2012, British Geological Survey © NERC 2012. Ground Water © 2012, National GroundWater Association.
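    The benefit of keeping more Taylor-series terms when interpolating heads onto a refined-grid node can be illustrated in 1D. This sketch is not the paper's scheme; it only shows, on a synthetic head field h(x) = x², that a three-term expansion about a coarse node is exact where linear interpolation is not (unit coarse spacing assumed):

```python
import numpy as np

def head(x):
    return x ** 2                       # synthetic quadratic head field

xc = np.array([0.0, 1.0, 2.0])          # coarse-grid node locations (spacing 1)
hc = head(xc)
x_fine = 0.5                            # hanging node between coarse nodes

# Two-term (linear) interpolation between neighboring coarse nodes.
linear = 0.5 * (hc[0] + hc[1])

# Three-term Taylor expansion about x = 1, with central differences standing
# in for the first and second derivatives (exact for unit spacing).
d1 = (hc[2] - hc[0]) / 2.0
d2 = hc[2] - 2.0 * hc[1] + hc[0]
taylor = hc[1] + d1 * (x_fine - xc[1]) + 0.5 * d2 * (x_fine - xc[1]) ** 2

print(abs(linear - head(x_fine)), abs(taylor - head(x_fine)))
```

    The linear estimate carries an O(h²) error on this field, while the quadratic expansion reproduces it exactly, mirroring the accuracy argument made in the abstract.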

  19. Optimization of Melt Treatment for Austenitic Steel Grain Refinement

    NASA Astrophysics Data System (ADS)

    Lekakh, Simon N.; Ge, Jun; Richards, Von; O'Malley, Ron; TerBush, Jessica R.

    2017-02-01

    Refinement of the as-cast grain structure of austenitic steels requires the presence of active solid nuclei during solidification. These nuclei can be formed in situ in the liquid alloy by promoting reactions between transition metals (Ti, Zr, Nb, and Hf) and metalloid elements (C, S, O, and N) dissolved in the melt. Using thermodynamic simulations, experiments were designed to evaluate the effectiveness of a predicted sequence of reactions targeted to form precipitates that could act as active nuclei for grain refinement in austenitic steel castings. Melt additions performed to promote the sequential precipitation of titanium nitride (TiN) onto previously formed spinel (Al2MgO4) inclusions in the melt resulted in a significant refinement of the as-cast grain structure in heavy section Cr-Ni-Mo stainless steel castings. A refined as-cast structure consisting of an inner fine-equiaxed grain structure and outer columnar dendrite zone structure of limited length was achieved in experimental castings. The sequential precipitation of TiN onto Al2MgO4 was confirmed using automated SEM/EDX and TEM analyses.

  20. Efficient parallel seismic simulations including topography and 3-D material heterogeneities on locally refined composite grids

    NASA Astrophysics Data System (ADS)

    Petersson, Anders; Rodgers, Arthur

    2010-05-01

    The finite difference method on a uniform Cartesian grid is a highly efficient and easy to implement technique for solving the elastic wave equation in seismic applications. However, the spacing in a uniform Cartesian grid is fixed throughout the computational domain, whereas the resolution requirements in realistic seismic simulations usually are higher near the surface than at depth. This can be seen from the well-known formula h ≤ L/P which relates the grid spacing h to the wave length L, and the required number of grid points per wavelength P for obtaining an accurate solution. The compressional and shear wave lengths in the earth generally increase with depth and are often a factor of ten larger below the Moho discontinuity (at about 30 km depth), than in sedimentary basins near the surface. A uniform grid must have a grid spacing based on the small wave lengths near the surface, which results in over-resolving the solution at depth. As a result, the number of points in a uniform grid is unnecessarily large. In the wave propagation project (WPP) code, we address the over-resolution-at-depth issue by generalizing our previously developed single grid finite difference scheme to work on a composite grid consisting of a set of structured rectangular grids of different spacings, with hanging nodes on the grid refinement interfaces. The computational domain in a regional seismic simulation often extends to depth 40-50 km. Hence, using a refinement ratio of two, we need about three grid refinements from the bottom of the computational domain to the surface, to keep the local grid size in approximate parity with the local wave lengths. The challenge of the composite grid approach is to find a stable and accurate method for coupling the solution across the grid refinement interface. Of particular importance is the treatment of the solution at the hanging nodes, i.e., the fine grid points which are located in between coarse grid points. 
    WPP implements a new, energy conserving, coupling procedure for the elastic wave equation at grid refinement interfaces. When used together with our single grid finite difference scheme, it results in a method which is provably stable, without artificial dissipation, for arbitrary heterogeneous isotropic elastic materials. The new coupling procedure is based on satisfying the summation-by-parts principle across refinement interfaces. From a practical standpoint, an important advantage of the proposed method is the absence of tunable numerical parameters, which seldom are appreciated by application experts. In WPP, the composite grid discretization is combined with a curvilinear grid approach that enables accurate modeling of free surfaces on realistic (non-planar) topography. The overall method satisfies the summation-by-parts principle and is stable under a CFL time step restriction. A feature of great practical importance is that WPP automatically generates the composite grid based on the user provided topography and the depths of the grid refinement interfaces. The WPP code has been verified extensively, for example using the method of manufactured solutions, by solving Lamb's problem, by solving various layer over half-space problems and comparing to semi-analytic (FK) results, and by simulating scenario earthquakes where results from other seismic simulation codes are available. WPP has also been validated against seismographic recordings of moderate earthquakes. WPP performs well on large parallel computers and has been run on up to 32,768 processors using about 26 billion grid points (78 billion DOF) and 41,000 time steps. WPP is an open source code that is available under the GNU General Public License.
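    The resolution rule relating grid spacing h to wavelength L and points per wavelength P can be made concrete with a small calculation. The wave speeds below are illustrative round numbers, not values from the paper, but they show the roughly tenfold over-resolution a uniform grid pays at depth:

```python
# Maximum grid spacing h = L / P from the local wavelength L = v / f and a
# points-per-wavelength requirement P (illustrative choice of P = 8).
def max_spacing(velocity_m_s, frequency_hz, points_per_wavelength=8):
    wavelength = velocity_m_s / frequency_hz
    return wavelength / points_per_wavelength

f = 1.0                                  # target frequency [Hz]
h_surface = max_spacing(500.0, f)        # slow near-surface sediments [m/s]
h_depth = max_spacing(5000.0, f)         # fast material at depth [m/s]
print(h_surface, h_depth, h_depth / h_surface)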

  1. Dynamic Model of Basic Oxygen Steelmaking Process Based on Multi-zone Reaction Kinetics: Model Derivation and Validation

    NASA Astrophysics Data System (ADS)

    Rout, Bapin Kumar; Brooks, Geoff; Rhamdhani, M. Akbar; Li, Zushu; Schrama, Frank N. H.; Sun, Jianjun

    2018-04-01

    A multi-zone kinetic model coupled with a dynamic slag generation model was developed for the simulation of hot metal and slag composition during the basic oxygen furnace (BOF) operation. Three reaction zones, (i) the jet impact zone, (ii) the slag-bulk metal zone, and (iii) the slag-metal-gas emulsion zone, were considered for the calculation of overall refining kinetics. In the rate equations, the transient rate parameters were mathematically described as a function of process variables. A microscopic and macroscopic rate calculation methodology (micro-kinetics and macro-kinetics) was developed to estimate the total refining contributed by the recirculating metal droplets through the slag-metal emulsion zone. The micro-kinetics involves developing the rate equation for individual droplets in the emulsion. The mathematical models for the size distribution of initial droplets, the kinetics of simultaneous refining of elements, the residence time in the emulsion, and the dynamic change of interfacial area were established in the micro-kinetic model. In the macro-kinetics calculation, a droplet generation model was employed and the total amount of refining by the emulsion was calculated by summing the refining from the entire population of returning droplets. A dynamic FetO generation model based on an oxygen mass balance was developed and coupled with the multi-zone kinetic model. The effect of post-combustion on the evolution of slag and metal composition was investigated. The model was applied to a 200-ton top-blowing converter and the simulated metal and slag compositions were found to be in good agreement with the measured data. The post-combustion ratio was found to be an important factor in controlling the FetO content in the slag and the kinetics of Mn and P in a BOF process.
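
The macro-kinetic summation described above can be sketched as follows, assuming for illustration a simple first-order rate law per droplet; the rate constant and droplet population are hypothetical and are not the paper's actual rate equations:

```python
import math

def droplet_refining(c0, k, tau):
    # Micro-kinetics for one droplet: solute concentration after a
    # residence time tau in the emulsion (assumed first-order rate).
    return c0 * math.exp(-k * tau)

def total_refined(droplets, c0, k):
    # Macro-kinetics: sum the solute mass removed over the whole
    # population of returning droplets, given as (mass, residence time).
    removed = 0.0
    for m, tau in droplets:
        removed += m * (c0 - droplet_refining(c0, k, tau))
    return removed

# Hypothetical droplet population: (mass in kg, residence time in s)
population = [(0.010, 1.0), (0.020, 2.0), (0.005, 0.5)]
removed = total_refined(population, c0=0.04, k=0.8)  # e.g. 4 wt% solute
```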

  2. Capturing the Energy Absorbing Mechanisms of Composite Structures under Crash Loading

    NASA Astrophysics Data System (ADS)

    Wade, Bonnie

    As fiber reinforced composite material systems become increasingly utilized in primary aircraft and automotive structures, the need to understand their contribution to the crashworthiness of the structure is of great interest to meet safety certification requirements. The energy absorbing behavior of a composite structure, however, is not easily predicted due to the great complexity of the failure mechanisms that occur within the material. Challenges arise both in the experimental characterization and in the numerical modeling of the material/structure combination. At present, there is no standardized test method to characterize the energy absorbing capability of composite materials to aid crashworthy structural design. In addition, although many commercial finite element analysis codes exist and offer a means to simulate composite failure initiation and propagation, these models are still under development and refinement. As more metallic structures are replaced by composite structures, the need for both experimental guidelines to characterize the energy absorbing capability of a composite structure and guidelines for using numerical tools to simulate composite materials in crash conditions has become a critical matter. This body of research addresses both the experimental characterization of the energy absorption mechanisms occurring in composite materials during crushing and the numerical simulation of composite materials undergoing crushing. In the experimental investigation, the specific energy absorption (SEA) of a composite material system is measured using a variety of test element geometries, such as corrugated plates and tubes. Results from several crush experiments reveal that SEA is not a constant material property for laminated composites, and varies significantly with the geometry of the test specimen used. 
The variation of SEA measured for a single material system requires that crush test data be generated for a range of different test geometries in order to define the range of its energy absorption capability. Further investigation from the crush tests has led to the development of a direct link between geometric features of the crush specimen and its resulting SEA. Through micrographic analysis, distinct failure modes are shown to be guided by the geometry of the specimen, and subsequently are shown to directly influence energy absorption. A new relationship between geometry, failure mode, and SEA has been developed. This relationship has allowed for the reduction of the element-level crush testing requirement to characterize the composite material energy absorption capability. In the numerical investigation, the LS-DYNA composite material model MAT54 is selected for its suitability for modeling composite materials beyond failure determination, as required by crush simulation, and for its capability to remain within the scope of ultimately using this model for large-scale crash simulation. As a result of this research, this model has been thoroughly investigated for its capacity to simulate composite materials in crush, and results from several simulations of the element-level crush experiments are presented. A modeling strategy has been developed to use MAT54 for crush simulation which involves using the experimental data collected from the coupon- and element-level crush tests to directly calibrate the crush damage parameter in MAT54 such that it may be used in higher-level simulations. In addition, the source code of the material model is modified to improve upon its capability. The modifications include improving the elastic definition such that the elastic response to multi-axial load cases can be accurately portrayed simultaneously in each element, which is a capability not present in other composite material models. 
Modifications made to the failure determination and post-failure model have newly emphasized the post-failure stress degradation scheme rather than the failure criterion which is traditionally considered the most important composite material model definition for crush simulation. The modification efforts have also validated the use of the MAT54 failure criterion and post-failure model for crash modeling when its capabilities and limitations are well understood, and for this reason guidelines for using MAT54 for composite crush simulation are presented. This research has effectively (a) developed and demonstrated a procedure that defines a set of experimental crush results that characterize the energy absorption capability of a composite material system, (b) used the experimental results in the development and refinement of a composite material model for crush simulation, (c) explored modifying the material model to improve its use in crush modeling, and (d) provided experimental and modeling guidelines for composite structures under crush at the element-level in the scope of the Building Block Approach.
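
The SEA measurement discussed above can be sketched numerically: SEA is the energy absorbed (the area under the force-displacement curve) divided by the mass of crushed material. The crush record below is invented for illustration and is not data from this study:

```python
def specific_energy_absorption(force_N, disp_m, mass_per_length_kg_m):
    # SEA = energy absorbed by crushing / mass of crushed material.
    # Energy from trapezoidal integration of the force-displacement
    # curve; crushed mass from linear density times crush length.
    energy = 0.0
    for i in range(1, len(force_N)):
        dx = disp_m[i] - disp_m[i - 1]
        energy += 0.5 * (force_N[i] + force_N[i - 1]) * dx
    crushed_mass = mass_per_length_kg_m * (disp_m[-1] - disp_m[0])
    return energy / crushed_mass  # J/kg

# Hypothetical corrugated-plate crush record (illustrative numbers):
force = [0.0, 20e3, 18e3, 19e3, 18.5e3]      # N
disp = [0.000, 0.005, 0.010, 0.015, 0.020]   # m
sea = specific_energy_absorption(force, disp, mass_per_length_kg_m=0.5)
```

The resulting value (about 33 kJ/kg here) is what varies with specimen geometry in the experiments described above.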

  3. High-speed GPU-based finite element simulations for NDT

    NASA Astrophysics Data System (ADS)

    Huthwaite, P.; Shi, F.; Van Pamel, A.; Lowe, M. J. S.

    2015-03-01

    The finite element method solved with explicit time increments is a general approach which can be applied to many ultrasound problems. It is widely used as a powerful tool within NDE for developing and testing inspection techniques, and can also be used in inversion processes. However, the solution technique is computationally intensive, requiring many calculations to be performed for each simulation, so traditionally speed has been an issue. For maximum speed, an implementation of the method, called Pogo [Huthwaite, J. Comp. Phys. 2014, doi: 10.1016/j.jcp.2013.10.017], has been developed to run on graphics cards, exploiting the highly parallelisable nature of the algorithm. Pogo typically demonstrates speed improvements of 60-90x over commercial CPU alternatives. Pogo is applied to three NDE examples, where the speed improvements are important: guided wave tomography, where a full 3D simulation must be run for each source transducer and every different defect size; scattering from rough cracks, where many simulations need to be run to build up a statistical model of the behaviour; and ultrasound propagation within coarse-grained materials where the mesh must be highly refined and many different cases run.
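
The explicit time marching underlying such solvers can be illustrated with a minimal central-difference integrator for a single degree of freedom; this is a generic sketch of the scheme family, not Pogo's implementation:

```python
import math

def explicit_central_difference(m, k, u0, v0, dt, steps):
    # Explicit (central-difference) integration of m*u'' + k*u = 0,
    # the marching scheme used by explicit FE codes; stable only for
    # dt below roughly 2/omega (the CFL-type limit).
    u_prev = u0 - dt * v0   # simple first-order start-up
    u = u0
    history = [u0]
    for _ in range(steps):
        u_next = 2.0 * u - u_prev + dt * dt * (-k * u) / m
        u_prev, u = u, u_next
        history.append(u)
    return history

# Single degree of freedom with omega = 2*pi, so one period is T = 1:
m, k = 1.0, (2.0 * math.pi) ** 2
dt = 1.0 / 200.0
u = explicit_central_difference(m, k, u0=1.0, v0=0.0, dt=dt, steps=200)
```

After 200 steps (one full period) the displacement returns close to its initial value, as expected for this non-dissipative scheme.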

  4. Fully Resolved Simulations of 3D Printing

    NASA Astrophysics Data System (ADS)

    Tryggvason, Gretar; Xia, Huanxiong; Lu, Jiacai

    2017-11-01

    Numerical simulations of Fused Deposition Modeling (FDM) (or Fused Filament Fabrication), where a filament of hot, viscous polymer is deposited to "print" a three-dimensional object, layer by layer, are presented. A finite volume/front tracking method is used to follow the injection, cooling, solidification and shrinking of the filament. The injection of the hot melt is modeled using a volume source, combined with a nozzle, modeled as an immersed boundary, that follows a prescribed trajectory. The viscosity of the melt depends on the temperature and the shear rate, and the polymer becomes immobile as its viscosity increases. As the polymer solidifies, the stress is found by assuming a hyperelastic constitutive equation. The method is described and its accuracy and convergence properties are tested by grid refinement studies for a simple setup involving two short filaments, one on top of the other. The effects of the various injection parameters, such as nozzle velocity and injection velocity, are briefly examined, and the applicability of the approach to simulate the construction of simple multilayer objects is shown. The role of fully resolved simulations for additive manufacturing and their use for novel processes and as the "ground truth" for reduced order models is discussed.
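
A temperature- and shear-rate-dependent melt viscosity with an immobilization cap, as described above, can be sketched as below; the power-law/Arrhenius form and every constant are assumptions for illustration, not the constitutive model used in the paper:

```python
import math

def melt_viscosity(T_K, shear_rate, eta0=2000.0, n=0.4,
                   T_ref=500.0, E_over_R=6000.0, eta_cap=1e7):
    # Illustrative viscosity model: power-law shear thinning scaled by
    # an Arrhenius temperature factor (all parameters are assumed).
    arrhenius = math.exp(E_over_R * (1.0 / T_K - 1.0 / T_ref))
    eta = eta0 * arrhenius * shear_rate ** (n - 1.0)
    # Treat the polymer as immobile once viscosity exceeds a cap.
    return min(eta, eta_cap)
```

The two qualitative behaviors in the abstract follow directly: cooling raises viscosity until the material is effectively immobile, and higher shear rates lower it (shear thinning).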

  5. Dynamically adaptive data-driven simulation of extreme hydrological flows

    NASA Astrophysics Data System (ADS)

    Kumar Jain, Pushkar; Mandli, Kyle; Hoteit, Ibrahim; Knio, Omar; Dawson, Clint

    2018-02-01

    Hydrological hazards such as storm surges, tsunamis, and rainfall-induced flooding are physically complex events that are costly in loss of human life and economic productivity. Many such disasters could be mitigated through improved emergency evacuation in real-time and through the development of resilient infrastructure based on knowledge of how systems respond to extreme events. Data-driven computational modeling is a critical technology underpinning these efforts. This investigation focuses on the novel combination of methodologies in forward simulation and data assimilation. The forward geophysical model utilizes adaptive mesh refinement (AMR), a process by which a computational mesh can adapt in time and space based on the current state of a simulation. The forward solution is combined with ensemble based data assimilation methods, whereby observations from an event are assimilated into the forward simulation to improve the veracity of the solution, or used to invert for uncertain physical parameters. The novelty in our approach is the tight two-way coupling of AMR and ensemble filtering techniques. The technology is tested using actual data from the Chile tsunami event of February 27, 2010. These advances offer the promise of significantly transforming data-driven, real-time modeling of hydrological hazards, with potentially broader applications in other science domains.
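
The ensemble-based assimilation step described above can be sketched for a scalar state with a stochastic ensemble Kalman filter update; this is a textbook-style illustration, not the paper's coupled AMR filter:

```python
import random

def enkf_update(ensemble, obs, obs_err_std, h=lambda x: x, seed=0):
    # One stochastic EnKF analysis step for a scalar state: each member
    # is nudged toward a perturbed observation by the Kalman gain
    # estimated from ensemble statistics (illustrative sketch).
    rng = random.Random(seed)
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    x_mean = sum(ensemble) / n
    hx_mean = sum(hx) / n
    cov_xh = sum((x - x_mean) * (y - hx_mean)
                 for x, y in zip(ensemble, hx)) / (n - 1)
    var_h = sum((y - hx_mean) ** 2 for y in hx) / (n - 1)
    gain = cov_xh / (var_h + obs_err_std ** 2)
    return [x + gain * (obs + rng.gauss(0.0, obs_err_std) - y)
            for x, y in zip(ensemble, hx)]

# Prior ensemble far from the observation; the analysis pulls it closer
# and shrinks its spread:
prior = [8.0, 9.0, 10.0, 11.0, 12.0]
posterior = enkf_update(prior, obs=4.0, obs_err_std=0.5)
```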

  6. Simulation of ground-water flow in the Intermediate and Floridan aquifer systems in Peninsular Florida

    USGS Publications Warehouse

    Sepúlveda, Nicasio

    2002-01-01

    A numerical model of the intermediate and Floridan aquifer systems in peninsular Florida was used to (1) test and refine the conceptual understanding of the regional ground-water flow system; (2) develop a database to support subregional ground-water flow modeling; and (3) evaluate effects of projected 2020 ground-water withdrawals on ground-water levels. The four-layer model was based on the computer code MODFLOW-96, developed by the U.S. Geological Survey. The top layer consists of specified-head cells simulating the surficial aquifer system as a source-sink layer. The second layer simulates the intermediate aquifer system in southwest Florida and the intermediate confining unit where it is present. The third and fourth layers simulate the Upper and Lower Floridan aquifers, respectively. Steady-state ground-water flow conditions were approximated for time-averaged hydrologic conditions from August 1993 through July 1994 (1993-94). This period was selected based on data from Upper Floridan aquifer wells equipped with continuous water-level recorders. The grid used for the ground-water flow model was uniform and composed of square 5,000-foot cells, with 210 columns and 300 rows.
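
As a quick sanity check, the stated grid dimensions imply the following model footprint and per-layer cell count:

```python
FT_PER_MILE = 5280.0

# Grid stated in the abstract: square 5,000-ft cells, 210 columns,
# 300 rows (one layer of the four-layer model).
cell_ft, ncol, nrow = 5000.0, 210, 300
width_miles = ncol * cell_ft / FT_PER_MILE    # east-west extent
height_miles = nrow * cell_ft / FT_PER_MILE   # north-south extent
ncells_per_layer = ncol * nrow
```

The roughly 199 by 284 mile footprint is consistent with a model covering peninsular Florida.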

  7. Can Asteroid Airbursts Cause Dangerous Tsunami?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boslough, Mark B.

    I have performed a series of high-resolution hydrocode simulations to generate “source functions” for tsunami simulations as part of a proof-of-principle effort to determine whether or not the downward momentum from an asteroid airburst can couple energy into a dangerous tsunami in deep water. My new CTH simulations show enhanced momentum multiplication relative to a nuclear explosion of the same yield. Extensive sensitivity and convergence analyses demonstrate that results are robust and repeatable for simulations with sufficiently high resolution using adaptive mesh refinement. I have provided surface overpressure and wind velocity fields to tsunami modelers to use as time-dependent boundary conditions and to test the hypothesis that this mechanism can enhance the strength of the resulting shallow-water wave. The enhanced momentum result suggests that coupling from an over-water plume-forming airburst could be a more efficient tsunami source mechanism than a collapsing impact cavity or direct air blast alone, but not necessarily due to the originally-proposed mechanism. This result has significant implications for asteroid impact risk assessment, and airburst-generated tsunami will be the focus of a NASA-sponsored workshop at the Ames Research Center next summer, with follow-on funding expected.

  8. CALM: Complex Adaptive System (CAS)-Based Decision Support for Enabling Organizational Change

    NASA Astrophysics Data System (ADS)

    Adler, Richard M.; Koehn, David J.

    Guiding organizations through transformational changes such as restructuring or adopting new technologies is a daunting task. Such changes generate workforce uncertainty, fear, and resistance, reducing morale, focus and performance. Conventional project management techniques fail to mitigate these disruptive effects, because social and individual changes are non-mechanistic, organic phenomena. CALM (for Change, Adaptation, Learning Model) is an innovative decision support system for enabling change based on CAS principles. CALM provides a low risk method for validating and refining change strategies that combines scenario planning techniques with "what-if" behavioral simulation. In essence, CALM "test drives" change strategies before rolling them out, allowing organizations to practice and learn from virtual rather than actual mistakes. This paper describes the CALM modeling methodology, including our metrics for measuring organizational readiness to respond to change and other major CALM scenario elements: prospective change strategies; alternate futures; and key situational dynamics. We then describe CALM's simulation engine for projecting scenario outcomes and its associated analytics. CALM's simulator unifies diverse behavioral simulation paradigms including: adaptive agents; system dynamics; Monte Carlo; event- and process-based techniques. CALM's embodiment of CAS dynamics helps organizations reduce risk and improve confidence and consistency in critical strategies for enabling transformations.

  9. Modeling and simulation of deformation of hydrogels responding to electric stimulus.

    PubMed

    Li, Hua; Luo, Rongmo; Lam, K Y

    2007-01-01

    A model for simulation of pH-sensitive hydrogels is refined in this paper to extend its application to electric-sensitive hydrogels, termed the refined multi-effect-coupling electric-stimulus (rMECe) model. By reformulation of the fixed-charge density and consideration of finite deformation, the rMECe model is able to predict the responsive deformations of the hydrogels when they are immersed in a bath solution subject to an externally applied electric field. The rMECe model consists of nonlinear partial differential governing equations with chemo-electro-mechanical coupling effects and the fixed-charge density with electric-field effect. By comparison between simulations and experimental data from the literature, the model is verified to be accurate and stable. The rMECe model performs quantitatively for deformation analysis of the electric-sensitive hydrogels. The influences of several physical parameters, including the externally applied electric voltage, initial fixed-charge density, hydrogel strip thickness, and the ionic strength and valence of the surrounding solution, on the displacement and average curvature of the hydrogels are discussed in detail.

  10. Linking time-series of single-molecule experiments with molecular dynamics simulations by machine learning

    PubMed Central

    Matsunaga, Yasuhiro

    2018-01-01

    Single-molecule experiments and molecular dynamics (MD) simulations are indispensable tools for investigating protein conformational dynamics. The former provide time-series data, such as donor-acceptor distances, whereas the latter give atomistic information, although this information is often biased by model parameters. Here, we devise a machine-learning method to combine the complementary information from the two approaches and construct a consistent model of conformational dynamics. It is applied to the folding dynamics of the formin-binding protein WW domain. MD simulations over 400 μs led to an initial Markov state model (MSM), which was then "refined" using single-molecule Förster resonance energy transfer (FRET) data through hidden Markov modeling. The refined or data-assimilated MSM reproduces the FRET data and features hairpin one in the transition-state ensemble, consistent with mutation experiments. The folding pathway in the data-assimilated MSM suggests interplay between hydrophobic contacts and turn formation. Our method provides a general framework for investigating conformational transitions in other proteins. PMID:29723137
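
The hidden-Markov-modeling step used to refine the MSM against FRET time series rests on evaluating the likelihood of an observed sequence; a minimal scaled forward algorithm, with an invented two-state model, is sketched below (this is generic HMM machinery, not the paper's estimator):

```python
import math

def forward_loglik(obs, trans, emit, init):
    # Scaled forward algorithm for a discrete hidden Markov model:
    # returns the log-likelihood of the observation sequence.
    n = len(init)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n)]
    loglik = 0.0
    for o in obs[1:]:
        scale = sum(alpha)
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
        alpha = [sum(alpha[s] * trans[s][t] for s in range(n)) * emit[t][o]
                 for t in range(n)]
    loglik += math.log(sum(alpha))
    return loglik

# Hypothetical two-state model (folded/unfolded) emitting binarized
# low/high FRET efficiency (0/1); all numbers are illustrative only.
trans = [[0.95, 0.05], [0.10, 0.90]]
emit = [[0.9, 0.1], [0.2, 0.8]]   # P(observation | state)
init = [0.5, 0.5]
ll = forward_loglik([0, 0, 1, 1, 1, 0], trans, emit, init)
```

Maximizing such a likelihood over the transition parameters is what ties the kinetic model to the experimental time series.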

  11. Improved Atomistic Monte Carlo Simulations Demonstrate that Poly-L-Proline Adopts Heterogeneous Ensembles of Conformations of Semi-Rigid Segments Interrupted by Kinks

    PubMed Central

    Radhakrishnan, Aditya; Vitalis, Andreas; Mao, Albert H.; Steffen, Adam T.; Pappu, Rohit V.

    2012-01-01

    Poly-L-proline (PLP) polymers are useful mimics of biologically relevant proline-rich sequences. Biophysical and computational studies of PLP polymers in aqueous solutions are challenging because of the diversity of length scales and the slow time scales for conformational conversions. We describe an atomistic simulation approach that combines an improved ABSINTH implicit solvation model, with conformational sampling based on standard and novel Metropolis Monte Carlo moves. Refinements to forcefield parameters were guided by published experimental data for proline-rich systems. We assessed the validity of our simulation results through quantitative comparisons to experimental data that were not used in refining the forcefield parameters. Our analysis shows that PLP polymers form heterogeneous ensembles of conformations characterized by semi-rigid, rod-like segments interrupted by kinks, which result from a combination of internal cis peptide bonds, flexible backbone ψ-angles, and the coupling between ring puckering and backbone degrees of freedom. PMID:22329658

  12. Linking time-series of single-molecule experiments with molecular dynamics simulations by machine learning.

    PubMed

    Matsunaga, Yasuhiro; Sugita, Yuji

    2018-05-03

    Single-molecule experiments and molecular dynamics (MD) simulations are indispensable tools for investigating protein conformational dynamics. The former provide time-series data, such as donor-acceptor distances, whereas the latter give atomistic information, although this information is often biased by model parameters. Here, we devise a machine-learning method to combine the complementary information from the two approaches and construct a consistent model of conformational dynamics. It is applied to the folding dynamics of the formin-binding protein WW domain. MD simulations over 400 μs led to an initial Markov state model (MSM), which was then "refined" using single-molecule Förster resonance energy transfer (FRET) data through hidden Markov modeling. The refined or data-assimilated MSM reproduces the FRET data and features hairpin one in the transition-state ensemble, consistent with mutation experiments. The folding pathway in the data-assimilated MSM suggests interplay between hydrophobic contacts and turn formation. Our method provides a general framework for investigating conformational transitions in other proteins. © 2018, Matsunaga et al.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doughty, Daniel Harvey; Crafts, Chris C.

    This manual defines a complete body of abuse tests intended to simulate actual use and abuse conditions that may be beyond the normal safe operating limits experienced by electrical energy storage systems used in electric and hybrid electric vehicles. The tests are designed to provide a common framework for abuse testing various electrical energy storage systems used in both electric and hybrid electric vehicle applications. The manual incorporates improvements and refinements to test descriptions presented in the Society of Automotive Engineers Recommended Practice SAE J2464 "Electric Vehicle Battery Abuse Testing", including adaptations of the abuse tests to address hybrid electric vehicle applications and other energy storage technologies (i.e., capacitors). These (possibly destructive) tests may be used as needed to determine the response of a given electrical energy storage system design under specifically defined abuse conditions. This manual does not provide acceptance criteria as a result of the testing, but rather provides results that are accurate and fair and, consequently, comparable to results from abuse tests on other similar systems. The tests described are intended for abuse testing any electrical energy storage system designed for use in electric or hybrid electric vehicle applications, whether it is composed of batteries, capacitors, or a combination of the two.

  14. The chorioallantoic membrane (CAM) assay for the study of human bone regeneration: a refinement animal model for tissue engineering

    NASA Astrophysics Data System (ADS)

    Moreno-Jiménez, Inés; Hulsart-Billstrom, Gry; Lanham, Stuart A.; Janeczek, Agnieszka A.; Kontouli, Nasia; Kanczler, Janos M.; Evans, Nicholas D.; Oreffo, Richard O. C.

    2016-08-01

    Biomaterial development for tissue engineering applications is rapidly increasing but necessitates efficacy and safety testing prior to clinical application. Current in vitro and in vivo models hold a number of limitations, including expense, lack of correlation between animal models and human outcomes and the need to perform invasive procedures on animals; hence requiring new predictive screening methods. In the present study we tested the hypothesis that the chick embryo chorioallantoic membrane (CAM) can be used as a bioreactor to culture and study the regeneration of human living bone. We extracted bone cylinders from human femoral heads, simulated an injury using a drill-hole defect, and implanted the bone on CAM or in vitro control-culture. Micro-computed tomography (μCT) was used to quantify the magnitude and location of bone volume changes followed by histological analyses to assess bone repair. CAM blood vessels were observed to infiltrate the human bone cylinder and maintain human cell viability. Histological evaluation revealed extensive extracellular matrix deposition in proximity to endochondral condensations (Sox9+) on the CAM-implanted bone cylinders, correlating with a significant increase in bone volume by μCT analysis (p < 0.01). This human-avian system offers a simple refinement model for animal research and a step towards a humanized in vivo model for tissue engineering.

  15. Interpolation methods and the accuracy of lattice-Boltzmann mesh refinement

    DOE PAGES

    Guzik, Stephen M.; Weisgraber, Todd H.; Colella, Phillip; ...

    2013-12-10

    A lattice-Boltzmann model to solve the equivalent of the Navier-Stokes equations on adaptively refined grids is presented. A method for transferring information across interfaces between different grid resolutions was developed following established techniques for finite-volume representations. This new approach relies on a space-time interpolation and solving constrained least-squares problems to ensure conservation. The effectiveness of this method at maintaining the second-order accuracy of lattice-Boltzmann is demonstrated through a series of benchmark simulations and detailed mesh refinement studies. These results exhibit smaller solution errors and improved convergence when compared with similar approaches relying only on spatial interpolation. Examples highlighting the mesh adaptivity of this method are also provided.
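
The conservation-constrained interpolation idea can be illustrated in its simplest form: given interpolated fine-cell values, the minimum-norm (constrained least-squares) correction that restores the coarse-cell total is a uniform shift, via a single Lagrange multiplier. This toy version omits the paper's space-time stencils:

```python
def conservative_fill(interpolated, coarse_total):
    # Constrained least squares with one linear constraint
    # sum(x) = coarse_total: minimizing ||x - x0||^2 subject to the
    # constraint gives a uniform shift of the interpolated values
    # (the Lagrange-multiplier solution). Illustrative sketch only.
    n = len(interpolated)
    defect = coarse_total - sum(interpolated)
    return [x + defect / n for x in interpolated]

# Interpolation alone over-counts the coarse total (4.2 vs 4.0);
# the correction restores conservation exactly:
fine = conservative_fill([1.0, 1.2, 0.9, 1.1], coarse_total=4.0)
```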

  16. Constrained-transport Magnetohydrodynamics with Adaptive Mesh Refinement in CHARM

    NASA Astrophysics Data System (ADS)

    Miniati, Francesco; Martin, Daniel F.

    2011-07-01

    We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.
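
The divergence-preserving property of constrained transport can be demonstrated on a toy 2D staggered grid: updating face-centered B from edge EMFs via the discrete Stokes theorem leaves the discrete divergence unchanged to round-off. This is a simplified sketch, not CHARM's 3D CTU scheme:

```python
import random

def ct_update(Bx, By, Ez, dt, dx, dy):
    # One constrained-transport step in 2D: face-centered B updated
    # from the corner (edge) EMF Ez via a discrete curl, so the
    # discrete divergence of B is preserved exactly.
    nx, ny = len(By), len(Bx[0])
    for i in range(nx + 1):
        for j in range(ny):
            Bx[i][j] -= dt / dy * (Ez[i][j + 1] - Ez[i][j])
    for i in range(nx):
        for j in range(ny + 1):
            By[i][j] += dt / dx * (Ez[i + 1][j] - Ez[i][j])

def max_div(Bx, By, dx, dy):
    # Largest cell-centered discrete divergence of the staggered field.
    nx, ny = len(By), len(Bx[0])
    return max(abs((Bx[i + 1][j] - Bx[i][j]) / dx
                   + (By[i][j + 1] - By[i][j]) / dy)
               for i in range(nx) for j in range(ny))

rng = random.Random(1)
nx = ny = 4
dx = dy = 0.25
Bx = [[0.0] * ny for _ in range(nx + 1)]        # divergence-free start
By = [[0.0] * (ny + 1) for _ in range(nx)]
Ez = [[rng.uniform(-1, 1) for _ in range(ny + 1)] for _ in range(nx + 1)]
ct_update(Bx, By, Ez, dt=0.01, dx=dx, dy=dy)
div_after = max_div(Bx, By, dx, dy)
```

Even with a random EMF field, the divergence stays at round-off level; maintaining this property across refinement boundaries is exactly what the reflux-curl operation described above provides.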

  17. NASA Tech Briefs, October 2003

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Topics covered include: Cryogenic Temperature-Gradient Foam/Substrate Tensile Tester; Flight Test of an Intelligent Flight-Control System; Slat Heater Boxes for Thermal Vacuum Testing; System for Testing Thermal Insulation of Pipes; Electrical-Impedance-Based Ice-Thickness Gauges; Simulation System for Training in Laparoscopic Surgery; Flasher Powered by Photovoltaic Cells and Ultracapacitors; Improved Autoassociative Neural Networks; Toroidal-Core Microinductors Biased by Permanent Magnets; Using Correlated Photons to Suppress Background Noise; Atmospheric-Fade-Tolerant Tracking and Pointing in Wireless Optical Communication; Curved Focal-Plane Arrays Using Back-Illuminated High-Purity Photodetectors; Software for Displaying Data from Planetary Rovers; Software for Refining or Coarsening Computational Grids; Software for Diagnosis of Multiple Coordinated Spacecraft; Software Helps Retrieve Information Relevant to the User; Software for Simulating a Complex Robot; Software for Planning Scientific Activities on Mars; Software for Training in Pre-College Mathematics; Switching and Rectification in Carbon-Nanotube Junctions; Scandia-and-Yttria-Stabilized Zirconia for Thermal Barriers; Environmentally Safer, Less Toxic Fire-Extinguishing Agents; Multiaxial Temperature- and Time-Dependent Failure Model; Cloverleaf Vibratory Microgyroscope with Integrated Post; Single-Vector Calibration of Wind-Tunnel Force Balances; Microgyroscope with Vibrating Post as Rotation Transducer; Continuous Tuning and Calibration of Vibratory Gyroscopes; Compact, Pneumatically Actuated Filter Shuttle; Improved Bearingless Switched-Reluctance Motor; Fluorescent Quantum Dots for Biological Labeling; Growing Three-Dimensional Corneal Tissue in a Bioreactor; Scanning Tunneling Optical Resonance Microscopy; The Micro-Arcsecond Metrology Testbed; Detecting Moving Targets by Use of Soliton Resonances; and Finite-Element Methods for Real-Time Simulation of Surgery.

  18. LightForce: An Update on Orbital Collision Avoidance Using Photon Pressure

    NASA Technical Reports Server (NTRS)

    Stupl, Jan; Mason, James; De Vries, Willem; Smith, Craig; Levit, Creon; Marshall, William; Salas, Alberto Guillen; Pertica, Alexander; Olivier, Scot; Ting, Wang

    2012-01-01

    We present an update on our research on collision avoidance using photon-pressure induced by ground-based lasers. In the past, we have shown the general feasibility of employing small orbit perturbations, induced by photon pressure from ground-based laser illumination, for collision avoidance in space. Possible applications would be protecting space assets from impacts with debris and stabilizing the orbital debris environment. Focusing on collision avoidance rather than de-orbit, the scheme avoids some of the security and liability implications of active debris removal, and requires less sophisticated hardware than laser ablation. In earlier research we concluded that one ground based system consisting of a 10 kW class laser, directed by a 1.5 m telescope with adaptive optics, could avoid a significant fraction of debris-debris collisions in low Earth orbit. This paper describes our recent efforts, which include refining our original analysis, employing higher fidelity simulations and performing experimental tracking tests. We investigate the efficacy of one or more laser ground stations for debris-debris collision avoidance and satellite protection using simulations to investigate multiple case studies. The approach includes modeling of laser beam propagation through the atmosphere, the debris environment (including actual trajectories and physical parameters), laser facility operations, and simulations of the resulting photon pressure. We also present the results of experimental laser debris tracking tests. These tests track potential targets of a first technical demonstration and quantify the achievable tracking performance.
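
The scale of the photon-pressure perturbations discussed above can be estimated from F = Q·P/c; the engagement numbers below are illustrative assumptions, not the study's parameters:

```python
C = 299_792_458.0  # speed of light, m/s

def photon_pressure_dv(power_W, coupling, mass_kg, dwell_s):
    # Velocity change from photon pressure: force F = Q * P / c, where
    # Q is a momentum-coupling factor (1 for absorption, up to 2 for
    # perfect retro-reflection). Assumed, illustrative engagement.
    force = coupling * power_W / C
    return force * dwell_s / mass_kg

# A 10 kW class laser illuminating a hypothetical 1 kg debris object
# for a cumulative 1000 s:
dv = photon_pressure_dv(10e3, coupling=1.0, mass_kg=1.0, dwell_s=1000.0)
```

The resulting delta-v is only a few cm/s, but applied in-track it shifts the object's along-orbit position over subsequent orbits, which is what makes such tiny perturbations useful for collision avoidance.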

  19. Deformable complex network for refining low-resolution X-ray structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chong; Wang, Qinghua; Ma, Jianpeng, E-mail: jpma@bcm.edu

    2015-10-27

    A new refinement algorithm called the deformable complex network that combines a novel angular network-based restraint with a deformable elastic network model in the target function has been developed to aid in structural refinement in macromolecular X-ray crystallography. In macromolecular X-ray crystallography, building more accurate atomic models based on lower resolution experimental diffraction data remains a great challenge. Previous studies have used a deformable elastic network (DEN) model to aid in low-resolution structural refinement. In this study, the development of a new refinement algorithm called the deformable complex network (DCN) is reported that combines a novel angular network-based restraint with the DEN model in the target function. Testing of DCN on a wide range of low-resolution structures demonstrated that it consistently leads to significantly improved structural models as judged by multiple refinement criteria, thus representing a new effective refinement tool for low-resolution structural determination.

  20. Simulation-based intelligent robotic agent for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Biegl, Csaba A.; Springfield, James F.; Cook, George E.; Fernandez, Kenneth R.

    1990-01-01

    A robot control package is described which utilizes on-line structural simulation of robot manipulators and objects in their workspace. The model-based controller is interfaced with a high level agent-independent planner, which is responsible for the task-level planning of the robot's actions. Commands received from the agent-independent planner are refined and executed in the simulated workspace, and upon successful completion, they are transferred to the real manipulators.

  1. Hypersonic, nonequilibrium flow over the FIRE 2 forebody at 1634 sec

    NASA Technical Reports Server (NTRS)

    Chambers, Lin Hartung

    1994-01-01

    The numerical simulation of hypersonic flow in thermochemical nonequilibrium over the forebody of the FIRE 2 vehicle at 1634 sec in its trajectory is described. The simulation was executed on a Cray C90 with the program Langley Aerodynamic Upwind Relaxation Algorithm (LAURA) 4.0.2. Code setup procedures and sample results, including grid refinement studies, are discussed. This simulation relates to a study of radiative heating predictions on aerobrake type vehicles.
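
    Grid refinement studies of the kind mentioned above are commonly summarized with Richardson extrapolation: solutions of the same quantity on a coarse and a fine grid are combined to estimate the grid-converged value. A generic sketch of the formula (an illustration, not the LAURA code's own procedure):

```python
def richardson_extrapolate(f_coarse, f_fine, r=2.0, p=2.0):
    """Estimate the grid-converged value from solutions on two grid levels.

    r: refinement ratio between the grids; p: observed order of accuracy.
    """
    return f_fine + (f_fine - f_coarse) / (r ** p - 1.0)
```

    For a second-order scheme (p = 2) and grid doubling (r = 2), the leading discretization error of the fine-grid solution is cancelled exactly.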

  2. Mechanistic studies on reduced exercise performance and cardiac deconditioning with simulated zero gravity

    NASA Technical Reports Server (NTRS)

    Tipton, Charles M.

    1991-01-01

    The primary purpose of this research is to study the physiological mechanisms associated with the exercise performance of rats subjected to conditions of simulated weightlessness. A secondary purpose is to study related physiological changes associated with other systems. To facilitate these goals, a rodent suspension model was developed (Overton-Tipton) and a VO2 max testing procedure was perfected. Three methodological developments deserving of mention occurred during this past year. The first was the refinement of the tail suspension model so that (1) the heat dissipation functions of the caudal artery could be better utilized, and (2) the blood flow distribution to the tail would have less external constriction. The second was the development of a one-leg weight-bearing model for use in simulated weightlessness studies concerned with changes in muscle mass, muscle enzyme activity, and hind limb blood flow. The third was the determination of the chemical body composition of 30 rats, used to develop a prediction equation for percent fat using underwater weighing procedures to measure carcass specific gravity and to calculate body density, body fat, and fat-free mass.

  3. FluoroSim: A Visual Problem-Solving Environment for Fluorescence Microscopy

    PubMed Central

    Quammen, Cory W.; Richardson, Alvin C.; Haase, Julian; Harrison, Benjamin D.; Taylor, Russell M.; Bloom, Kerry S.

    2010-01-01

    Fluorescence microscopy provides a powerful method for localization of structures in biological specimens. However, aspects of the image formation process such as noise and blur from the microscope's point-spread function combine to produce an unintuitive image transformation on the true structure of the fluorescing molecules in the specimen, hindering qualitative and quantitative analysis of even simple structures in unprocessed images. We introduce FluoroSim, an interactive fluorescence microscope simulator that can be used to train scientists who use fluorescence microscopy to understand the artifacts that arise from the image formation process, to determine the appropriateness of fluorescence microscopy as an imaging modality in an experiment, and to test and refine hypotheses of model specimens by comparing the output of the simulator to experimental data. FluoroSim renders synthetic fluorescence images from arbitrary geometric models represented as triangle meshes. We describe three rendering algorithms on graphics processing units for computing the convolution of the specimen model with a microscope's point-spread function and report on their performance. We also discuss several cases where the microscope simulator has been used to solve real problems in biology. PMID:20431698
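
    The image formation process described above, convolution of the true fluorophore distribution with the microscope's point-spread function followed by shot noise, can be sketched in a few lines of NumPy. This is a generic CPU illustration, not FluoroSim's GPU renderer, and the function names are invented for the example:

```python
import numpy as np

def gaussian_psf(n, sigma):
    """Centered 2-D Gaussian kernel, normalized to unit total intensity."""
    y, x = np.mgrid[:n, :n] - n // 2
    psf = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def simulate_widefield_image(specimen, psf, photon_scale=100.0, seed=0):
    """Blur a ground-truth fluorophore map with the PSF, then add shot noise."""
    # FFT-based convolution; ifftshift moves the kernel center to the origin
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(specimen) * otf))
    blurred = np.clip(blurred, 0.0, None)
    # Poisson statistics model the photon-counting noise of the detector
    return np.random.default_rng(seed).poisson(blurred * photon_scale)
```

    Even this toy version reproduces the paper's central point: a single bright point source spreads into a blurred, noisy blob, so structures below the PSF scale cannot be read off the raw image.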

  4. Cosmic Ray Inspection and Passive Tomography for SNM Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armitage, John; Oakham, Gerald; Bryman, Douglas

    2009-12-02

    The Cosmic Ray Inspection and Passive Tomography (CRIPT) project has recently started investigating the detection of illicit Special Nuclear Material in cargo using cosmic ray muon tomography and complementary neutron detectors. We are currently performing simulation studies to help with the design of small scale prototypes. Based on the prototype tests and refined simulations, we will determine whether the muon tracking system for the full scale prototype will be based on drift chambers or extruded scintillator trackers. An analysis of the operations of the Port of Montreal has determined how long muon scan times should take if all or a subset of the cargo is to be screened. As long as the throughput of the muon system(s) is equal to the rate at which containers are unloaded from ships, the impact on port operations would not be great if a muon scanning stage were required for all cargo. We also show preliminary simulation results indicating that excellent separation between Al, Fe and Pb is possible under ideal conditions. The discrimination power is reduced but still significant when realistic momentum resolution measurements are considered.

  5. Simulating the WFIRST coronagraph integral field spectrograph

    NASA Astrophysics Data System (ADS)

    Rizzo, Maxime J.; Groff, Tyler D.; Zimmermann, Neil T.; Gong, Qian; Mandell, Avi M.; Saxena, Prabal; McElwain, Michael W.; Roberge, Aki; Krist, John; Riggs, A. J. Eldorado; Cady, Eric J.; Mejia Prada, Camilo; Brandt, Timothy; Douglas, Ewan; Cahoy, Kerri

    2017-09-01

    A primary goal of direct imaging techniques is to spectrally characterize the atmospheres of planets around other stars at extremely high contrast levels. To achieve this goal, coronagraphic instruments have favored integral field spectrographs (IFS) as the science cameras to disperse the entire search area at once and obtain spectra at each location, since the planet position is not known a priori. These spectrographs are useful against confusion from speckles and background objects, and can also help in the speckle subtraction and wavefront control stages of the coronagraphic observation. We present a software package, the Coronagraph and Rapid Imaging Spectrograph in Python (crispy) to simulate the IFS of the WFIRST Coronagraph Instrument (CGI). The software propagates input science cubes using spatially and spectrally resolved coronagraphic focal plane cubes, transforms them into IFS detector maps and ultimately reconstructs the spatio-spectral input scene as a 3D datacube. Simulated IFS cubes can be used to test data extraction techniques, refine sensitivity analyses and carry out design trade studies of the flight CGI-IFS instrument. crispy is a publicly available Python package and can be adapted to other IFS designs.

  6. Cosmic Ray Inspection and Passive Tomography for SNM Detection

    NASA Astrophysics Data System (ADS)

    Armitage, John; Bryman, Douglas; Cousins, Thomas; Gallant, Grant; Jason, Andrew; Jonkmans, Guy; Noël, Scott; Oakham, Gerald; Stocki, Trevor J.; Waller, David

    2009-12-01

    The Cosmic Ray Inspection and Passive Tomography (CRIPT) project has recently started investigating the detection of illicit Special Nuclear Material in cargo using cosmic ray muon tomography and complementary neutron detectors. We are currently performing simulation studies to help with the design of small scale prototypes. Based on the prototype tests and refined simulations, we will determine whether the muon tracking system for the full scale prototype will be based on drift chambers or extruded scintillator trackers. An analysis of the operations of the Port of Montreal has determined how long muon scan times should take if all or a subset of the cargo is to be screened. As long as the throughput of the muon system(s) is equal to the rate at which containers are unloaded from ships, the impact on port operations would not be great if a muon scanning stage were required for all cargo. We also show preliminary simulation results indicating that excellent separation between Al, Fe and Pb is possible under ideal conditions. The discrimination power is reduced but still significant when realistic momentum resolution measurements are considered.
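
    The Al/Fe/Pb discrimination quoted above rests on multiple Coulomb scattering: the RMS deflection of a muon grows with the traversed thickness in radiation lengths, which differs sharply between light and heavy materials. A back-of-the-envelope sketch using the standard Highland formula and PDG radiation lengths (an illustration of the physics, not the CRIPT reconstruction code):

```python
import math

# Radiation lengths in cm (standard PDG values)
X0 = {"Al": 8.90, "Fe": 1.76, "Pb": 0.56}

def rms_scatter_mrad(p_mev, thickness_cm, material, beta=1.0):
    """Highland formula: RMS multiple-scattering angle of a muon, in mrad."""
    t = thickness_cm / X0[material]  # thickness in radiation lengths
    theta0 = 13.6 / (beta * p_mev) * math.sqrt(t) * (1.0 + 0.038 * math.log(t))
    return theta0 * 1e3
```

    A 3 GeV muon crossing 10 cm of Pb scatters several times more than in the same thickness of Al, which is the signal a muon tomography system images.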

  7. Resistance of class C fly ash belite cement to simulated sodium sulphate radioactive liquid waste attack.

    PubMed

    Guerrero, A; Goñi, S; Allegro, V R

    2009-01-30

    The resistance of class C fly ash belite cement (FABC-2-W) to concentrated sodium sulphate salts associated with low level wastes (LLW) and medium level wastes (MLW) is discussed. This study was carried out according to the Koch and Steinegger methodology by testing the flexural strength of mortars immersed in simulated radioactive liquid waste rich in sulphate (48,000 ppm) and in demineralised water (used as a reference), at 20 degrees C and 40 degrees C over a period of 180 days. The reaction mechanism of the sulphate ion with the mortar was investigated through a microstructure study, which included the use of scanning electron microscopy (SEM), porosity and pore-size distribution, and X-ray diffraction (XRD). The results showed that the FABC mortar was stable against simulated sulphate radioactive liquid waste (SSRLW) attack at the two chosen temperatures. The enhancement of mechanical properties was a result of the formation of non-expansive ettringite inside the pores and an alkaline activation of the hydraulic activity of the cement promoted by the ingress of sulphate. Accordingly, the microstructure was strongly refined.

  8. Scale refinement and initial evaluation of a behavioral health function measurement tool for work disability evaluation.

    PubMed

    Marfeo, Elizabeth E; Ni, Pengsheng; Haley, Stephen M; Bogusz, Kara; Meterko, Mark; McDonough, Christine M; Chan, Leighton; Rasch, Elizabeth K; Brandt, Diane E; Jette, Alan M

    2013-09-01

    OBJECTIVE: To use item response theory (IRT) data simulations to construct and perform initial psychometric testing of a newly developed instrument, the Social Security Administration Behavioral Health Function (SSA-BH) instrument, which aims to assess behavioral health functioning relevant to the context of work. DESIGN: Cross-sectional survey followed by IRT calibration data simulations. SETTING: Community. PARTICIPANTS: Sample of individuals applying for Social Security Administration disability benefits: claimants (n=1015) and a normative comparative sample of U.S. adults (n=1000). INTERVENTIONS: None. MAIN OUTCOME MEASURES: SSA-BH measurement instrument. RESULTS: IRT analyses supported the unidimensionality of 4 SSA-BH scales: mood and emotions (35 items), self-efficacy (23 items), social interactions (6 items), and behavioral control (15 items). All SSA-BH scales demonstrated strong psychometric properties including reliability, accuracy, and breadth of coverage. High correlations of the simulated 5- or 10-item computer adaptive tests with the full item bank indicated robust ability of the computer adaptive testing approach to comprehensively characterize behavioral health function along 4 distinct dimensions. CONCLUSIONS: Initial testing and evaluation of the SSA-BH instrument demonstrated good accuracy, reliability, and content coverage along all 4 scales. Behavioral function profiles of Social Security Administration claimants were generated and compared with age- and sex-matched norms along 4 scales: mood and emotions, behavioral control, social interactions, and self-efficacy. Using the computer adaptive test-based approach offers the ability to collect standardized, comprehensive functional information about claimants in an efficient way, which may prove useful in the context of the Social Security Administration's work disability programs. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
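
    The computer adaptive testing approach evaluated above can be illustrated with the two-parameter logistic (2PL) IRT model: each item has a discrimination a and difficulty b, and the next item administered is the one most informative at the current trait estimate. A minimal sketch of generic IRT/CAT mechanics (illustrative item parameters, not the SSA-BH item bank):

```python
import math

def p_endorse(theta, a, b):
    """2PL IRT: probability of endorsing an item at trait level theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def pick_next_item(theta_hat, items):
    """CAT step: choose the (a, b) item with maximum Fisher information."""
    def info(item):
        a, b = item
        p = p_endorse(theta_hat, a, b)
        return a * a * p * (1.0 - p)  # 2PL item information
    return max(items, key=info)
```

    Because each item is chosen where it is most informative, a 5- or 10-item adaptive sequence can approximate the score from a full item bank, which is the correlation result the abstract reports.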

  9. A Generic Force Field for Protein Coarse-Grained Molecular Dynamics Simulation

    PubMed Central

    Gu, Junfeng; Bai, Fang; Li, Honglin; Wang, Xicheng

    2012-01-01

    Coarse-grained (CG) force fields have become promising tools for studies of protein behavior, but the balance of speed and accuracy is still a challenge in the research of protein coarse-graining methodology. In this work, 20 CG beads have been designed based on the structures of amino acid residues, with which an amino acid can be represented by one or two beads, and a CG solvent model with five water molecules was adopted to ensure consistency with the protein CG beads. The internal interactions in the protein were classified according to the types of the interacting CG beads, and adequate potential functions were chosen and systematically parameterized to fit the energy distributions. The proposed CG force field has been tested on eight proteins, and each protein was simulated for 1000 ns. Even without any extra structural knowledge of the simulated proteins, the Cα root mean square deviations (RMSDs) with respect to their experimental structures are close to those of relatively short-time all-atom molecular dynamics simulations. However, our coarse-grained force field will require further refinement to improve agreement with and persistence of native-like structures. In addition, the root mean square fluctuations (RMSFs) relative to the average structures derived from the simulations show that the conformational fluctuations of the proteins can be sampled. PMID:23203075

  10. Plasma Vehicle Charging Analysis for Orion Flight Test 1

    NASA Technical Reports Server (NTRS)

    Lallement, L.; McDonald, T.; Norgard, J.; Scully, B.

    2014-01-01

    In preparation for the upcoming experimental test flight for the Orion crew module, considerable interest was raised over the possibility of exposure to elevated levels of plasma activity and vehicle charging both externally on surfaces and internally on dielectrics during the flight test orbital operations. Initial analysis using NASCAP-2K indicated very high levels of exposure, and this generated additional interest in refining/defining the plasma and spacecraft models used in the analysis. This refinement was pursued, resulting in the use of specific AE8 and AP8 models, rather than SCATHA models, as well as consideration of flight trajectory, time duration, and other parameters possibly affecting the levels of exposure and the magnitude of charge deposition. Analysis using these refined models strongly indicated that, for flight test operations, no special surface coatings were necessary for the thermal protection system, but would definitely be required for future GEO, trans-lunar, and extra-lunar missions...

  11. Plasma Vehicle Charging Analysis for Orion Flight Test 1

    NASA Technical Reports Server (NTRS)

    Scully, B.; Norgard, J.

    2015-01-01

    In preparation for the upcoming experimental test flight for the Orion crew module, considerable interest was raised over the possibility of exposure to elevated levels of plasma activity and vehicle charging both externally on surfaces and internally on dielectrics during the flight test orbital operations. Initial analysis using NASCAP-2K indicated very high levels of exposure, and this generated additional interest in refining/defining the plasma and spacecraft models used in the analysis. This refinement was pursued, resulting in the use of specific AE8 and AP8 models, rather than SCATHA models, as well as consideration of flight trajectory, time duration, and other parameters possibly affecting the levels of exposure and the magnitude of charge deposition. Analysis using these refined models strongly indicated that, for flight test operations, no special surface coatings were necessary for the Thermal Protection System (TPS), but would definitely be required for future GEO, trans-lunar, and extra-lunar missions.

  12. Peptide crystal simulations reveal hidden dynamics

    PubMed Central

    Janowski, Pawel A.; Cerutti, David S.; Holton, James; Case, David A.

    2013-01-01

    Molecular dynamics simulations of biomolecular crystals at atomic resolution have the potential to recover information on dynamics and heterogeneity hidden in the X-ray diffraction data. We present here 9.6 microseconds of dynamics in a small helical peptide crystal with 36 independent copies of the unit cell. The average simulation structure agrees with experiment to within 0.28 Å backbone and 0.42 Å all-atom rmsd; a model refined against the average simulation density agrees with the experimental structure to within 0.20 Å backbone and 0.33 Å all-atom rmsd. The R-factor between the experimental structure factors and those derived from this unrestrained simulation is 23% to 1.0 Å resolution. The B-factors for most heavy atoms agree well with experiment (Pearson correlation of 0.90), but B-factors obtained by refinement against the average simulation density underestimate the coordinate fluctuations in the underlying simulation where the simulation samples alternate conformations. A dynamic flow of water molecules through channels within the crystal lattice is observed, yet the average water density is in remarkable agreement with experiment. A minor population of unit cells is characterized by reduced water content, 3₁₀ helical propensity and a gauche(−) side-chain rotamer for one of the valine residues. Careful examination of the experimental data suggests that transitions of the helices are a simulation artifact, although there is indeed evidence for alternate valine conformers and variable water content. This study highlights the potential for crystal simulations to detect dynamics and heterogeneity in experimental diffraction data, as well as to validate computational chemistry methods. PMID:23631449
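
    The B-factor comparison above uses the standard relation between crystallographic B-factors and mean-square coordinate fluctuations, B = (8π²/3)⟨|Δr|²⟩. A sketch of how per-atom B-factors can be computed from a simulation trajectory (a generic calculation, not the paper's refinement pipeline):

```python
import math
import numpy as np

def b_factors(traj):
    """Per-atom B-factors from a trajectory of shape (frames, atoms, 3).

    B = (8*pi^2/3) * mean-square fluctuation about the average structure.
    """
    mean_structure = traj.mean(axis=0)
    msf = ((traj - mean_structure) ** 2).sum(axis=2).mean(axis=0)
    return (8.0 * math.pi ** 2 / 3.0) * msf
```

    The paper's caveat follows directly from this definition: if an atom hops between two conformations, its true ⟨|Δr|²⟩ is large, but refinement against the averaged density can report a much smaller apparent B.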

  13. Grain refinement of high strength steels to improve cryogenic toughness

    NASA Technical Reports Server (NTRS)

    Rush, H. F.

    1985-01-01

    Grain-refining techniques using multistep heat treatments to reduce the grain size of five commercial high-strength steels were investigated. The goal of this investigation was to improve the low-temperature toughness as measured by Charpy V-notch impact test without a significant loss in tensile strength. The grain size of four of five alloys investigated was successfully reduced up to 1/10 of original size or smaller with increases in Charpy impact energy of 50 to 180 percent at -320 F. Tensile properties were reduced from 0 to 25 percent for the various alloys tested. An unexpected but highly beneficial side effect from grain refining was improved machinability.

  14. An Investigation Into the Effects of Frequency Response Function Estimators on Model Updating

    NASA Astrophysics Data System (ADS)

    Ratcliffe, M. J.; Lieven, N. A. J.

    1999-03-01

    Model updating is a very active research field, in which significant effort has been invested in recent years. Model updating methodologies are invariably successful when used on noise-free simulated data, but tend to be unpredictable when presented with real experimental data that are unavoidably corrupted with uncorrelated noise content. In the development and validation of model-updating strategies, a random zero-mean Gaussian variable is added to simulated test data to tax the updating routines more fully. This paper proposes a more sophisticated model for experimental measurement noise, and this is used in conjunction with several different frequency response function estimators, from the classical H1 and H2 to more refined estimators that purport to be unbiased. Finite-element model case studies, in conjunction with a genuine experimental test, suggest that the proposed noise model is a more realistic representation of experimental noise phenomena. The choice of estimator is shown to have a significant influence on the viability of the FRF sensitivity method. These test cases find that the use of the H2 estimator for model updating purposes is contraindicated, and that there is no advantage to be gained by using the sophisticated estimators over the classical H1 estimator.
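
    For reference, the classical estimators compared above are formed from segment-averaged auto- and cross-spectra: H1 = Sxy/Sxx and H2 = Syy/Syx. H1 is unbiased by uncorrelated output noise but biased low by input noise; H2 behaves the other way around. A minimal sketch (plain segment averaging, no windowing or overlap):

```python
import numpy as np

def frf_estimates(x, y, nfft=256):
    """Classical H1 and H2 FRF estimators from input x and output y records."""
    segs = len(x) // nfft
    Sxx = Syy = Sxy = 0.0
    for k in range(segs):
        X = np.fft.rfft(x[k * nfft:(k + 1) * nfft])
        Y = np.fft.rfft(y[k * nfft:(k + 1) * nfft])
        Sxx = Sxx + np.abs(X) ** 2   # averaged input auto-spectrum
        Syy = Syy + np.abs(Y) ** 2   # averaged output auto-spectrum
        Sxy = Sxy + np.conj(X) * Y   # averaged cross-spectrum
    H1 = Sxy / Sxx                   # robust to uncorrelated output noise
    H2 = Syy / np.conj(Sxy)          # robust to uncorrelated input noise
    return H1, H2
```

    On noise-free data the two estimators coincide; their divergence under noise is exactly the effect the paper's measurement-noise model is designed to exercise.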

  15. Basic Geometric Support of Systems for Earth Observation from Geostationary and Highly Elliptical Orbits

    NASA Astrophysics Data System (ADS)

    Gektin, Yu. M.; Egoshkin, N. A.; Eremeev, V. V.; Kuznecov, A. E.; Moskatinyev, I. V.; Smelyanskiy, M. B.

    2017-12-01

    A set of standardized models and algorithms for the geometric normalization and georeferencing of images from geostationary and highly elliptical Earth observation systems is considered. The algorithms can process information from modern scanning multispectral sensors with two-coordinate scanning and represent normalized images in an optimal projection. Problems of the high-precision ground calibration of the imaging equipment using reference objects, as well as issues of the flight calibration and refinement of geometric models using absolute and relative reference points, are considered. Practical testing of the models, algorithms, and technologies is performed in the calibration of sensors for spacecraft of the Electro-L series and during the simulation of the prospective Arktika system.

  16. Numerical Simulation And Experimental Investigation Of The Lift-Off And Blowout Of Enclosed Laminar Flames

    NASA Technical Reports Server (NTRS)

    Venuturmilli, Rajasekhar; Zhang, Yong; Chen, Lea-Der

    2003-01-01

    Enclosed flames are found in many industrial applications such as power plants, gas-turbine combustors and jet engine afterburners. A better understanding of the burner stability limits can lead to development of combustion systems that extend the lean and rich limits of combustor operations. This paper reports a fundamental study of the stability limits of co-flow laminar jet diffusion flames. A numerical study was conducted that used an adaptive mesh refinement scheme in the calculation. Experiments were conducted in two test rigs with two different fuels, each diluted with three inert species. The numerical stability limits were compared with microgravity experimental data. Additional normal-gravity experimental results were also presented.

  17. Influence of oil composition on the formation of fatty acid esters of 2-chloropropane-1,3-diol (2-MCPD) and 3-chloropropane-1,2-diol (3-MCPD) under conditions simulating oil refining.

    PubMed

    Ermacora, Alessia; Hrncirik, Karel

    2014-10-15

    The toxicological relevance and widespread occurrence of fatty acid esters of 2-chloropropane-1,3-diol (2-MCPD) and 3-chloropropane-1,2-diol (3-MCPD) in refined oils and fats have recently triggered an interest in the mechanism of formation and decomposition of these contaminants during oil processing. In this work, the effect of the main precursors, namely acylglycerols and chlorinated compounds, on the formation yield of MCPD esters was investigated in model systems simulating oil deodorization. The composition of the oils was modified by enzymatic hydrolysis, silica gel purification and application of various refining steps prior to deodorization (namely degumming, neutralization, bleaching). Partial acylglycerols showed a greater ability than triacylglycerols to form MCPD esters. However, no direct correlation was found between these two parameters, since the availability of chloride ions was the main limiting factor in the formation reaction. Polar chlorinated compounds were found to be the main chloride donors, although the presence of reactive non-polar chloride-donating species was also observed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Adaptive wall technology for minimization of wall interferences in transonic wind tunnels

    NASA Technical Reports Server (NTRS)

    Wolf, Stephen W. D.

    1988-01-01

    Modern experimental techniques to improve free air simulations in transonic wind tunnels by use of adaptive wall technology are reviewed. Considered are the significant advantages of adaptive wall testing techniques with respect to wall interferences, Reynolds number, tunnel drive power, and flow quality. The application of these testing techniques relies on making the test section boundaries adjustable and using a rapid wall adjustment procedure. A historical overview shows how the disjointed development of these testing techniques, since 1938, is closely linked to available computer support. An overview of Adaptive Wall Test Section (AWTS) designs shows a preference for use of relatively simple designs with solid adaptive walls in 2- and 3-D testing. Operational aspects of AWTSs are discussed with regard to production-type operation, where adaptive wall adjustments need to be quick. Both 2- and 3-D data are presented to illustrate the quality of AWTS data over the transonic speed range. Adaptive wall technology is available for general use in 2-D testing, even in cryogenic wind tunnels. In 3-D testing, more refinement of the adaptive wall testing techniques is required before more widespread use can be planned.

  19. Realistic mass ratio magnetic reconnection simulations with the Multi Level Multi Domain method

    NASA Astrophysics Data System (ADS)

    Innocenti, Maria Elena; Beck, Arnaud; Lapenta, Giovanni; Markidis, Stefano

    2014-05-01

    Space physics simulations with the ambition of realistically representing both ion and electron dynamics have to be able to cope with the huge scale separation between the electron and ion parameters while respecting the stability constraints of the numerical method of choice. Explicit Particle In Cell (PIC) simulations with realistic mass ratio are limited in the size of the problems they can tackle by the restrictive stability constraints of the explicit method (Birdsall and Langdon, 2004). Many alternatives are available to reduce such computation costs. Reduced mass ratios can be used, with the caveats highlighted in Bret and Dieckmann (2010). Fully implicit (Chen et al., 2011a; Markidis and Lapenta, 2011) or semi implicit (Vu and Brackbill, 1992; Lapenta et al., 2006; Cohen et al., 1989) methods can bypass the strict stability constraints of explicit PIC codes. Adaptive Mesh Refinement (AMR) techniques (Vay et al., 2004; Fujimoto and Sydora, 2008) can be employed to change locally the simulation resolution. We focus here on the Multi Level Multi Domain (MLMD) method introduced in Innocenti et al. (2013) and Beck et al. (2013). The method combines the advantages of implicit algorithms and adaptivity. Two levels are fully simulated with fields and particles. The so-called "refined level" simulates a fraction of the "coarse level" with a resolution RF times finer than the coarse-level resolution, where RF is the Refinement Factor between the levels. This method is particularly suitable for magnetic reconnection simulations (Biskamp, 2005), where the characteristic Ion and Electron Diffusion Regions (IDR and EDR) develop at the ion and electron scales respectively (Daughton et al., 2006). In Innocenti et al. (2013) we showed that basic wave and instability processes are correctly reproduced by MLMD simulations. In Beck et al. (2013) we applied the technique to plasma expansion and magnetic reconnection problems.
We showed that notable computational time savings can be achieved. More importantly, we were able to correctly reproduce EDR features, such as the inversion layer of the electric field observed in Chen et al. (2011b), with a MLMD simulation at a significantly lower cost. Here, we present recent results on EDR dynamics achieved with the MLMD method and a realistic mass ratio.

  20. Hydrodynamic Modeling of Air Blast Propagation from the Humble Redwood Chemical High Explosive Detonations Using GEODYN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chipman, V D

    Two-dimensional axisymmetric hydrodynamic models were developed using GEODYN to simulate the propagation of air blasts resulting from a series of high explosive detonations conducted at Kirtland Air Force Base in August and September of 2007. Dubbed Humble Redwood I (HR-1), these near-surface chemical high explosive detonations consisted of seven shots of varying height or depth of burst. Each shot was simulated numerically using GEODYN. An adaptive mesh refinement scheme based on air pressure gradients was employed such that the mesh refinement tracked the advancing shock front where sharp discontinuities existed in the state variables, but allowed the mesh to sufficiently relax behind the shock front for runtime efficiency. Comparisons of overpressure, sound speed, and positive phase impulse from the GEODYN simulations were made to the recorded data taken from each HR-1 shot. Where the detonations occurred above ground or were shallowly buried (no deeper than 1 m), the GEODYN model was able to simulate the sound speeds, peak overpressures, and positive phase impulses to within approximately 1%, 23%, and 6%, respectively, of the actual recorded data, supporting the use of numerical simulation of the air blast as a forensic tool in determining the yield of an otherwise unknown explosion.
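
    The refinement criterion described above, flagging cells where the pressure gradient is steep so that the mesh tracks the shock front but relaxes behind it, can be illustrated in one dimension. A toy sketch of gradient-based cell tagging (not GEODYN's actual tagging logic; the threshold value is an arbitrary example):

```python
import numpy as np

def refine_flags(pressure, threshold=0.1):
    """Flag cells whose normalized pressure jump to a neighbor exceeds threshold."""
    # Relative jump between adjacent cells
    jump = np.abs(np.diff(pressure)) / np.maximum(np.abs(pressure[:-1]), 1e-12)
    flags = np.zeros(pressure.size, dtype=bool)
    steep = jump > threshold
    flags[:-1] |= steep  # tag both cells sharing a steep face
    flags[1:] |= steep
    return flags
```

    In an AMR driver, tagged cells would be subdivided each regrid step, so the fine mesh follows the moving discontinuity while smooth regions stay coarse.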

  1. Potential human cholesterol esterase inhibitor design: benefits from the molecular dynamics simulations and pharmacophore modeling studies.

    PubMed

    John, Shalini; Thangapandian, Sundarapandian; Lee, Keun Woo

    2012-01-01

    Human pancreatic cholesterol esterase (hCEase) is one of the lipases found to be involved in the digestion of a large and broad spectrum of substrates including triglycerides, phospholipids, cholesteryl esters, etc. The presence of bile salts is found to be very important for the activation of hCEase. Molecular dynamics simulations were performed for the apoform and bile salt complexed form of hCEase using the co-ordinates of two bile salts from bovine CEase. The stability of the systems throughout the simulation time was checked and two representative structures from the highly populated regions were selected using cluster analysis. These two representative structures were used in pharmacophore model generation. The generated pharmacophore models were validated and used in database screening. The screened hits were refined for their drug-like properties based on Lipinski's rule of five and ADMET properties. The drug-like compounds were further refined by molecular docking simulation using the GOLD program based on the GOLD fitness score, mode of binding, and molecular interactions with the active site amino acids. Finally, three hits of novel scaffolds were selected as potential leads to be used in novel and potent hCEase inhibitor design. The stability of the binding modes and molecular interactions of these final hits was further confirmed by molecular dynamics simulations.

  2. Multiple methods for multiple futures: Integrating qualitative scenario planning and quantitative simulation modeling for natural resource decision making

    USGS Publications Warehouse

    Symstad, Amy J.; Fisichelli, Nicholas A.; Miller, Brian W.; Rowland, Erika; Schuurman, Gregor W.

    2017-01-01

    Scenario planning helps managers incorporate climate change into their natural resource decision making through a structured “what-if” process of identifying key uncertainties and potential impacts and responses. Although qualitative scenarios, in which ecosystem responses to climate change are derived via expert opinion, often suffice for managers to begin addressing climate change in their planning, this approach may face limits in resolving the responses of complex systems to altered climate conditions. In addition, this approach may fall short of the scientific credibility managers often require to take actions that differ from current practice. Quantitative simulation modeling of ecosystem response to climate conditions and management actions can provide this credibility, but its utility is limited unless the modeling addresses the most impactful and management-relevant uncertainties and incorporates realistic management actions. We use a case study to compare and contrast management implications derived from qualitative scenario narratives and from scenarios supported by quantitative simulations. We then describe an analytical framework that refines the case study’s integrated approach in order to improve applicability of results to management decisions. The case study illustrates the value of an integrated approach for identifying counterintuitive system dynamics, refining understanding of complex relationships, clarifying the magnitude and timing of changes, identifying and checking the validity of assumptions about resource responses to climate, and refining management directions. Our proposed analytical framework retains qualitative scenario planning as a core element because its participatory approach builds understanding for both managers and scientists, lays the groundwork to focus quantitative simulations on key system dynamics, and clarifies the challenges that subsequent decision making must address.

  3. Simulations of coupled, Antarctic ice-ocean evolution using POP2x and BISICLES (Invited)

    NASA Astrophysics Data System (ADS)

    Price, S. F.; Asay-Davis, X.; Martin, D. F.; Maltrud, M. E.; Hoffman, M. J.

    2013-12-01

We present initial results from coupled Antarctic ice-ocean simulations using large-scale ocean circulation and land ice evolution models. The ocean model, POP2x, is a modified version of POP, a fully eddying, global-scale ocean model (Smith and Gent, 2002). POP2x allows for circulation beneath ice shelf cavities using the method of partial top cells (Losch, 2008). Boundary layer physics, which control fresh water and salt exchange at the ice-ocean interface, are implemented following Holland and Jenkins (1999), Jenkins (1999), and Jenkins et al. (2010). Standalone POP2x output compares well with standard ice-ocean test cases (e.g., ISOMIP; Losch, 2008; Kimura et al., 2013) and with results from other idealized ice-ocean coupling test cases (e.g., Goldberg et al., 2012). The land ice model, BISICLES (Cornford et al., 2012), includes a first-order accurate momentum balance (L1L2) and uses block-structured, adaptive-mesh refinement to more accurately model regions of dynamic complexity, such as ice streams, outlet glaciers, and grounding lines. For idealized test cases focused on marine-ice-sheet dynamics, BISICLES output compares very favorably with simulations based on the full, nonlinear Stokes momentum balance (MISMIP-3d; Pattyn et al., 2013). Here, we present large-scale (Southern Ocean) simulations using POP2x with fixed ice shelf geometries, which are used to obtain and validate modeled submarine melt rates against observations. These melt rates are, in turn, used to force evolution of the BISICLES model. An offline coupling scheme, which we compare with the ice-ocean coupling work of Goldberg et al. (2012), is then used to sequentially update the sub-shelf cavity geometry seen by POP2x.

  4. Noise Simulations of the High-Lift Common Research Model

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Choudhari, Meelan M.; Vatsa, Veer N.; O'Connell, Matthew D.; Duda, Benjamin; Fares, Ehab

    2017-01-01

    The PowerFLOW(TradeMark) code has been used to perform numerical simulations of the high-lift version of the Common Research Model (HL-CRM) that will be used for experimental testing of airframe noise. Time-averaged surface pressure results from PowerFLOW(TradeMark) are found to be in reasonable agreement with those from steady-state computations using FUN3D. Surface pressure fluctuations are highest around the slat break and nacelle/pylon region, and synthetic array beamforming results also indicate that this region is the dominant noise source on the model. The gap between the slat and pylon on the HL-CRM is not realistic for modern aircraft, and most nacelles include a chine that is absent in the baseline model. To account for those effects, additional simulations were completed with a chine and with the slat extended into the pylon. The case with the chine was nearly identical to the baseline, and the slat extension resulted in higher surface pressure fluctuations but slightly reduced radiated noise. The full-span slat geometry without the nacelle/pylon was also simulated and found to be around 10 dB quieter than the baseline over almost the entire frequency range. The current simulations are still considered preliminary as changes in the radiated acoustics are still being observed with grid refinement, and additional simulations with finer grids are planned.

  5. Economical graphics display system for flight simulation avionics

    NASA Technical Reports Server (NTRS)

    1990-01-01

During the past academic year, the focal point of this project has been to enhance the economical flight simulator system by incorporating it into the aero engineering educational environment. To accomplish this goal, it was necessary to develop appropriate software modules that provide a foundation for student interaction with the system. In addition, experiments had to be developed and tested to determine whether they were appropriate for incorporation into the beginning flight simulation course, AERO-41B. For the most part, these goals were accomplished. Experiments were developed and evaluated by graduate students, although more work needs to be done in this area. The complexity and length of the experiments must be refined to match the programming experience of the target students. It was determined that few undergraduate students are ready to absorb the full extent and complexity of a real-time flight simulation. For this reason, the experiments developed are designed to introduce basic computer architectures suitable for simulation, the programming environment and languages, the concept of math modules, evaluation of acquired data, and an introduction to the meaning of real-time. An overview is included of the system environment as it pertains to the students, an example of a flight simulation experiment performed by the students, and a summary of the executive programming modules created by the students to achieve a user-friendly multi-processor system suitable for an aero engineering educational program.

  6. Interactive simulations for quantum key distribution

    NASA Astrophysics Data System (ADS)

    Kohnle, Antje; Rizzoli, Aluna

    2017-05-01

    Secure communication protocols are becoming increasingly important, e.g. for internet-based communication. Quantum key distribution (QKD) allows two parties, commonly called Alice and Bob, to generate a secret sequence of 0s and 1s called a key that is only known to themselves. Classically, Alice and Bob could never be certain that their communication was not compromised by a malicious eavesdropper. Quantum mechanics however makes secure communication possible. The fundamental principle of quantum mechanics that taking a measurement perturbs the system (unless the measurement is compatible with the quantum state) also applies to an eavesdropper. Using appropriate protocols to create the key, Alice and Bob can detect the presence of an eavesdropper by errors in their measurements. As part of the QuVis Quantum Mechanics Visualisation Project, we have developed a suite of four interactive simulations that demonstrate the basic principles of three different QKD protocols. The simulations use either polarised photons or spin 1/2 particles as physical realisations. The simulations and accompanying activities are freely available for use online or download, and run on a wide range of devices including tablets and PCs. Evaluation with students over three years was used to refine the simulations and activities. Preliminary studies show that the refined simulations and activities help students learn the basic principles of QKD at both the introductory and advanced undergraduate levels.
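    The eavesdropper-detection principle described above can be illustrated with a toy key-sifting simulation in the spirit of the BB84 protocol. This is a hedged sketch, not the QuVis simulations themselves; the function name `bb84_sift` and its parameters are illustrative.

    ```python
    import random

    def bb84_sift(n_bits, eavesdrop=False, seed=42):
        """Toy BB84 sifting: Alice encodes random bits in random bases, Bob
        measures in random bases, and they keep only matching-basis bits.
        An intercept-resend eavesdropper introduces ~25% errors in that key."""
        rng = random.Random(seed)
        alice_bits = [rng.randint(0, 1) for _ in range(n_bits)]
        alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]
        bob_bases = [rng.randint(0, 1) for _ in range(n_bits)]
        bob_bits = []
        for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
            if eavesdrop:  # Eve measures in a random basis and re-sends
                e_basis = rng.randint(0, 1)
                bit = bit if e_basis == a_basis else rng.randint(0, 1)
                a_basis = e_basis  # the re-sent photon carries Eve's basis
            # Bob's result is definite only if his basis matches the photon's
            bob_bits.append(bit if b_basis == a_basis else rng.randint(0, 1))
        # Publicly compare bases; keep only matching-basis positions
        sifted = [(a, b) for a, b, x, y in
                  zip(alice_bits, bob_bits, alice_bases, bob_bases) if x == y]
        errors = sum(a != b for a, b in sifted)
        return len(sifted), errors

    n, err = bb84_sift(2000)
    print(f"no eavesdropper: {err} errors in {n} sifted bits")
    n, err = bb84_sift(2000, eavesdrop=True)
    print(f"intercept-resend eavesdropper: error rate ~{err / n:.2f}")
    ```

    Without an eavesdropper the sifted keys agree exactly; with intercept-resend eavesdropping roughly a quarter of the sifted bits disagree, which is the error signature Alice and Bob use to detect the attack.
    
    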

  7. Controlling the error on target motion through real-time mesh adaptation: Applications to deep brain stimulation.

    PubMed

    Bui, Huu Phuoc; Tomar, Satyendra; Courtecuisse, Hadrien; Audette, Michel; Cotin, Stéphane; Bordas, Stéphane P A

    2018-05-01

An error-controlled mesh refinement procedure for needle insertion simulations is presented. As an example, the procedure is applied to simulations of electrode implantation for deep brain stimulation. We take into account the brain shift phenomenon occurring when a craniotomy is performed. We observe that the error in the computation of the displacement and stress fields is localised around the needle tip and the needle shaft during needle insertion simulation. By suitably and adaptively refining the mesh in this region, our approach enables us to control, and thus to reduce, the error whilst maintaining a coarser mesh in other parts of the domain. Through academic and practical examples we demonstrate that our adaptive approach, as compared with a uniform coarse mesh, increases the accuracy of the displacement and stress fields around the needle shaft while, for a given accuracy, saving computational time with respect to a uniform finer mesh. This facilitates real-time simulations. The proposed methodology has direct implications for increasing the accuracy, and controlling the computational expense, of the simulation of percutaneous procedures such as biopsy, brachytherapy, regional anaesthesia, or cryotherapy. Moreover, the proposed approach can be helpful in the development of robotic surgeries, because a simulation running in the control loop of a robot needs to be accurate and to occur in real time. Copyright © 2018 John Wiley & Sons, Ltd.

  8. Preliminary SAGE Simulations of Volcanic Jets Into a Stratified Atmosphere

    NASA Astrophysics Data System (ADS)

    Peterson, A. H.; Wohletz, K. H.; Ogden, D. E.; Gisler, G. R.; Glatzmaier, G. A.

    2007-12-01

The SAGE (SAIC Adaptive Grid Eulerian) code employs adaptive mesh refinement in solving the Eulerian equations of complex fluid flow, which is desirable for simulation of volcanic eruptions. The goal of modeling volcanic eruptions is to better develop a code's predictive capabilities in order to understand the dynamics that govern the overall behavior of real eruption columns. To achieve this goal, we focus on the dynamics of underexpanded jets, one of the fundamental physical processes important to explosive eruptions. Previous simulations of laboratory jets modeled in cylindrical coordinates were benchmarked against simulations in CFDLib (Los Alamos National Laboratory), which solves the full Navier-Stokes equations (including the viscous stress tensor), and showed close agreement, indicating that the adaptive mesh refinement used in SAGE may offset the need for explicit calculation of viscous dissipation. We compare gas density contours of these previous simulations, with the same initial conditions in cylindrical and Cartesian geometries, to laboratory experiments to determine both the validity of the model and the robustness of the code. The SAGE results in both geometries are within several percent of the experiments for position and density of the incident (intercepting) and reflected shocks, slip lines, shear layers, and Mach disk. To expand our study into a volcanic regime, we simulate large-scale jets in a stratified atmosphere to establish the code's ability to model a sustained jet into a stable atmosphere.

  9. Refining historical limits method to improve disease cluster detection, New York City, New York, USA.

    PubMed

    Levin-Rector, Alison; Wilson, Elisha L; Fine, Annie D; Greene, Sharon K

    2015-02-01

    Since the early 2000s, the Bureau of Communicable Disease of the New York City Department of Health and Mental Hygiene has analyzed reportable infectious disease data weekly by using the historical limits method to detect unusual clusters that could represent outbreaks. This method typically produced too many signals for each to be investigated with available resources while possibly failing to signal during true disease outbreaks. We made method refinements that improved the consistency of case inclusion criteria and accounted for data lags and trends and aberrations in historical data. During a 12-week period in 2013, we prospectively assessed these refinements using actual surveillance data. The refined method yielded 74 signals, a 45% decrease from what the original method would have produced. Fewer and less biased signals included a true citywide increase in legionellosis and a localized campylobacteriosis cluster subsequently linked to live-poultry markets. Future evaluations using simulated data could complement this descriptive assessment.
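    The core comparison in the historical limits method can be sketched as follows. This is a minimal illustration, not the Bureau's refined implementation; the function name and baseline counts are hypothetical, and the 2-standard-deviation threshold follows the classic CDC formulation.

    ```python
    import statistics

    def historical_limits_signal(current_count, historical_counts, k=2.0):
        """Signal a possible cluster when the current period's case count
        exceeds the mean of comparable historical periods by more than k
        standard deviations. The classic CDC historical limits method uses
        15 baseline totals: the same, preceding, and following 4-week
        periods of each of the previous 5 years."""
        mean = statistics.mean(historical_counts)
        sd = statistics.stdev(historical_counts)
        return current_count > mean + k * sd

    # Hypothetical case counts for one disease over 15 comparable past periods
    baseline = [3, 5, 4, 6, 2, 4, 5, 3, 4, 6, 5, 3, 4, 2, 5]
    print(historical_limits_signal(12, baseline))  # True: flag for investigation
    print(historical_limits_signal(5, baseline))   # False: within historical limits
    ```

    The refinements described in the abstract (consistent case inclusion criteria, adjustment for reporting lags, trends, and aberrant baseline values) all act on how `historical_counts` is assembled before this threshold test is applied.
    
    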

  10. An intercomparison of GCM and RCM dynamical downscaling for characterizing the hydroclimatology of California and Nevada

    NASA Astrophysics Data System (ADS)

    Xu, Z.; Rhoades, A.; Johansen, H.; Ullrich, P. A.; Collins, W. D.

    2017-12-01

Dynamical downscaling is widely used to properly characterize the regional surface heterogeneities that shape the local hydroclimatology. However, the factors in dynamical downscaling, including the refinement of model horizontal resolution, the large-scale forcing datasets, and the dynamical core, have not been fully evaluated. Two cutting-edge global-to-regional downscaling methods are used to assess these factors: the variable-resolution Community Earth System Model (VR-CESM) and the Weather Research & Forecasting (WRF) regional climate model, under different horizontal resolutions (28, 14, and 7 km). Two groups of WRF simulations are driven by either the NCEP reanalysis dataset (WRF_NCEP) or VR-CESM outputs (WRF_VRCESM) to evaluate the effects of the large-scale forcing datasets. The impact of the dynamical core is assessed by comparing the VR-CESM simulations to the coupled WRF_VRCESM simulations with the same physical parameterizations and similar grid domains. The simulated hydroclimatology (i.e., total precipitation, snow cover, snow water equivalent (SWE), and surface temperature) is compared with the reference datasets. The large-scale forcing datasets are critical to the WRF simulations for accurately simulating total precipitation, SWE, and snow cover, but not surface temperature. Both the WRF and VR-CESM results highlight that no significant benefit is found in the simulated hydroclimatology from refining the horizontal resolution from 28 to 7 km alone. Simulated surface temperature is sensitive to the choice of dynamical core: WRF generally simulates higher temperatures than VR-CESM, alleviating the systematic cold bias of DJF temperatures over the California mountain region but overestimating the JJA temperature in California's Central Valley.

  11. Analyses of Simulated Reconnection-Driven Solar Polar Jets

    NASA Astrophysics Data System (ADS)

    Roberts, M. A.; Uritsky, V. M.; Karpen, J. T.; DeVore, C. R.

    2014-12-01

Solar polar jets are observed to originate in regions within the open field of solar coronal holes. These so-called "anemone" regions are generally accepted to be regions of opposite polarity and are associated with an embedded dipole topology, consisting of a fan separatrix and a spine line emanating from a null point at the top of the dome-shaped fan surface. Previous analyses of these jets (Pariat et al. 2009, 2010), modeled using the Adaptively Refined Magnetohydrodynamics Solver (ARMS), have supported the claim that magnetic reconnection across current sheets formed at the null point, between the highly twisted closed field of the dipole and the open field lines surrounding it, releases the energy necessary to drive these jets. However, these initial simulations assumed a "static" environment for the jets, neglecting effects due to gravity, the solar wind, and the expanding spherical geometry. A new set of ARMS simulations taking these additional physical processes into account was recently performed. Initial results are qualitatively consistent with the earlier Cartesian studies, demonstrating the robustness of the underlying ideal and resistive mechanisms. We focus on density and velocity fluctuations within a narrow radial slit aligned with the direction of the spine of the jet, as well as other physical properties, in order to identify and refine their signatures in the lower heliosphere. These refined signatures can be used as parameters by which plasma processes initiated by these jets may be identified in situ by future missions such as Solar Orbiter and Solar Probe Plus.

  12. M and D SIG progress report: Laboratory simulations of LDEF impact features

    NASA Technical Reports Server (NTRS)

    Horz, Friedrich; Bernhard, R. P.; See, Thomas H.; Atkinson, Dale R.; Allbrooks, Martha K.

    1991-01-01

    Reported here are impact simulations into pure Teflon and aluminum targets. These experiments will allow first order interpretations of impact features on the Long Duration Exposure Facility (LDEF), and they will serve as guides for dedicated experiments that employ the real LDEF blankets, both unexposed and exposed, for a refined understanding of the Long Duration Exposure Facility's collisional environment.

  13. Mechanical properties and biocorrosion resistance of the Mg-Gd-Nd-Zn-Zr alloy processed by equal channel angular pressing.

    PubMed

    Zhang, Junyi; Kang, Zhixin; Wang, Fen

    2016-11-01

A Mg-Gd-Nd-Zn-Zr alloy was processed by equal channel angular pressing (ECAP) at 375°C. The grain size of the Mg-Gd-Nd-Zn-Zr alloy was refined to ~2.5 μm, with spherical precipitates (β1 phase) distributed in the matrix. The mechanical properties of the ECAPed alloy were significantly improved as a result of grain refinement and precipitation strengthening. The corrosion rate of the ECAPed magnesium alloy in simulated body fluid dramatically decreased from 0.236 mm/a to 0.126 mm/a due to the strong basal texture and refined microstructure. This wrought magnesium alloy shows potential for biomedical applications. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Aerodynamic design optimization via reduced Hessian SQP with solution refining

    NASA Technical Reports Server (NTRS)

    Feng, Dan; Pulliam, Thomas H.

    1995-01-01

    An all-at-once reduced Hessian Successive Quadratic Programming (SQP) scheme has been shown to be efficient for solving aerodynamic design optimization problems with a moderate number of design variables. This paper extends this scheme to allow solution refining. In particular, we introduce a reduced Hessian refining technique that is critical for making a smooth transition of the Hessian information from coarse grids to fine grids. Test results on a nozzle design using quasi-one-dimensional Euler equations show that through solution refining the efficiency and the robustness of the all-at-once reduced Hessian SQP scheme are significantly improved.

  15. Chiral pathways in DNA dinucleotides using gradient optimized refinement along metastable borders

    NASA Astrophysics Data System (ADS)

    Romano, Pablo; Guenza, Marina

We present a study of DNA breathing fluctuations using Markov state models (MSMs) with our novel refinement procedure. MSMs have become a favored method for building kinetic models; however, their accuracy has always depended on using a significant number of microstates, making the method costly. We present a method that optimizes macrostates by refining their borders with respect to the gradient along the free energy surface. As the separation between macrostates contains the highest discretization errors, this method corrects for errors produced by limited microstate sampling. Using our refined MSM methods, we investigate DNA breathing fluctuations, thermally induced conformational changes in native B-form DNA. We ran several microsecond MD simulations of DNA dinucleotides of varying sequences, to include sequence and polarity effects, and analyzed them with our refined MSMs to investigate the conformational pathways inherent in the unstacking of DNA bases. Our kinetic analysis has shown a preferential chirality in unstacking pathways that may be critical to how proteins interact with single-stranded regions of DNA. These breathing dynamics can help elucidate the connection between conformational changes and key mechanisms within protein-DNA recognition. NSF Chemistry Division (Theoretical Chemistry), the Division of Physics (Condensed Matter: Material Theory), XSEDE.
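    The basic MSM estimation step that such border-refinement procedures build on, counting transitions between discrete states at a fixed lag time and row-normalizing, can be sketched as follows. The code is illustrative only, not the authors' refinement method, and the two-state trajectory is hypothetical.

    ```python
    def msm_transition_matrix(dtraj, n_states, lag=1):
        """Row-stochastic transition matrix estimated from a discretized
        trajectory at a given lag time: count transitions i -> j separated
        by `lag` steps, then normalize each row."""
        counts = [[0.0] * n_states for _ in range(n_states)]
        for i, j in zip(dtraj[:-lag], dtraj[lag:]):
            counts[i][j] += 1.0
        for row in counts:
            total = sum(row) or 1.0  # leave never-visited states as zero rows
            for j in range(n_states):
                row[j] /= total
        return counts

    # Hypothetical two-state trajectory: 0 = stacked bases, 1 = unstacked
    dtraj = [0, 0, 0, 1, 0, 0, 1, 1, 0, 0]
    T = msm_transition_matrix(dtraj, n_states=2)
    print(T[0])  # probabilities of staying stacked vs. unstacking
    print(T[1])
    ```

    Discretization error concentrates at the macrostate borders because frames near a border are easily assigned to the wrong state; refining where the borders sit, as the abstract describes, changes `dtraj` and hence the estimated matrix.
    
    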

  16. Value of Collaboration With Standardized Patients and Patient Facilitators in Enhancing Reflection During the Process of Building a Simulation.

    PubMed

    Stanley, Claire; Lindsay, Sally; Parker, Kathryn; Kawamura, Anne; Samad Zubairi, Mohammad

    2018-05-09

We previously reported that experienced clinicians find that the process of collectively building and participating in simulations provides (1) a unique reflective opportunity; (2) a venue to identify different perspectives through discussion and action in a group; and (3) a safe environment for learning. No studies have assessed the value of collaborating with standardized patients (SPs) and patient facilitators (PFs) in the process. In this work, we describe this collaboration in building a simulation and the key elements that facilitate reflection. Three simulation scenarios surrounding communication were built by teams of clinicians, a PF, and SPs. Six build sessions were audio recorded, transcribed, and thematically analyzed through an iterative process to (1) describe the steps of building a simulation scenario and (2) identify the key elements involved in the collaboration. The five main steps to build a simulation scenario were (1) storytelling and reflection; (2) defining objectives and brainstorming ideas; (3) building a stem and creating a template; (4) refining the scenario with feedback from SPs; and (5) mock run-throughs with follow-up discussion. During these steps, the PF shared personal insights, challenging participants to reflect more deeply to better understand and consider the patient's perspective. The SPs provided a unique outside perspective to the group. In addition, the interaction between the SPs and the PF helped refine character roles. A collaborative approach incorporating feedback from PFs and SPs to create a simulation scenario is a valuable method to enhance reflective practice for clinicians.

  17. Absorption of wireless radiation in the child versus adult brain and eye from cell phone conversation or virtual reality.

    PubMed

    Fernández, C; de Salles, A A; Sears, M E; Morris, R D; Davis, D L

    2018-05-22

Children's brains are more susceptible to hazardous exposures and are thought to absorb higher doses of radiation from cell phones in some regions of the brain. Globally, the numbers and applications of wireless devices are increasing rapidly, but since 1997 safety testing has relied on a large, homogeneous, adult male head phantom to simulate exposures; the "Standard Anthropomorphic Mannequin" (SAM) is used to estimate only whether tissue temperature will be increased by more than 1 °C in the periphery. The present work employs anatomically based modeling, currently used to set standards for surgical and medical devices, that incorporates heterogeneous characteristics of age and anatomy. Modeling of a cell phone held to the ear, or of virtual reality devices in front of the eyes, reveals that young eyes and brains absorb substantially higher local radiation doses than adults'. Age-specific simulations indicate the need to apply refined methods for regulatory compliance testing, and for public education regarding manufacturers' advice to keep phones off the body and prudent use to limit exposures, particularly to protect the young. Copyright © 2018. Published by Elsevier Inc.

  18. Numerical Modeling of the Transient Chilldown Process of a Cryogenic Propellant Transfer Line

    NASA Technical Reports Server (NTRS)

    Hartwig, Jason; Vera, Jerry

    2015-01-01

Before cryogenic fuel depots can be fully realized, efficient methods with which to chill down the spacecraft transfer line and receiver tank are required. This paper presents numerical modeling of the chilldown of a liquid hydrogen tank-to-tank propellant transfer line using the Generalized Fluid System Simulation Program (GFSSP). To compare with data from recently concluded turbulent LH2 chilldown experiments, seven different cases were run across a range of inlet liquid temperatures and mass flow rates. Both trickle and pulse chilldown methods were simulated. The GFSSP model qualitatively matches external skin-mounted temperature readings, but large differences are shown between measured and predicted internal stream temperatures. Discrepancies are attributed to the simplified model correlation used to compute two-phase flow boiling heat transfer. Flow visualization from testing shows that the initial bottoming out of skin-mounted sensors corresponds to annular flow, but that considerable time is required for the stream sensor to achieve steady state as the system moves through annular, churn, and bubbly flow. The GFSSP model tracks trends in the data adequately well, but further work is needed to refine the two-phase flow modeling to better match observed test data.

  19. Buckling Load Calculations of the Isotropic Shell A-8 Using a High-Fidelity Hierarchical Approach

    NASA Technical Reports Server (NTRS)

    Arbocz, Johann; Starnes, James H.

    2002-01-01

As a step towards developing a new design philosophy, one that moves away from the traditional empirical approach used in design today towards a science-based design technology approach, a test series of 7 isotropic shells carried out by Arbocz and Babcock at Caltech is used. It is shown how the hierarchical approach to buckling load calculations proposed by Arbocz et al. can be used to perform an approach often called 'high-fidelity analysis', where the uncertainties involved in a design are simulated by refined and accurate numerical methods. The Delft Interactive Shell DEsign COde (short, DISDECO) is employed for this hierarchical analysis to provide an accurate prediction of the critical buckling load of the given shell structure. This value is used later as a reference to establish the accuracy of the Level-3 buckling load predictions. As a final step in the hierarchical analysis approach, the critical buckling load and the estimated imperfection sensitivity of the shell are verified by conducting an analysis using a sufficiently refined finite element model with one of the current generation of two-dimensional shell analysis codes with the advanced capabilities needed to represent both geometric and material nonlinearities.

  20. Verification and Validation of a Coordinate Transformation Method in Axisymmetric Transient Magnetics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ashcraft, C. Chace; Niederhaus, John Henry; Robinson, Allen C.

We present a verification and validation analysis of a coordinate-transformation-based numerical solution method for the two-dimensional axisymmetric magnetic diffusion equation, implemented in the finite-element simulation code ALEGRA. The transformation, suggested by Melissen and Simkin, yields an equation set perfectly suited for linear finite elements and for problems with large jumps in material conductivity near the axis. The verification analysis examines transient magnetic diffusion in a rod or wire in a very low conductivity background by first deriving an approximate analytic solution using perturbation theory. This approach for generating a reference solution is shown to be not fully satisfactory. A specialized approach for manufacturing an exact solution is then used to demonstrate second-order convergence under spatial and temporal refinement. For this new implementation, a significant improvement relative to previously available formulations is observed. Benefits in accuracy for computed current density and Joule heating are also demonstrated. The validation analysis examines the circuit-driven explosion of a copper wire using resistive magnetohydrodynamics modeling, in comparison to experimental tests. The new implementation matches the accuracy of the existing formulation, with both formulations capturing the experimental burst time and action to within approximately 2%.
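    The second-order convergence demonstrated in such verification studies is typically quantified by the observed order of accuracy computed from errors on successively refined grids. The formula below is standard verification practice; the helper function and the sample error values are illustrative, not part of ALEGRA.

    ```python
    import math

    def observed_order(error_coarse, error_fine, refinement_ratio=2.0):
        """Observed order of accuracy from errors measured on two grids
        related by a uniform refinement ratio r: p = log(e_c/e_f) / log(r)."""
        return math.log(error_coarse / error_fine) / math.log(refinement_ratio)

    # Hypothetical manufactured-solution errors after halving the mesh size
    print(round(observed_order(4.0e-3, 1.0e-3), 3))  # -> 2.0, i.e. second order
    ```

    A manufactured exact solution makes the error on each grid directly computable, so the observed order can be checked against the scheme's formal order at every refinement level.
    
    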

  1. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems.

    PubMed

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan

    2016-10-28

Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and cooperation amongst the heterogeneous components of the system, assuring by design the scalability, interoperability, and correctness of component cooperation.

  2. A parallel adaptive mesh refinement algorithm

    NASA Technical Reports Server (NTRS)

    Quirk, James J.; Hanebutte, Ulf R.

    1993-01-01

    Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.
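    The flag-and-refine idea underlying AMR can be sketched in one dimension. This is a minimal illustration only; real block-structured AMR, including Quirk's algorithm, refines whole patches of cells organized in a grid hierarchy rather than individual cells, and the test function here is hypothetical.

    ```python
    import math

    def refine_grid(cells, f, tol, max_levels=4):
        """Minimal 1-D flag-and-refine sketch: bisect any cell across which
        the solution jumps by more than tol; repeat up to max_levels times,
        so resolution concentrates where the solution varies sharply."""
        for _ in range(max_levels):
            new_cells, refined = [], False
            for a, b in cells:
                if abs(f(b) - f(a)) > tol:  # flag this cell for refinement
                    m = 0.5 * (a + b)
                    new_cells += [(a, m), (m, b)]
                    refined = True
                else:
                    new_cells.append((a, b))
            cells = new_cells
            if not refined:
                break
        return cells

    # A steep front near x = 0.5 attracts resolution; smooth regions stay coarse
    front = lambda x: math.tanh(50.0 * (x - 0.5))
    coarse = [(i / 10.0, (i + 1) / 10.0) for i in range(10)]
    fine = refine_grid(coarse, front, tol=0.2)
    print(f"{len(coarse)} coarse cells -> {len(fine)} adapted cells")
    ```

    In the block-structured setting each flagged region becomes a rectangular patch at the next refinement level, and since patches advance independently apart from boundary exchanges, they map naturally onto message-passing parallelism.
    
    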

  3. On a High-Fidelity Hierarchical Approach to Buckling Load Calculations

    NASA Technical Reports Server (NTRS)

    Arbocz, Johann; Starnes, James H.; Nemeth, Michael P.

    2001-01-01

    As a step towards developing a new design philosophy, one that moves away from the traditional empirical approach used today in design towards a science-based design technology approach, a recent test series of 5 composite shells carried out by Waters at NASA Langley Research Center is used. It is shown how the hierarchical approach to buckling load calculations proposed by Arbocz et al can be used to perform an approach often called "high fidelity analysis", where the uncertainties involved in a design are simulated by refined and accurate numerical methods. The Delft Interactive Shell DEsign COde (short, DISDECO) is employed for this hierarchical analysis to provide an accurate prediction of the critical buckling load of the given shell structure. This value is used later as a reference to establish the accuracy of the Level-3 buckling load predictions. As a final step in the hierarchical analysis approach, the critical buckling load and the estimated imperfection sensitivity of the shell are verified by conducting an analysis using a sufficiently refined finite element model with one of the current generation two-dimensional shell analysis codes with the advanced capabilities needed to represent both geometric and material nonlinearities.

  4. Component-Based Modelling for Scalable Smart City Systems Interoperability: A Case Study on Integrating Energy Demand Response Systems

    PubMed Central

    Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan

    2016-01-01

    Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems’ architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), extends the refinement calculus for component and object systems (rCOS) modelling method and is implemented using the Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system, assuring by design the scalability, interoperability, and correctness of component cooperation. PMID:27801829

  5. Stabilization of numerical interchange in spectral-element magnetohydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sovinec, C. R.

    In this study, auxiliary numerical projections of the divergence of flow velocity and vorticity parallel to magnetic field are developed and tested for the purpose of suppressing unphysical interchange instability in magnetohydrodynamic simulations. The numerical instability arises with equal-order C0 finite- and spectral-element expansions of the flow velocity, magnetic field, and pressure and is sensitive to behavior at the limit of resolution. The auxiliary projections are motivated by physical field-line bending, and coercive responses to the projections are added to the flow-velocity equation. Their incomplete expansions are limited to the highest-order orthogonal polynomial in at least one coordinate of the spectral elements. Cylindrical eigenmode computations show that the projections induce convergence from the stable side with first-order ideal-MHD equations during h-refinement and p-refinement. Hyperbolic and parabolic projections and responses are compared, together with different methods for avoiding magnetic divergence error. Lastly, the projections are also shown to be effective in linear and nonlinear time-dependent computations with the NIMROD code [C. R. Sovinec, et al., J. Comput. Phys. 195 (2004) 355-386], provided that the projections introduce numerical dissipation.

  6. Stabilization of numerical interchange in spectral-element magnetohydrodynamics

    DOE PAGES

    Sovinec, C. R.

    2016-05-10

    In this study, auxiliary numerical projections of the divergence of flow velocity and vorticity parallel to magnetic field are developed and tested for the purpose of suppressing unphysical interchange instability in magnetohydrodynamic simulations. The numerical instability arises with equal-order C0 finite- and spectral-element expansions of the flow velocity, magnetic field, and pressure and is sensitive to behavior at the limit of resolution. The auxiliary projections are motivated by physical field-line bending, and coercive responses to the projections are added to the flow-velocity equation. Their incomplete expansions are limited to the highest-order orthogonal polynomial in at least one coordinate of the spectral elements. Cylindrical eigenmode computations show that the projections induce convergence from the stable side with first-order ideal-MHD equations during h-refinement and p-refinement. Hyperbolic and parabolic projections and responses are compared, together with different methods for avoiding magnetic divergence error. Lastly, the projections are also shown to be effective in linear and nonlinear time-dependent computations with the NIMROD code [C. R. Sovinec, et al., J. Comput. Phys. 195 (2004) 355-386], provided that the projections introduce numerical dissipation.

  7. Statistical and simulation analysis of hydraulic-conductivity data for Bear Creek and Melton Valleys, Oak Ridge Reservation, Tennessee

    USGS Publications Warehouse

    Connell, J.F.; Bailey, Z.C.

    1989-01-01

    A total of 338 single-well aquifer tests from Bear Creek and Melton Valleys, Tennessee, were statistically grouped to estimate hydraulic conductivities for the geologic formations in the valleys. A cross-sectional simulation model linked to a regression model was used to further refine the statistical estimates for each of the formations and to improve understanding of groundwater flow in Bear Creek Valley. Median hydraulic-conductivity values were used as initial values in the model. Model-calculated estimates of hydraulic conductivity were generally lower than the statistical estimates. Simulations indicate that (1) the Pumpkin Valley Shale controls groundwater flow between Pine Ridge and Bear Creek; (2) all the recharge on Chestnut Ridge discharges to the Maynardville Limestone; (3) the formations having smaller hydraulic gradients may have a greater tendency for flow along strike; (4) local hydraulic conditions in the Maynardville Limestone cause inaccurate model-calculated estimates of hydraulic conductivity; and (5) the conductivity of deep bedrock neither affects the results of the model nor adds information on the flow system. Improved model performance would require: (1) more water-level data for the Copper Ridge Dolomite; (2) improved estimates of hydraulic conductivity in the Copper Ridge Dolomite and Maynardville Limestone; and (3) more water-level data and aquifer tests in deep bedrock. (USGS)

  8. Deep learning approaches for detection and removal of ghosting artifacts in MR spectroscopy.

    PubMed

    Kyathanahally, Sreenath P; Döring, André; Kreis, Roland

    2018-09-01

    To make use of deep learning (DL) methods to detect and remove ghosting artifacts in clinical magnetic resonance spectra of human brain. Deep learning algorithms, including fully connected neural networks, deep-convolutional neural networks, and stacked what-where auto encoders, were implemented to detect and correct MR spectra containing spurious echo ghost signals. The DL methods were trained on a huge database of simulated spectra with and without ghosting artifacts that represent complex variations of ghost-ridden spectra, transformed to time-frequency spectrograms. The trained model was tested on simulated and in vivo spectra. The preliminary results for ghost detection are very promising, reaching almost 100% accuracy, and the DL ghost removal methods show potential in simulated and in vivo spectra, but need further refinement and quantitative testing. Ghosting artifacts in spectroscopy are problematic, as they superimpose with metabolites and lead to inaccurate quantification. Detection and removal of ghosting artifacts using traditional machine learning approaches with feature extraction/selection is difficult, as ghosts appear at different frequencies. Here, we show that DL methods perform extremely well for ghost detection if the spectra are treated as images in the form of time-frequency representations. Further optimization for in vivo spectra will hopefully confirm their "ghostbusting" capacity. Magn Reson Med 80:851-863, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
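    The preprocessing step described above, turning a time-domain spectrum into a time-frequency spectrogram so that spurious echoes show up as localized image features, can be sketched with a minimal short-time Fourier transform. This is an illustrative reconstruction, not the authors' actual pipeline; the signal parameters (frequencies, decay rate, echo delay) are hypothetical.

```python
import numpy as np

def spectrogram(signal, win=64, hop=16):
    """Magnitude short-time Fourier transform: slide a Hann window over
    the time-domain signal and FFT each frame, yielding the kind of
    time-frequency image a DL classifier can operate on."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.fft(frames, axis=1))[:, :win // 2]

# Synthetic "FID": one on-resonance decaying sinusoid plus a delayed
# ghost echo at another frequency (hypothetical illustration values).
t = np.arange(1024) / 1024.0
fid = np.exp(-3 * t) * np.cos(2 * np.pi * 60 * t)
ghost = np.zeros_like(fid)
ghost[400:] = 0.5 * np.exp(-3 * t[:624]) * np.cos(2 * np.pi * 150 * t[:624])
img = spectrogram(fid + ghost)
print(img.shape)  # (61, 32): frames x frequency bins, a 2D image
```

    Because the ghost appears only in the later frames and at a distinct frequency, it occupies a separate region of this image, which is what makes image-style classifiers effective here.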

  9. Effects of grain refinement on the biocorrosion and in vitro bioactivity of magnesium.

    PubMed

    Saha, Partha; Roy, Mangal; Datta, Moni Kanchan; Lee, Boeun; Kumta, Prashant N

    2015-12-01

    Magnesium is a new class of biodegradable metals potentially suitable for bone fracture fixation due to its suitable mechanical properties, high degradability and biocompatibility. However, rapid corrosion and loss of mechanical strength under physiological conditions render it unsuitable for load-bearing applications. In the present study, grain refinement was implemented to control bio-corrosion, demonstrating improved in vitro bioactivity of magnesium. Pure commercial magnesium was grain refined using different amounts of zirconium (0.25 and 1.0 wt.%). Corrosion behavior was studied by potentiodynamic polarization (PDP) and mass-loss immersion tests, demonstrating a decrease in corrosion rate with grain size reduction. In vitro biocompatibility tests conducted with MC3T3-E1 pre-osteoblast cells and measured by DNA quantification demonstrated a significant increase in cell proliferation for Mg-1 wt.% Zr at day 5. Similarly, alkaline phosphatase (ALP) activity was higher for grain-refined Mg. The alloys were also tested for their ability to support osteoclast differentiation using RAW264.7 monocytes in cell culture supplemented with receptor activator of nuclear factor kappa-B ligand (RANKL). The osteoclast differentiation process was observed to be severely restricted for smaller-grained Mg. Overall, the results indicate grain refinement to be useful not only for improving the corrosion resistance of Mg implants for bone fixation devices but also for potentially modulating bone regeneration around the implant. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Concept Development for Future Domains: A New Method of Knowledge Elicitation

    DTIC Science & Technology

    2005-06-01

    Procedure: U.S. Army Research Institute for the Behavioral and Social Sciences (ARI) examined methods to generate, refine, test, and validate new...generate, elaborate, refine, describe, test, and validate new Future Force concepts relating to doctrine, tactics, techniques, procedures, unit and team...System (Harvey, 1993), and the Job Element Method (Primoff & Eyde, 1988). Figure 1 provides a more comprehensive list of task analytic methods. Please see

  11. Investigation on temporal evolution of the grain refinement in copper under high strain rate loading via in-situ synchrotron measurement and predictive modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Pooja Nitin; Shin, Yung C.; Sun, Tao

    Synchrotron X-rays are integrated with a modified Kolsky tension bar to conduct in situ tracking of the grain refinement mechanism operating during the dynamic deformation of metals. Copper with an initial average grain size of 36 μm is refined to 6.3 μm when loaded at a constant high strain rate of 1200 s⁻¹. The synchrotron measurements revealed the temporal evolution of the grain refinement mechanism in terms of the initiation and rate of refinement throughout the loading test. A multiscale coupled probabilistic cellular automata based recrystallization model has been developed to predict the microstructural evolution occurring during dynamic deformation processes. The model accurately predicts the initiation of the grain refinement mechanism with a predicted final average grain size of 2.4 μm. As a result, the model also accurately predicts the temporal evolution in terms of the initiation and extent of refinement when compared with the experimental results.

  12. Investigation on temporal evolution of the grain refinement in copper under high strain rate loading via in-situ synchrotron measurement and predictive modeling

    DOE PAGES

    Shah, Pooja Nitin; Shin, Yung C.; Sun, Tao

    2017-10-03

    Synchrotron X-rays are integrated with a modified Kolsky tension bar to conduct in situ tracking of the grain refinement mechanism operating during the dynamic deformation of metals. Copper with an initial average grain size of 36 μm is refined to 6.3 μm when loaded at a constant high strain rate of 1200 s⁻¹. The synchrotron measurements revealed the temporal evolution of the grain refinement mechanism in terms of the initiation and rate of refinement throughout the loading test. A multiscale coupled probabilistic cellular automata based recrystallization model has been developed to predict the microstructural evolution occurring during dynamic deformation processes. The model accurately predicts the initiation of the grain refinement mechanism with a predicted final average grain size of 2.4 μm. As a result, the model also accurately predicts the temporal evolution in terms of the initiation and extent of refinement when compared with the experimental results.

  13. Finite element modelling of creep crack growth in 316 stainless and 9Cr-1Mo steels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnaswamy, P.; Brust, F.W.

    1994-09-01

    The failure behavior of steels under sustained and cyclic loads has been addressed. The constitutive behavior of the two steels has been represented by the conventional strain-hardening law and the Murakami-Ohno model for reversed and cyclic loads. The laws have been implemented into the research finite element code FVP. Post-processors for FVP to calculate various path-independent integral fracture parameters have been written. Compact tension C(T) specimens have been tested under sustained and cyclic loads with both the load-point displacement and crack growth monitored during the tests. FE models with extremely refined meshes for the C(T) specimens were prepared and the experiments simulated numerically. Results from this analysis focus on the differences between the various constitutive models as well as the fracture parameters in characterizing the creep crack growth of the two steels.

  14. Orion Exploration Flight Test-1 (EFT-1) Absolute Navigation Design

    NASA Technical Reports Server (NTRS)

    Sud, Jastesh; Gay, Robert; Holt, Greg; Zanetti, Renato

    2014-01-01

    Scheduled to launch in September 2014 atop a Delta IV Heavy from the Kennedy Space Center, the Orion Multi-Purpose Crew Vehicle's (MPCV's) maiden flight, dubbed "Exploration Flight Test-1" (EFT-1), intends to stress the system by placing the uncrewed vehicle on a high-energy parabolic trajectory replicating conditions similar to those that would be experienced when returning from an asteroid or a lunar mission. Unique challenges associated with designing the navigation system for EFT-1 are presented in the narrative with an emphasis on how redundancy and robustness influenced the architecture. Two Inertial Measurement Units (IMUs), one GPS receiver and three barometric altimeters (BALTs) comprise the navigation sensor suite. The sensor data is multiplexed using conventional integration techniques and the state estimate is refined by the GPS pseudorange and deltarange measurements in an Extended Kalman Filter (EKF) that employs the UDU^T decomposition approach. The design is substantiated by simulation results to show the expected performance.
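    The UDU^T factorization mentioned above is favored in flight-software Kalman filters because propagating the factors U and D instead of the full covariance matrix is numerically robust. As an illustrative sketch (not the Orion flight code; the example matrix is arbitrary), the factorization itself can be computed as follows:

```python
import numpy as np

def udu_decompose(P):
    """Factor a symmetric positive-definite matrix as P = U D U^T with
    U unit upper triangular and D diagonal, the form used by
    'square-root' style Kalman filter implementations (Bierman)."""
    P = P.copy().astype(float)
    n = P.shape[0]
    U = np.eye(n)
    D = np.zeros(n)
    for j in range(n - 1, -1, -1):          # work from the last column back
        D[j] = P[j, j]
        for i in range(j):
            U[i, j] = P[i, j] / D[j]
            for k in range(i + 1):          # deflate the remaining block
                P[k, i] -= U[k, j] * D[j] * U[i, j]
    return U, D

P = np.array([[4.0, 2.0, 1.0],
              [2.0, 3.0, 0.5],
              [1.0, 0.5, 2.0]])             # arbitrary SPD "covariance"
U, D = udu_decompose(P)
print(np.allclose(U @ np.diag(D) @ U.T, P))  # True
```

    A filter built on this representation updates U and D directly at each measurement, which keeps the implied covariance symmetric and positive definite even in single-precision arithmetic.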

  15. Real-time image dehazing using local adaptive neighborhoods and dark-channel-prior

    NASA Astrophysics Data System (ADS)

    Valderrama, Jesus A.; Díaz-Ramírez, Víctor H.; Kober, Vitaly; Hernandez, Enrique

    2015-09-01

    A real-time algorithm for single-image dehazing is presented. The algorithm is based on calculation of local neighborhoods of a hazed image inside a moving window. The local neighborhoods are constructed by computing rank-order statistics. Next, the dark-channel-prior approach is applied to the local neighborhoods to estimate the transmission function of the scene. With the suggested approach there is no need to apply a refining algorithm, such as soft matting, to the estimated transmission. To achieve high-rate signal processing, the proposed algorithm is implemented exploiting massive parallelism on a graphics processing unit (GPU). Computer simulations are carried out to test the performance of the proposed algorithm in terms of dehazing efficiency and speed of processing. These tests are performed using several synthetic and real images. The obtained results are analyzed and compared with those obtained with existing dehazing algorithms.
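    For context, the classical dark-channel-prior transmission estimate that this algorithm builds on can be sketched as follows. This minimal NumPy version uses a plain patch-minimum rather than the paper's rank-order-statistics neighborhoods, and all parameter values (patch size, omega, the atmospheric light A) are illustrative.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the RGB channels and a local patch."""
    mins = img.min(axis=2)                      # min over color channels
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    h, w = mins.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(img, atmosphere, omega=0.95, patch=15):
    """Classical dark-channel-prior estimate:
    t(x) = 1 - omega * dark_channel(I(x) / A)."""
    normalized = img / atmosphere               # divide each channel by A
    return 1.0 - omega * dark_channel(normalized, patch)

# Usage on a synthetic image: forcing one channel dark everywhere makes
# the dark channel small, so the estimated transmission stays near 1.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, size=(32, 32, 3))
img[:, :, rng.integers(3)] *= 0.05
A = np.array([0.9, 0.9, 0.9])
t = estimate_transmission(img, A)
print(t.min(), t.max())
```

    On a GPU, the per-pixel patch minimum parallelizes trivially, which is what makes the real-time implementation feasible.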

  16. Technical Report - FINAL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbara Luke, Director, UNLV Engineering Geophysics Laboratory

    2007-04-25

    Improve understanding of the earthquake hazard in the Las Vegas Valley and assess the state of preparedness of the area's population and structures for the next big earthquake. 1. Enhance the seismic monitoring network in the Las Vegas Valley 2. Improve understanding of deep basin structure through active-source seismic refraction and reflection testing 3. Improve understanding of dynamic response of shallow sediments through seismic testing and correlations with lithology 4. Develop credible earthquake scenarios by laboratory and field studies, literature review and analyses 5. Refine ground motion expectations around the Las Vegas Valley through simulations 6. Assess current building standards in light of improved understanding of hazards 7. Perform risk assessment for structures and infrastructures, with emphasis on lifelines and critical structures 8. Encourage and facilitate broad and open technical interchange regarding earthquake safety in southern Nevada and efforts to inform citizens of earthquake hazards and mitigation opportunities

  17. Identification of novel Trypanosoma cruzi prolyl oligopeptidase inhibitors by structure-based virtual screening

    NASA Astrophysics Data System (ADS)

    de Almeida, Hugo; Leroux, Vincent; Motta, Flávia Nader; Grellier, Philippe; Maigret, Bernard; Santana, Jaime M.; Bastos, Izabela Marques Dourado

    2016-12-01

    We have previously demonstrated that the secreted prolyl oligopeptidase of Trypanosoma cruzi (POPTc80) is involved in the infection process by facilitating parasite migration through the extracellular matrix. We have built a 3D structural model in which POPTc80 is formed by a catalytic α/β-hydrolase domain and a β-propeller domain, and in which the substrate docks at the inter-domain interface, suggesting a "jaw opening" gating access mechanism. This preliminary model was refined by molecular dynamics simulations and then used for a virtual screening campaign, whose predictions were tested by standard binding assays. This strategy was successful, as all 13 tested molecules suggested by the in silico calculations were found to be active POPTc80 inhibitors in the micromolar range (lowest Ki at 667 nM). This work paves the way for future development of innovative drugs against Chagas disease.

  18. Verification and Validation Studies for the LAVA CFD Solver

    NASA Technical Reports Server (NTRS)

    Moini-Yekta, Shayan; Barad, Michael F; Sozer, Emre; Brehm, Christoph; Housman, Jeffrey A.; Kiris, Cetin C.

    2013-01-01

    The verification and validation of the Launch Ascent and Vehicle Aerodynamics (LAVA) computational fluid dynamics (CFD) solver is presented. A modern strategy for verification and validation is described incorporating verification tests, validation benchmarks, continuous integration and version control methods for automated testing in a collaborative development environment. The purpose of the approach is to integrate the verification and validation process into the development of the solver and improve productivity. This paper uses the Method of Manufactured Solutions (MMS) for the verification of 2D Euler equations, 3D Navier-Stokes equations as well as turbulence models. A method for systematic refinement of unstructured grids is also presented. Verification using inviscid vortex propagation and flow over a flat plate is highlighted. Simulation results using laminar and turbulent flow past a NACA 0012 airfoil and ONERA M6 wing are validated against experimental and numerical data.

  19. Refining climate models

    ScienceCinema

    Warren, Jeff; Iversen, Colleen; Brooks, Jonathan; Ricciuto, Daniel

    2018-02-13

    Using dogwood trees, Oak Ridge National Laboratory researchers are gaining a better understanding of the role photosynthesis and respiration play in the atmospheric carbon dioxide cycle. Their findings will aid computer modelers in improving the accuracy of climate simulations.

  20. Results from a scaled reactor cavity cooling system with water at steady state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lisowski, D. D.; Albiston, S. M.; Tokuhiro, A.

    We present a summary of steady-state experiments performed with a scaled, water-cooled Reactor Cavity Cooling System (RCCS) at the Univ. of Wisconsin - Madison. The RCCS concept is used for passive decay heat removal in the Next Generation Nuclear Plant (NGNP) design and was based on open literature of the GA-MHTGR, HTR-10 and AVR reactors. The RCCS is a 1/4-scale model of the full-scale prototype system, with a 7.6 m structure housing, a 5 m tall test section, and a 1,200 liter water storage tank. Radiant heaters impose a heat flux onto a three-riser-tube test section, representing a 5 deg. radial sector of the actual 360 deg. RCCS design. The maximum heat flux and power levels are 25 kW/m² and 42.5 kW, and can be configured for variable, axial, or radial power profiles to simulate prototypic conditions. Experimental results yielded measurements of local surface temperatures, internal water temperatures, volumetric flow rates, and pressure drop along the test section and into the water storage tank. The majority of the tests achieved a steady-state condition while remaining single-phase. A selected number of experiments were allowed to reach saturation and subsequently two-phase flow. RELAP5 simulations with the experimental data have been refined during test facility development and separate-effects validation of the experimental facility. This test series represents the completion of our steady-state testing, with future experiments investigating normal and off-normal accident scenarios with two-phase flow effects. The ultimate goal of the project is to combine experimental data from UW - Madison, UI, ANL, and Texas A&M with system model simulations to ascertain the feasibility of the RCCS as a successful long-term heat removal system during accident scenarios for the NGNP. (authors)

  1. SU-E-T-25: Real Time Simulator for Designing Electron Dual Scattering Foil Systems.

    PubMed

    Carver, R; Hogstrom, K; Price, M; Leblanc, J; Harris, G

    2012-06-01

    To create a user-friendly, accurate, real-time computer simulator to facilitate the design of dual-foil scattering systems for electron beams on radiotherapy accelerators. The simulator should allow for a relatively quick initial design that can be refined and verified with subsequent Monte Carlo (MC) calculations and measurements. The simulator consists of an analytical algorithm for calculating electron fluence and a graphical user interface (GUI) C++ program. The algorithm predicts electron fluence using Fermi-Eyges multiple Coulomb scattering theory with a refined Moliere formalism for scattering powers. The simulator also estimates central-axis x-ray dose contamination from the dual-foil system. Once the geometry of the beamline is specified, the simulator allows the user to continuously vary the primary scattering foil material and thickness, the secondary scattering foil material and Gaussian shape (thickness and sigma), and the beam energy. The beam profile and x-ray contamination are displayed in real time. The simulator was tuned by comparison of off-axis electron fluence profiles with those calculated using EGSnrc MC. Over the energy range 7-20 MeV and using present foils on the Elekta radiotherapy accelerator, the simulator profiles agreed to within 2% of MC profiles within 20 cm of the central axis. The x-ray contamination predictions matched measured data to within 0.6%. The calculation time was approximately 100 ms using a single processor, which allows for real-time variation of foil parameters using sliding bars. A real-time dual-scattering-foil system simulator has been developed. The tool has been useful in a project to redesign an electron dual scattering foil system for one of our radiotherapy accelerators. The simulator has also been useful as an instructional tool for our medical physics graduate students. © 2012 American Association of Physicists in Medicine.

  2. Constructing Cross-Linked Polymer Networks Using Monte Carlo Simulated Annealing Technique for Atomistic Molecular Simulations

    DTIC Science & Technology

    2014-10-01

    the angles and dihedrals that are truly unique will be indicated by the user by editing NewAngleTypesDump and NewDihedralTypesDump. The program ...Robert M Elder, Timothy W Sirk, and ...Antechamber program in Assisted Model Building with Energy Refinement (AMBER) Tools to assign partial charges (using the Austin Model 1 [AM1]-bond charge

  3. Topological analysis of nuclear pasta phases

    NASA Astrophysics Data System (ADS)

    Kycia, Radosław A.; Kubis, Sebastian; Wójcik, Włodzimierz

    2017-08-01

    In this article, an analysis of the results of numerical simulations of pasta phases using algebraic-topology methods is presented. These considerations suggest that some phases can be further split into subphases and therefore should be resolved more finely in numerical simulations. The results presented in this article can also be used to relate the Euler characteristic from numerical simulations to the geometry of the phases. The Betti numbers are used, as they provide a finer characterization of the phases. It is also shown that different boundary conditions give different outcomes.
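    The sense in which Betti numbers refine the Euler characteristic is the alternating-sum identity χ = b₀ - b₁ + b₂: distinct topologies can share one χ yet differ in their Betti numbers. A toy illustration (the example shapes are generic, not taken from the pasta-phase simulations):

```python
def euler_characteristic(betti):
    """Euler characteristic as the alternating sum of Betti numbers:
    chi = b0 - b1 + b2 - ...  (b0: connected components, b1: independent
    loops/tunnels, b2: enclosed voids)."""
    return sum((-1) ** k * b for k, b in enumerate(betti))

sphere = (1, 0, 1)  # one component, no tunnels, one enclosed void
torus  = (1, 2, 1)  # one component, two independent loops, one void
circle = (1, 1, 0)  # one component, one loop, no void

print(euler_characteristic(sphere))  # 2
print(euler_characteristic(torus))   # 0
print(euler_characteristic(circle))  # 0: same chi as the torus,
                                     # but different Betti numbers
```

    The torus and the circle here share χ = 0 but are told apart by b₁ and b₂, which is why tracking the full Betti vector, rather than χ alone, gives the finer phase characterization described in the abstract.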

  4. Designs and test results for three new rotational sensors

    USGS Publications Warehouse

    Jedlicka, P.; Kozak, J.T.; Evans, J.R.; Hutt, C.R.

    2012-01-01

    We discuss the designs and testing of three rotational seismometer prototypes developed at the Institute of Geophysics, Academy of Sciences (Prague, Czech Republic). Two of these designs consist of a liquid-filled toroidal tube with the liquid as the proof mass and providing damping; we tested the piezoelectric and pressure transduction versions of this torus. The third design is a wheel-shaped solid metal inertial sensor with capacitive sensing and magnetic damping. Our results from testing in Prague and at the Albuquerque Seismological Laboratory of the US Geological Survey of transfer function and cross-axis sensitivities are good enough to justify the refinement and subsequent testing of advanced prototypes. These refinements and new testing are well along.

  5. Designs and test results for three new rotational sensors

    NASA Astrophysics Data System (ADS)

    Jedlička, P.; Kozák, J. T.; Evans, J. R.; Hutt, C. R.

    2012-10-01

    We discuss the designs and testing of three rotational seismometer prototypes developed at the Institute of Geophysics, Academy of Sciences (Prague, Czech Republic). Two of these designs consist of a liquid-filled toroidal tube with the liquid as the proof mass and providing damping; we tested the piezoelectric and pressure transduction versions of this torus. The third design is a wheel-shaped solid metal inertial sensor with capacitive sensing and magnetic damping. Our results from testing in Prague and at the Albuquerque Seismological Laboratory of the US Geological Survey of transfer function and cross-axis sensitivities are good enough to justify the refinement and subsequent testing of advanced prototypes. These refinements and new testing are well along.

  6. Evaluating and Refining High Throughput Tools for Toxicokinetics

    EPA Science Inventory

    This poster summarizes efforts of the Chemical Safety for Sustainability's Rapid Exposure and Dosimetry (RED) team to facilitate the development and refinement of toxicokinetics (TK) tools to be used in conjunction with the high throughput toxicity testing data generated as a par...

  7. Feasibility of a GNSS-Probe for Creating Digital Maps of High Accuracy and Integrity

    NASA Astrophysics Data System (ADS)

    Vartziotis, Dimitris; Poulis, Alkis; Minogiannis, Alexandros; Siozos, Panayiotis; Goudas, Iraklis; Samson, Jaron; Tossaint, Michel

    The “ROADSCANNER” project addresses the need for Digital Maps (DM) of increased accuracy and integrity, utilizing the latest developments in GNSS, in order to provide the required datasets for novel applications such as navigation-based Safety Applications, Advanced Driver Assistance Systems (ADAS) and Digital Automotive Simulations. The activity covered in the current paper is the feasibility study, preliminary tests, initial product design and development plan for an EGNOS-enabled vehicle probe. The vehicle probe will be used for generating high-accuracy, high-integrity and ADAS-compatible digital maps of roads, employing a multiple-pass methodology supported by sophisticated refinement algorithms. Furthermore, the vehicle probe will be equipped with pavement-scanning and other data fusion equipment, in order to produce 3D road surface models compatible with standards of road-tire simulation applications. The project was assigned to NIKI Ltd under the 1st Call for Ideas in the frame of the ESA - Greece Task Force.

  8. Navier-Stokes Simulation of UH-60A Rotor/Wake Interaction Using Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    2017-01-01

    High-resolution simulations of rotor/vortex-wake interaction for a UH-60A rotor under BVI and dynamic stall conditions were carried out with the OVERFLOW Navier-Stokes code. (a) The normal force and pitching moment variation with azimuth angle were in good overall agreement with flight-test data, similar to other CFD results reported in the literature. (b) The wake-grid resolution did not have a significant effect on the rotor-blade airloads. This surprising result indicates that a wake grid spacing of ΔS = 10% ctip is sufficient for engineering airloads prediction for hover and forward flight. This assumes high-resolution body grids, high-order spatial accuracy, and a hybrid RANS/DDES turbulence model. (c) Three-dimensional dynamic stall was found to occur due to the presence of blade-tip vortices passing over a rotor blade on the retreating side. This changed the local airfoil angle of attack, causing stall, unlike the 2D perspective of pure pitch oscillation of the local airfoil section.

  9. Prediction of binding poses to FXR using multi-targeted docking combined with molecular dynamics and enhanced sampling

    NASA Astrophysics Data System (ADS)

    Bhakat, Soumendranath; Åberg, Emil; Söderhjelm, Pär

    2018-01-01

    Advanced molecular docking methods often aim at capturing the flexibility of the protein upon binding to the ligand. In this study, we investigate whether instead a simple rigid docking method can be applied, if combined with multiple target structures to model the backbone flexibility and molecular dynamics simulations to model the sidechain and ligand flexibility. The methods are tested for the binding of 35 ligands to FXR as part of the first stage of the Drug Design Data Resource (D3R) Grand Challenge 2 blind challenge. The results show that the multiple-target docking protocol performs surprisingly well, with correct poses found for 21 of the ligands. MD simulations started on the docked structures are remarkably stable, but show almost no tendency of refining the structure closer to the experimentally found binding pose. Reconnaissance metadynamics enhances the exploration of new binding poses, but additional collective variables involving the protein are needed to exploit the full potential of the method.

  10. Prediction of binding poses to FXR using multi-targeted docking combined with molecular dynamics and enhanced sampling.

    PubMed

    Bhakat, Soumendranath; Åberg, Emil; Söderhjelm, Pär

    2018-01-01

    Advanced molecular docking methods often aim at capturing the flexibility of the protein upon binding to the ligand. In this study, we investigate whether a simple rigid docking method can instead be applied when combined with multiple target structures to model the backbone flexibility and molecular dynamics simulations to model the sidechain and ligand flexibility. The methods are tested for the binding of 35 ligands to FXR as part of the first stage of the Drug Design Data Resource (D3R) Grand Challenge 2 blind challenge. The results show that the multiple-target docking protocol performs surprisingly well, with correct poses found for 21 of the ligands. MD simulations started on the docked structures are remarkably stable, but show almost no tendency to refine the structure toward the experimentally determined binding pose. Reconnaissance metadynamics enhances the exploration of new binding poses, but additional collective variables involving the protein are needed to exploit the full potential of the method.

  11. Simulation Facilities and Test Beds for Galileo

    NASA Astrophysics Data System (ADS)

    Schlarmann, Bernhard Kl.; Leonard, Arian

    2002-01-01

    Galileo is the European satellite navigation system, financed by the European Space Agency (ESA) and the European Commission (EC). The Galileo System, currently in its definition phase, will offer seamless global coverage, providing state-of-the-art positioning and timing services. Galileo services will include a standard service targeted at mass-market users, an augmented integrity service providing integrity warnings when faults occur, and Public Regulated Services ensuring continuity of service for public users. Other services are under consideration (SAR and integrated communications). Galileo will be interoperable with GPS and will be complemented by local elements that will enhance the services for specific local users. In the frame of the Galileo definition phase, several system design and simulation facilities and test beds have been defined and developed for the coming phases of the project, or are currently under development. These are mainly the following tools: the Galileo Mission Analysis Simulator, to design the Space Segment, especially to support constellation design, deployment, and replacement; the Galileo Service Volume Simulator, to analyse the global performance requirements based on a coverage analysis for different service levels and degraded modes; the Galileo System Simulation Facility, a sophisticated end-to-end simulation tool to assess the navigation performance for a complete variety of users under different operating conditions and modes; the Galileo Signal Validation Facility, to evaluate signal and message structures for Galileo; and the Galileo System Test Bed (Version 1), to assess and refine the Orbit Determination & Time Synchronisation and Integrity algorithms through experiments relying on GPS space infrastructure.
This paper presents an overview of the so-called "G-Facilities" and describes the use of the different system design tools during the project life cycle in order to design the system with respect to availability, continuity, and integrity requirements. It gives more details on two of these system design tools: the Galileo Signal Validation Facility (GSVF) and the Galileo System Simulation Facility (GSSF). It describes the operational use of these facilities within the complete set of design tools, especially the combined use of GSVF and GSSF. Finally, the paper also presents examples and results obtained with these tools.

  12. Prediction of the Grain-Microstructure Evolution Within a Friction Stir Welding (FSW) Joint via the Use of the Monte Carlo Simulation Method

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Ramaswami, S.; Snipes, J. S.; Avuthu, V.; Galgalikar, R.; Zhang, Z.

    2015-09-01

    A thermo-mechanical finite element analysis of the friction stir welding (FSW) process is carried out and the evolution of the material state (e.g., temperature, the extent of plastic deformation, etc.) monitored. Subsequently, the finite-element results are used as input to a Monte-Carlo simulation algorithm in order to predict the evolution of the grain microstructure within different weld zones, during the FSW process and the subsequent cooling of the material within the weld to room temperature. To help delineate different weld zones, (a) temperature and deformation fields during the welding process, and during the subsequent cooling, are monitored; and (b) competition between the grain growth (driven by the reduction in the total grain-boundary surface area) and dynamic-recrystallization grain refinement (driven by the replacement of highly deformed material with an effectively "dislocation-free" material) is simulated. The results obtained clearly revealed that different weld zones form as a result of different outcomes of the competition between the grain growth and grain refinement processes.
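    The grain-growth/grain-refinement competition described above is commonly simulated with a Potts-model Monte Carlo algorithm. The following is a minimal, generic 2D sketch of that technique, not the authors' code; the grid size, orientation count q, and inverse temperature beta are illustrative assumptions:

```python
import math
import random

def potts_mc_sweep(grid, q, beta, rng):
    """One Monte Carlo sweep of a 2D Potts grain-growth model.

    Each site holds a grain orientation in 0..q-1. A trial move adopts a
    random neighbor's orientation and is accepted with the Metropolis rule;
    the energy is the number of unlike nearest-neighbor pairs, so accepted
    moves tend to reduce grain-boundary area over time.
    """
    n = len(grid)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        nbrs = [grid[(i - 1) % n][j], grid[(i + 1) % n][j],
                grid[i][(j - 1) % n], grid[i][(j + 1) % n]]
        old, new = grid[i][j], rng.choice(nbrs)
        d_e = sum(s != new for s in nbrs) - sum(s != old for s in nbrs)
        if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
            grid[i][j] = new

def boundary_energy(grid):
    """Total number of unlike nearest-neighbor pairs (periodic grid)."""
    n = len(grid)
    return (sum(grid[i][j] != grid[i][(j + 1) % n]
                for i in range(n) for j in range(n))
            + sum(grid[i][j] != grid[(i + 1) % n][j]
                  for i in range(n) for j in range(n)))

# Coarsening demo: a random 16x16, 8-orientation grid loses boundary
# energy as grains grow over 20 low-temperature sweeps.
rng = random.Random(0)
grid = [[rng.randrange(8) for _ in range(16)] for _ in range(16)]
e0 = boundary_energy(grid)
for _ in range(20):
    potts_mc_sweep(grid, 8, beta=10.0, rng=rng)
e1 = boundary_energy(grid)
```

    In the paper's scheme, the local temperature and stored deformation from the finite-element solution would bias these flip probabilities zone by zone; the sketch shows only the curvature-driven growth mechanism.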

  13. Formulation and Implementation of Inflow/Outflow Boundary Conditions to Simulate Propulsive Effects

    NASA Technical Reports Server (NTRS)

    Rodriguez, David L.; Aftosmis, Michael J.; Nemec, Marian

    2018-01-01

    Boundary conditions appropriate for simulating flow entering or exiting the computational domain to mimic propulsion effects have been implemented in an adaptive Cartesian simulation package. A robust iterative algorithm to control the mass flow rate through an outflow boundary surface is presented, along with a formulation to explicitly specify the mass flow rate through an inflow boundary surface. The boundary conditions have been applied within a mesh adaptation framework based on the method of adjoint-weighted residuals. This allows for proper adaptive mesh refinement when modeling propulsion systems. The new boundary conditions are demonstrated on several notional propulsion systems operating in flow regimes ranging from low subsonic to hypersonic. The examples show that the prescribed boundary state is more accurately imposed as the mesh is refined. The mass-flow-rate steering algorithm is shown to be an efficient approach in each example. To demonstrate the boundary conditions on a realistic complex aircraft geometry, two of the new boundary conditions are also applied to a modern low-boom supersonic demonstrator design with multiple flow inlets and outlets.
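    A mass-flow-rate steering iteration of the kind mentioned above can be sketched as a one-dimensional root find on the outflow back-pressure. The secant update and the linear flow response `mdot` below are illustrative assumptions, not the algorithm implemented in the paper's simulation package:

```python
def steer_mass_flow(mdot_of_p, target, p0, p1, tol=1e-8, max_iter=50):
    """Secant iteration on back-pressure p until mdot_of_p(p) hits target.

    mdot_of_p stands in for a flow solve returning the integrated mass
    flow through the outflow surface at back-pressure p; in a real solver
    the response also depends on the evolving interior state.
    """
    f0, f1 = mdot_of_p(p0) - target, mdot_of_p(p1) - target
    for _ in range(max_iter):
        if abs(f1) < tol:
            return p1
        # secant step toward the root of f(p) = mdot(p) - target
        p0, p1 = p1, p1 - f1 * (p1 - p0) / (f1 - f0)
        f0, f1 = f1, mdot_of_p(p1) - target
    return p1

# Toy monotone response: mass flow decreases as back-pressure rises.
mdot = lambda p: 2.0 * (1.0 - p)   # hypothetical linearized response
p_star = steer_mass_flow(mdot, target=1.0, p0=0.0, p1=0.2)
```

    Because the toy response is linear, the secant step lands on the exact back-pressure (p = 0.5) in one update; a nonlinear response would take a few more iterations.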

  14. Automated protein structure modeling in CASP9 by I-TASSER pipeline combined with QUARK-based ab initio folding and FG-MD-based structure refinement

    PubMed Central

    Xu, Dong; Zhang, Jian; Roy, Ambrish; Zhang, Yang

    2011-01-01

    I-TASSER is an automated pipeline for protein tertiary structure prediction using multiple threading alignments and iterative structure assembly simulations. In the CASP9 experiment, two new algorithms, QUARK and FG-MD, were added to the I-TASSER pipeline to improve structural modeling accuracy. QUARK is a de novo structure prediction algorithm used for structure modeling of proteins that lack detectable template structures. For distantly homologous targets, QUARK models are found useful as a reference structure for selecting good threading alignments and guiding the I-TASSER structure assembly simulations. FG-MD is an atomic-level structural refinement program that uses structural fragments collected from PDB structures to guide molecular dynamics simulation and improve the local structure of the predicted model, including hydrogen-bonding networks, torsion angles, and steric clashes. Despite considerable progress in both template-based and template-free structure modeling, significant improvements in protein target classification, domain parsing, model selection, and ab initio folding of beta-proteins are still needed to further improve the I-TASSER pipeline. PMID:22069036

  15. Modeling NIF experimental designs with adaptive mesh refinement and Lagrangian hydrodynamics

    NASA Astrophysics Data System (ADS)

    Koniges, A. E.; Anderson, R. W.; Wang, P.; Gunney, B. T. N.; Becker, R.; Eder, D. C.; MacGowan, B. J.; Schneider, M. B.

    2006-06-01

    Incorporation of adaptive mesh refinement (AMR) into Lagrangian hydrodynamics algorithms allows for the creation of a highly powerful simulation tool effective for complex target designs with three-dimensional structure. We are developing an advanced modeling tool that includes AMR and traditional arbitrary Lagrangian-Eulerian (ALE) techniques. Our goal is the accurate prediction of vaporization, disintegration and fragmentation in National Ignition Facility (NIF) experimental target elements. Although our focus is on minimizing the generation of shrapnel in target designs and protecting the optics, the general techniques are applicable to modern advanced targets that include three-dimensional effects such as those associated with capsule fill tubes. Several essential computations in ordinary radiation hydrodynamics need to be redesigned in order to allow for AMR to work well with ALE, including algorithms associated with radiation transport. Additionally, for our goal of predicting fragmentation, we include elastic/plastic flow into our computations. We discuss the integration of these effects into a new ALE-AMR simulation code. Applications of this newly developed modeling tool as well as traditional ALE simulations in two and three dimensions are applied to NIF early-light target designs.

  16. Development and evaluation of automatic landing control laws for light wing loading STOL aircraft

    NASA Technical Reports Server (NTRS)

    Feinreich, B.; Degani, O.; Gevaert, G.

    1981-01-01

    Automatic flare and decrab control laws were developed for NASA's experimental Twin Otter. This light-wing-loading STOL aircraft was equipped with direct lift control (DLC) wing spoilers to enhance flight-path control. Automatic landing control laws that made use of the spoilers were developed, evaluated in a simulation, and the results compared with those obtained for configurations that did not use DLC. The spoilers produced a significant improvement in performance. A simulation that could be operated faster than real time, in order to provide statistical landing data for a large number of landings over a wide spectrum of disturbances in a short time, was constructed and used in the evaluation and refinement of control-law configurations. A longitudinal control law that had been previously developed and evaluated in flight was also simulated and its performance compared with that of the control laws developed here. Runway alignment control laws were also defined, evaluated, and refined to arrive at a final recommended configuration. Good landing performance, compatible with Category 3 operation into STOL runways, was obtained.

  17. Refining Markov state models for conformational dynamics using ensemble-averaged data and time-series trajectories

    NASA Astrophysics Data System (ADS)

    Matsunaga, Y.; Sugita, Y.

    2018-06-01

    A data-driven modeling scheme is proposed for the conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov state model (MSM) is constructed from MD simulation trajectories, and then the MSM parameters are refined against experimental measurements through machine learning techniques. The second step can reduce the bias in MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data can serve as the training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data can provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states more robustly than learning from ensemble-averaged data, although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements, including single-molecule time-series trajectories.
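    The first step of such a scheme, building an initial MSM from a discretized trajectory, reduces to a row-normalized transition-count matrix. This is a generic maximum-likelihood sketch (the function name and toy trajectory are illustrative; the paper's refinement against experimental data is not shown):

```python
def estimate_msm(traj, n_states, lag=1):
    """Maximum-likelihood MSM transition matrix from a discrete trajectory.

    Counts state-to-state transitions at the chosen lag time and
    row-normalizes the count matrix. Production MSM packages usually add
    a reversibility (detailed-balance) constraint, omitted here.
    """
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(traj[:-lag], traj[lag:]):
        counts[a][b] += 1
    T = []
    for row in counts:
        total = sum(row)
        T.append([c / total if total else 0.0 for c in row])
    return T

# Two-state toy trajectory: mostly persistent, with occasional hops.
traj = [0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0]
T = estimate_msm(traj, n_states=2)   # each row sums to 1
```

    Refinement would then adjust the entries of T (subject to row-stochasticity) to better reproduce measured observables, which is where the machine learning step of the paper comes in.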

  18. Model Refinement and Simulation of Groundwater Flow in Clinton, Eaton, and Ingham Counties, Michigan

    USGS Publications Warehouse

    Luukkonen, Carol L.

    2010-01-01

    A groundwater-flow model of the Saginaw aquifer constructed in 1996 was refined to better represent the regional hydrologic system in the Tri-County region, which consists of Clinton, Eaton, and Ingham Counties, Michigan. With increasing demand for groundwater, the need to manage withdrawals from the Saginaw aquifer has become more important, and the 1996 model could not adequately address issues of water quality and quantity. An updated model was needed to better address potential effects of drought, locally high water demands, reduction of recharge by impervious surfaces, and issues affecting water quality, such as contaminant sources, on water resources and the selection of pumping rates and locations. The refinement of the groundwater-flow model allows simulations to address these issues of water quantity and quality and provides communities with a tool that will enable them to better plan for expansion and protection of their groundwater-supply systems. Model refinement included representation of the system under steady-state and transient conditions, adjustments to the estimated regional groundwater-recharge rates to account for both temporal and spatial differences, adjustments to the representation and hydraulic characteristics of the glacial deposits and Saginaw Formation, and updates to groundwater-withdrawal rates to reflect changes from the early 1900s to 2005. Simulations included steady-state conditions (in which stresses remained constant and changes in storage were not included) and transient conditions (in which stresses changed in annual and monthly time scales and changes in storage within the system were included). These simulations included investigation of the potential effects of reduced recharge due to impervious areas or to low-rainfall/drought conditions, delineation of contributing areas with recent pumping rates, and optimization of pumping subject to various quantity and quality constraints. 
Simulation results indicate potential declines in water levels in both the upper glacial aquifer and the upper sandstone bedrock aquifer under steady-state and transient conditions when recharge was reduced by 20 and 50 percent in urban areas. Transient simulations were done to investigate reduced recharge due to low rainfall and increased pumping to meet anticipated future demand with 24 months (2 years) of modified recharge or modified recharge and pumping rates. During these two simulation years, monthly recharge rates were reduced by about 30 percent, and monthly withdrawal rates for Lansing area production wells were increased by 15 percent. The reduction in the amount of water available to recharge the groundwater system affects the upper model layers representing the glacial aquifers more than the deeper bedrock layers. However, with a reduction in recharge and an increase in withdrawals from the bedrock aquifer, water levels in the bedrock layers are affected more than those in the glacial layers. Differences in water levels between simulations with reduced recharge and reduced recharge with increased pumping are greatest in the Lansing area and least away from pumping centers, as expected. Additionally, the increases in pumping rates had minimal effect on most simulated streamflows. Additional simulations included updating the estimated 10-year wellhead-contributing areas for selected Lansing-area wells under 2006-7 pumping conditions. Optimization of groundwater withdrawals with a water-resource management model was done to determine withdrawal rates while minimizing operational costs and to determine withdrawal locations to achieve additional capacity while meeting specified head constraints. 
In these optimization scenarios, the desired groundwater withdrawals are achieved by simulating managed wells (where pumping rates can be optimized) and unmanaged wells (where pumping rates are not optimized) and by using various combinations of existing and proposed well locations.

  19. Space-Based Telescopes for the Actionable Refinement of Ephemeris Systems and Test Engineering

    DTIC Science & Technology

    2011-12-01

    Space Surveillance Network; STARE: Space-based Telescopes for the Actionable Refinement of Ephemeris; STK: Satellite Toolkit; SV: Space Vehicle; TAMU ... vacuum bake-out and visual inspection. Additionally, it is prescribed that these tests be performed in accordance with GSFC-STD-7000, more commonly ... environment that a FV will see in orbit. Tools such as SolidWorks and NX-Ideas can be used to build CAD models to visually validate engineering

  20. NMR data-driven structure determination using NMR-I-TASSER in the CASD-NMR experiment

    PubMed Central

    Jang, Richard; Wang, Yan

    2015-01-01

    NMR-I-TASSER, an adaptation of the I-TASSER algorithm combining NMR data for protein structure determination, recently joined the second round of the CASD-NMR experiment. Unlike many molecular dynamics-based methods, NMR-I-TASSER takes a molecular replacement-like approach to the problem by first threading the target through the PDB to identify structural templates, which are then used for iterative NOE assignments and fragment structure assembly refinements. The employment of multiple templates allows NMR-I-TASSER to sample different topologies while convergence to a single structure is not required. Retroactive and blind tests of the CASD-NMR targets from Rounds 1 and 2 demonstrate that even without using NOE peak lists I-TASSER can generate correct structure topology, with 15 of 20 targets having a TM-score above 0.5. With the addition of NOE-based distance restraints, NMR-I-TASSER significantly improved the I-TASSER models, with all models having a TM-score above 0.5. The average RMSD was reduced from 5.29 to 2.14 Å in Round 1 and from 3.18 to 1.71 Å in Round 2. There is no obvious difference between the modeling results obtained with raw and with refined peak lists, indicating robustness of the pipeline to NOE assignment errors. Overall, despite the low-resolution modeling, the current NMR-I-TASSER pipeline provides a coarse-grained structure folding approach complementary to traditional molecular dynamics simulations, which can produce fast near-native frameworks for atomic-level structural refinement. PMID:25737244

  1. Reconstruction of three-dimensional grain structure in polycrystalline iron via an interactive segmentation method

    NASA Astrophysics Data System (ADS)

    Feng, Min-nan; Wang, Yu-cong; Wang, Hao; Liu, Guo-quan; Xue, Wei-hua

    2017-03-01

    Using a total of 297 segmented sections, we reconstructed the three-dimensional (3D) structure of pure iron and obtained the largest dataset reported to date: 16,254 complete 3D grains. The mean values of equivalent-sphere radius and face number of pure iron were observed to be consistent with those of Monte Carlo simulated grains, phase-field simulated grains, Ti-alloy grains, and Ni-based superalloy grains. In this work, by finding a balance between automatic methods and manual refinement, we developed an interactive segmentation method to segment serial sections accurately in the reconstruction of the 3D microstructure; this approach saves time as well as substantially eliminates errors. The segmentation process comprises four operations: image preprocessing, breakpoint detection based on mathematical morphology analysis, optimized automatic connection of the breakpoints, and manual refinement by artificial evaluation.

  2. Bridging the Gap Between Validation and Implementation of Non-Animal Veterinary Vaccine Potency Testing Methods.

    PubMed

    Dozier, Samantha; Brown, Jeffrey; Currie, Alistair

    2011-11-29

    In recent years, technologically advanced high-throughput techniques have been developed that replace, reduce or refine animal use in vaccine quality control tests. Following validation, these tests are slowly being accepted for use by international regulatory authorities. Because regulatory acceptance itself has not guaranteed that approved humane methods are adopted by manufacturers, various organizations have sought to foster the preferential use of validated non-animal methods by interfacing with industry and regulatory authorities. After noticing this gap between regulation and uptake by industry, we began developing a paradigm that seeks to narrow the gap and quicken implementation of new replacement, refinement or reduction guidance. A systematic analysis of our experience in promoting the transparent implementation of validated non-animal vaccine potency assays has led to the refinement of our paradigmatic process, presented here, by which interested parties can assess the local regulatory acceptance of methods that reduce animal use and integrate them into quality control testing protocols, or ensure the elimination of peripheral barriers to their use, particularly for potency and other tests carried out on production batches.

  3. An adaptive multiblock high-order finite-volume method for solving the shallow-water equations on the sphere

    DOE PAGES

    McCorquodale, Peter; Ullrich, Paul; Johansen, Hans; ...

    2015-09-04

    We present a high-order finite-volume approach for solving the shallow-water equations on the sphere, using multiblock grids on the cubed sphere. This approach combines a Runge-Kutta time discretization with a fourth-order accurate spatial discretization, and includes adaptive mesh refinement and refinement in time. Results of tests show fourth-order convergence for the shallow-water equations as well as for advection in a highly deformational flow. Hierarchical adaptive mesh refinement achieves a solution error comparable to that obtained with uniform resolution at the most refined level of the hierarchy, but with many fewer operations.

  4. Investigation of different simulation approaches on a high-head Francis turbine and comparison with model test data: Francis-99

    NASA Astrophysics Data System (ADS)

    Mössinger, Peter; Jester-Zürker, Roland; Jung, Alexander

    2015-01-01

    Numerical investigations of hydraulic turbomachines under steady-state conditions are state of the art in current product development processes. Nevertheless, increasing computational resources allow refined discretization methods and more sophisticated turbulence models, and therefore better prediction of results as well as quantification of the remaining uncertainties. Single-stage investigations are done using in-house tools for meshing and set-up. Besides different model domains and a mesh study to reduce mesh dependencies, several eddy-viscosity and Reynolds-stress turbulence models are investigated. All results are compared with available model test data. In addition to global values, measured pressure and velocity magnitudes in the vaneless space and at runner-blade and draft-tube positions are considered. From these it is possible to estimate the influence and relevance of the various model domains depending on operating point and numerical variation. Good agreement is found for the pressure and velocity measurements with all model configurations and with all turbulence models except the BSL-RSM model. At part load, deviations in hydraulic efficiency are large, whereas at the best-efficiency and high-load operating points efficiencies are close to the measurement. Including the runner side-gap geometry, as well as refining the mesh, improves the results with respect to either hydraulic efficiency or velocity distribution, at the cost of less stable numerics and increased computational time.

  5. Simulations Meet Experiment to Reveal New Insights into DNA Intrinsic Mechanics

    PubMed Central

    Ben Imeddourene, Akli; Elbahnsi, Ahmad; Guéroult, Marc; Oguey, Christophe; Foloppe, Nicolas; Hartmann, Brigitte

    2015-01-01

    The accurate prediction of the structure and dynamics of DNA remains a major challenge in computational biology due to the dearth of precise experimental information on DNA free in solution and limitations in the DNA force-fields underpinning the simulations. A new generation of force-fields has been developed to better represent the sequence-dependent B-DNA intrinsic mechanics, in particular with respect to the BI ↔ BII backbone equilibrium, which is essential to understand the B-DNA properties. Here, the performance of MD simulations with the newly updated force-fields Parmbsc0εζOLI and CHARMM36 was tested against a large ensemble of recent NMR data collected on four DNA dodecamers involved in nucleosome positioning. We find impressive progress towards a coherent, realistic representation of B-DNA in solution, despite residual shortcomings. This improved representation allows new and deeper interpretation of the experimental observables, including regarding the behavior of facing phosphate groups in complementary dinucleotides, and their modulation by the sequence. It also provides the opportunity to extensively revisit and refine the coupling between backbone states and inter base pair parameters, which emerges as a common theme across all the complementary dinucleotides. In sum, the global agreement between simulations and experiment reveals new aspects of intrinsic DNA mechanics, a key component of DNA-protein recognition. PMID:26657165

  6. A consistent and conservative scheme for MHD flows with complex boundaries on an unstructured Cartesian adaptive system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jie; Ni, Ming-Jiu, E-mail: mjni@ucas.ac.cn

    2014-01-01

    The numerical simulation of magnetohydrodynamic (MHD) flows with complex boundaries has been a topic of great interest in the development of a fusion reactor blanket because of the difficulty of accurately simulating the Hartmann layers and side layers along arbitrary geometries. An adaptive version of a consistent and conservative scheme has been developed for simulating MHD flows. The present study also forms the first attempt to apply the cut-cell approach to irregular wall-bounded MHD flows, which is more flexible and more conveniently implemented under an adaptive mesh refinement (AMR) technique. It employs a Volume-of-Fluid (VOF) approach to represent the fluid-conducting-wall interface, which makes it possible to solve fluid-solid coupled magnetic problems, with emphasis on how the electric field solver is implemented when conductivity is discontinuous in a cut-cell. For the irregular cut-cells, a conservative interpolation technique is applied to calculate the Lorentz force at the cell center. It is also shown how the consistent and conservative scheme is implemented on fine/coarse mesh boundaries when the AMR technique is used. The applied numerical schemes are validated by five test simulations; excellent agreement was obtained for all cases considered, demonstrating good consistency and conservation properties.

  7. Aeroacoustic Simulation of Nose Landing Gear on Adaptive Unstructured Grids With FUN3D

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Park, Michael A.; Lockard, David P.

    2013-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. Starting with a coarse grid, a series of successively finer grids were generated using the adaptive gridding methodology available in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. In general, the correlation with the experimental data improves with grid refinement. A similar trend is observed for sound pressure levels obtained by using these CFD solutions as input to a Ffowcs Williams-Hawkings noise propagation code to compute the far-field noise levels. In general, the numerical solutions obtained on adapted grids compare well with the hand-tuned enriched fine-grid solutions and experimental data. In addition, the grid adaptation strategy discussed here simplifies the grid generation process and results in improved computational efficiency of CFD simulations.

  8. Near-surface wind variability over the broader Adriatic region: insights from an ensemble of regional climate models

    NASA Astrophysics Data System (ADS)

    Belušić, Andreina; Prtenjak, Maja Telišman; Güttler, Ivan; Ban, Nikolina; Leutwyler, David; Schär, Christoph

    2018-06-01

    Over the past few decades the horizontal resolution of regional climate models (RCMs) has steadily increased, leading to a better representation of small-scale topographic features and more detail in simulating dynamical aspects, especially in coastal regions and over complex terrain. Due to its complex terrain, the broader Adriatic region represents a major challenge for state-of-the-art RCMs in simulating local wind systems realistically. The objective of this study is to identify the added value in near-surface wind due to the refined grid spacing of RCMs. For this purpose, we use a multi-model ensemble composed of CORDEX regional climate simulations at 0.11° and 0.44° grid spacing, forced by the ERA-Interim reanalysis, a COSMO convection-parameterizing simulation at 0.11° and a COSMO convection-resolving simulation at 0.02° grid spacing. Surface station observations from this region and satellite QuikSCAT data over the Adriatic Sea have been compared against daily output obtained from the available simulations. Both day-to-day wind and its frequency distribution are examined. The results indicate that the 0.44° RCMs rarely outperform the ERA-Interim reanalysis, while the performance of the high-resolution simulations surpasses that of ERA-Interim. We also find that refining the grid spacing to a few kilometers is needed to properly capture the small-scale wind systems. We further show that the simulations frequently capture the direction of local wind regimes, such as the Bora flow, but overestimate the associated wind magnitude. Finally, spectral analysis shows good agreement between measurements and simulations, indicating correct temporal variability of the wind speed.

  9. AIR EMISSIONS FROM COMBUSTION OF SOLVENT REFINED COAL

    EPA Science Inventory

    The report gives details of a Solvent Refined Coal (SRC) combustion test at Georgia Power Company's Plant Mitchell, March, May, and June 1977. Flue gas samples were collected for modified EPA Level 1 analysis; analytical results are reported. Air emissions from the combustion of ...

  10. REFINEMENT OF A MODEL TO PREDICT THE PERMEATION OF PROTECTIVE CLOTHING MATERIALS

    EPA Science Inventory

    A prototype of a predictive model for estimating chemical permeation through protective clothing materials was refined and tested. The model applies Fickian diffusion theory and predicts permeation rates and cumulative permeation as a function of time for five materials: butyl rub...

  11. Visualization of AMR data with multi-level dual-mesh interpolation.

    PubMed

    Moran, Patrick J; Ellsworth, David

    2011-12-01

    We present a new technique for providing interpolation within cell-centered Adaptive Mesh Refinement (AMR) data that achieves C(0) continuity throughout the 3D domain. Our technique improves on earlier work in that it does not require that adjacent patches differ by at most one refinement level. Our approach takes the dual of each mesh patch and generates "stitching cells" on the fly to fill the gaps between dual meshes. We demonstrate applications of our technique with data from Enzo, an AMR cosmological structure formation simulation code. We show ray-cast visualizations that include contributions from particle data (dark matter and stars, also output by Enzo) and gridded hydrodynamic data. We also show results from isosurface studies, including surfaces in regions where adjacent patches differ by more than one refinement level. © 2011 IEEE

  12. Simulation of Fusion Plasmas

    ScienceCinema

    Holland, Chris [UC San Diego, San Diego, California, United States]

    2017-12-09

    The upcoming ITER experiment (www.iter.org) represents the next major milestone in realizing the promise of using nuclear fusion as a commercial energy source, by moving into the “burning plasma” regime where the dominant heat source is the internal fusion reactions. As part of its support for the ITER mission, the US fusion community is actively developing validated predictive models of the behavior of magnetically confined plasmas. In this talk, I will describe how the plasma community is using the latest high performance computing facilities to develop and refine our models of the nonlinear, multiscale plasma dynamics, and how recent advances in experimental diagnostics are allowing us to directly test and validate these models at an unprecedented level.

  13. Evidence from mathematical modeling that carbonic anhydrase II and IV enhance CO2 fluxes across Xenopus oocyte plasma membranes

    PubMed Central

    Musa-Aziz, Raif; Boron, Walter F.

    2014-01-01

    Exposing an oocyte to CO2/HCO3− causes intracellular pH (pHi) to decline and extracellular-surface pH (pHS) to rise to a peak and decay. The two companion papers showed that oocytes injected with cytosolic carbonic anhydrase II (CA II) or expressing surface CA IV exhibit increased maximal rate of pHi change (dpHi/dt)max, increased maximal pHS changes (ΔpHS), and decreased time constants for pHi decline and pHS decay. Here we investigate these results using refinements of an earlier mathematical model of CO2 influx into a spherical cell. Refinements include 1) reduced cytosolic water content, 2) reduced cytosolic diffusion constants, 3) refined CA II activity, 4) layer of intracellular vesicles, 5) reduced membrane CO2 permeability, 6) microvilli, 7) refined CA IV activity, 8) a vitelline membrane, and 9) a new simulation protocol for delivering and removing the bulk extracellular CO2/HCO3− solution. We show how these features affect the simulated pHi and pHS transients and use the refined model with the experimental data for 1.5% CO2/10 mM HCO3− (pHo = 7.5) to find parameter values that approximate ΔpHS, the time to peak pHS, the time delay to the start of the pHi change, (dpHi/dt)max, and the change in steady-state pHi. We validate the revised model against data collected as we vary levels of CO2/HCO3− or of extracellular HEPES buffer. The model confirms the hypothesis that CA II and CA IV enhance transmembrane CO2 fluxes by maximizing CO2 gradients across the plasma membrane, and it predicts that the pH effects of simultaneously implementing intracellular and extracellular-surface CA are supra-additive. PMID:24965589
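
    The mechanism this model confirms can be caricatured in a two-compartment sketch (this is not the authors' reaction-diffusion model): a cytosolic CO2-consuming reaction, standing in for CA II activity, keeps intracellular CO2 low, steepens the transmembrane gradient, and thereby increases cumulative influx. All rate constants below are illustrative assumptions.

```python
def co2_influx(k_ca, c_out=1.0, perm=1.0, t_end=5.0, dt=1e-3):
    """Cumulative CO2 influx into a well-mixed cell with membrane
    permeability `perm` and a first-order cytosolic sink of rate k_ca
    (a stand-in for carbonic anhydrase-catalyzed hydration)."""
    c_in = 0.0
    total_flux = 0.0
    for _ in range(int(t_end / dt)):
        flux = perm * (c_out - c_in)       # transmembrane crossing
        c_in += (flux - k_ca * c_in) * dt  # sink consumes cytosolic CO2
        total_flux += flux * dt
    return total_flux

no_ca   = co2_influx(k_ca=0.0)
with_ca = co2_influx(k_ca=10.0)
print(with_ca > no_ca)  # the sink maintains the gradient, raising influx
```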

  14. Protein structure refinement using a quantum mechanics-based chemical shielding predictor.

    PubMed

    Bratholm, Lars A; Jensen, Jan H

    2017-03-01

    The accurate prediction of protein chemical shifts using a quantum mechanics (QM)-based method has been the subject of intense research for more than 20 years, but so far empirical methods for chemical shift prediction have proven more accurate. In this paper we show that a QM-based predictor of protein backbone and CB chemical shifts (ProCS15; PeerJ, 2016, 3, e1344) is of comparable accuracy to empirical chemical shift predictors after chemical shift-based structural refinement that removes small structural errors. We present a method by which quantum chemistry based predictions of isotropic chemical shielding values (ProCS15) can be used to refine protein structures using Markov Chain Monte Carlo (MCMC) simulations, relating the chemical shielding values to the experimental chemical shifts probabilistically. Two kinds of MCMC structural refinement simulations were performed using force field geometry optimized X-ray structures as starting points: simulated annealing of the starting structure, and constant temperature MCMC simulation followed by simulated annealing of a representative ensemble structure. Annealing of the CHARMM structure changes the CA-RMSD by an average of 0.4 Å but lowers the chemical shift RMSD by 1.0 and 0.7 ppm for CA and N, respectively. Conformational averaging has a relatively small effect (0.1-0.2 ppm) on the overall agreement with carbon chemical shifts but lowers the error for nitrogen chemical shifts by 0.4 ppm. If an amino acid specific offset is included, the ProCS15 predicted chemical shifts have RMSD values relative to experiment that are comparable to popular empirical chemical shift predictors. The annealed representative ensemble structures differ in CA-RMSD relative to the initial structures by an average of 2.0 Å, with >2.0 Å differences for six proteins. In four of the cases, the largest structural differences arise in structurally flexible regions of the protein as determined by NMR, and in the remaining two cases, the large structural change may be due to force field deficiencies. The overall accuracy of the empirical methods is slightly improved by annealing the CHARMM structure with ProCS15, which may suggest that the minor structural changes introduced by ProCS15-based annealing improve the accuracy of the protein structures. Having established that QM-based chemical shift prediction can deliver the same accuracy as empirical shift predictors, we hope this will help increase the accuracy of related approaches, such as QM/MM or linear-scaling approaches, and aid the interpretation of protein structural dynamics from QM-derived chemical shifts.
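
    The refinement scheme described above - Metropolis MCMC with a simulated-annealing temperature schedule, driven by the misfit between predicted and experimental values - can be sketched in miniature. ProCS15 itself is not reproduced here; the forward predictor f() and the "experimental shifts" are hypothetical placeholders.

```python
import math
import random

random.seed(1)
target = [1.0, -2.0, 0.5]   # stand-in "experimental chemical shifts"
f = lambda xi: xi           # stand-in forward predictor (identity)

def energy(x):
    """Sum-of-squares misfit between predicted and target values."""
    return sum((f(xi) - ti) ** 2 for xi, ti in zip(x, target))

x = [0.0, 0.0, 0.0]         # "starting structure"
e = energy(x)
n_steps = 20000
for step in range(n_steps):
    T = max(1e-3, 1.0 * (1 - step / n_steps))        # linear annealing
    cand = [xi + random.gauss(0, 0.1) for xi in x]   # random perturbation
    e_cand = energy(cand)
    # Metropolis criterion: always accept downhill, sometimes uphill
    if e_cand < e or random.random() < math.exp((e - e_cand) / T):
        x, e = cand, e_cand
print(e < 0.5)  # annealing drives the misfit well below its starting value
```

    In the real method the perturbations are moves in protein conformation space and the energy combines the QM shielding misfit with a force-field term; the acceptance logic is the same.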

  15. Naturalistic Decision Making in Power Grid Operations: Implications for Dispatcher Training and Usability Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greitzer, Frank L.; Podmore, Robin

    2008-11-17

    The focus of the present study is on improved training approaches to accelerate learning and improved methods for analyzing the effectiveness of tools within a high-fidelity simulated power grid environment. A theory-based model has been developed to document and understand the mental processes that an expert power system operator uses when making critical decisions. The theoretical foundation for the method is based on the concepts of situation awareness, the methods of cognitive task analysis, and the naturalistic decision making (NDM) approach of Recognition Primed Decision Making. The method has been systematically explored and refined as part of a capability demonstration of a high-fidelity real-time power system simulator under normal and emergency conditions. To examine NDM processes, we analyzed transcripts of operator-to-operator conversations during the simulated scenario to reveal and assess NDM-based performance criteria. The results of the analysis indicate that the proposed framework can be used constructively to map or assess the Situation Awareness Level of the operators at each point in the scenario. We can also identify the mental models and mental simulations that the operators employ at different points in the scenario. This report documents the method, describes elements of the model, and provides appendices that document the simulation scenario and the associated mental models used by operators in the scenario.

  16. A high-order multi-zone cut-stencil method for numerical simulations of high-speed flows over complex geometries

    NASA Astrophysics Data System (ADS)

    Greene, Patrick T.; Eldredge, Jeff D.; Zhong, Xiaolin; Kim, John

    2016-07-01

    In this paper, we present a method for performing uniformly high-order direct numerical simulations of high-speed flows over arbitrary geometries. The method was developed with the goal of simulating and studying the effects of complex isolated roughness elements on the stability of hypersonic boundary layers. The simulations are carried out on Cartesian grids with the geometries imposed by a third-order cut-stencil method. A fifth-order hybrid weighted essentially non-oscillatory (WENO) scheme was implemented to capture any steep gradients in the flow created by the geometries, and a third-order Runge-Kutta method was used for time advancement. A multi-zone refinement method was also utilized to provide extra resolution at locations with expected complex physics. The combination results in a scheme that is globally fourth-order in space and third-order in time. Results confirming the method's high order of convergence are shown. Two-dimensional and three-dimensional test cases are presented and show good agreement with previous results. A simulation of Mach 3 flow over the logo of the Ubuntu Linux distribution is shown to demonstrate the method's capabilities for handling complex geometries. Results for Mach 6 wall-bounded flow over a three-dimensional cylindrical roughness element are also presented. The results demonstrate that the method is a promising tool for the study of hypersonic roughness-induced transition.
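
    The third-order Runge-Kutta time advancement mentioned above is commonly the strong-stability-preserving scheme in Shu-Osher form. The sketch below verifies its order of accuracy on a simple linear test problem; the hybrid WENO spatial operator from the paper is not reproduced, so rhs() is just u' = -u.

```python
import numpy as np

def ssp_rk3(u, rhs, dt):
    """One step of the Shu-Osher SSP third-order Runge-Kutta scheme."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2))

# Convergence check on u' = -u, u(0) = 1: exact solution exp(-1) at t = 1.
rhs = lambda u: -u
errs = []
for n in (50, 100):
    u, dt = 1.0, 1.0 / n
    for _ in range(n):
        u = ssp_rk3(u, rhs, dt)
    errs.append(abs(u - np.exp(-1.0)))
# Halving dt should cut the error ~8x for a third-order scheme.
print(round(np.log2(errs[0] / errs[1]), 1))
```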

  17. The Role of Simulation in Planning Intraoperative Magnetic Resonance Imaging-Guided Neurosurgical Procedures: A Case Report.

    PubMed

    Chowdhury, Tumul; Bergese, Sergio D; Soghomonyan, Suren; Cappellani, Ronald B

    2017-04-01

    Simulation of the actual procedure is a simple and yet effective method of increasing patient safety and reducing the rate of unexpected adverse effects. We present our experience with 2 cases of preprocedural simulation on healthy volunteers that were performed in the intraoperative magnetic resonance imaging suite. During one of the cases, we also simulated a scenario of sudden cardiac arrest. Such an approach helped us to refine the procedures and coordinate the work of different teams within the intraoperative magnetic resonance imaging suite as well as improve the quality of patient management.

  18. Parallel Cartesian grid refinement for 3D complex flow simulations

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2013-11-01

    A second order accurate method for discretizing the Navier-Stokes equations on 3D unstructured Cartesian grids is presented. Although the grid generator is based on the oct-tree hierarchical method, a fully unstructured data structure is adopted, enabling robust calculations for incompressible flows and avoiding both the need to synchronize the solution between different levels of refinement and the use of prolongation/restriction operators. The current solver implements a hybrid staggered/non-staggered grid layout, employing the implicit fractional step method to satisfy the continuity equation. The pressure-Poisson equation is discretized using a novel second order fully implicit scheme for unstructured Cartesian grids and solved using an efficient Krylov subspace solver. The momentum equations are also discretized with second order accuracy, and the high performance Newton-Krylov method is used to integrate them in time. Neumann and Dirichlet conditions are used to validate the Poisson solver against analytical functions, and grid refinement results in a significant reduction of the solution error. The effectiveness of the fractional step method results in the stability of the overall algorithm and enables accurate multi-resolution simulations of real-life flows. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482.
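
    The validation strategy described above - solving a Poisson problem with Dirichlet boundaries against an analytical solution on successively refined grids - can be illustrated in a minimal 1D analogue (the paper's 3D unstructured solver is not reproduced). For a second-order discretization, refining the grid by 2x should cut the error by about 4x.

```python
import numpy as np

def poisson_error(n):
    """Max error of a second-order finite-difference solve of
    -u'' = pi^2 sin(pi x) on [0,1] with u(0) = u(1) = 0, whose
    analytical solution is u(x) = sin(pi x)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    exact = np.sin(np.pi * x)
    f = np.pi ** 2 * np.sin(np.pi * x[1:-1])
    # Tridiagonal system A u = h^2 f for the interior unknowns
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1))
    u = np.linalg.solve(A, h * h * f)
    return np.max(np.abs(u - exact[1:-1]))

e_coarse, e_fine = poisson_error(32), poisson_error(64)
print(round(e_coarse / e_fine, 1))  # ~4 confirms second-order convergence
```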

  19. Electron Beam Melting and Refining of Metals: Computational Modeling and Optimization

    PubMed Central

    Vutova, Katia; Donchev, Veliko

    2013-01-01

    Computational modeling offers an opportunity for a better understanding and investigation of thermal transfer mechanisms. It can be used to optimize the electron beam melting process and to obtain new materials with improved characteristics that have many applications in the power industry, medicine, instrument engineering, electronics, etc. A time-dependent 3D axis-symmetrical heat model for simulating thermal transfer in metal ingots solidified in a water-cooled crucible during electron beam melting and refining (EBMR) is developed. The model predicts the change in the temperature field in the cast ingot during the interaction of the beam with the material. A modified Pismen-Rekford numerical scheme to discretize the analytical model is developed. The equation systems describing the thermal processes and the main characteristics of the developed numerical method are presented. In order to optimize the technological regimes, different criteria for better refinement and for obtaining dendrite crystal structures are proposed. Analytical problems of mathematical optimization are formulated, discretized, and heuristically solved by cluster methods. Based on simulation results that are important for practice, suggestions can be made for optimizing the EBMR technology. The proposed tool is useful for studying, controlling, and optimizing EBMR process parameters and for improving the quality of the newly produced materials. PMID:28788351

  20. Fully implicit adaptive mesh refinement MHD algorithm

    NASA Astrophysics Data System (ADS)

    Philip, Bobby

    2005-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and to employ AMR-aware multilevel techniques (such as fast adaptive composite (FAC) algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully implicit, dynamically adaptive AMR simulations will be presented on a variety of problems.
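
    The stiffness argument above - fast waves make explicit time integration infeasible at realistic parameters - is the generic motivation for implicit methods. A minimal sketch (a linear test equation, not the reduced resistive MHD system): at a time step far above the explicit stability limit, forward Euler blows up while backward Euler remains stable.

```python
lam, dt, steps = 1000.0, 0.01, 100   # dt is 5x the explicit limit 2/lam

u_exp = 1.0
u_imp = 1.0
for _ in range(steps):
    u_exp = u_exp + dt * (-lam * u_exp)  # forward Euler: factor |1 - lam*dt| = 9 per step
    u_imp = u_imp / (1.0 + lam * dt)     # backward Euler: unconditionally stable

print(abs(u_exp) > 1e50, 0.0 <= u_imp < 1e-3)  # explicit diverged, implicit decayed
```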

  1. Fully implicit adaptive mesh refinement algorithm for reduced MHD

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Pernice, Michael; Chacon, Luis

    2006-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and to employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully implicit, dynamically adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. [1] L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002). [2] B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006).

  2. A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model

    DOE PAGES

    Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.; ...

    2016-09-16

    Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.

  4. Fit of cast commercially pure titanium and Ti-6Al-4V alloy crowns before and after marginal refinement by electrical discharge machining.

    PubMed

    Contreras, Edwin Fernando Ruiz; Henriques, Guilherme Elias Pessanha; Giolo, Suely Ruiz; Nobilo, Mauro Antonio Arruda

    2002-11-01

    Titanium has been suggested as a replacement for alloys currently used in single-tooth restorations and fixed partial dentures. However, difficulties in casting have resulted in incomplete margins and discrepancies in marginal fit. This study evaluated and compared the marginal fit of crowns fabricated from commercially pure titanium (CP Ti) and from Ti-6Al-4V alloy with crowns fabricated from a Pd-Ag alloy that served as a control. Evaluations were performed before and after marginal refinement by electrical discharge machining (EDM). Forty-five bovine teeth were prepared to receive complete cast crowns. Stone and copper-plated dies were obtained from impressions. Fifteen crowns were cast with each alloy (CP Ti, Ti-6Al-4V, and Pd-Ag). Marginal fit measurements (in micrometers) were recorded at 4 reference points on each casting with a traveling microscope. Marginal refinement with EDM was conducted on the titanium-based crowns, and measurements were repeated. Data were analyzed with the Kruskal-Wallis test, paired t test, and independent t test at a 1% probability level. The Kruskal-Wallis test showed significant differences between mean values of marginal fit for the as-cast CP Ti crowns (mean [SD], 83.9 [26.1] μm) and the other groups: Ti-6Al-4V (50.8 [17.2] μm) and Pd-Ag (45.2 [10.4] μm). After EDM marginal refinement, significant differences were detected between the Ti-6Al-4V crowns (24.5 [10.9] μm) and the other 2 groups: CP Ti (50.6 [20.0] μm) and Pd-Ag (not modified by EDM). Paired t test results indicated that marginal refinement with EDM effectively improved the fit of CP Ti crowns (from 83.9 to 50.6 μm) and Ti-6Al-4V crowns (from 50.8 to 24.5 μm). However, the difference in improvement between the two groups was not significant by t test. Within the limitations of this study, despite the superior results for Ti-6Al-4V, both groups of titanium-based crowns had clinically acceptable marginal fits.
After EDM marginal refinement, the fit of cast CP Ti and Ti-6Al-4V crowns improved significantly.

  5. Internship Abstract - Aerosciences and Flight Mechanics Intern

    NASA Technical Reports Server (NTRS)

    Rangel, John

    2015-01-01

    Mars is a hard place to land on, but my internship with NASA's Aerosciences & Flight Mechanics branch has shown me the ways in which men and women will one day land safely. I work on Mars Aerocapture, an aeroassist maneuver that reduces the fuel necessary to "capture" into Martian orbit before a descent. The spacecraft flies through the Martian atmosphere to lose energy through heating before it exits back into space, this time at a slower velocity and in orbit around Mars. Spacecraft will need to maneuver through the Martian atmosphere to accurately hit their orbit, and they will need to survive the generated heat. Engineering teams need simulation data to continue their designs, and the guidance algorithm that ensures a proper orbit insertion needs to be refined - two jobs that fell to me at the summer's start. Engineers within my branch have developed two concept aerocapture vehicles, and I run simulations on their behavior during the maneuver. I also test and refine the guidance algorithm. I spent the first few weeks familiarizing myself with the simulation software, troubleshooting various guidance bugs and writing code. Everything runs smoothly now, and I recently sent my first set of trajectory data to a Thermal Protection System group so they can incorporate it into their heat-bearing material designs. I hope to generate plenty of data in the next few weeks for various engineering groups before my internship ends mid-August. My major accomplishment so far is improving the guidance algorithm. It is a relatively new algorithm that promises higher accuracy and fuel efficiency, but it hasn't undergone extensive testing yet. I've had the opportunity to work with the principal developer - a professor at Iowa State University - to find and fix several issues. I was also assigned the task of expanding the branch's aerodynamic heating simulation software. 
I am excited to do this because engineers in the future will use my work to generate meaningful data and make design decisions. My internship has taught me how to teach myself. There are no tutors, study sessions, or professor office hours. When I am given an assignment, I am expected to figure out how to accomplish it, and I have grown in my problem-solving abilities. My summer experience has reinforced my drive to work at NASA, and I can definitely see myself working full time on the aerocapture project, or something similar.

  6. What we call what we do affects how we do it: a new nomenclature for simulation research in medical education.

    PubMed

    Haji, Faizal A; Hoppe, Daniel J; Morin, Marie-Paule; Giannoulakis, Konstantine; Koh, Jansen; Rojas, David; Cheung, Jeffrey J H

    2014-05-01

    Rapid technological advances and concern for patient safety have increased the focus on simulation as a pedagogical tool for educating health care providers. To date, simulation research scholarship has focused on two areas: evaluating the instructional designs of simulation programs, and integrating simulation into a broader educational context. However, these two categories of research currently exist under a single label: Simulation-Based Medical Education. In this paper we argue that introducing a more refined nomenclature within which to frame simulation research is necessary for researchers, to appropriately design research studies and describe their findings, and for end users (such as program directors and educators), to more appropriately understand and utilize this evidence.

  7. Comparative Field Tests of Pressurised Rover Prototypes

    NASA Astrophysics Data System (ADS)

    Mann, G. A.; Wood, N. B.; Clarke, J. D.; Piechochinski, S.; Bamsey, M.; Laing, J. H.

    The conceptual designs, interior layouts and operational performances of three pressurised rover prototypes - Aonia, ARES and Everest - were field tested during a recent simulation at the Mars Desert Research Station in Utah. A human factors experiment, in which the same crew of three executed the same simulated science mission in each of the three vehicles, yielded comparative data on the capacity of each vehicle to safely and comfortably carry explorers away from the main base, enter and exit the vehicle in spacesuits, perform science tasks in the field, and manage geological and biological samples. As well as offering recommendations for design improvements to specific vehicles, the results suggest that a conventional Sports Utility Vehicle (SUV) would not be suitable for analog field work; that a pressurised docking tunnel to the main habitat is essential; that better provisions for spacesuit storage are required; and that a crew consisting of one driver/navigator and two field science specialists may be optimal. From a field operations viewpoint, a recurring conflict between rover and habitat crews at the time of return to the habitat was observed. An analysis of these incidents leads to proposed refinements of operational protocols and specific crew training for rover returns, and again points to the need for a pressurised docking tunnel. Sound field testing, circulation of results, and building the lessons learned into new vehicles are advocated as the way to produce ever higher fidelity rover analogues.

  8. 3D registration of surfaces for change detection in medical images

    NASA Astrophysics Data System (ADS)

    Fisher, Elizabeth; van der Stelt, Paul F.; Dunn, Stanley M.

    1997-04-01

    Spatial registration of data sets is essential for quantifying changes that take place over time in cases where the position of a patient with respect to the sensor has been altered. Changes within the region of interest can be problematic for automatic methods of registration. This research addresses the problem of automatic 3D registration of surfaces derived from serial, single-modality images for the purpose of quantifying changes over time. The registration algorithm utilizes motion-invariant, curvature- based geometric properties to derive an approximation to an initial rigid transformation to align two image sets. Following the initial registration, changed portions of the surface are detected and excluded before refining the transformation parameters. The performance of the algorithm was tested using simulation experiments. To quantitatively assess the registration, random noise at various levels, known rigid motion transformations, and analytically-defined volume changes were applied to the initial surface data acquired from models of teeth. These simulation experiments demonstrated that the calculated transformation parameters were accurate to within 1.2 percent of the total applied rotation and 2.9 percent of the total applied translation, even at the highest applied noise levels and simulated wear values.
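
    A standard building block for this kind of rigid surface registration is the least-squares rigid transform between corresponding point sets, solved via SVD (the Kabsch method). The sketch below is generic rather than the paper's curvature-based algorithm, and it assumes point correspondences are already known.

```python
import numpy as np

def rigid_transform(P, Q):
    """Return rotation R and translation t minimizing
    sum_i || R @ P[i] + t - Q[i] ||^2 (Kabsch/SVD method)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

# Recover a known rotation about z and a known translation (noise-free).
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true) and np.allclose(t, t_true))
```

    In a change-detection pipeline such as the one described, this solve would be repeated after excluding surface regions flagged as changed, so the transform is estimated only from the stable portions of the surface.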

  9. Assessment of crew operations during internal servicing of the Columbus Free-Flyer by Hermes

    NASA Astrophysics Data System (ADS)

    Winisdoerffer, F.; Lamothe, A.; Bourdeau'hui, J. C.

    The Hermes system has been adopted as a European programme at the Hague ministerial-level meeting. The primary mission of the Hermes spaceplane will be the servicing of the Columbus Free-Flyer (CFF) in order to bring new experiments into orbit, recover the results of old ones, and refurbish/maintain the various subsystems. This mission will be based on the extensive use of the 3 crewmembers on board Hermes to perform Intra-Vehicular (IVA) and/or Extra-Vehicular (EVA) activities. This paper focuses on the internal operations; the dimensions of the various payloads of the basic reference cargo set are presented. The main constraints associated with their manipulation are also assessed, independently of the configuration. During the spaceplane definition process, various configurations were developed. The operations were simulated using the CATIA CAD software with representative anthropometric models of the potential Hermes user population. These simulations helped to assess the various configurations and to refine the general concept of the spaceplane. The geometrical feasibility is demonstrated through these simulations. However, full-scale tests are required to confirm the data and assess the duration of the operations.

  10. Reliability Quantification of Advanced Stirling Convertor (ASC) Components

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward

    2010-01-01

    The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system- and component-level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, material behavior models, and the interaction of different components and their respective disciplines such as structures, materials, fluids, thermal, mechanical, electrical, etc. In addition, these models are based on the available test data, which can be updated, and the analysis refined as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be a superior technique to conventional reliability approaches that utilize failure rates derived from similar equipment or simply expert judgment.

  11. Revisions to the 1995 map of ecological subregions that affect users of the southern variant of the Forest Vegetation Simulator

    Treesearch

    W. Henry McNab; Chad E. Keyser

    2011-01-01

    The Southern Variant of the Forest Vegetation Simulator utilizes ecological units mapped in 1995 by the Forest Service, U.S. Department of Agriculture, to refine tree growth models for the Southern United States. The 2007 revision of the 1995 map resulted in changes of identification and boundary delineation for some ecoregion units. In this report, we summarize the...

  12. Rough mill simulator version 3.0: an analysis tool for refining rough mill operations

    Treesearch

    Edward Thomas; Joel Weiss

    2006-01-01

    ROMI-3 is a rough mill computer simulation package designed to be used by both rip-first and chop-first rough mill operators and researchers. ROMI-3 allows users to model and examine the complex relationships among cutting bill, lumber grade mix, processing options, and their impact on rough mill yield and efficiency. Integrated into the ROMI-3 software is a new least-...

  13. Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid

    NASA Technical Reports Server (NTRS)

    VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)

    1997-01-01

    The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than, e.g., learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small yacht races. The analysis demonstrates how human system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).

  14. On the impact of a refined stochastic model for airborne LiDAR measurements

    NASA Astrophysics Data System (ADS)

    Bolkas, Dimitrios; Fotopoulos, Georgia; Glennie, Craig

    2016-09-01

    Accurate topographic information is critical for a number of applications in science and engineering. In recent years, airborne light detection and ranging (LiDAR) has become a standard tool for acquiring high quality topographic information. The assessment of airborne LiDAR derived DEMs is typically based on (i) independent ground control points and (ii) forward error propagation utilizing the LiDAR geo-referencing equation. The latter approach is dependent on the stochastic model information of the LiDAR observation components. In this paper, the well-known statistical tool of variance component estimation (VCE) is implemented for a dataset in Houston, Texas, in order to refine the initial stochastic information. Simulations demonstrate the impact of stochastic-model refinement for two practical applications, namely coastal inundation mapping and surface displacement estimation. Results highlight scenarios where erroneous stochastic information is detrimental. Furthermore, the refined stochastic information provides insights on the effect of each LiDAR measurement in the airborne LiDAR error budget. The latter is important for targeting future advancements in order to improve point cloud accuracy.
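    The iterative variance component estimation step described above can be illustrated with a minimal sketch. This is synthetic data, not the Houston LiDAR dataset: the plane-fit observation model, the two noise groups, and their sizes are all assumptions chosen only to show the mechanics of refining per-group variances from residuals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic adjustment: fit a plane z = a*x + b*y + c from two
# observation groups with different (unknown) noise levels.
n1, n2 = 200, 200
X = rng.uniform(-1, 1, size=(n1 + n2, 2))
A = np.column_stack([X, np.ones(n1 + n2)])         # design matrix
truth = np.array([0.5, -0.3, 2.0])
sig_true = np.array([0.05, 0.20])                  # per-group sigmas
noise = np.concatenate([rng.normal(0, sig_true[0], n1),
                        rng.normal(0, sig_true[1], n2)])
y = A @ truth + noise
groups = [np.arange(n1), np.arange(n1, n1 + n2)]

# Iterative VCE: start from unit variances, re-solve the weighted
# least-squares problem, and update each group's variance from its
# residual quadratic form divided by its redundancy contribution.
s2 = np.ones(2)
for _ in range(50):
    w = np.concatenate([np.full(len(g), 1.0 / s2[i])
                        for i, g in enumerate(groups)])
    N = A.T @ (w[:, None] * A)                     # normal matrix
    x_hat = np.linalg.solve(N, A.T @ (w * y))
    v = y - A @ x_hat                              # residuals
    Ninv = np.linalg.inv(N)
    new = np.empty(2)
    for i, g in enumerate(groups):
        Ni = A[g].T @ A[g] / s2[i]
        r_i = len(g) - np.trace(Ninv @ Ni)         # group redundancy
        new[i] = v[g] @ v[g] / r_i
    if np.allclose(new, s2, rtol=1e-10):
        s2 = new
        break
    s2 = new

print(np.sqrt(s2))   # refined sigmas, near sig_true for this sample
```

    At convergence each group's weighted residual sum matches its redundancy, which is the fixed point the refinement seeks; the recovered sigmas can then replace the initial stochastic model in forward error propagation.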

  15. Efficient Unstructured Cartesian/Immersed-Boundary Method with Local Mesh Refinement to Simulate Flows in Complex 3D Geometries

    NASA Astrophysics Data System (ADS)

    de Zelicourt, Diane; Ge, Liang; Sotiropoulos, Fotis; Yoganathan, Ajit

    2008-11-01

    Image-guided computational fluid dynamics has recently gained attention as a tool for predicting the outcome of different surgical scenarios. Cartesian Immersed-Boundary methods constitute an attractive option to tackle the complexity of real-life anatomies. However, when such methods are applied to the branching, multi-vessel configurations typically encountered in cardiovascular anatomies the majority of the grid nodes of the background Cartesian mesh end up lying outside the computational domain, increasing the memory and computational overhead without enhancing the numerical resolution in the region of interest. To remedy this situation, the method presented here superimposes local mesh refinement onto an unstructured Cartesian grid formulation. A baseline unstructured Cartesian mesh is generated by eliminating all nodes that reside in the exterior of the flow domain from the grid structure, and is locally refined in the vicinity of the immersed-boundary. The potential of the method is demonstrated by carrying out systematic mesh refinement studies for internal flow problems ranging in complexity from a 90 deg pipe bend to an actual, patient-specific anatomy reconstructed from magnetic resonance.
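    The node-elimination and near-boundary refinement idea can be sketched in a toy 2D setting. The circular "vessel", the grid size, and the two-cell refinement band are illustrative assumptions; the actual method is 3D, unstructured, and driven by a reconstructed anatomy rather than an analytic boundary.

```python
import numpy as np

# Toy 2D analogue: keep only Cartesian nodes inside the flow domain
# (a disc of radius R), then flag a thin band of interior nodes next
# to the immersed boundary for local refinement.
R, n = 1.0, 32
xs = np.linspace(-1.5, 1.5, n)
X, Y = np.meshgrid(xs, xs)
r = np.hypot(X, Y)

inside = r <= R                 # exterior nodes are dropped entirely
h = xs[1] - xs[0]               # background grid spacing
near_boundary = inside & (r >= R - 2 * h)   # candidates for refinement

kept = int(inside.sum())
flagged = int(near_boundary.sum())
print(kept, flagged)
```

    The payoff mirrored here is the memory argument in the abstract: the exterior nodes (a majority for branching anatomies) never enter the data structure, and extra resolution is spent only in the flagged band.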

  16. Modeling of the Coupling of Microstructure and Macrosegregation in a Direct Chill Cast Al-Cu Billet

    NASA Astrophysics Data System (ADS)

    Heyvaert, Laurent; Bedel, Marie; Založnik, Miha; Combeau, Hervé

    2017-10-01

    The macroscopic multiphase flow and the growth of the solidification microstructures in the mushy zone of a direct chill (DC) casting are closely coupled. These couplings are the key to the understanding of the formation of the macrosegregation and of the non-uniform microstructure of the casting. In the present paper we use a multiphase and multiscale model to provide a fully coupled picture of the links between macrosegregation and microstructure in a DC cast billet. The model describes nucleation from inoculant particles and growth of dendritic and globular equiaxed crystal grains, fully coupled with macroscopic transport phenomena: fluid flow induced by natural convection and solidification shrinkage, heat, mass, and solute mass transport, motion of free-floating equiaxed grains, and of grain refiner particles. We compare our simulations to experiments on grain-refined and non-grain-refined industrial size billets from literature. We show that a transition between dendritic and globular grain morphology triggered by the grain refinement is the key to the explanation of the differences between the macrosegregation patterns in the two billets. We further show that the grain size and morphology are strongly affected by the macroscopic transport of free-floating equiaxed grains and of grain refiner particles.

  17. A PFC3D-based numerical simulation of cutting load for lunar rock simulant and experimental validation

    NASA Astrophysics Data System (ADS)

    Li, Peng; Jiang, Shengyuan; Tang, Dewei; Xu, Bo

    2017-05-01

    For the sake of striking a balance between the need for drilling efficiency and the constraints of the power budget on the moon, the penetration per revolution of the drill bit is generally limited to the range around 0.1 mm, and the geometric angles of the cutting blade need to be well designed. This paper introduces a simulation approach based on PFC3D (particle flow code in 3 dimensions) for analyzing the cutting load on lunar rock simulant produced by blades of different geometric angles at a small cutting depth. The mean values of the cutting force of five blades in the survey region (four on the boundary points and one on the center point) are selected as the macroscopic responses of the model. A method of experimental design that combines Plackett-Burman (PB) design and the central composite design (CCD) method is adopted in the matching procedure for the microparameters of the PFC model. Using the optimization method of enumeration, the optimum set of microparameters is acquired. The approach is then validated experimentally using another twenty-five blades with different geometric angles, and the results of the simulations and the laboratory tests show fair agreement. Additionally, the rock-breaking processes of the different blades are quantified from the simulation analysis. This research provides theoretical support for refining the prediction of rock cutting loads and for the geometric design of the cutting blade on the drill bit.
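    The central composite design used in the microparameter-matching step can be sketched generically. The factor count and the face-centered choice of alpha here are illustrative assumptions, not the paper's actual blade or microparameter settings.

```python
import itertools
import numpy as np

def central_composite(k, alpha=1.0, n_center=1):
    """Central composite design for k factors in coded [-1, +1] units.

    Stacks the 2**k factorial corner points, the 2*k axial (star)
    points at distance alpha along each axis, and n_center center
    points; alpha = 1 gives the face-centered variant.
    """
    corners = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = np.zeros(k)
            pt[i] = a
            axial.append(pt)
    center = np.zeros((n_center, k))
    return np.vstack([corners, np.array(axial), center])

# Two coded factors (e.g. two blade angles) with 3 center replicates:
design = central_composite(k=2, alpha=1.0, n_center=3)
print(design.shape)   # (2**2 + 2*2 + 3, 2) = (11, 2)
```

    Each row is one simulation run; fitting a quadratic response surface to the resulting cutting forces is what lets the enumeration step search for the optimum microparameter set.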

  18. Deposition and Characterization of Thin Films on Metallic Substrates

    NASA Technical Reports Server (NTRS)

    Gatica, Jorge E.

    2005-01-01

    A CVD method was successfully developed to produce conversion coatings on aluminum alloy surfaces with reproducible results for a variety of precursors. A well-defined protocol for preparing the precursor solutions, formulated in previous research, was extended to other additives. It was demonstrated that solutions prepared following such a protocol could be used to systematically generate protective coatings on aluminum surfaces. Experiments with a variety of formulations revealed that a refined deposition protocol yields reproducible conversion coatings of controlled composition. A preliminary correlation between solution formulations and successful precursors was derived. Coatings were tested for enhancement of adhesion properties for commercial paints. A standard testing method was followed and clear trends were identified. Only one precursor was tested systematically; anticipated work on other precursors should allow a better characterization of the effect of intermetallics on the production of conversion/protective coatings on metals and ceramics. The significance of this work was the practical demonstration that chemical vapor deposition (CVD) techniques can be used to systematically generate protective/conversion coatings on non-ferrous surfaces. In order to become an effective approach to replacing chromate-based pretreatment processes, namely in the aerospace or automobile industries, the process parameters must be defined more precisely. Moreover, the feasibility of scale-up designs necessitates a more comprehensive characterization of the fluid flow, transport phenomena, and chemical kinetics interacting in the process. Kinetic characterization showed a significantly different effect of magnesium-based precursors when compared to iron-based precursors.
Future work will concentrate on refining the process through computer simulations and further experimental studies on the effect of other transition metals to induce deposition of conversion/protective films on aluminum and other metallic substrates.

  19. Sustainable design guidelines to support the Washington State ferries terminal design manual : design guideline application and refinement.

    DOT National Transportation Integrated Search

    2013-08-01

    The Sustainable Design Guidelines were developed in Phase I of this research program (WA-RD 816.1). Here we report on the Phase II effort that beta-tested the Phase I Guidelines on example ferry terminal designs and refinements made ...

  20. 40 CFR 80.335 - What gasoline sample retention requirements apply to refiners and importers?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Title 40, Protection of Environment (2013-07-01); ENVIRONMENTAL PROTECTION AGENCY (CONTINUED); AIR PROGRAMS (CONTINUED); REGULATION OF FUELS AND FUEL ADDITIVES; Gasoline Sulfur: Sampling, Testing and Retention Requirements for Refiners and Importers; § 80.335 What gasoline sample...
