Sample records for refinement code enzo

  1. Comparing AMR and SPH Cosmological Simulations. I. Dark Matter and Adiabatic Simulations

    NASA Astrophysics Data System (ADS)

    O'Shea, Brian W.; Nagamine, Kentaro; Springel, Volker; Hernquist, Lars; Norman, Michael L.

    2005-09-01

    We compare two cosmological hydrodynamic simulation codes in the context of hierarchical galaxy formation: the Lagrangian smoothed particle hydrodynamics (SPH) code GADGET and the Eulerian adaptive mesh refinement (AMR) code Enzo. Both codes represent dark matter with the N-body method but use different gravity solvers and fundamentally different approaches for baryonic hydrodynamics. The SPH method in GADGET uses a recently developed "entropy conserving" formulation of SPH, while for the mesh-based Enzo two different formulations of Eulerian hydrodynamics are employed: the piecewise parabolic method (PPM) extended with a dual energy formulation for cosmology, and the artificial viscosity-based scheme used in the magnetohydrodynamics code ZEUS. In this paper we focus on a comparison of cosmological simulations that follow either only dark matter, or also a nonradiative ("adiabatic") hydrodynamic gaseous component. We perform multiple simulations with both codes, varying the spatial and mass resolution but keeping identical initial conditions. The dark matter-only runs agree quite well provided Enzo is run with a comparatively fine root grid and a low overdensity threshold for mesh refinement; otherwise the abundance of low-mass halos is suppressed. This can be readily understood as a consequence of the hierarchical particle-mesh algorithm used by Enzo to compute gravitational forces, which tends to deliver lower force resolution than the tree algorithm of GADGET at early times, before any adaptive mesh refinement takes place. At comparable force resolution we find that the latter offers substantially better performance and lower memory consumption than the present gravity solver in Enzo. In simulations that include adiabatic gas dynamics we find general agreement in the distribution functions of temperature, entropy, and density for gas of moderate to high overdensity, as found inside dark matter halos. However, there are also some significant differences in the same quantities for gas of lower overdensity. For example, at z = 3 the fraction of cosmic gas with temperature log T > 0.5 is ~80% for both Enzo ZEUS and GADGET, while it is 40%-60% for Enzo PPM. We argue that these discrepancies are due to differences in the shock-capturing abilities of the different methods. In particular, we find that the ZEUS implementation of artificial viscosity in Enzo leads to some unphysical heating at early times in preshock regions. While this is apparently a significantly weaker effect in GADGET, its use of an artificial viscosity technique may also make it prone to some excess generation of entropy that should be absent in Enzo PPM. Overall, the hydrodynamical results for GADGET are bracketed by those for Enzo ZEUS and Enzo PPM but are closer to Enzo ZEUS.
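
    The distribution functions compared here reduce to mass-weighted histograms of the gas. A minimal Python sketch with synthetic stand-in data (no Enzo or GADGET I/O is shown, and the log T threshold is only an example, not the paper's cut):

      import numpy as np

      def mass_weighted_pdf(log_temp, mass, bins=50):
          # Mass-weighted distribution of log10(T), normalized to unit total.
          hist, edges = np.histogram(log_temp, bins=bins, weights=mass)
          return hist / hist.sum(), edges

      # Fraction of gas mass above a temperature threshold, the kind of
      # statistic quoted in the comparison above (synthetic placeholder data).
      rng = np.random.default_rng(0)
      log_temp = rng.normal(4.0, 1.0, 10_000)   # placeholder log10(T [K]) values
      mass = np.ones_like(log_temp)             # placeholder cell/particle masses
      threshold = 4.5                           # assumed example cut
      hot_fraction = mass[log_temp > threshold].sum() / mass.sum()
      print(f"mass fraction with log T > {threshold}: {hot_fraction:.2f}")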

  2. Visualization of AMR data with multi-level dual-mesh interpolation.

    PubMed

    Moran, Patrick J; Ellsworth, David

    2011-12-01

    We present a new technique for providing interpolation within cell-centered Adaptive Mesh Refinement (AMR) data that achieves C^0 continuity throughout the 3D domain. Our technique improves on earlier work in that it does not require that adjacent patches differ by at most one refinement level. Our approach takes the dual of each mesh patch and generates "stitching cells" on the fly to fill the gaps between dual meshes. We demonstrate applications of our technique with data from Enzo, an AMR cosmological structure formation simulation code. We show ray-cast visualizations that include contributions from particle data (dark matter and stars, also output by Enzo) and gridded hydrodynamic data. We also show results from isosurface studies, including surfaces in regions where adjacent patches differ by more than one refinement level.
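
    The central idea, interpolating on the dual mesh whose nodes are the cell centers, can be sketched for a single patch with SciPy; the cross-level "stitching cells" that are the paper's actual contribution are omitted here:

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      # A 3D patch of cell-centered data: nx*ny*nz cells of width dx.
      nx = ny = nz = 8
      dx = 1.0 / nx
      centers = (np.arange(nx) + 0.5) * dx   # cell-center coordinates per axis
      data = np.random.default_rng(1).random((nx, ny, nz))

      # The dual mesh treats cell centers as nodes; trilinear interpolation
      # on it is C^0 within the patch interior.
      interp = RegularGridInterpolator((centers, centers, centers), data)
      print(interp([[0.4, 0.5, 0.6]]))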

  3. Figuring Out Gas and Galaxies In Enzo (FOGGIE): Resolving the Inner Circumgalactic Medium

    NASA Astrophysics Data System (ADS)

    Corlies, Lauren; Peeples, Molly; Tumlinson, Jason; O'Shea, Brian; Smith, Britton

    2018-01-01

    Cosmological hydrodynamical simulations using every common numerical method have struggled to reproduce the multiphase nature of the circumgalactic medium (CGM) revealed by recent observations. However, to date, resolution in these simulations has been aimed at dense regions — the galactic disk and in-falling satellites — while the diffuse CGM never reaches comparable levels of refinement. Taking advantage of the flexible grid structure of the adaptive mesh refinement code Enzo, we force refinement in a region of the CGM of a Milky Way-like galaxy to the same spatial resolution as that of the disk. In this talk, I will present how the physical and structural distributions of the circumgalactic gas change dramatically as a function of the resolution alone. I will also show the implications these changes have for the observational properties of the gas in the context of the observations.
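
    Forced refinement of this kind amounts to flagging every cell inside a fixed box until it reaches a target level, regardless of the usual density criteria. The sketch below shows that flagging logic generically, not through Enzo's actual parameter interface:

      import numpy as np

      def flag_must_refine(cell_centers, box_lo, box_hi, level, target_level):
          # Flag cells inside the box that are still below the target level;
          # an AMR driver would then refine them on the next regrid.
          inside = np.all((cell_centers >= box_lo) & (cell_centers <= box_hi), axis=1)
          return inside & (level < target_level)

      rng = np.random.default_rng(3)
      centers = rng.random((1000, 3))        # placeholder cell centers
      levels = np.zeros(1000, dtype=int)     # placeholder current levels
      flags = flag_must_refine(centers, np.array([0.4] * 3), np.array([0.6] * 3),
                               levels, target_level=9)
      print(flags.sum(), "cells flagged for refinement")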

  4. Direct collapse to supermassive black hole seeds: comparing the AMR and SPH approaches.

    PubMed

    Luo, Yang; Nagamine, Kentaro; Shlosman, Isaac

    2016-07-01

    We provide a detailed comparison between the adaptive mesh refinement (AMR) code enzo-2.4 and the smoothed particle hydrodynamics (SPH)/N-body code gadget-3 in the context of isolated or cosmological direct baryonic collapse within dark matter (DM) haloes to form supermassive black holes. Gas flow is examined by following the evolution of basic parameters of accretion flows. Both codes show an overall agreement in the general features of the collapse; however, many subtle differences exist. For isolated models, the codes increase their spatial and mass resolutions at a different pace, which leads to substantially earlier collapse in SPH than in AMR cases due to the higher gravitational resolution in gadget-3. In cosmological runs, the AMR develops a slightly higher baryonic resolution than SPH during halo growth via cold accretion permeated by mergers. Still, both codes agree in the build-up of DM and baryonic structures. However, with the onset of collapse, this difference in mass and spatial resolution is amplified, so the evolution of SPH models begins to lag behind. Such a delay can affect the formation/destruction rate of H2 due to the UV background, and the basic properties of host haloes. Finally, isolated non-cosmological models in spinning haloes, with spin parameter λ ∼ 0.01-0.07, show delayed collapse for greater λ, but the pace of this increase is faster for AMR. Within our simulation set-up, gadget-3 requires significantly larger computational resources than enzo-2.4 during collapse, and similar resources during the pre-collapse, cosmological structure formation phase. Yet it benefits from substantially higher gravitational force and hydrodynamic resolutions, except at the end of collapse.

  5. Direct collapse to supermassive black hole seeds: comparing the AMR and SPH approaches

    NASA Astrophysics Data System (ADS)

    Luo, Yang; Nagamine, Kentaro; Shlosman, Isaac

    2016-07-01

    We provide a detailed comparison between the adaptive mesh refinement (AMR) code ENZO-2.4 and the smoothed particle hydrodynamics (SPH)/N-body code GADGET-3 in the context of isolated or cosmological direct baryonic collapse within dark matter (DM) haloes to form supermassive black holes. Gas flow is examined by following the evolution of basic parameters of accretion flows. Both codes show an overall agreement in the general features of the collapse; however, many subtle differences exist. For isolated models, the codes increase their spatial and mass resolutions at a different pace, which leads to substantially earlier collapse in SPH than in AMR cases due to the higher gravitational resolution in GADGET-3. In cosmological runs, the AMR develops a slightly higher baryonic resolution than SPH during halo growth via cold accretion permeated by mergers. Still, both codes agree in the build-up of DM and baryonic structures. However, with the onset of collapse, this difference in mass and spatial resolution is amplified, so the evolution of SPH models begins to lag behind. Such a delay can affect the formation/destruction rate of H2 due to the UV background, and the basic properties of host haloes. Finally, isolated non-cosmological models in spinning haloes, with spin parameter λ ~ 0.01-0.07, show delayed collapse for greater λ, but the pace of this increase is faster for AMR. Within our simulation set-up, GADGET-3 requires significantly larger computational resources than ENZO-2.4 during collapse, and similar resources during the pre-collapse, cosmological structure formation phase. Yet it benefits from substantially higher gravitational force and hydrodynamic resolutions, except at the end of collapse.

  6. Building Task-Oriented Applications: An Introduction to the Legion Programming Paradigm

    DTIC Science & Technology

    2015-02-01

    These domain definitions are validated prior to execution and represent logical regions that each task can access and manipulate as per the dictates of... Introducing Enzo, an AMR cosmology application, in Adaptive Mesh Refinement - Theory and Applications. Chicago (IL): Springer Berlin Heidelberg; c2005. p

  7. Ram Pressure Stripping of Galaxy JO201

    NASA Astrophysics Data System (ADS)

    Zhong, Greta; Tonnesen, Stephanie; Jaffé, Yara; Bellhouse, Callum; Poggianti, Bianca

    2017-01-01

    Despite the discovery of the morphology-density relation more than 30 years ago, the process driving the evolution of spiral galaxies into S0s in clusters is still widely debated. Ram pressure stripping, the removal of a galaxy's interstellar medium by the pressure of the intracluster medium through which it orbits, may help explain galactic evolution and quenching in clusters. MUSE (Multi Unit Spectroscopic Explorer) observations of galaxy JO201 in cluster Abell 85 reveal it to be a jellyfish galaxy, one with an H-alpha emitting gas tail on only one side. We model the possible orbits for this galaxy, constrained by the cluster mass profile, line-of-sight velocity, and projected distance from the cluster center. Using Enzo, an adaptive mesh refinement hydrodynamics code, we simulate the effects of ram pressure on this galaxy for a range of possible orbits. We present comparisons of both the morphology and velocity structure of our simulated galaxy to the observations of H-alpha emission.

  8. Towards Forming a Primordial Protostar in a Cosmological AMR Simulation

    NASA Astrophysics Data System (ADS)

    Turk, Matthew J.; Abel, Tom; O'Shea, Brian W.

    2008-03-01

    Modeling the formation of the first stars in the universe is a well-posed problem and ideally suited for computational investigation. We have conducted high-resolution numerical studies of the formation of primordial stars. Beginning with primordial initial conditions appropriate for a ΛCDM model, we used the Eulerian adaptive mesh refinement code Enzo to achieve unprecedented numerical resolution, resolving cosmological scales as well as sub-stellar scales simultaneously. Building on the work of Abel, Bryan and Norman (2002), we followed the evolution of the first collapsing cloud until molecular hydrogen is optically thick to cooling radiation. In addition, the calculations account for the process of collision-induced emission (CIE) and add approximations to the optical depth in both molecular hydrogen roto-vibrational cooling and CIE. Also considered are the effects of chemical heating/cooling from the formation/destruction of molecular hydrogen. We present the results of these simulations, showing the formation of a 10 Jupiter-mass protostellar core bounded by a strongly aspherical accretion shock. Accretion rates are found to be as high as one solar mass per year.

  9. AstroBlend: Visualization package for use with Blender

    NASA Astrophysics Data System (ADS)

    Naiman, J. P.

    2015-12-01

    AstroBlend is a visualization package for use in the three-dimensional animation and modeling software Blender. It reads data in via a text file or can use pre-made isosurface files stored in the Wavefront OBJ format. AstroBlend supports a variety of codes such as FLASH (ascl:1010.082), Enzo (ascl:1010.072), and Athena (ascl:1010.014), and combines artistic 3D models with computational astrophysics datasets to create models and animations.

  10. Unveiling the Role of Galactic Rotation on Star Formation

    NASA Astrophysics Data System (ADS)

    Utreras, José; Becerra, Fernando; Escala, Andrés

    2016-12-01

    We study the star formation process at galactic scales and the role of rotation through numerical simulations of spiral and starburst galaxies using the adaptive mesh refinement code Enzo. We focus on the study of three integrated star formation laws found in the literature: the Kennicutt-Schmidt (KS) and Silk-Elmegreen (SE) laws, and the dimensionally homogeneous equation proposed by Escala, Σ_SFR ∝ √(G/L) Σ_gas^1.5. We show that using the latter we take into account the effects of the integration along the line of sight, and we find a unique regime of star formation for both types of galaxies, suppressing the observed bi-modality of the KS law. We find that the efficiencies displayed by our simulations are anti-correlated with the angular velocity of the disk Ω for the three laws studied in this work. Finally, we show that the dimensionless efficiency of star formation is well represented by the exponentially decreasing function exp(-1.9 Ω t_ff,ini), where t_ff,ini is the initial free-fall time. This leads to a unique galactic star formation relation which reduces the scatter of the bi-modal KS, SE, and Escala relations by 43%, 43%, and 35%, respectively.
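
    The three laws differ only in what sets the star formation timescale. A sketch of their scalings with placeholder efficiency normalizations (the values below are illustrative, not the fits from this work):

      import numpy as np

      G = 6.674e-8  # cgs

      def sigma_sfr_ks(sigma_gas, eps=1.0e-4):
          # Kennicutt-Schmidt: Sigma_SFR proportional to Sigma_gas^1.5.
          return eps * sigma_gas**1.5

      def sigma_sfr_se(sigma_gas, omega, eps=0.02):
          # Silk-Elmegreen: Sigma_SFR proportional to Sigma_gas * Omega.
          return eps * sigma_gas * omega

      def sigma_sfr_escala(sigma_gas, L, eps=0.01):
          # Escala: Sigma_SFR proportional to sqrt(G/L) * Sigma_gas^1.5,
          # where L is the line-of-sight integration scale.
          return eps * np.sqrt(G / L) * sigma_gas**1.5

      def efficiency(omega, t_ff_ini):
          # Dimensionless efficiency ~ exp(-1.9 * Omega * t_ff,ini) (this work).
          return np.exp(-1.9 * omega * t_ff_ini)

      print(sigma_sfr_ks(10.0), sigma_sfr_se(10.0, omega=0.03), efficiency(0.03, 10.0))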

  11. The AGORA High-resolution Galaxy Simulations Comparison Project

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hoon; Abel, Tom; Agertz, Oscar; Bryan, Greg L.; Ceverino, Daniel; Christensen, Charlotte; Conroy, Charlie; Dekel, Avishai; Gnedin, Nickolay Y.; Goldbaum, Nathan J.; Guedes, Javiera; Hahn, Oliver; Hobbs, Alexander; Hopkins, Philip F.; Hummels, Cameron B.; Iannuzzi, Francesca; Keres, Dusan; Klypin, Anatoly; Kravtsov, Andrey V.; Krumholz, Mark R.; Kuhlen, Michael; Leitner, Samuel N.; Madau, Piero; Mayer, Lucio; Moody, Christopher E.; Nagamine, Kentaro; Norman, Michael L.; Onorbe, Jose; O'Shea, Brian W.; Pillepich, Annalisa; Primack, Joel R.; Quinn, Thomas; Read, Justin I.; Robertson, Brant E.; Rocha, Miguel; Rudd, Douglas H.; Shen, Sijing; Smith, Britton D.; Szalay, Alexander S.; Teyssier, Romain; Thompson, Robert; Todoroki, Keita; Turk, Matthew J.; Wadsley, James W.; Wise, John H.; Zolotov, Adi; the AGORA Collaboration

    2014-01-01

    We introduce the Assembling Galaxies Of Resolved Anatomy (AGORA) project, a comprehensive numerical study of well-resolved galaxies within the ΛCDM cosmology. Cosmological hydrodynamic simulations with force resolutions of ~100 proper pc or better will be run with a variety of code platforms to follow the hierarchical growth, star formation history, morphological transformation, and the cycle of baryons in and out of eight galaxies with halo masses M_vir ~= 10^10, 10^11, 10^12, and 10^13 M⊙ at z = 0 and two different ("violent" and "quiescent") assembly histories. The numerical techniques and implementations used in this project include the smoothed particle hydrodynamics codes GADGET and GASOLINE, and the adaptive mesh refinement codes ART, ENZO, and RAMSES. The codes share common initial conditions and common astrophysics packages including UV background, metal-dependent radiative cooling, metal and energy yields of supernovae, and stellar initial mass function. These are described in detail in the present paper. Subgrid star formation and feedback prescriptions will be tuned to provide a realistic interstellar and circumgalactic medium using a non-cosmological disk galaxy simulation. Cosmological runs will be systematically compared with each other using a common analysis toolkit and validated against observations to verify that the solutions are robust, i.e., that the astrophysical assumptions are responsible for any success rather than artifacts of particular implementations. The goals of the AGORA project are, broadly speaking, to raise the realism and predictive power of galaxy simulations and the understanding of the feedback processes that regulate galaxy "metabolism." The initial conditions for the AGORA galaxies as well as simulation outputs at various epochs will be made publicly available to the community. A proof-of-concept dark-matter-only test of the formation of a galactic halo with a z = 0 mass of M_vir ~= 1.7 × 10^11 M⊙ by nine different versions of the participating codes is also presented to validate the infrastructure of the project.

  12. Testing cosmic ray acceleration with radio relics: a high-resolution study using MHD and tracers

    NASA Astrophysics Data System (ADS)

    Wittor, D.; Vazza, F.; Brüggen, M.

    2017-02-01

    Weak shocks in the intracluster medium may accelerate cosmic-ray protons and cosmic-ray electrons differently depending on the angle between the upstream magnetic field and the shock normal. In this work, we investigate how shock obliquity affects the production of cosmic rays in high-resolution simulations of galaxy clusters. For this purpose, we performed a magnetohydrodynamical simulation of a galaxy cluster using the mesh refinement code ENZO. We use Lagrangian tracers to follow the properties of the thermal gas, the cosmic rays and the magnetic fields over time. We tested a number of different acceleration scenarios by varying the obliquity-dependent acceleration efficiencies of protons and electrons, and by examining the resulting hadronic γ-ray and radio emission. We find that the radio emission does not change significantly if only quasi-perpendicular shocks are able to accelerate cosmic-ray electrons. Our analysis suggests that radio-emitting electrons found in relics have been typically shocked many times before z = 0. On the other hand, the hadronic γ-ray emission from clusters is found to decrease significantly if only quasi-parallel shocks are allowed to accelerate cosmic-ray protons. This might reduce the tension with the low upper limits on γ-ray emission from clusters set by the Fermi satellite.
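
    Quasi-parallel versus quasi-perpendicular refers to the angle θ between the upstream magnetic field and the shock normal, conventionally split at 45°. A minimal sketch of that classification (the vectors are made-up examples):

      import numpy as np

      def obliquity(b_field, shock_normal):
          # Angle (degrees) between the upstream B field and the shock normal.
          cosang = np.dot(b_field, shock_normal) / (
              np.linalg.norm(b_field) * np.linalg.norm(shock_normal))
          return np.degrees(np.arccos(np.clip(abs(cosang), 0.0, 1.0)))

      theta = obliquity(np.array([1.0, 0.5, 0.0]), np.array([1.0, 0.0, 0.0]))
      kind = "quasi-parallel" if theta < 45.0 else "quasi-perpendicular"
      print(f"theta = {theta:.1f} deg -> {kind}")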

  13. A New Approach for Simulating Galaxy Cluster Properties

    NASA Astrophysics Data System (ADS)

    Arieli, Y.; Rephaeli, Y.; Norman, M. L.

    2008-08-01

    We describe a subgrid model for including galaxies into hydrodynamical cosmological simulations of galaxy cluster evolution. Each galaxy construct—or galcon—is modeled as a physically extended object within which star formation, galactic winds, and ram pressure stripping of gas are modeled analytically. Galcons are initialized at high redshift (z ~ 3) after galaxy dark matter halos have formed but before the cluster has virialized. Each galcon moves self-consistently within the evolving cluster potential and injects mass, metals, and energy into intracluster (IC) gas through a well-resolved spherical interface layer. We have implemented galcons into the Enzo adaptive mesh refinement code and carried out a simulation of cluster formation in a ΛCDM universe. With our approach, we are able to economically follow the impact of a large number of galaxies on IC gas. We compare the results of the galcon simulation with a second, more standard simulation where star formation and feedback are treated using a popular heuristic prescription. One advantage of the galcon approach is explicit control over the star formation history of cluster galaxies. Using a galactic SFR derived from the cosmic star formation density, we find the galcon simulation produces a lower stellar fraction, a larger gas core radius, a more isothermal temperature profile, and a flatter metallicity gradient than the standard simulation, in better agreement with observations.

  14. Figuring Out Gas and Galaxies in Enzo (FOGGIE): Simulating effects of feedback on galactic outflows

    NASA Astrophysics Data System (ADS)

    Morris, Melissa Elizabeth; Corlies, Lauren; Peeples, Molly; Tumlinson, Jason; O'Shea, Brian; Smith, Britton

    2018-01-01

    The circumgalactic medium (CGM) is the region beyond the galactic disk in which gas is accreted through pristine inflows from the intergalactic medium and expelled from the galaxy by stellar feedback in large outflows that can then be recycled back onto the disk. These gas cycles connect the galactic disk with its cosmic environment, making the CGM a vital component of galaxy evolution. However, the CGM is primarily observed in absorption, which can be difficult to interpret. In this study, we use high resolution cosmological hydrodynamic simulations of a Milky Way mass halo evolved with the code Enzo to aid the interpretation of these observations. In our simulations, we vary the feedback strength and observe the effect it has on galactic outflows and the evolution of the galaxy's CGM. We compare the star formation rate of the galaxy with the velocity flux and mass outflow rate as a function of height above the plane of the galaxy in order to measure the strength of the outflows and how far they extend outside of the galaxy. This work was supported by the Space Astronomy Summer Program at STScI and NSF grant AST-1517908.
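
    Mass outflow rates as a function of height are typically measured by summing rho * v_z over cells in thin slabs above the disk plane. A hedged sketch with synthetic arrays standing in for disk-aligned cell data (not the output of any specific Enzo reader):

      import numpy as np

      def mass_outflow_rate(rho, vz, z, dx, z_plane, half_width):
          # Mdot through a slab at height z_plane: sum(rho * vz * dx^2)
          # over cells moving away from the disk (vz > 0 above the plane).
          sel = (np.abs(z - z_plane) < half_width) & (vz > 0.0)
          return np.sum(rho[sel] * vz[sel]) * dx**2

      rng = np.random.default_rng(4)
      rho = rng.random(10_000)              # placeholder densities
      vz = rng.normal(0.0, 1.0, 10_000)     # placeholder vertical velocities
      z = rng.uniform(0.0, 10.0, 10_000)    # placeholder heights above the plane
      for z_plane in (2.0, 5.0, 8.0):
          print(z_plane, mass_outflow_rate(rho, vz, z, dx=0.1,
                                           z_plane=z_plane, half_width=0.05))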

  15. Resolving the Small-Scale Structure of the Circumgalactic Medium in Cosmological Simulations

    NASA Astrophysics Data System (ADS)

    Corlies, Lauren

    2017-08-01

    We propose to resolve the circumgalactic medium (CGM) of L* galaxies down to 100 M⊙ (250 pc) in a full cosmological simulation to examine how mixing and cooling shape the physical nature of this gas on the scales expected from observations. COS has provided the best characterization of the low-z CGM to date, revealing the extent and amount of low and high ions and hinting at the kinematic relations between them. Yet cosmological galaxy simulations that can reproduce the stellar properties of galaxies have all struggled to reproduce these results even qualitatively. However, while the COS data imply that the low-ion absorption is occurring on sub-kpc scales, such scales cannot be traced by simulations with resolutions of 1-5 kpc in the CGM. Our proposed simulations will, for the first time, reach the resolution required to resolve these structures in the outer halo of L* galaxies. Using the adaptive mesh refinement code Enzo, we will experiment with the size, shape, and resolution of an enforced high-refinement region extending from the disk into the CGM to identify the best configuration for probing the flows of gas throughout the CGM. Our test case has found that increasing the resolution alone can have dramatic consequences for the density, temperature, and kinematics along a line of sight. Coupling this technique with an independent feedback study already underway will help disentangle the roles of global and small-scale physics in setting the physical state of the CGM. Finally, we will use the MISTY pipeline to generate realistic mock spectra for direct comparison with COS data, which will be made available through MAST.

  16. Ab Initio Simulations of a Supernova-driven Galactic Dynamo in an Isolated Disk Galaxy

    DOE PAGES

    Butsky, Iryna; Zrake, Jonathan; Kim, Ji-hoon; ...

    2017-07-10

    Here, we study the magnetic field evolution of an isolated spiral galaxy, using isolated Milky Way-mass galaxy formation simulations and a novel prescription for magnetohydrodynamic (MHD) supernova feedback. Our main result is that a galactic dynamo can be seeded and driven by supernova explosions, resulting in magnetic fields whose strength and morphology are consistent with observations. In our model, supernovae supply thermal energy and a low-level magnetic field along with their ejecta. The thermal expansion drives turbulence, which serves a dual role by efficiently mixing the magnetic field into the interstellar medium and amplifying it by means of a turbulent dynamo. The computational prescription for MHD supernova feedback has been implemented within the publicly available ENZO code and is fully described in this paper. This improves upon ENZO's existing modules for hydrodynamic feedback from stars and active galaxies. We find that the field attains microgauss levels over gigayear timescales throughout the disk. The field also develops a large-scale structure, which appears to be correlated with the disk's spiral arm density structure. We find that seeding of the galactic dynamo by supernova ejecta predicts a persistent correlation between gas metallicity and magnetic field strength. We also generate all-sky maps of the Faraday rotation measure from the simulation-predicted magnetic field, and we present a direct comparison with observations.

  17. Ab Initio Simulations of a Supernova-driven Galactic Dynamo in an Isolated Disk Galaxy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butsky, Iryna; Zrake, Jonathan; Kim, Ji-hoon

    We study the magnetic field evolution of an isolated spiral galaxy, using isolated Milky Way-mass galaxy formation simulations and a novel prescription for magnetohydrodynamic (MHD) supernova feedback. Our main result is that a galactic dynamo can be seeded and driven by supernova explosions, resulting in magnetic fields whose strength and morphology are consistent with observations. In our model, supernovae supply thermal energy and a low-level magnetic field along with their ejecta. The thermal expansion drives turbulence, which serves a dual role by efficiently mixing the magnetic field into the interstellar medium and amplifying it by means of a turbulent dynamo. The computational prescription for MHD supernova feedback has been implemented within the publicly available ENZO code and is fully described in this paper. This improves upon ENZO's existing modules for hydrodynamic feedback from stars and active galaxies. We find that the field attains microgauss levels over gigayear timescales throughout the disk. The field also develops a large-scale structure, which appears to be correlated with the disk's spiral arm density structure. We find that seeding of the galactic dynamo by supernova ejecta predicts a persistent correlation between gas metallicity and magnetic field strength. We also generate all-sky maps of the Faraday rotation measure from the simulation-predicted magnetic field, and we present a direct comparison with observations.

  18. Massive and refined: A sample of large galaxy clusters simulated at high resolution. I: Thermal gas and properties of shock waves

    NASA Astrophysics Data System (ADS)

    Vazza, F.; Brunetti, G.; Gheller, C.; Brunino, R.

    2010-11-01

    We present a sample of 20 massive galaxy clusters with total virial masses in the range 6 × 10^14 M⊙ ≤ M_vir ≤ 2 × 10^15 M⊙, re-simulated with a customized version of the ENZO 1.5 code employing adaptive mesh refinement. This technique allowed us to obtain unprecedentedly high spatial resolution (≈25 kpc/h) out to a distance of ~3 virial radii from the cluster center, and makes it possible to focus with the same level of detail on the physical properties of the innermost and of the outermost cluster regions, providing new clues on the role of shock waves and turbulent motions in the ICM across a wide range of scales. In this paper, a first exploratory study of this data set is presented. We report on the thermal properties of galaxy clusters at z = 0. Integrated and morphological properties of the gas density, gas temperature, gas entropy and baryon fraction distributions are discussed, and compared with existing outcomes from both the observational and the numerical literature. Our cluster sample shows an overall good consistency with the results obtained adopting other numerical techniques (e.g. smoothed particle hydrodynamics), yet it provides a more accurate representation of the accretion patterns far outside the cluster cores. We also reconstruct the properties of shock waves within the sample by means of a velocity-based approach, and we study Mach numbers and energy distributions for the various dynamical states in clusters, giving estimates for the injection of cosmic-ray particles at shocks. The present sample is rather unique in the panorama of cosmological simulations of massive galaxy clusters, due to its dynamical range, statistics of objects and number of time outputs. For this reason, we provide a public repository of the available data, accessible via a web portal at http://data.cineca.it.
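
    The paper reconstructs shocks with a velocity-based method; a common complementary approach, sketched below, inverts the Rankine-Hugoniot temperature jump for the Mach number (ideal gas, gamma = 5/3):

      import numpy as np
      from scipy.optimize import brentq

      def temperature_jump(mach, gamma=5.0 / 3.0):
          # Rankine-Hugoniot temperature jump T2/T1 across a shock.
          m2 = mach**2
          return ((2 * gamma * m2 - (gamma - 1)) * ((gamma - 1) * m2 + 2)
                  / ((gamma + 1) ** 2 * m2))

      def mach_from_temperature_jump(t_ratio):
          # Invert the jump condition numerically for the Mach number.
          return brentq(lambda m: temperature_jump(m) - t_ratio, 1.0 + 1e-6, 100.0)

      print(mach_from_temperature_jump(2.5))   # ~2.3 for this temperature jump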

  19. The Scylla Multi-Code Comparison Project

    NASA Astrophysics Data System (ADS)

    Maller, Ariyeh; Stewart, Kyle; Bullock, James; Oñorbe, Jose; Scylla Team

    2016-01-01

    Cosmological hydrodynamical simulations are one of the main techniques used to understand galaxy formation and evolution. However, it is far from clear to what extent different numerical techniques and different implementations of feedback yield different results. The Scylla Multi-Code Comparison Project seeks to address this issue by running simulations with identical initial conditions using different popular hydrodynamic galaxy formation codes. Here we compare simulations of a Milky Way mass halo using the codes enzo, ramses, art, arepo, and gizmo-psph. The different runs produce galaxies with a variety of properties. There are many differences, but also many similarities. For example, we find that cold flow disks exist in all runs: extended gas structures, far beyond the galactic disk, that show signs of rotation. Also, the angular momentum of warm gas in the halo is much larger than the angular momentum of the dark matter. We also find notable differences between runs. The temperature and density distribution of hot gas can differ by over an order of magnitude between codes, and the stellar mass to halo mass relation also varies widely. These results suggest that observations of galaxy gas halos and the stellar mass to halo mass relation can be used to constrain the correct model of feedback.

  20. High Angular Momentum Halo Gas: A Feedback and Code-independent Prediction of LCDM

    NASA Astrophysics Data System (ADS)

    Stewart, Kyle R.; Maller, Ariyeh H.; Oñorbe, Jose; Bullock, James S.; Joung, M. Ryan; Devriendt, Julien; Ceverino, Daniel; Kereš, Dušan; Hopkins, Philip F.; Faucher-Giguère, Claude-André

    2017-07-01

    We investigate angular momentum acquisition in Milky Way-sized galaxies by comparing five high resolution zoom-in simulations, each implementing identical cosmological initial conditions but utilizing different hydrodynamic codes: Enzo, Art, Ramses, Arepo, and Gizmo-PSPH. Each code implements a distinct set of feedback and star formation prescriptions. We find that while many galaxy and halo properties vary between the different codes (and feedback prescriptions), there is qualitative agreement on the process of angular momentum acquisition in the galaxy's halo. In all simulations, cold filamentary gas accretion to the halo results in ~4 times more specific angular momentum in cold halo gas (λ_cold ≳ 0.1) than in the dark matter halo. At z > 1, this inflow takes the form of inspiraling cold streams that are co-directional in the halo of the galaxy and are fueled, aligned, and kinematically connected to filamentary gas infall along the cosmic web. Due to the qualitative agreement among disparate simulations, we conclude that the buildup of high angular momentum halo gas and the presence of these inspiraling cold streams are robust predictions of Lambda Cold Dark Matter galaxy formation, though the detailed morphology of these streams is significantly less certain. A growing body of observational evidence suggests that this process is borne out in the real universe.
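
    The λ_cold comparison above reduces to the mass-weighted specific angular momentum of each component. A minimal sketch with placeholder particle arrays, assumed to be in the halo's center-of-mass frame:

      import numpy as np

      def specific_angular_momentum(pos, vel, mass):
          # |sum m_i (r_i x v_i)| / sum m_i for one component (gas or DM).
          j_tot = np.sum(mass[:, None] * np.cross(pos, vel), axis=0)
          return np.linalg.norm(j_tot) / mass.sum()

      rng = np.random.default_rng(2)
      pos = rng.normal(size=(100, 3))   # placeholder positions
      vel = rng.normal(size=(100, 3))   # placeholder velocities
      mass = np.ones(100)               # placeholder masses
      print(specific_angular_momentum(pos, vel, mass))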

  21. The effect of binding energy and resolution in simulations of the common envelope binary interaction

    NASA Astrophysics Data System (ADS)

    Iaconi, Roberto; De Marco, Orsola; Passy, Jean-Claude; Staff, Jan

    2018-06-01

    The common envelope binary interaction remains one of the least understood phases in the evolution of compact binaries, including those that result in Type Ia supernovae and in mergers that emit detectable gravitational waves. In this work, we continue the detailed and systematic analysis of 3D hydrodynamic simulations of the common envelope interaction aimed at understanding the reliability of the results. Our first set of simulations replicates the five simulations of Passy et al. (a 0.88 M⊙, 90 R⊙ red giant branch (RGB) primary with companions in the range 0.1-0.9 M⊙) using a new adaptive mesh refinement gravity solver implemented in our modified version of the hydrodynamic code ENZO. Despite the smaller final separations obtained, these better-resolved simulations do not alter the nature of the conclusions that are drawn. We also carry out five identical simulations but with a 2.0 M⊙ RGB primary with the same core mass as in the Passy et al. simulations, isolating the effect of the envelope binding energy. With a more bound envelope, all the companions in-spiral faster and deeper, though relatively less gas is unbound. Even at the highest resolution, the final separation attained by simulations with a heavier primary is similar to the size of the smoothed potential, even if we account for the loss of some angular momentum by the simulation. As a result, we suggest that a ~2.0 M⊙ RGB primary may possibly end in a merger with companions as massive as 0.6 M⊙, something that would not be deduced using analytical arguments based on energy conservation.
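
    The "analytical arguments based on energy conservation" mentioned at the end are usually the alpha-formalism, in which a fraction alpha_CE of the released orbital energy unbinds the envelope. A sketch with placeholder masses and binding energy (not values from this paper):

      G = 6.674e-8                 # cgs
      MSUN, RSUN = 1.989e33, 6.957e10

      def final_separation(m_donor, m_core, m_comp, a_i, e_bind, alpha=1.0):
          # Alpha-formalism: E_bind = alpha * (E_orb,i - E_orb,f),
          # with orbital energy E_orb = -G m1 m2 / (2 a).
          e_orb_i = -G * m_donor * m_comp / (2 * a_i)
          e_orb_f = e_orb_i - e_bind / alpha   # tighter (more negative) orbit
          return -G * m_core * m_comp / (2 * e_orb_f)

      # Placeholder numbers loosely inspired by the 2.0 Msun RGB case above.
      a_f = final_separation(2.0 * MSUN, 0.4 * MSUN, 0.6 * MSUN,
                             a_i=100 * RSUN, e_bind=1e47)
      print(f"a_f ~ {a_f / RSUN:.1f} Rsun")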

  22. High Angular Momentum Halo Gas: A Feedback and Code-independent Prediction of LCDM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Kyle R.; Maller, Ariyeh H.; Oñorbe, Jose

    We investigate angular momentum acquisition in Milky Way-sized galaxies by comparing five high resolution zoom-in simulations, each implementing identical cosmological initial conditions but utilizing different hydrodynamic codes: Enzo, Art, Ramses, Arepo, and Gizmo-PSPH. Each code implements a distinct set of feedback and star formation prescriptions. We find that while many galaxy and halo properties vary between the different codes (and feedback prescriptions), there is qualitative agreement on the process of angular momentum acquisition in the galaxy's halo. In all simulations, cold filamentary gas accretion to the halo results in ∼4 times more specific angular momentum in cold halo gas (λ_cold ≳ 0.1) than in the dark matter halo. At z > 1, this inflow takes the form of inspiraling cold streams that are co-directional in the halo of the galaxy and are fueled, aligned, and kinematically connected to filamentary gas infall along the cosmic web. Due to the qualitative agreement among disparate simulations, we conclude that the buildup of high angular momentum halo gas and the presence of these inspiraling cold streams are robust predictions of Lambda Cold Dark Matter galaxy formation, though the detailed morphology of these streams is significantly less certain. A growing body of observational evidence suggests that this process is borne out in the real universe.

  23. Statistical Analysis of CFD Solutions from the Third AIAA Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Morrison, Joseph H.; Hemsch, Michael J.

    2007-01-01

    The first AIAA Drag Prediction Workshop (DPW), held in June 2001, evaluated the results from an extensive N-version test of a collection of Reynolds-Averaged Navier-Stokes CFD codes. The code-to-code scatter was more than an order of magnitude larger than desired for design and experimental validation of cruise conditions for a subsonic transport configuration. The second AIAA Drag Prediction Workshop, held in June 2003, emphasized the determination of installed pylon-nacelle drag increments and grid refinement studies. The code-to-code scatter was significantly reduced compared to the first DPW, but still larger than desired. However, grid refinement studies showed no significant improvement in code-to-code scatter with increasing grid refinement. The third Drag Prediction Workshop focused on the determination of installed side-of-body fairing drag increments and grid refinement studies for clean attached flow on wing-alone configurations and for separated flow on the DLR-F6 subsonic transport model. This work evaluated the effect of grid refinement on the code-to-code scatter for the clean attached flow test cases and the separated flow test cases.
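
    The code-to-code scatter at the heart of these workshops can be illustrated with a toy calculation; the drag values below are invented, and the workshops' formal statistics may differ from the simple sample standard deviation used here:

      import numpy as np

      # Hypothetical drag coefficients reported by six codes for one case.
      cd = np.array([0.0295, 0.0302, 0.0288, 0.0310, 0.0297, 0.0305])

      median = np.median(cd)
      scatter = np.std(cd, ddof=1)   # 1-sigma code-to-code scatter
      print(f"median C_D = {median:.4f}, scatter = {scatter / 1e-4:.1f} counts")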

  24. Developing and Modifying Behavioral Coding Schemes in Pediatric Psychology: A Practical Guide

    PubMed Central

    McMurtry, C. Meghan; Chambers, Christine T.; Bakeman, Roger

    2015-01-01

    Objectives: To provide a concise and practical guide to the development, modification, and use of behavioral coding schemes for observational data in pediatric psychology. Methods: This article provides a review of relevant literature and experience in developing and refining behavioral coding schemes. Results: A step-by-step guide to developing and/or modifying behavioral coding schemes is provided. Major steps include refining a research question, developing or refining the coding manual, piloting and refining the coding manual, and implementing the coding scheme. Major tasks within each step are discussed, and pediatric psychology examples are provided throughout. Conclusions: Behavioral coding can be a complex and time-intensive process, but the approach is invaluable in allowing researchers to address clinically relevant research questions in ways that would not otherwise be possible. PMID:25416837

  25. Developing and modifying behavioral coding schemes in pediatric psychology: a practical guide.

    PubMed

    Chorney, Jill MacLaren; McMurtry, C Meghan; Chambers, Christine T; Bakeman, Roger

    2015-01-01

    To provide a concise and practical guide to the development, modification, and use of behavioral coding schemes for observational data in pediatric psychology. This article provides a review of relevant literature and experience in developing and refining behavioral coding schemes. A step-by-step guide to developing and/or modifying behavioral coding schemes is provided. Major steps include refining a research question, developing or refining the coding manual, piloting and refining the coding manual, and implementing the coding scheme. Major tasks within each step are discussed, and pediatric psychology examples are provided throughout. Behavioral coding can be a complex and time-intensive process, but the approach is invaluable in allowing researchers to address clinically relevant research questions in ways that would not otherwise be possible.

  26. Fully coupled simulation of cosmic reionization. I. numerical methods and tests

    DOE PAGES

    Norman, Michael L.; Reynolds, Daniel R.; So, Geoffrey C.; ...

    2015-01-09

    Here, we describe an extension of the Enzo code to enable fully coupled radiation hydrodynamical simulation of inhomogeneous reionization in large, ~(100 Mpc)^3 cosmological volumes with thousands to millions of point sources. We solve all dynamical, radiative transfer, thermal, and ionization processes self-consistently on the same mesh, as opposed to a postprocessing approach which coarse-grains the radiative transfer, but we employ a simple subgrid model for star formation which we calibrate to observations. The numerical method presented is a modification of an earlier method presented in Reynolds et al., differing principally in the operator splitting algorithm we use to advance the system of equations. Radiation transport is done in the gray flux-limited diffusion (FLD) approximation, which is solved by implicit time integration split off from the gas energy and ionization equations, which are solved separately. This results in a faster and more robust scheme for cosmological applications compared to the earlier method. The FLD equation is solved using the hypre optimally scalable geometric multigrid solver from LLNL. By treating the ionizing radiation as a grid field as opposed to rays, our method is scalable with respect to the number of ionizing sources, limited only by the parallel scaling properties of the radiation solver. We test the speed and accuracy of our approach on a number of standard verification and validation tests. We show by direct comparison with Enzo's adaptive ray tracing method Moray that the well-known inability of FLD to cast a shadow behind opaque clouds has a minor effect on the evolution of the ionized volume and mass fractions in a reionization simulation validation test. Finally, we illustrate an application of our method to the problem of inhomogeneous reionization in an 80 Mpc comoving box resolved with 3200^3 Eulerian grid cells and dark matter particles.
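
    As a sketch of the operator-split structure described here (hydro and chemistry advanced separately from an implicit radiation step), the toy below takes one backward-Euler diffusion step, with a dense linear solve standing in for the hypre multigrid solver; it is illustrative, not Enzo's implementation:

      import numpy as np

      def implicit_diffusion_step(E, D, dx, dt):
          # Backward Euler for dE/dt = d/dx(D dE/dx) on a 1D grid,
          # solved as a linear system (zero-flux boundaries).
          n = E.size
          r = D * dt / dx**2
          A = (np.diag(1 + 2 * r * np.ones(n))
               + np.diag(-r * np.ones(n - 1), 1)
               + np.diag(-r * np.ones(n - 1), -1))
          A[0, 0] = A[-1, -1] = 1 + r
          return np.linalg.solve(A, E)

      E = np.zeros(64)
      E[32] = 1.0                          # a point source of radiation energy
      for _ in range(10):                  # split cycle: (hydro) -> FLD -> (chemistry)
          E = implicit_diffusion_step(E, D=1.0, dx=1.0, dt=0.5)
      print(E.max())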

  27. Statistical Analysis of the AIAA Drag Prediction Workshop CFD Solutions

    NASA Technical Reports Server (NTRS)

    Morrison, Joseph H.; Hemsch, Michael J.

    2007-01-01

    The first AIAA Drag Prediction Workshop (DPW), held in June 2001, evaluated the results from an extensive N-version test of a collection of Reynolds-Averaged Navier-Stokes CFD codes. The code-to-code scatter was more than an order of magnitude larger than desired for design and experimental validation of cruise conditions for a subsonic transport configuration. The second AIAA Drag Prediction Workshop, held in June 2003, emphasized the determination of installed pylon-nacelle drag increments and grid refinement studies. The code-to-code scatter was significantly reduced compared to the first DPW, but still larger than desired. However, grid refinement studies showed no significant improvement in code-to-code scatter with increasing grid refinement. The third AIAA Drag Prediction Workshop, held in June 2006, focused on the determination of installed side-of-body fairing drag increments and grid refinement studies for clean attached flow on wing alone configurations and for separated flow on the DLR-F6 subsonic transport model. This report compares the transonic cruise prediction results of the second and third workshops using statistical analysis.

  28. 78 FR 21500 - Proposed Collection; Comment Request for Notice 2009-90

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-10

    ... 2009- 90, Production Tax Credit for Refined Coal. DATES: Written comments should be received on or... INFORMATION: Title: Production Tax Credit for Refined Coal. OMB Number: 1545-2158. Notice Number: Notice 2009... the tax credit under Sec. 45 of the Internal Revenue Code (Code) for refined coal. Current Actions...

  29. Testing hydrodynamics schemes in galaxy disc simulations

    NASA Astrophysics Data System (ADS)

    Few, C. G.; Dobbs, C.; Pettitt, A.; Konstandin, L.

    2016-08-01

    We examine how three fundamentally different numerical hydrodynamics codes follow the evolution of an isothermal galactic disc with an external spiral potential. We compare an adaptive mesh refinement code (RAMSES), a smoothed particle hydrodynamics code (SPHNG), and a volume-discretized mesh-less code (GIZMO). Using standard refinement criteria, we find that RAMSES produces a disc that is less vertically concentrated and does not reach such high densities as the SPHNG or GIZMO runs. The gas surface density in the spiral arms increases at a lower rate for the RAMSES simulations compared to the other codes. There is also a greater degree of substructure in the SPHNG and GIZMO runs, and secondary spiral arms are more pronounced. By resolving the Jeans length with a greater number of grid cells, we achieve results more similar to those of the Lagrangian codes used in this study. Other alterations to the refinement scheme (adding extra levels of refinement and refining based on local density gradients) are less successful in reducing the disparity between RAMSES and SPHNG/GIZMO. Although more similar, SPHNG displays different density distributions and vertical mass profiles to all modes of GIZMO (including the smoothed particle hydrodynamics version). This suggests that differences also arise which are not intrinsic to the particular method but rather due to its implementation. The discrepancies between codes (in particular, the densities reached in the spiral arms) could potentially result in differences in the locations and time-scales for gravitational collapse, and therefore impact star formation activity in more complex galaxy disc simulations.

  30. A User's Guide to AMR1D: An Instructional Adaptive Mesh Refinement Code for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    deFainchtein, Rosalinda

    1996-01-01

    This report documents the code AMR1D, which is currently posted on the World Wide Web (http://sdcd.gsfc.nasa.gov/ESS/exchange/contrib/de-fainchtein/adaptive_mesh_refinement.html). AMR1D is a one-dimensional finite element fluid-dynamics solver, capable of adaptive mesh refinement (AMR). It was written as an instructional tool for AMR on unstructured mesh codes. It is meant to illustrate the minimum requirements for AMR in more than one dimension. For that purpose, it uses the same type of data structure that would be necessary in a two-dimensional AMR code (loosely following the algorithm described by Lohner).
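
    The flag-and-refine cycle at the core of such an instructional AMR code can be sketched in a few lines. This toy refines a 1D grid where the solution gradient is steep; AMR1D's actual finite element data structures (element connectivity, parent/child links) are omitted:

      import numpy as np

      def flag_for_refinement(x, u, threshold):
          # Flag nodes whose local solution gradient exceeds a threshold.
          return np.abs(np.gradient(u, x)) > threshold

      def refine(x, flags):
          # Insert midpoints into intervals touching a flagged node
          # (one level of refinement).
          mids = 0.5 * (x[:-1] + x[1:])
          keep = flags[:-1] | flags[1:]
          return np.sort(np.concatenate([x, mids[keep]]))

      x = np.linspace(0.0, 1.0, 33)
      u = np.tanh((x - 0.5) / 0.02)        # a steep layer needing resolution
      x_new = refine(x, flag_for_refinement(x, u, threshold=5.0))
      print(len(x), "->", len(x_new), "nodes")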

  31. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically Cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically Cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later, if they so desire, adding adaptivity.

  32. Two Regimes of Turbulent Fragmentation and the Stellar Initial Mass Function from Primordial to Present-Day Star Formation

    NASA Astrophysics Data System (ADS)

    Padoan, Paolo; Nordlund, Åke; Kritsuk, Alexei G.; Norman, Michael L.; Li, Pak Shing

    2007-06-01

    The Padoan and Nordlund model of the stellar initial mass function (IMF) is derived from low-order statistics of supersonic turbulence, neglecting gravity (e.g., gravitational fragmentation, accretion, and merging). In this work, the predictions of that model are tested using the largest numerical experiments of supersonic hydrodynamic (HD) and magnetohydrodynamic (MHD) turbulence to date (~1000^3 computational zones) and three different codes (Enzo, Zeus, and the Stagger code). The model predicts a power-law distribution for large masses, related to the turbulence-energy power-spectrum slope and the shock-jump conditions. This power-law mass distribution is confirmed by the numerical experiments. The model also predicts a sharp difference between the HD and MHD regimes, which is recovered in the experiments as well, implying that the magnetic field, even below energy equipartition on the large scale, is a crucial component of the process of turbulent fragmentation. These results suggest that the stellar IMF of primordial stars may differ from that in later epochs of star formation, due to differences in both gas temperature and magnetic field strength. In particular, we find that the IMF of primordial stars born in turbulent clouds may be narrowly peaked around a mass of order 10 M⊙, as long as the column density of such clouds is not much in excess of 10^22 cm^-2.

  33. Publicly Releasing a Large Simulation Dataset with NDS Labs

    NASA Astrophysics Data System (ADS)

    Goldbaum, Nathan

    2016-03-01

    Optimally, all publicly funded research should be accompanied by the tools, code, and data necessary to fully reproduce the analysis performed in journal articles describing the research. This ideal can be difficult to attain, particularly when dealing with large (>10 TB) simulation datasets. In this lightning talk, we describe the process of publicly releasing a large simulation dataset to accompany the submission of a journal article. The simulation was performed using Enzo, an open source, community-developed N-body/hydrodynamics code, and was analyzed using a wide range of community-developed tools in the scientific Python ecosystem. Although the simulation was performed and analyzed using an ecosystem of sustainably developed tools, we enable sustainable science using our data by making it publicly available. Combining the data release with the NDS Labs infrastructure allows a substantial amount of added value, including web-based access to analysis and visualization using the yt analysis package through an IPython notebook interface. In addition, we are able to accompany the paper submission to the arXiv preprint server with links to the raw simulation data as well as interactive real-time data visualizations that readers can explore on their own or share with colleagues during journal club discussions. It is our hope that the value added by these services will substantially increase the impact and readership of the paper.

  34. An object-oriented approach for parallel self-adaptive mesh refinement on block structured grids

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Witsch, Kristian; Quinlan, Daniel

    1993-01-01

    Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library that permits efficient development of architecture-independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmer's work is reduced primarily to specifying the serial single-grid application; the parallel and self-adaptive mesh refinement code is then obtained with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers, which require adaptive mesh refinement and fast basic solvers to be resolved efficiently.

  35. Fixed mesh refinement in the characteristic formulation of general relativity

    NASA Astrophysics Data System (ADS)

    Barreto, W.; de Oliveira, H. P.; Rodriguez-Mueller, B.

    2017-08-01

    We implement a spatially fixed mesh refinement under spherical symmetry for the characteristic formulation of General Relativity. The Courant-Friedrichs-Lewy condition lets us deploy an adaptive resolution in (retarded-like) time, even for the nonlinear regime. As test cases, we replicate the main features of the gravitational critical behavior and the spacetime structure at null infinity using the Bondi mass and the News function. Additionally, we obtain global energy conservation for an extreme situation, i.e. at the threshold of black hole formation. In principle, the calibrated code can be used in conjunction with an ADM 3+1 code to confirm the critical behavior recently reported in the gravitational collapse of a massless scalar field in an asymptotically anti-de Sitter spacetime. For the scenarios studied, the fixed mesh refinement offers improved runtime and results comparable to a code without mesh refinement.
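
    The adaptive time resolution follows directly from the Courant-Friedrichs-Lewy condition: the finest spatial cell sets the largest stable explicit step. A minimal sketch:

      def cfl_timestep(dx_min, max_speed, courant=0.5):
          # Largest stable explicit step allowed by the CFL condition;
          # with fixed mesh refinement, dx_min comes from the finest level.
          return courant * dx_min / max_speed

      # Each extra 2:1 refinement level halves dx_min and hence the step.
      for level in range(4):
          print(level, cfl_timestep(dx_min=0.1 / 2**level, max_speed=1.0))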

  36. Effect of a Hypocretin/Orexin Antagonist on Neurocognitive Performance

    DTIC Science & Technology

    2013-12-18

    9.28MARCH2013 AE Adverse Event AASM American Academy of Sleep Medicine BzRAs Benzodiazepine Receptor Agonists CRC Clinical Research Center CCRC University...medical conditions; 12.) Current use of statins, ketoconazole, prescription or over-the-counter medications or herbal supplements containing...medications or herbal supplements containing psychoactive properties or stimulants in the judgment of the Investigator-Sponsor or Medical Monitor; 13

  37. Adaptive Grid Refinement for Atmospheric Boundary Layer Simulations

    NASA Astrophysics Data System (ADS)

    van Hooft, Antoon; van Heerwaarden, Chiel; Popinet, Stephane; van der linden, Steven; de Roode, Stephan; van de Wiel, Bas

    2017-04-01

    We validate and benchmark an adaptive mesh refinement (AMR) algorithm for numerical simulations of the atmospheric boundary layer (ABL). The AMR technique aims to distribute the computational resources efficiently over a domain by refining and coarsening the numerical grid locally and in time. This can be beneficial for studying cases in which length scales vary significantly in time and space. We present the results for a case describing the growth and decay of a convective boundary layer. The AMR results are benchmarked against two runs using a fixed, finely meshed grid: first, with the same numerical formulation as the AMR code, and second, with a code dedicated to ABL studies. Compared to the fixed and isotropic grid runs, the AMR algorithm can coarsen and refine the grid such that accurate results are obtained whilst using only a fraction of the grid cells. Performance-wise, the AMR run was cheaper than the fixed and isotropic grid run with a similar numerical formulation. However, for this specific case, the dedicated code outperformed both aforementioned runs.

  19. Statistical Analysis of CFD Solutions from 2nd Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Hemsch, M. J.; Morrison, J. H.

    2004-01-01

    In June 2001, the first AIAA Drag Prediction Workshop was held to evaluate results obtained from extensive N-version testing of a series of RANS CFD codes. The geometry used for the computations was the DLR-F4 wing-body combination, which resembles a medium-range subsonic transport. The cases reported include the design cruise point, drag polars at eight Mach numbers, and drag rise at three values of lift. Although comparisons of the code-to-code medians with available experimental data were similar to those obtained in previous studies, the code-to-code scatter was more than an order of magnitude larger than expected and far larger than desired for design and for experimental validation. The second Drag Prediction Workshop was held in June 2003 with emphasis on the determination of installed pylon-nacelle drag increments and on grid refinement studies. The geometry used was the DLR-F6 wing-body-pylon-nacelle combination, for which the design cruise point and the cases run were similar to the first workshop except for additional runs on coarse and fine grids to complement the runs on medium grids. The code-to-code scatter was significantly reduced for the wing-body configuration compared to the first workshop, although still much larger than desired. However, the grid refinement studies showed no significant improvement in code-to-code scatter with increasing grid refinement.
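
    The core statistic of such N-version testing is simple to state; a minimal sketch (with invented drag values, quoted in counts where 1 count = 0.0001 in drag coefficient) computes the code-to-code median and scatter:

      import statistics

      drag_counts = {"codeA": 292.1, "codeB": 286.4, "codeC": 301.8,
                     "codeD": 289.9, "codeE": 297.3}   # illustrative only
      values = list(drag_counts.values())
      median = statistics.median(values)
      scatter = max(values) - min(values)
      print(f"median = {median:.1f} counts, range = {scatter:.1f} counts")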

  20. Joint and Combined Military Force: A Possible Solution to African Economic Problems.

    DTIC Science & Technology

    1992-06-01

    New York Monthly Review Press, N.Y.: 1966). 4. Enzo Faletto and Fernando Henrique Cardoso, Dependencia (Mexico: Siglo XXI, 1969). 5. David Apter and...the most volatile and explosive political destabilizers in Africa today is religion. However, religious differences serving as catalysts to political...crises is not peculiar to Africa. From the beginning of civilization, religion has caused major political crises within and between nations. Nearly

  1. Common Envelope Light Curves. I. Grid-code Module Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galaviz, Pablo; Marco, Orsola De; Staff, Jan E.

    The common envelope (CE) binary interaction occurs when a star transfers mass onto a companion that cannot fully accrete it. The interaction can lead to a merger of the two objects or to a close binary. The CE interaction is the gateway of all evolved compact binaries, all stellar mergers, and likely many of the stellar transients witnessed to date. CE simulations are needed to understand this interaction and to interpret stars and binaries thought to be the byproduct of this stage. At this time, simulations are unable to reproduce the few observational data available, and several ideas have been put forward to address their shortcomings. The need for more definitive simulation validation is pressing and is already being fulfilled by observations from time-domain surveys. In this article, we present an initial method and its implementation for post-processing grid-based CE simulations to produce the light curve so as to compare simulations with upcoming observations. Here we implemented a zeroth-order method to calculate the light emitted from CE hydrodynamic simulations carried out with the 3D hydrodynamic code Enzo used in unigrid mode. The code implements an approach for the computation of luminosity in both optically thick and optically thin regimes and is tested using the first 135 days of the CE simulation of Passy et al., where a 0.8 M⊙ red giant branch star interacts with a 0.6 M⊙ companion. This code is used to highlight two large obstacles that need to be overcome before realistic light curves can be calculated. We explain the nature of these problems and the attempted solutions and approximations in full detail to enable the next step to be identified and implemented. We also discuss our simulation in relation to recent data of transients identified as CE interactions.
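
    A zeroth-order treatment of the two regimes can be illustrated as follows (Python; the switch, formulas, and numbers are invented for illustration and are not the paper's implementation):

      SIGMA_SB = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

      def cell_luminosity(tau, T, area, volume, emissivity):
          """Per-cell luminosity, switching on optical depth tau."""
          if tau > 1.0:           # optically thick: radiates like a surface
              return SIGMA_SB * T**4 * area
          return emissivity * volume   # optically thin: volume emission

      print(cell_luminosity(tau=5.0, T=4000.0, area=1e16,
                            volume=1e24, emissivity=1e-9))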

  2. Additional Improvements to the NASA Lewis Ice Accretion Code LEWICE

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Bidwell, Colin S.

    1995-01-01

    In response to feedback from the user community, three major features have been added to the NASA Lewis ice accretion code LEWICE. These features include: first, further improvements to the numerics of the code so that more time steps can be run and the code is more stable; second, inclusion and refinement of the roughness prediction model described in an earlier paper; and third, inclusion of multi-element trajectory and ice accretion capabilities in LEWICE. This paper describes each of these advancements in full and makes comparisons with the available experimental data. Further refinement of these features and inclusion of additional features will be performed as more feedback is received.

  3. Leveraging the NLM map from SNOMED CT to ICD-10-CM to facilitate adoption of ICD-10-CM.

    PubMed

    Cartagena, F Phil; Schaeffer, Molly; Rifai, Dorothy; Doroshenko, Victoria; Goldberg, Howard S

    2015-05-01

    Develop and test web services to retrieve and identify the most precise ICD-10-CM code(s) for a given clinical encounter. Facilitate creation of user interfaces that 1) provide an initial shortlist of candidate codes, ideally visible on a single screen; and 2) enable code refinement. To satisfy our high-level use cases, the analysis and design process involved reviewing available maps and crosswalks, designing the rule adjudication framework, determining necessary metadata, retrieving related codes, and iteratively improving the code refinement algorithm. The Partners ICD-10-CM Search and Mapping Services (PI-10 Services) are SOAP web services written using Microsoft's .NET 4.0 Framework, Windows Communications Framework, and SQL Server 2012. The services cover 96% of the Partners problem list subset of SNOMED CT codes that map to ICD-10-CM codes and can return up to 76% of the 69,823 billable ICD-10-CM codes prior to creation of custom mapping rules. We consider ways to increase 1) the coverage ratio of the Partners problem list subset of SNOMED CT codes and 2) the upper bound of returnable ICD-10-CM codes by creating custom mapping rules. Future work will investigate the utility of the transitive closure of SNOMED CT codes and other methods to assist in custom rule creation and, ultimately, to provide more complete coverage of ICD-10-CM codes. ICD-10-CM will be easier for clinicians to manage if applications display short lists of candidate codes from which clinicians can subsequently select a code for further refinement. The PI-10 Services support ICD-10 migration by implementing this paradigm and enabling users to consistently and accurately find the best ICD-10-CM code(s) without translation from ICD-9-CM.
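
    The shortlist-then-refine paradigm is easy to sketch (Python; the mapping table and refinement rule below are hypothetical, not the PI-10 rules):

      SNOMED_TO_ICD10 = {
          "44054006": ["E11.9", "E11.65", "E11.8"],   # type 2 diabetes
      }
      DESCRIPTIONS = {"E11.9": "without complications",
                      "E11.65": "with hyperglycemia",
                      "E11.8": "with unspecified complications"}

      def candidate_codes(snomed_id):
          """Step 1: an initial shortlist, ideally one screen's worth."""
          return SNOMED_TO_ICD10.get(snomed_id, [])

      def refine(candidates, keyword):
          """Step 2: narrow the shortlist with encounter-specific detail."""
          return [c for c in candidates if keyword in DESCRIPTIONS.get(c, "")]

      print(refine(candidate_codes("44054006"), "hyperglycemia"))  # ['E11.65']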

  4. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
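
    The block-tree idea can be sketched with a toy 2D quad-tree in Python (invented class names; PARAMESH itself is Fortran 90 and distributes these blocks across processors):

      class Block:
          """A node of the tree; each block carries its own small mesh."""
          def __init__(self, x0, y0, size, level, nx=8):
              self.bounds = (x0, y0, size)
              self.level = level
              self.nx = nx              # logically Cartesian nx-by-nx mesh
              self.children = []        # quad-tree in 2D, oct-tree in 3D

          def refine(self):
              x0, y0, s = self.bounds
              h = s / 2.0
              self.children = [Block(x0 + i*h, y0 + j*h, h, self.level + 1,
                                     self.nx)
                               for i in (0, 1) for j in (0, 1)]

          def leaves(self):
              if not self.children:
                  return [self]
              return [leaf for c in self.children for leaf in c.leaves()]

      root = Block(0.0, 0.0, 1.0, level=0)
      root.refine()
      root.children[0].refine()          # refine one quadrant further
      print(len(root.leaves()), "leaf blocks")   # prints 7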

  5. Incremental triangulation by way of edge swapping and local optimization

    NASA Technical Reports Server (NTRS)

    Wiltberger, N. Lyn

    1994-01-01

    This document is intended to serve as an installation, usage, and basic theory guide for the two-dimensional triangulation software 'HARLEY' written for the Silicon Graphics IRIS workstation. This code consists of an incremental triangulation algorithm based on point insertion and local edge swapping. Using this basic strategy, several types of triangulations can be produced depending on user-selected options. For example, local edge swapping criteria can be chosen which minimize the maximum interior angle (a MinMax triangulation) or which maximize the minimum interior angle (a MaxMin or Delaunay triangulation). It should be noted that the MinMax triangulation is generally only locally optimal (not globally optimal) in this measure. The MaxMin triangulation, however, is both locally and globally optimal. In addition, Steiner triangulations can be constructed by inserting new sites at triangle circumcenters followed by edge swapping based on the MaxMin criteria. Incremental insertion of sites also provides flexibility in choosing cell refinement criteria. A dynamic heap structure has been implemented in the code so that once a refinement measure is specified (i.e., maximum aspect ratio or some measure of a solution gradient for solution-adaptive grid generation) the cell with the largest value of this measure is continually removed from the top of the heap and refined. The heap refinement strategy allows the user to specify either the number of cells desired or to refine the mesh until all cell refinement measures satisfy a user-specified tolerance level. Since the dynamic heap structure is constantly updated, the algorithm always refines the particular cell in the mesh with the largest refinement criteria value. The code allows the user to: triangulate a cloud of prespecified points (sites), triangulate a set of prespecified interior points constrained by prespecified boundary curve(s), Steiner triangulate the interior/exterior of prespecified boundary curve(s), refine existing triangulations based on solution error measures, and partition meshes based on the Cuthill-McKee, spectral, and coordinate bisection strategies.
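
    The heap-driven refinement loop described above can be sketched generically (Python; cells are reduced to a single size measure here, whereas the real code splits triangles):

      import heapq

      def refine_worst(cells, measure, tol):
          """Pop the cell with the largest refinement measure and split it
          until every cell satisfies the tolerance (min-heap, negated)."""
          heap = [(-measure(c), c) for c in cells]
          heapq.heapify(heap)
          while heap and -heap[0][0] > tol:
              _, worst = heapq.heappop(heap)
              for child in (worst / 2.0, worst / 2.0):  # stand-in for a split
                  heapq.heappush(heap, (-measure(child), child))
          return sorted(c for _, c in heap)

      print(refine_worst([4.0, 1.0, 0.5], measure=lambda c: c, tol=1.1))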

  6. Subband Coding Methods for Seismic Data Compression

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.

  7. Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Leng, W.; Zhong, S.

    2008-12-01

    In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. Newer techniques, namely adaptive mesh refinement (AMR), allow local mesh refinement wherever high resolution is needed, while leaving other regions at relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement AMR techniques in 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method in the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results of van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to precisely trace their evolution. Our AMR code is thus well suited to thermal-chemical convection problems that require high resolution to resolve the evolution of chemical boundaries, such as entrainment problems [Sleep, 1988].

  8. GENASIS: General Astrophysical Simulation System. I. Refinable Mesh and Nonrelativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.; Endeve, Eirik; Mezzacappa, Anthony

    2014-02-01

    GenASiS (General Astrophysical Simulation System) is a new code being developed initially and primarily, though by no means exclusively, for the simulation of core-collapse supernovae on the world's leading capability supercomputers. This paper—the first in a series—demonstrates a centrally refined coordinate patch suitable for gravitational collapse and documents methods for compressible nonrelativistic hydrodynamics. We benchmark the hydrodynamics capabilities of GenASiS against many standard test problems; the results illustrate the basic competence of our implementation, demonstrate the strengths and limitations of the HLLC relative to the HLL Riemann solver in a number of interesting cases, and provide preliminary indications of the code's ability to scale and to function with cell-by-cell fixed-mesh refinement.

  9. Early Detection of Breast Cancer Using Posttranslationally Modified Biomarkers

    DTIC Science & Technology

    2013-09-01

    goat anti-mouse IgG in PBS (Jackson ImmunoResearch Laboratory Inc., West Grove, PA) was added to each well. The slides were incubated for another 40...a concentration of 10 μg/mL; this was followed by incubation for 1 h at 37 °C. The secondary antibody, FITC-conjugated goat anti-mouse, was incubated...mIgG) was used as the negative control, and glycolipid-specific antibody anti-Globo-H (MBr1)14−16 (Enzo Life Sciences, Inc., Farmingdale, NY) was used

  10. REFINE WETLAND REGULATORY PROGRAM

    EPA Science Inventory

    The Tribes will work toward refining a regulatory program by taking a draft wetland conservation code, with permitting incorporated, to TEB for review. Progress will then proceed in developing a permit tracking system that will track both Tribal and fee land sites within reservati...

  11. Simulations of extragalactic magnetic fields and of their observables

    NASA Astrophysics Data System (ADS)

    Vazza, F.; Brüggen, M.; Gheller, C.; Hackstein, S.; Wittor, D.; Hinz, P. M.

    2017-12-01

    The origin of extragalactic magnetic fields is still poorly understood. Based on a dedicated suite of cosmological magneto-hydrodynamical simulations with the ENZO code, we have performed a survey of different models that may have caused present-day magnetic fields in galaxies and galaxy clusters. The outcomes of these models differ in cluster outskirts, filaments, sheets and voids, and we use these simulations to find observational signatures of magnetogenesis. With these simulations, we predict the signal of extragalactic magnetic fields in radio observations of synchrotron emission from the cosmic web, in Faraday rotation, in the propagation of ultra-high-energy cosmic rays, in the polarized signal from fast radio bursts at cosmological distance, and in the spectra of distant blazars. In general, primordial scenarios, in which present-day magnetic fields originate from the amplification of weak (≤ nG) uniform seed fields, yield more homogeneous and more easily observable magnetic fields than astrophysical scenarios, in which present-day fields are the product of feedback processes triggered by stars and active galaxies. In the near future the best evidence for the origin of cosmic magnetic fields will most likely come from a combination of synchrotron emission and Faraday rotation observed at the periphery of large-scale structures.

  12. Gary Refining Company emerges from Chapter 11 bankruptcy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1986-09-01

    On July 24, 1986, Gary Refining Company, Inc. announced that the Reorganization Plan for Gary Refining Company, Inc., Gary Refining Company, and Mesa Refining, Inc. had been approved by the United States Bankruptcy Court (District of Colorado). The companies filed for protection from creditors on March 4, 1985 under Chapter 11 of the United States Bankruptcy Code. Payments to creditors are expected to begin upon start-up of the Gary Refining Company (GRC) refinery in Fruita, Colorado after delivery of shale oil from Union Oil's Parachute Creek plant. In the interim, GRC will continue to explore options for possible startup (on a full-scale or partial basis) prior to that time.

  13. Genomics and the Public Health Code of Ethics

    PubMed Central

    Thomas, James C.; Irwin, Debra E.; Zuiker, Erin Shaugnessy; Millikan, Robert C.

    2005-01-01

    We consider the public health applications of genomic technologies as viewed through the lens of the public health code of ethics. We note, for example, the potential for genomics to increase our appreciation for the public health value of interdependence, the potential for some genomic tools to exacerbate health disparities because of their inaccessibility by the poor and the way in which genomics forces public health to refine its notions of prevention. The public health code of ethics sheds light on concerns raised by commercial genomic products that are not discussed in detail by more clinically oriented perspectives. In addition, the concerns raised by genomics highlight areas of our understanding of the ethical principles of public health in which further refinement may be necessary. PMID:16257942

  14. Simulations of ultra-high energy cosmic rays in the local Universe and the origin of cosmic magnetic fields

    NASA Astrophysics Data System (ADS)

    Hackstein, S.; Vazza, F.; Brüggen, M.; Sorce, J. G.; Gottlöber, S.

    2018-04-01

    We simulate the propagation of cosmic rays at ultra-high energies, ≳10¹⁸ eV, in models of extragalactic magnetic fields in constrained simulations of the local Universe. We use constrained initial conditions with the cosmological magnetohydrodynamics code ENZO. The resulting models of the distribution of magnetic fields in the local Universe are used in the CRPROPA code to simulate the propagation of ultra-high-energy cosmic rays. We investigate the impact of six different magnetogenesis scenarios, both primordial and astrophysical, on the propagation of cosmic rays over cosmological distances. Moreover, we study the influence of different source distributions around the Milky Way. Our study shows that different scenarios of magnetogenesis do not have a large impact on the anisotropy measurements of ultra-high-energy cosmic rays. However, at high energies above the Greisen-Zatsepin-Kuzmin (GZK) limit, there is anisotropy caused by the distribution of nearby sources, independent of the magnetic field model. This provides a chance to identify cosmic ray sources with future full-sky measurements and high number statistics at the highest energies. Finally, we compare our results to the dipole signal measured by the Pierre Auger Observatory. All our source models and magnetic field models can reproduce the observed dipole amplitude with a pure iron injection composition. Our results indicate that the dipole is observed due to clustering of secondary nuclei in the direction of nearby sources of heavy nuclei. A light injection composition is disfavoured, since the increase in dipole angular power from 4 to 8 EeV is too slow compared to observations by the Pierre Auger Observatory.

  15. Coding and Comprehension in Skilled Reading and Implications for Reading Instruction.

    ERIC Educational Resources Information Center

    Perfetti, Charles A.; Lesgold, Alan M.

    A view of skilled reading is suggested that emphasizes an intimate connection between coding and comprehension. It is suggested that skilled comprehension depends on a highly refined facility for generating and manipulating language codes, especially at the phonetic/articulatory level. The argument is developed that decoding expertise should be a…

  16. A refined protocol for coding nursing home residents' comments during satisfaction interviews.

    PubMed

    Levy-Storms, Lené; Simmons, Sandra F; Gutierrez, Veronica F; Miller-Martinez, Dana; Hickey, Kelly; Schnelle, John F

    2005-11-01

    This study's objective was to refine a method for coding nursing home (NH) residents' comments about their perceptions of care into unmet needs specific to the manner and frequency of care delivery. NH residents (N=69) were interviewed with both closed-ended (i.e., forced-choice) and open-ended (i.e., residents' own words) questions about their perceptions of care across eight care domains. Unmet needs included comments indicating that residents desired a change in staff- and non-staff-related care. Staff-related unmet needs were further coded into unmet emotional support (i.e., emotional support or manner of care delivery) and instrumental (i.e., instrumental support or frequency of care) needs. Of 66 residents who commented, 66% expressed at least one unmet need across eight care domains. Among these 44 residents, 52% and 84% had unmet emotional support and instrumental support needs, respectively (kappa = .68 and .92). An additional 18% expressed both unmet emotional support and instrumental support needs. The refined method offers a systematic way to code residents' comments about their care into unmet needs related to the manner and frequency of care delivery. The findings have direct implications for the identification of care areas in need of improvement from the resident's perspective and for the evaluation of improvement efforts.

  17. Historical Roots and Future Perspectives Related to Nursing Ethics.

    ERIC Educational Resources Information Center

    Freitas, Lorraine

    1990-01-01

    This article traces the evolution of the development and refinement of the professional code from concerns about the ethical conduct of nurses to its present state as a professional code for all nurses. The relationship of the Ethics Committee of the American Nurses' Association to the development of the code is also discussed. (Author/MLW)

  18. Redundant Coding in Visual Search Displays: Effects of Shape and Colour.

    DTIC Science & Technology

    1997-02-01

    results for refining color selection algorithms and for color coding in situations where the gamut of available colors is limited. In a secondary set of analyses, we note large performance differences as a function of target shape.

  19. Enhanced Representation of Turbulent Flow Phenomena in Large-Eddy Simulations of the Atmospheric Boundary Layer using Grid Refinement with Pseudo-Spectral Numerics

    NASA Astrophysics Data System (ADS)

    Torkelson, G. Q.; Stoll, R., II

    2017-12-01

    Large Eddy Simulation (LES) is a tool commonly used to study the turbulent transport of momentum, heat, and moisture in the Atmospheric Boundary Layer (ABL). For a wide range of ABL LES applications, representing the full range of turbulent length scales in the flow field is a challenge. This is an acute problem in regions of the ABL with strong velocity or scalar gradients, which are typically poorly resolved by standard computational grids (e.g., near the ground surface, in the entrainment zone). Most efforts to address this problem have focused on advanced sub-grid scale (SGS) turbulence model development, or on the use of massive computational resources. While some work exists using embedded meshes, very little has been done on the use of grid refinement. Here, we explore the benefits of grid refinement in a pseudo-spectral LES numerical code. The code utilizes both uniform refinement of the grid in horizontal directions, and stretching of the grid in the vertical direction. Combining the two techniques allows us to refine areas of the flow while maintaining an acceptable grid aspect ratio. In tests that used only refinement of the vertical grid spacing, large grid aspect ratios were found to cause a significant unphysical spike in the stream-wise velocity variance near the ground surface. This was especially problematic in simulations of stably-stratified ABL flows. The use of advanced SGS models was not sufficient to alleviate this issue. The new refinement technique is evaluated using a series of idealized simulation test cases of neutrally and stably stratified ABLs. These test cases illustrate the ability of grid refinement to increase computational efficiency without loss in the representation of statistical features of the flow field.
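
    The combination of uniform horizontal refinement with vertical stretching can be sketched as follows (Python; the geometric stretching law and numbers are hypothetical):

      import numpy as np

      def stretched_vertical(nz, lz, ratio=1.05):
          """Cell heights growing geometrically away from the surface."""
          dz0 = lz * (ratio - 1.0) / (ratio**nz - 1.0)
          return dz0 * ratio ** np.arange(nz)

      dz = stretched_vertical(nz=64, lz=1000.0)   # 1 km domain, 64 levels
      dx = 2000.0 / 128                           # uniformly refined horizontal
      print("near-surface aspect ratio dx/dz:", dx / dz[0])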

  20. A seismic data compression system using subband coding

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
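
    The progressive-refinement idea can be illustrated with a one-level Haar subband split (Python; this is a sketch of the concept only, not the decorrelation/quantization/arithmetic-coding pipeline of the article):

      import numpy as np

      def haar_analysis(x):
          """Split a signal into a coarse average band and a detail band."""
          return (x[0::2] + x[1::2]) / 2.0, (x[0::2] - x[1::2]) / 2.0

      def haar_synthesis(avg, det):
          x = np.empty(2 * avg.size)
          x[0::2], x[1::2] = avg + det, avg - det
          return x

      signal = np.sin(np.linspace(0, 8 * np.pi, 64))
      avg, det = haar_analysis(signal)
      coarse_view = haar_synthesis(avg, np.zeros_like(det))  # first pass
      refined_view = haar_synthesis(avg, det)                # after refinement
      print("coarse error: ", np.max(np.abs(coarse_view - signal)))
      print("refined error:", np.max(np.abs(refined_view - signal)))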

  1. An adaptive embedded mesh procedure for leading-edge vortex flows

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.

    1989-01-01

    A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
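
    The threshold-selection idea, picking the knee of the flagged-cells-versus-threshold curve, can be sketched as follows (Python, with mock per-cell data; the paper's exact curvature definition may differ):

      import numpy as np

      def knee_threshold(param, n_candidates=50):
          """Choose the threshold at maximum curvature of the curve of
          flagged-cell count versus candidate threshold."""
          taus = np.linspace(param.min(), param.max(), n_candidates)
          n_flagged = np.array([(param > t).sum() for t in taus], dtype=float)
          curvature = np.abs(np.gradient(np.gradient(n_flagged)))
          return taus[np.argmax(curvature)]

      rng = np.random.default_rng(0)
      param = rng.lognormal(mean=-2.0, sigma=1.0, size=10_000)  # mock values
      tau = knee_threshold(param)
      print(f"threshold = {tau:.3f}, flagged = {(param > tau).sum()}")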

  2. A CPU benchmark for protein crystallographic refinement.

    PubMed

    Bourne, P E; Hendrickson, W A

    1990-01-01

    The CPU time required to complete a cycle of restrained least-squares refinement of a protein structure from X-ray crystallographic data using the FORTRAN codes PROTIN and PROLSQ are reported for 48 different processors, ranging from single-user workstations to supercomputers. Sequential, vector, VLIW, multiprocessor, and RISC hardware architectures are compared using both a small and a large protein structure. Representative compile times for each hardware type are also given, and the improvement in run-time when coding for a specific hardware architecture considered. The benchmarks involve scalar integer and vector floating point arithmetic and are representative of the calculations performed in many scientific disciplines.

  3. Bellows flow-induced vibrations

    NASA Technical Reports Server (NTRS)

    Tygielski, P. J.; Smyly, H. M.; Gerlach, C. R.

    1983-01-01

    The bellows flow-excitation mechanism and the results of a comprehensive test program are summarized. The analytical model for predicting bellows flow-induced stress is refined. The model includes the effects of an upstream elbow, arbitrary geometry, and multiple plies. A refined computer code for predicting flow-induced stress is described, which allows life prediction if a material S-N diagram is available.

  4. Transonic Drag Prediction on a DLR-F6 Transport Configuration Using Unstructured Grid Solvers

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Frink, N. T.; Mavriplis, D. J.; Rausch, R. D.; Milholen, W. E.

    2004-01-01

    A second international AIAA Drag Prediction Workshop (DPW-II) was organized and held in Orlando, Florida on June 21-22, 2003. The primary purpose was to investigate the code-to-code uncertainty, address the sensitivity of the drag prediction to grid size, and quantify the uncertainty in predicting nacelle/pylon drag increments at a transonic cruise condition. This paper presents an in-depth analysis of the DPW-II computational results from three state-of-the-art unstructured grid Navier-Stokes flow solvers exercised on similar families of tetrahedral grids. The flow solvers are USM3D, a tetrahedral cell-centered upwind solver; FUN3D, a tetrahedral node-centered upwind solver; and NSU3D, a general-element node-centered central-differenced solver. For the wing/body, the total drag predicted for a constant-lift transonic cruise condition showed a decrease in code-to-code variation with grid refinement, as expected. For the same flight condition, the predicted wing/body/nacelle/pylon total drag and the nacelle/pylon drag increment showed an increase in code-to-code variation with grid refinement. Although the range in total drag for the wing/body fine grids was only 5 counts, a code-to-code comparison of surface pressures and surface-restricted streamlines indicated that the three solvers were not all converging to the same flow solutions: different shock locations and separation patterns were evident. Similarly, the wing/body/nacelle/pylon solutions did not appear to be converging to the same flow solutions. Overall, grid refinement did not consistently improve the correlation with experimental data for either the wing/body or the wing/body/nacelle/pylon configuration. Although the absolute values of total drag predicted by two of the solvers for the medium and fine grids did not compare well with the experiment, the incremental drag predictions were within plus or minus 3 counts of the experimental data. The correlation with experimental incremental drag was not significantly changed by specifying transition. Although the sources of code-to-code variation in force and moment predictions for the three unstructured grid codes have not yet been identified, the current study reinforces the necessity of applying multiple codes to the same application to assess uncertainty.

  5. Block structured adaptive mesh and time refinement for hybrid, hyperbolic + N-body systems

    NASA Astrophysics Data System (ADS)

    Miniati, Francesco; Colella, Phillip

    2007-11-01

    We present a new numerical algorithm for the solution of coupled collisional and collisionless systems, based on the block structured adaptive mesh and time refinement strategy (AMR). We describe the issues associated with the discretization of the system equations and the synchronization of the numerical solution on the hierarchy of grid levels. We implement a code based on a higher order, conservative and directionally unsplit Godunov’s method for hydrodynamics; a symmetric, time centered modified symplectic scheme for collisionless component; and a multilevel, multigrid relaxation algorithm for the elliptic equation coupling the two components. Numerical results that illustrate the accuracy of the code and the relative merit of various implemented schemes are also presented.

  6. Spatial correlation-based side information refinement for distributed video coding

    NASA Astrophysics Data System (ADS)

    Taieb, Mohamed Haj; Chouinard, Jean-Yves; Wang, Demin

    2013-12-01

    Distributed video coding (DVC) architecture designs, based on distributed source coding principles, have benefited from significant progress lately, notably in terms of achievable rate-distortion performance. However, a significant performance gap remains compared to prediction-based video coding schemes such as H.264/AVC. This is mainly due to the non-ideal exploitation of the temporal correlation properties of the video sequence during the generation of side information (SI). In fact, decoder-side motion estimation provides only an approximation of the true motion. In this paper, a progressive DVC architecture is proposed which exploits the spatial correlation of the video frames to improve the motion-compensated temporal interpolation (MCTI). Specifically, Wyner-Ziv (WZ) frames are divided into several spatially correlated groups that are then sent progressively to the receiver. SI refinement (SIR) is performed as these groups are decoded, thus providing more accurate SI for subsequent groups. It is shown that the proposed progressive SIR method leads to significant improvements over the DISCOVER DVC codec as well as other SIR schemes recently introduced in the literature.

  7. Modified Mean-Pyramid Coding Scheme

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Romer, Richard

    1996-01-01

    A modified mean-pyramid coding scheme requires transmission of slightly less data, reducing the data-expansion factor from 1/3 to 1/12. Such schemes provide progressive transmission of image data sent as a sequence of frames, in such a way that a coarse version of the image can be reconstructed after receipt of the first frame and an increasingly refined version can be reconstructed after receipt of each subsequent frame.
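
    A toy mean pyramid makes the progressive idea concrete (Python; the transmission order and details here are illustrative, not the modified scheme itself):

      import numpy as np

      def build_mean_pyramid(img):
          """Each level stores 2x2-block means of the level below."""
          levels = [img.astype(float)]
          while levels[-1].shape[0] > 1:
              a = levels[-1]
              levels.append((a[0::2, 0::2] + a[0::2, 1::2] +
                             a[1::2, 0::2] + a[1::2, 1::2]) / 4.0)
          return levels[::-1]   # coarsest first, i.e. transmitted first

      img = np.arange(64, dtype=float).reshape(8, 8)
      for level in build_mean_pyramid(img):
          print(level.shape)    # (1, 1), (2, 2), (4, 4), (8, 8)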

  8. FDA Procedures for Standardization and Certification of Retail Food Inspection/Training Officers, 2000.

    ERIC Educational Resources Information Center

    Food and Drug Administration (DHHS/PHS), Rockville, MD.

    This document provides information, standards, and behavioral objectives for standardization and certification of retail food inspection personnel in the Food and Drug Administration (FDA). The procedures described in the document are based on the FDA Food Code, updated to reflect current Food Code provisions and to include a more refined focus on…

  9. BEARCLAW: Boundary Embedded Adaptive Refinement Conservation LAW package

    NASA Astrophysics Data System (ADS)

    Mitran, Sorin

    2011-04-01

    The BEARCLAW package is a multidimensional, Eulerian, AMR-capable computational code written in Fortran to solve hyperbolic systems for astrophysical applications. It is part of AstroBEAR, a hydrodynamic and magnetohydrodynamic code environment designed for a variety of astrophysical applications, which allows simulations in 2, 2.5 (i.e., cylindrical), and 3 dimensions, in either Cartesian or curvilinear coordinates.

  10. Development of Multidisciplinary, Multifidelity Analysis, Integration, and Optimization of Aerospace Vehicles

    DTIC Science & Technology

    2010-02-27

    investigated in more detail. The intermediate level of fidelity, though more expensive, is then used to refine the analysis, add geometric detail, and...design stage is used to further refine the analysis, narrowing the design to a handful of options. Figure 1. Integrated Hierarchical Framework. In...computational structural and computational fluid modeling. For the structural analysis tool we used McIntosh Structural Dynamics' finite element code CNEVAL

  11. Constrained-transport Magnetohydrodynamics with Adaptive Mesh Refinement in CHARM

    NASA Astrophysics Data System (ADS)

    Miniati, Francesco; Martin, Daniel F.

    2011-07-01

    We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.

  12. Aeroacoustic Simulations of a Nose Landing Gear with FUN3D: A Grid Refinement Study

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Lockard, David P.

    2017-01-01

    A systematic grid refinement study is presented for numerical simulations of a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated using the Pointwise (Registered Trademark) grid generation software are used for numerical simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest in an effort to minimize errors introduced by numerical artifacts. A set of grids was generated in this manner to create a family of uniformly refined grids. The finest grid was then modified to coarsen the wall-normal spacing to create a grid suitable for the wall-function implementation in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence modeling approach is used for these simulations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. These CFD solutions are used as input to a Ffowcs Williams-Hawkings (FW-H) noise propagation code to compute the far-field noise levels. The agreement between the computed results and the experimental data improves as the grid is refined.

  13. Topics in Extrasolar Planet Characterization

    NASA Astrophysics Data System (ADS)

    Howe, Alex Ryan

    I present four papers exploring different topics in the area of characterizing the atmospheric and bulk properties of extrasolar planets. In these papers, I present two new codes, in various forms, for modeling these objects. A code to generate theoretical models of transit spectra of exoplanets is featured in the first paper and is refined and expanded into the APOLLO code for spectral modeling and parameter retrieval in the fourth paper. Another code to model the internal structure and evolution of planets is featured in the second and third papers. The first paper presents transit spectra models of GJ 1214b and other super-Earth and mini-Neptune type planets (planets with a "solid," terrestrial composition, and relatively small planets with thick hydrogen-helium atmospheres, respectively) and fits them to observational data to estimate the atmospheric compositions and cloud properties of these planets. The second paper presents structural models of super-Earth and mini-Neptune type planets and estimates their bulk compositions from mass and radius estimates. The third paper refines these models with evolutionary calculations of thermal contraction and ultraviolet-driven mass loss. Here, we estimate the boundaries of the parameter space in which planets lose their initial hydrogen-helium atmospheres completely, and we also present formation and evolution scenarios for the planets in the Kepler-11 system. The fourth paper uses more refined transit spectra models, this time for hot Jupiter type planets, to explore methods for designing optimal observing programs for the James Webb Space Telescope to quantitatively measure the atmospheric compositions and other properties of these planets.

  14. CoFlame: A refined and validated numerical algorithm for modeling sooting laminar coflow diffusion flames

    NASA Astrophysics Data System (ADS)

    Eaves, Nick A.; Zhang, Qingan; Liu, Fengshan; Guo, Hongsheng; Dworkin, Seth B.; Thomson, Murray J.

    2016-10-01

    Mitigation of soot emissions from combustion devices is a global concern. For example, recent EURO 6 regulations for vehicles have placed stringent limits on soot emissions. In order to achieve the goal of reduced soot emissions, design engineers must have the appropriate tools. Due to the complex nature of soot formation, which includes growth and oxidation, detailed numerical models are required to gain fundamental insights into the mechanisms of soot formation. A detailed description is given of the CoFlame FORTRAN code, which models sooting laminar coflow diffusion flames. The code solves axial and radial velocity, temperature, species conservation, and soot aggregate and primary particle number density equations. The sectional particle dynamics model includes nucleation, PAH condensation and HACA surface growth, surface oxidation, coagulation, fragmentation, particle diffusion, and thermophoresis. The code utilizes a distributed-memory parallelization scheme with strip-domain decomposition. The public release of the CoFlame code, which has been refined in terms of coding structure, to the research community accompanies this paper. CoFlame is validated against experimental data for reattachment length in an axisymmetric pipe with a sudden expansion, and against ethylene-air and methane-air diffusion flames for multiple soot morphological parameters and gas-phase species. Finally, the parallel performance and computational costs of the code are investigated.

  15. Relative efficiency and accuracy of two Navier-Stokes codes for simulating attached transonic flow over wings

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Wornom, Stephen F.

    1991-01-01

    Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.

  16. High-resolution multi-code implementation of unsteady Navier-Stokes flow solver based on paralleled overset adaptive mesh refinement and high-order low-dissipation hybrid schemes

    NASA Astrophysics Data System (ADS)

    Li, Gaohua; Fu, Xiang; Wang, Fuxin

    2017-10-01

    The low-dissipation, high-order accurate hybrid upwinding/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and flow-feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolution. The overset grid assembly (OGA) process, based on collection detection theory and an implicit hole-cutting algorithm, achieves automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the composed multiple codes. The results of flows over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of a high-order hybrid scheme, an advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution for the simulation of turbulent wake eddies.

  17. User Manual for Beta Version of TURBO-GRD: A Software System for Interactive Two-Dimensional Boundary/Field Grid Generation, Modification, and Refinement

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Slater, John W.; Henderson, Todd L.; Bidwell, Colin S.; Braun, Donald C.; Chung, Joongkee

    1998-01-01

    TURBO-GRD is a software system for interactive two-dimensional boundary/field grid generation, modification, and refinement. Its features allow users to explicitly control grid quality locally and globally. Grid control can be achieved interactively by using control points that the user picks and moves on the workstation monitor, or by direct stretching and refining. The techniques used in the code are the control point form of algebraic grid generation, a damped cubic spline for edge meshing, and parametric mapping between physical and computational domains. It also performs elliptic grid smoothing and free-form boundary control for boundary geometry manipulation. Internal block boundaries are constructed and shaped using Bézier curves. Because TURBO-GRD is a highly interactive code, users can read in an initial solution, display its solution contour in the background of the grid and control net, and exercise grid modification using the solution contour as a guide. This process can be called interactive solution-adaptive grid generation.

  18. Parallel deterministic neutronics with AMR in 3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clouse, C.; Ferguson, J.; Hendrickson, C.

    1997-12-31

    AMTRAN, a three-dimensional Sn neutronics code with adaptive mesh refinement (AMR), has been parallelized over spatial domains and energy groups and runs on the Meiko CS-2 with MPI message passing. Block-refined AMR is used with linear finite element representations for the fluxes, which allows for a straightforward interpretation of fluxes at block interfaces with zoning differences. The load-balancing algorithm assumes 8 spatial domains, which minimizes idle time among processors.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ritchie, L.T.; Johnson, J.D.; Blond, R.M.

    The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather-sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is intended to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.

  20. Progressive fracture of fiber composites

    NASA Technical Reports Server (NTRS)

    Irvin, T. B.; Ginty, C. A.

    1983-01-01

    Refined models and procedures are described for determining progressive composite fracture in graphite/epoxy angleplied laminates. Lewis Research Center capabilities are utilized including the Real Time Ultrasonic C Scan (RUSCAN) experimental facility and the Composite Durability Structural Analysis (CODSTRAN) computer code. The CODSTRAN computer code is used to predict the fracture progression based on composite mechanics, finite element stress analysis, and fracture criteria modules. The RUSCAN facility, CODSTRAN computer code, and scanning electron microscope are used to determine durability and identify failure mechanisms in graphite/epoxy composites.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiley, J.C.

    The author describes a general 'hp' finite element method with adaptive grids. The code is based on the work of Oden et al. The term 'hp' refers to the method of spatial refinement (h), used in conjunction with the order of the polynomials employed in the finite element discretization (p). This finite element code handles well the different mesh grid sizes occurring between abutted grids with different resolutions.

  2. The effect of gas physics on the halo mass function

    NASA Astrophysics Data System (ADS)

    Stanek, R.; Rudd, D.; Evrard, A. E.

    2009-03-01

    Cosmological tests based on cluster counts require accurate calibration of the space density of massive haloes, but most calibrations to date have ignored complex gas physics associated with halo baryons. We explore the sensitivity of the halo mass function to baryon physics using two pairs of gas-dynamic simulations that are likely to bracket the true behaviour. Each pair consists of a baseline model involving only gravity and shock heating, and a refined physics model aimed at reproducing the observed scaling of the hot, intracluster gas phase. One pair consists of billion-particle resimulations of the original 500 h⁻¹ Mpc Millennium Simulation of Springel et al., run with the smoothed particle hydrodynamics (SPH) code GADGET-2 and using a refined physics treatment approximated by pre-heating (PH) at high redshift. The other pair are high-resolution simulations from the adaptive-mesh refinement code ART, for which the refined treatment includes cooling, star formation and supernova feedback (CSF). We find that, although the mass functions of the gravity-only (GO) treatments are consistent with the recent calibration of Tinker et al. (2008), both pairs of simulations with refined baryon physics show significant deviations. Relative to the GO case, the masses of ~10¹⁴ h⁻¹ M⊙ haloes in the PH and CSF treatments are shifted by averages of -15 ± 1 and +16 ± 2 per cent, respectively. These mass shifts cause ~30 per cent deviations in number density relative to the Tinker function, significantly larger than the 5 per cent statistical uncertainty of that calibration.

  3. Coding response to a case-mix measurement system based on multiple diagnoses.

    PubMed

    Preyra, Colin

    2004-08-01

    To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post.

  4. Discussion on "Teaching the Second Law"

    NASA Astrophysics Data System (ADS)

    Silbey, Robert; Beretta, Gian Paolo; Cengel, Yunus; Foley, Andrew; Gyftopoulos, Elias P.; Hatsopoulos, George N.; Keck, James C.; Lewins, Jeffery; Lior, Noam; Nieuwenhuizen, Theodorus M.; Steinfeld, Jeffrey; von Spakovsky, Michael R.; Wang, Lin-Shu; Zanchini, Enzo

    2008-08-01

    This article reports an open discussion that took place during the Keenan Symposium "Meeting the Entropy Challenge" (held in Cambridge, Massachusetts, on October 5, 2007) following the short presentations—each reported as a separate article in the present volume—by Joseph Smith Jr., Howard Butler, Andrew Foley, Kimberly Hamad-Schifferli, Bernhardt Trout, Jeffery Lewins, Enzo Zanchini, and Michael von Spakovsky. All panelists and the audience were asked to address the following questions: • Why is the second law taught in so many different ways? Why so many textbooks on thermodynamics? Why so many schools of thought? • Some say that thermodynamics is limited to equilibrium, others that it extends to nonequilibrium. How is entropy defined for nonequilibrium states?

  5. Coupled field effects in BWR stability simulations using SIMULATE-3K

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borkowski, J.; Smith, K.; Hagrman, D.

    1996-12-31

    The SIMULATE-3K code is the transient analysis version of the Studsvik advanced nodal reactor analysis code, SIMULATE-3. Recent developments have focused on further broadening the range of transient applications by refining the core thermal-hydraulic models and on comparisons with boiling water reactor (BWR) stability measurements performed at Ringhals unit 1 during the startups of cycles 14 through 17.

  6. Navier-Stokes and Euler solutions for lee-side flows over supersonic delta wings. A correlation with experiment

    NASA Technical Reports Server (NTRS)

    Mcmillin, S. Naomi; Thomas, James L.; Murman, Earll M.

    1990-01-01

    An Euler flow solver and a thin-layer Navier-Stokes flow solver were used to numerically simulate the supersonic lee-side flow fields over delta wings that were observed experimentally. Three delta wings with 75, 67.5, and 60 deg leading-edge sweeps were computed over an angle-of-attack range of 4 to 20 deg at a Mach number of 2.8. The Euler and Navier-Stokes codes predict the primary flow structure equally well where the flow is expected to be separated or attached at the leading edge based on the Stanbrook-Squire boundary. The Navier-Stokes code is capable of predicting both the primary and secondary flow features for the parameter range investigated. For those flow conditions where the Euler code did not predict the correct type of primary flow structure, the Navier-Stokes code showed that the flow structure is sensitive to the boundary-layer model. In general, the laminar Navier-Stokes solutions agreed better with the experimental data, especially for the lower-sweep delta wings. The computational results and a detailed re-examination of the experimental data resulted in a refinement of the flow classifications. This refinement places the separation-bubble-with-shock flow type as the intermediate flow pattern between separated and attached flows.

  7. A new parallelization scheme for adaptive mesh refinement

    DOE PAGES

    Loffler, Frank; Cao, Zhoujian; Brandt, Steven R.; ...

    2016-05-06

    Here, we present a new method for parallelization of adaptive mesh refinement called Concurrent Structured Adaptive Mesh Refinement (CSAMR). This new method offers the lower computational cost (i.e., wall time × processor count) of subcycling in time, but with the runtime performance (i.e., smaller wall time) of evolving all levels at once using the time step of the finest level (which does more work than subcycling but has more parallelism). We demonstrate our algorithm's effectiveness using an adaptive mesh refinement code, AMSS-NCKU, and show performance on Blue Waters and other high-performance clusters. For the class of problem considered in this paper, our algorithm achieves a speedup of 1.7-1.9 when the processor count for a given AMR run is doubled, consistent with our theoretical predictions.
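
    The cost/parallelism tradeoff described above can be made concrete with a toy cost model. The sketch below is an illustration under stated assumptions (refinement ratio 2, hypothetical cell counts per level), not the CSAMR algorithm itself; `work` simply counts cell-updates per coarse step for subcycled versus synchronous stepping.

```python
# Toy AMR stepping cost model (illustrative assumptions, not CSAMR).
# With refinement ratio r, level l takes r**l substeps per coarse step
# when subcycling; without subcycling, every level advances with the
# finest level's time step, i.e. r**(L-1) steps per coarse step.

def work(cells, r=2, subcycle=True):
    """Total cell-updates per coarse time step."""
    finest_steps = r ** (len(cells) - 1)
    return sum(c * (r ** l if subcycle else finest_steps)
               for l, c in enumerate(cells))

cells = [64**3, 32**3, 32**3]    # hypothetical cells on levels 0..2
print("subcycled work:    ", work(cells))
print("non-subcycled work:", work(cells, subcycle=False))
# Subcycling minimizes total work (wall time x processor count), while
# synchronous stepping exposes every cell to update at once (smaller
# wall time); the paper's CSAMR scheme aims to combine both.
```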

  8. A new parallelization scheme for adaptive mesh refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loffler, Frank; Cao, Zhoujian; Brandt, Steven R.

    Here, we present a new method for parallelization of adaptive mesh refinement called Concurrent Structured Adaptive Mesh Refinement (CSAMR). This new method offers the lower computational cost (i.e., wall time × processor count) of subcycling in time, but with the runtime performance (i.e., smaller wall time) of evolving all levels at once using the time step of the finest level (which does more work than subcycling but has more parallelism). We demonstrate our algorithm's effectiveness using an adaptive mesh refinement code, AMSS-NCKU, and show performance on Blue Waters and other high-performance clusters. For the class of problem considered in this paper, our algorithm achieves a speedup of 1.7-1.9 when the processor count for a given AMR run is doubled, consistent with our theoretical predictions.

  9. An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.

    1993-01-01

    We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis's artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to quantify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than is possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
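
    Richardson-extrapolation-based error indicators of the kind mentioned above can be sketched in a few lines. The example below is a generic illustration, not the paper's code: it assumes a second-order scheme (p=2) and estimates the error of a central-difference derivative by comparing grids of spacing h and h/2.

```python
import numpy as np

def central_diff(f, h):
    """Second-order central difference in the interior."""
    d = np.empty_like(f)
    d[1:-1] = (f[2:] - f[:-2]) / (2 * h)
    d[0] = (f[1] - f[0]) / h          # one-sided at the ends
    d[-1] = (f[-1] - f[-2]) / h
    return d

def richardson_flags(d_h, d_h2, p=2, tol=1e-3):
    """Richardson estimate e ~ (u_h - u_{h/2}) / (2**p - 1), flagged."""
    err = np.abs(d_h - d_h2[::2]) / (2 ** p - 1)
    return err > tol

x_c = np.linspace(0.0, 1.0, 101)            # coarse grid, spacing h
x_f = np.linspace(0.0, 1.0, 201)            # fine grid, spacing h/2
u = lambda x: np.tanh(20 * (x - 0.5))       # solution with a steep front
d_c = central_diff(u(x_c), x_c[1] - x_c[0])
d_f = central_diff(u(x_f), x_f[1] - x_f[0])
flags = richardson_flags(d_c, d_f)
print("coarse cells flagged near the front:", int(flags.sum()))
```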

  10. PELEC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-05-17

    PeleC is an adaptive-mesh compressible hydrodynamics code for reacting flows. It solves the compressible Navier-Stokes equations with multispecies transport in a block-structured framework. The resulting algorithm is well suited to flows with localized resolution requirements and is robust to discontinuities. User-controllable refinement criteria can yield extremely small numerical dissipation and dispersion, making the code appropriate for both research and applied usage. The code is built on the AMReX library, which facilitates hierarchical parallelism and manages distributed-memory parallelism; PeleC's algorithms are also implemented to exploit shared-memory parallelism.

  11. A Counterexample Guided Abstraction Refinement Framework for Verifying Concurrent C Programs

    DTIC Science & Technology

    2005-05-24

    source code are routinely executed. The source code is written in languages ranging from C/C++/Java to ML/OCaml. These languages differ not only in...from the difficulty to model computer programs—due to the complexity of programming languages as compared to hardware description languages—to...intermediate specification language lying between high-level Statechart-like formalisms and transition systems. Actions are encoded as changes in

  12. The Formation of the First Cosmic Structures and the Physics of the z ~ 20 Universe

    NASA Astrophysics Data System (ADS)

    O'Leary, Ryan M.; McQuinn, Matthew

    2012-11-01

    We perform a suite of cosmological simulations in the ΛCDM paradigm of the formation of the first structures in the universe prior to astrophysical reheating and reionization (15 ≲ z < 200). These are the first simulations initialized in a manner that self-consistently accounts for the impact of pressure on the rate of growth of modes, temperature fluctuations in the gas, and the dark matter-baryon supersonic velocity difference. Even with these improvements, these are still difficult times to simulate accurately, as the Jeans length of the cold intergalactic gas must be resolved while also capturing a representative sample of the universe. We explore the box size and resolution requirements to meet these competing objectives. Our simulations support the finding of recent studies that the dark matter-baryon velocity difference has a surprisingly large impact on the accretion of gas onto the first star-forming minihalos (which have masses of ~10^6 M⊙). In fact, the halo gas is often significantly downwind of such halos and with lower densities in the simulations in which the baryons have a bulk flow with respect to the dark matter, modulating the formation of the first stars by the local value of this velocity difference. We also show that dynamical friction plays an important role in the nonlinear evolution of the dark matter-baryon differential velocity, acting to erase this velocity difference quickly in overdense gas, as well as sourcing visually apparent bow shocks and Mach cones throughout the universe. We use simulations with both the GADGET and Enzo cosmological codes to test the robustness of these conclusions. The comparison of these codes' simulations also provides a relatively controlled test of these codes themselves, allowing us to quantify some of the tradeoffs between the algorithms. For example, we find that particle coupling in GADGET between the gas and dark matter particles results in spurious growth that mimics nonlinear growth in the matter power spectrum for standard initial setups. This coupling is alleviated by using adaptive gravitational softening for the gas. In a companion paper, we use the simulations presented here to make detailed estimates for the impact of the dark matter-baryon velocity differential on redshifted 21 cm radiation. The initial conditions generator used in this study, CICSASS, can be publicly downloaded.

  13. THE FORMATION OF THE PRIMITIVE STAR SDSS J102915+172927: EFFECT OF THE DUST MASS AND THE GRAIN-SIZE DISTRIBUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bovino, S.; Banerjee, R.; Grassi, T.

    Understanding the formation of the extremely metal-poor star SDSS J102915+172927 is of fundamental importance to improve our knowledge of the transition between the first and second generations of stars in the universe. In this paper, we perform three-dimensional cosmological hydrodynamical simulations of dust-enriched halos during the early stages of the collapse process, including a detailed treatment of the dust physics. We employ the astrochemistry package KROME coupled with the hydrodynamical code Enzo, assuming grain-size distributions produced by the explosion of core-collapse supernovae (SNe) of 20 and 35 M⊙ primordial stars, which are suitable to reproduce the chemical pattern of the SDSS J102915+172927 star. We find that the dust mass yield produced from Population III SNe explosions is the most important factor that drives the thermal evolution and the dynamical properties of the halos. Hence, for the specific distributions relevant in this context, the composition, the dust optical properties, and the size range have only minor effects on the results due to similar cooling functions. We also show that the critical dust mass to enable fragmentation provided by semi-analytical models should be revised, as we obtain values one order of magnitude larger. This determines the transition from disk fragmentation to a more filamentary fragmentation mode, and suggests that likely more than one single SN event or efficient dust growth should be invoked to get such high dust content.

  14. Radiation-driven Turbulent Accretion onto Massive Black Holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, KwangHo; Wise, John H.; Bogdanović, Tamara, E-mail: kwangho.park@physics.gatech.edu

    Accretion of gas and the interaction of matter and radiation are at the heart of many questions pertaining to black hole (BH) growth and the coevolution of massive BHs and their host galaxies. To answer them, it is critical to quantify how the ionizing radiation that emanates from the innermost regions of the BH accretion flow couples to the surrounding medium and how it regulates BH fueling. In this work, we use high-resolution three-dimensional (3D) radiation-hydrodynamic simulations with the code Enzo, equipped with the adaptive ray-tracing module Moray, to investigate radiation-regulated BH accretion of cold gas. Our simulations reproduce findings from an earlier generation of 1D/2D simulations: the accretion-powered UV and X-ray radiation forms a highly ionized bubble, which leads to suppression of the BH accretion rate characterized by quasi-periodic outbursts. A new feature revealed by the 3D simulations is the highly turbulent nature of the gas flow in the vicinity of the ionization front. During quiescent periods between accretion outbursts, the ionized bubble shrinks in size and the gas density that precedes the ionization front increases. Consequently, the 3D simulations show oscillations in the accretion rate of only ∼2-3 orders of magnitude, significantly smaller than in 1D/2D models. We calculate the energy budget of the gas flow and find that turbulence is the main contributor to the kinetic energy of the gas but corresponds to less than 10% of its thermal energy, and thus does not contribute significantly to the pressure support of the gas.

  15. A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.

    1999-01-01

    The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.

  16. Coding Response to a Case-Mix Measurement System Based on Multiple Diagnoses

    PubMed Central

    Preyra, Colin

    2004-01-01

    Objective To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Data Sources Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Study Design Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Principal Findings Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest-complexity cases that was not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost, nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Conclusions Jurisdictions introducing extensive refinements to standard diagnosis-related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post. PMID:15230940
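
    As a worked illustration of the base/complexity decomposition described above, consider a hedged toy example: each case carries a base DRG-style weight plus a complexity multiplier driven by secondary diagnoses. All numbers below are invented; the study's actual methodology is more involved.

```python
# Hypothetical decomposition of a case-mix index (CMI) into base and
# complexity components (invented weights, for illustration only).
cases = [
    # (base weight, complexity multiplier from secondary diagnoses)
    (1.00, 1.00),
    (1.00, 1.35),
    (2.40, 1.00),
    (2.40, 1.80),
]

base_cmi = sum(w for w, _ in cases) / len(cases)
overall_cmi = sum(w * m for w, m in cases) / len(cases)
print(f"base CMI            : {base_cmi:.3f}")
print(f"overall CMI         : {overall_cmi:.3f}")
print(f"complexity component: {overall_cmi - base_cmi:.3f}")
# Growth in the complexity component without matching growth in real
# resource use is the kind of coding response the study measures.
```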

  17. Constructing a Pre-Emptive System Based on a Multidimentional Matrix and Autocompletion to Improve Diagnostic Coding in Acute Care Hospitals.

    PubMed

    Noussa-Yao, Joseph; Heudes, Didier; Escudie, Jean-Baptiste; Degoulet, Patrice

    2016-01-01

    Short-stay MSO (Medicine, Surgery, Obstetrics) hospitalization activities in public and private hospitals providing public services are funded through charges for the services provided (T2A in French). Coding must be well matched to the severity of the patient's condition to ensure that appropriate funding is provided to the hospital. We propose the use of an autocompletion process and a multidimensional matrix to help physicians improve the expression of information and to optimize clinical coding. With this approach, physicians without knowledge of the encoding rules begin from a rough concept, which is gradually refined through semantic proximity and information on associated codes drawn from optimized knowledge bases of diagnosis codes.
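
    A minimal sketch of the autocompletion idea follows: a rough clinical concept is matched to candidate diagnosis codes by textual proximity. The four-entry code table is an invented stand-in for an optimized knowledge base, and simple string similarity stands in for true semantic proximity.

```python
import difflib

# Tiny invented code table; a real system would use the full ICD-10
# catalogue and a proper semantic-similarity model.
CODE_LABELS = {
    "E11.9": "type 2 diabetes mellitus without complications",
    "E11.2": "type 2 diabetes mellitus with kidney complications",
    "I10":   "essential (primary) hypertension",
    "J18.9": "pneumonia, unspecified organism",
}

def suggest(rough_concept, n=3):
    """Return (code, label) pairs closest to the typed concept."""
    close = difflib.get_close_matches(
        rough_concept.lower(), CODE_LABELS.values(), n=n, cutoff=0.2)
    return [(c, lab) for c, lab in CODE_LABELS.items() if lab in close]

print(suggest("diabetes type 2 kidney"))
```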

  18. Detached Eddy Simulation of the UH-60 Rotor Wake Using Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.; Ahmad, Jasim U.

    2012-01-01

    Time-dependent Navier-Stokes flow simulations have been carried out for a UH-60 rotor with simplified hub in forward flight and hover flight conditions. Flexible rotor blades and flight trim conditions are modeled and established by loosely coupling the OVERFLOW Computational Fluid Dynamics (CFD) code with the CAMRAD II helicopter comprehensive code. High order spatial differences, Adaptive Mesh Refinement (AMR), and Detached Eddy Simulation (DES) are used to obtain highly resolved vortex wakes, where the largest turbulent structures are captured. Special attention is directed towards ensuring the dual time accuracy is within the asymptotic range, and verifying the loose coupling convergence process using AMR. The AMR/DES simulation produced vortical worms for forward flight and hover conditions, similar to previous results obtained for the TRAM rotor in hover. AMR proved to be an efficient means to capture a rotor wake without a priori knowledge of the wake shape.

  19. Linking theory with qualitative research through study of stroke caregiving families.

    PubMed

    Pierce, Linda L; Steiner, Victoria; Cervantez Thompson, Teresa L; Friedemann, Marie-Luise

    2014-01-01

    This theoretical article outlines the deliberate process of applying a qualitative data analysis method rooted in Friedemann's Framework of Systemic Organization through the study of a web-based education and support intervention for stroke caregiving families. Directed by Friedemann's framework, the analytic method involved developing, refining, and using a coding rubric to explore interactive patterns between caregivers and care recipients from this 3-month feasibility study using this education and support intervention. Specifically, data were gathered from the intervention's web-based discussion component between caregivers and the nurse specialist, as well as from telephone caregiver interviews. A theoretical framework guided the process of developing and refining this coding rubric for the purpose of organizing data; but, more importantly, guided the investigators' thought processes, allowing them to extract rich information from the data set, as well as synthesize this information to generate a broad understanding of the caring situation. © 2013 Association of Rehabilitation Nurses.

  20. A PARAMETRIC STUDY OF BCS RF SURFACE IMPEDANCE WITH MAGNETIC FIELD USING THE XIAO CODE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reece, Charles E.; Xiao, Binping

    2013-09-01

    A recent new analysis of field-dependent BCS rf surface impedance based on moving Cooper pairs has been presented [1]. Using this analysis coded in Mathematica™, survey calculations have been completed which examine the sensitivities of this surface impedance to variation of the BCS material parameters and temperature. The results present a refined description of the "best theoretical" performance available to potential applications with corresponding materials.
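
    To illustrate what such a survey calculation looks like, here is a hedged Python sketch. It uses the widely quoted rule-of-thumb low-temperature BCS approximation for niobium rather than the paper's Mathematica implementation of the Xiao analysis; the residual resistance value is an arbitrary assumption.

```python
import math

def rs_bcs_nb(T, f=1.3e9, r_res=5e-9):
    """Rule-of-thumb BCS surface resistance of niobium (ohms).

    Rs ~ (2e-4 / T) * (f / 1.5 GHz)**2 * exp(-17.67 / T) + R_res,
    with T in kelvin and f in Hz. Not the Xiao-code result.
    """
    return (2e-4 / T) * (f / 1.5e9) ** 2 * math.exp(-17.67 / T) + r_res

for T in (1.6, 1.8, 2.0, 4.2):                 # temperature survey
    print(f"T = {T:3.1f} K -> Rs = {rs_bcs_nb(T) * 1e9:8.1f} nOhm")
```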

  1. A Comparison of Spectral Element and Finite Difference Methods Using Statically Refined Nonconforming Grids for the MHD Island Coalescence Instability Problem

    NASA Astrophysics Data System (ADS)

    Ng, C. S.; Rosenberg, D.; Pouquet, A.; Germaschewski, K.; Bhattacharjee, A.

    2009-04-01

    A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys. 215, 59-80 (2006)] is applied to simulate the problem of MHD island coalescence instability in two dimensions. Island coalescence is a fundamental MHD process that can produce sharp current layers and subsequent reconnection and heating in a high-Lundquist-number plasma such as the solar corona [Ng and Bhattacharjee, Phys. Plasmas, 5, 4028 (1998)]. Due to the formation of thin current layers, it is highly desirable to use adaptively or statically refined grids to resolve them, and to maintain accuracy at the same time. The outputs of the spectral-element static adaptive refinement simulations are compared with simulations using a finite difference method on the same refinement grids, and both methods are compared to pseudo-spectral simulations with uniform grids as baselines. It is shown that with the statically refined grids roughly scaling linearly with effective resolution, spectral element runs can maintain accuracy significantly higher than that of the finite difference runs, in some cases achieving close to full spectral accuracy.

  2. Simulations of Laboratory Astrophysics Experiments using the CRASH code

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Kuranz, Carolyn; Fein, Jeff; Wan, Willow; Young, Rachel; Keiter, Paul; Drake, R. Paul

    2015-11-01

    Computer simulations can assist in the design and analysis of laboratory astrophysics experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport, electron heat conduction and laser ray tracing. This poster will demonstrate some of the experiments the CRASH code has helped design or analyze including: Kelvin-Helmholtz, Rayleigh-Taylor, magnetized flows, jets, and laser-produced plasmas. This work is funded by the following grants: DEFC52-08NA28616, DE-NA0001840, and DE-NA0002032.

  3. QX MAN: Q and X file manipulation

    NASA Technical Reports Server (NTRS)

    Krein, Mark A.

    1992-01-01

    QX MAN is a grid and solution file manipulation program written primarily for the PARC code and the GRIDGEN family of grid generation codes. QX MAN combines many of the features frequently encountered in grid generation, grid refinement, the setting-up of initial conditions, and post processing. QX MAN allows the user to manipulate single block and multi-block grids (and their accompanying solution files) by splitting, concatenating, rotating, translating, re-scaling, and stripping or adding points. In addition, QX MAN can be used to generate an initial solution file for the PARC code. The code was written to provide several formats for input and output in order for it to be useful in a broad spectrum of applications.

  4. The next-generation ESL continuum gyrokinetic edge code

    NASA Astrophysics Data System (ADS)

    Cohen, R.; Dorr, M.; Hittinger, J.; Rognlien, T.; Colella, P.; Martin, D.

    2009-05-01

    The Edge Simulation Laboratory (ESL) project is developing continuum-based approaches to kinetic simulation of edge plasmas. A new code is being developed, based on a conservative formulation and fourth-order discretization of full-f gyrokinetic equations in parallel-velocity, magnetic-moment coordinates. The code exploits mapped multiblock grids to deal with the geometric complexities of the edge region, and utilizes a new flux limiter [P. Colella and M.D. Sekora, JCP 227, 7069 (2008)] to suppress unphysical oscillations about discontinuities while maintaining high-order accuracy elsewhere. The code is just becoming operational; we will report initial tests for neoclassical orbit calculations in closed-flux surface and limiter (closed plus open flux surfaces) geometry. It is anticipated that the algorithmic refinements in the new code will address the slow numerical instability that was observed in some long simulations with the existing TEMPEST code. We will also discuss the status and plans for physics enhancements to the new code.

  5. Sierra/Aria 4.48 Verification Manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Thermal Fluid Development Team

    Presented in this document is a portion of the tests that exist in the Sierra Thermal/Fluids verification test suite. Each of these tests is run nightly with the Sierra/TF code suite, and the results are checked under mesh refinement against the correct analytic result. For each of the tests presented in this document, the test setup, the derivation of the analytic solution, and a comparison of the code results to the analytic solution are provided. This document can be used to confirm that a given code capability is verified, or it can be referenced as a compilation of example problems.
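
    The nightly check described here amounts to computing an observed order of accuracy from errors at successive mesh resolutions. A minimal sketch, with invented error values standing in for real test output:

```python
import math

def observed_order(e_coarse, e_fine, h_coarse, h_fine):
    """p_obs = log(e_c / e_f) / log(h_c / h_f)."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# hypothetical L2 errors against the analytic solution at three h's
errors = {0.10: 4.1e-3, 0.05: 1.0e-3, 0.025: 2.6e-4}
hs = sorted(errors, reverse=True)
for h1, h2 in zip(hs, hs[1:]):
    p = observed_order(errors[h1], errors[h2], h1, h2)
    print(f"h {h1} -> {h2}: observed order = {p:.2f}")
# A second-order discretization should report p_obs close to 2.
```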

  6. The Healthcare Complaints Analysis Tool: development and reliability testing of a method for service monitoring and organisational learning

    PubMed Central

    Gillespie, Alex; Reader, Tom W

    2016-01-01

    Background Letters of complaint written by patients and their advocates reporting poor healthcare experiences represent an under-used data source. The lack of a method for extracting reliable data from these heterogeneous letters hinders their use for monitoring and learning. To address this gap, we report on the development and reliability testing of the Healthcare Complaints Analysis Tool (HCAT). Methods HCAT was developed from a taxonomy of healthcare complaints reported in a previously published systematic review. It introduces the novel idea that complaints should be analysed in terms of severity. Recruiting three groups of educated lay participants (n=58, n=58, n=55), we refined the taxonomy through three iterations of discriminant content validity testing. We then supplemented this refined taxonomy with explicit coding procedures for seven problem categories (each with four levels of severity), stage of care and harm. These combined elements were further refined through iterative coding of a UK national sample of healthcare complaints (n=25, n=80, n=137, n=839). To assess reliability and accuracy for the resultant tool, 14 educated lay participants coded a referent sample of 125 healthcare complaints. Results The seven HCAT problem categories (quality, safety, environment, institutional processes, listening, communication, and respect and patient rights) were found to be conceptually distinct. On average, raters identified 1.94 problems (SD=0.26) per complaint letter. Coders exhibited substantial reliability in identifying problems at four levels of severity; moderate to substantial reliability in identifying stages of care (except for ‘discharge/transfer’, which was only fairly reliable); and substantial reliability in identifying overall harm. Conclusions HCAT is not only the first reliable tool for coding complaints; it is also the first tool to measure the severity of complaints. It facilitates service monitoring and organisational learning, and it enables future research examining whether healthcare complaints are a leading indicator of poor service outcomes. HCAT is freely available to download and use. PMID:26740496
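
    The structure HCAT imposes on a complaint can be captured in a few lines of code. The record below is a hedged sketch: the seven problem categories are those listed in the abstract, while the stage and harm level names are illustrative assumptions.

```python
from dataclasses import dataclass

CATEGORIES = {
    "quality", "safety", "environment", "institutional processes",
    "listening", "communication", "respect and patient rights",
}
STAGES = {  # assumed labels for illustration
    "admission", "examination/diagnosis", "care on ward",
    "operation/procedure", "discharge/transfer", "other",
}

@dataclass
class HCATCode:
    category: str
    severity: int   # four-level severity scale, 1 (low) .. 4 (high)
    stage: str
    harm: int       # assumed 1 (minimal) .. 5 (catastrophic)

    def __post_init__(self):
        assert self.category in CATEGORIES, "unknown problem category"
        assert 1 <= self.severity <= 4, "severity must be 1..4"
        assert self.stage in STAGES, "unknown stage of care"

print(HCATCode("communication", 3, "discharge/transfer", 2))
```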

  7. Amendment of Articles 8, 9, 10, 21 and 78 of the International Code of Zoological Nomenclature to expand and refine methods of publication

    PubMed Central

    International Commission on Zoological Nomenclature

    2012-01-01

    Abstract The International Commission on Zoological Nomenclature has voted in favour of a revised version of the amendment to the International Code of Zoological Nomenclature that was proposed in 2008. The purpose of the amendment is to expand and refine the methods of publication allowed by the Code, particularly in relation to electronic publication. The amendment establishes an Official Register of Zoological Nomenclature (with ZooBank as its online version), allows electronic publication after 2011 under certain conditions, and disallows publication on optical discs after 2012. The requirements for electronic publications are that the work be registered in ZooBank before it is published, that the work itself state the date of publication and contain evidence that registration has occurred, and that the ZooBank registration state both the name of an electronic archive intended to preserve the work and the ISSN or ISBN associated with the work. Registration of new scientific names and nomenclatural acts is not required. The Commission has confirmed that ZooBank is ready to handle the requirements of the amendment. PMID:22977348

  8. Parallelization of GeoClaw code for modeling geophysical flows with adaptive mesh refinement on many-core systems

    USGS Publications Warehouse

    Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.

    2011-01-01

    We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need to simulate near-shore tsunami waves from the 2011 Tohoku event, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing - we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of each of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capability of running GeoClaw efficiently on many-core systems. We also show a novel simulation of the 2011 Tohoku tsunami waves inundating the Sendai airport and the Fukushima nuclear power plants, in which the finest grid distance of 20 meters is achieved through 4-level AMR. This simulation yields quite good predictions of the wave heights and travel time of the tsunami waves. © 2011 IEEE.

  9. Discriminative object tracking via sparse representation and online dictionary learning.

    PubMed

    Xie, Yuan; Zhang, Wensheng; Li, Cuihua; Lin, Shuyang; Qu, Yanyun; Zhang, Yinghua

    2014-04-01

    We propose a robust tracking algorithm based on local sparse coding with discriminative dictionary learning and a new keypoint matching schema. The algorithm consists of two parts: local sparse coding with an online-updated discriminative dictionary for tracking (the SOD part), and keypoint matching refinement for enhancing tracking performance (the KP part). In the SOD part, local image patches of the target object and background are represented by their sparse codes over an over-complete discriminative dictionary. Such a discriminative dictionary, which encodes information about both the foreground and the background, may provide more discriminative power. Furthermore, in order to adapt the dictionary to variations of the foreground and background during tracking, an online learning method is employed to update the dictionary. The KP part utilizes a refined keypoint matching schema to improve the performance of the SOD part. With the help of the sparse representation and the online-updated discriminative dictionary, the KP part is more robust than traditional methods at rejecting incorrect matches and eliminating outliers. The proposed method is embedded into a Bayesian inference framework for visual tracking. Experimental results on several challenging video sequences demonstrate the effectiveness and robustness of our approach.
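
    The core sparse coding step (representing a patch over an over-complete dictionary) can be sketched generically with ISTA, the iterative soft-thresholding algorithm. This is a stand-in for the paper's discriminative pipeline, with arbitrary dictionary size and regularization:

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=200):
    """Sparse code z minimizing 0.5*||x - D@z||^2 + lam*||z||_1."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz const. of gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        w = z - D.T @ (D @ z - x) / L    # gradient step
        z = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # shrink
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
x = D[:, [3, 17, 200]] @ np.array([1.0, -0.5, 0.8])  # 3-sparse signal
z = ista(D, x)
print("recovered support:", np.flatnonzero(np.abs(z) > 1e-3))
```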

  10. Continuum Vlasov Simulation in Four Phase-space Dimensions

    NASA Astrophysics Data System (ADS)

    Cohen, B. I.; Banks, J. W.; Berger, R. L.; Hittinger, J. A.; Brunner, S.

    2010-11-01

    In the VALHALLA project, we are developing scalable algorithms for the continuum solution of the Vlasov-Maxwell equations in two spatial and two velocity dimensions. We use fourth-order temporal and spatial discretizations of the conservative form of the equations and a finite-volume representation to enable adaptive mesh refinement and nonlinear oscillation control [1]. The code has been implemented with and without adaptive mesh refinement, and with electromagnetic and electrostatic field solvers. A goal is to study the efficacy of continuum Vlasov simulations in four phase-space dimensions for laser-plasma interactions. We have verified the code on examples such as the two-stream instability, the weak beam-plasma instability, Landau damping, and electron plasma waves with electron trapping and nonlinear frequency shifts [2] extended from 1D to 2D propagation, as well as light wave propagation. We will report progress on code development, computational methods, and physics applications. This work was performed under the auspices of the U.S. DOE by LLNL under contract no. DE-AC52-07NA27344. This work was funded by the Lab. Dir. Res. and Dev. Prog. at LLNL under project tracking code 08-ERD-031. [1] J.W. Banks and J.A.F. Hittinger, to appear in IEEE Trans. Plas. Sci. (Sept. 2010). [2] G.J. Morales and T.M. O'Neil, Phys. Rev. Lett. 28, 417 (1972); R. L. Dewar, Phys. Fluids 15, 712 (1972).

  11. Preliminary SAGE Simulations of Volcanic Jets Into a Stratified Atmosphere

    NASA Astrophysics Data System (ADS)

    Peterson, A. H.; Wohletz, K. H.; Ogden, D. E.; Gisler, G. R.; Glatzmaier, G. A.

    2007-12-01

    The SAGE (SAIC Adaptive Grid Eulerian) code employs adaptive mesh refinement in solving the Eulerian equations of complex fluid flow, which is desirable for the simulation of volcanic eruptions. The goal of modeling volcanic eruptions is to develop a code's predictive capabilities in order to understand the dynamics that govern the overall behavior of real eruption columns. To achieve this goal, we focus on the dynamics of underexpanded jets, one of the fundamental physical processes important to explosive eruptions. Previous simulations of laboratory jets modeled in cylindrical coordinates were benchmarked against simulations in CFDLib (Los Alamos National Laboratory), which solves the full Navier-Stokes equations (including the viscous stress tensor), and showed close agreement, indicating that the adaptive mesh refinement used in SAGE may offset the need for explicit calculation of viscous dissipation. We compare gas density contours of these previous simulations, with the same initial conditions in cylindrical and Cartesian geometries, to laboratory experiments to determine both the validity of the model and the robustness of the code. The SAGE results in both geometries are within several percent of the experiments for the position and density of the incident (intercepting) and reflected shocks, slip lines, shear layers, and Mach disk. To expand our study into the volcanic regime, we simulate large-scale jets in a stratified atmosphere to establish the code's ability to model a sustained jet into a stable atmosphere.

  12. Accelerating Convolutional Sparse Coding for Curvilinear Structures Segmentation by Refining SCIRD-TS Filter Banks.

    PubMed

    Annunziata, Roberto; Trucco, Emanuele

    2016-11-01

    Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation, as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the number of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and therefore it is different from recent methods improving the optimisation itself. Our warm-start strategy is based on carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures, which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method significantly reduces the time taken to learn convolutional filter banks (by up to 82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation when used as input to a random forest classifier.
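
    The warm-start idea can be illustrated by building a small bank of hand-crafted elongated ridge filters to use as initialisation. The sketch below uses oriented second-derivative-of-Gaussian filters as a stand-in; the actual SCIRD-TS parameterisation (which also models curvature) is not reproduced here.

```python
import numpy as np

def ridge_filter(size=15, s_along=4.0, s_across=1.5, theta=0.0):
    """Oriented ridge detector: 2nd derivative of an anisotropic Gaussian."""
    r = np.arange(size) - size // 2
    X, Y = np.meshgrid(r, r)
    u = X * np.cos(theta) + Y * np.sin(theta)    # along the structure
    v = -X * np.sin(theta) + Y * np.cos(theta)   # across the structure
    g = np.exp(-u**2 / (2 * s_along**2) - v**2 / (2 * s_across**2))
    f = (v**2 / s_across**4 - 1.0 / s_across**2) * g
    f -= f.mean()                                # zero mean (no DC bias)
    return f / np.linalg.norm(f)                 # unit norm

# eight orientations as a warm start for convolutional sparse coding
bank = [ridge_filter(theta=t)
        for t in np.linspace(0.0, np.pi, 8, endpoint=False)]
print(len(bank), "filters of shape", bank[0].shape)
```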

  13. The Early Development of Programmable Machinery.

    ERIC Educational Resources Information Center

    Collins, Martin D.

    1985-01-01

    Programmable equipment innovations, precursors of today's technology, are examined, including the development of the binary code and feedback control systems, such as temperature sensing devices, interchangeable parts, punched cards carrying instructions, continuous flow oil refining process, assembly lines for mass production, and the…

  14. Numerical algorithm comparison for the accurate and efficient computation of high-incidence vortical flow

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    1991-01-01

    Computations from two Navier-Stokes codes, NSS and F3D, are presented for a tangent-ogive-cylinder body at high angle of attack. Features of this steady flow include a pair of primary vortices on the leeward side of the body as well as secondary vortices. The topological and physical plausibility of this vortical structure is discussed. The accuracy of these codes is assessed by comparison of the numerical solutions with experimental data. The effects of turbulence model, numerical dissipation, and grid refinement are presented. The overall efficiency of these codes is also assessed by examining their convergence rates, computational time per time step, and maximum allowable time step for time-accurate computations. Overall, the numerical results from both codes compared equally well with experimental data; however, the NSS code was found to be significantly more efficient than the F3D code.

  15. Toward a CFD nose-to-tail capability - Hypersonic unsteady Navier-Stokes code validation

    NASA Technical Reports Server (NTRS)

    Edwards, Thomas A.; Flores, Jolen

    1989-01-01

    Computational fluid dynamics (CFD) research for hypersonic flows presents new problems in code validation because of the added complexity of the physical models. This paper surveys code validation procedures applicable to hypersonic flow models that include real gas effects. The current status of hypersonic CFD flow analysis is assessed with the Compressible Navier-Stokes (CNS) code as a case study. The methods of code validation discussed go beyond comparison with experimental data to include comparisons with other codes and formulations, component analyses, and estimation of numerical errors. Current results indicate that predicting hypersonic flows of perfect gases and equilibrium air is well in hand. Pressure, shock location, and integrated quantities are relatively easy to predict accurately, while surface quantities such as heat transfer are more sensitive to the solution procedure. Modeling of transition to turbulence needs refinement, though preliminary results are promising.

  16. Towards industrial-strength Navier-Stokes codes

    NASA Technical Reports Server (NTRS)

    Jou, Wen-Huei; Wigton, Laurence B.; Allmaras, Steven R.

    1992-01-01

    In this paper we discuss our experiences with Navier-Stokes (NS) codes using central differencing (CD) and scalar artificial dissipation (SAD). The NS-CDSAD codes have been developed by several researchers. Our results confirm that for typical commercial transport wing and wing/body configurations flying at transonic conditions with all turbulent boundary layers, NS-CDSAD codes, when used with the Johnson-King turbulence model, are capable of computing pressure distributions in excellent agreement with experimental data. However, results are not as good when laminar boundary layers are present. Exhaustive 2-D grid refinement studies supported by detailed analysis suggest that the numerical errors associated with SAD severely contaminate the solution in the laminar portion of the boundary layer. It is left as a challenge to the CFD community to find and fix the problems with Navier-Stokes codes and to produce a NS code which converges reliably and properly captures the laminar portion of the boundary layer on a reasonable grid.

  17. Efficient parallel seismic simulations including topography and 3-D material heterogeneities on locally refined composite grids

    NASA Astrophysics Data System (ADS)

    Petersson, Anders; Rodgers, Arthur

    2010-05-01

    The finite difference method on a uniform Cartesian grid is a highly efficient and easy to implement technique for solving the elastic wave equation in seismic applications. However, the spacing in a uniform Cartesian grid is fixed throughout the computational domain, whereas the resolution requirements in realistic seismic simulations usually are higher near the surface than at depth. This can be seen from the well-known formula h ≤ L/P, which relates the grid spacing h to the wavelength L and the required number of grid points per wavelength P for obtaining an accurate solution. The compressional and shear wavelengths in the earth generally increase with depth and are often a factor of ten larger below the Moho discontinuity (at about 30 km depth) than in sedimentary basins near the surface. A uniform grid must have a grid spacing based on the short wavelengths near the surface, which results in over-resolving the solution at depth. As a result, the number of points in a uniform grid is unnecessarily large. In the wave propagation project (WPP) code, we address the over-resolution-at-depth issue by generalizing our previously developed single grid finite difference scheme to work on a composite grid consisting of a set of structured rectangular grids of different spacings, with hanging nodes on the grid refinement interfaces. The computational domain in a regional seismic simulation often extends to depth 40-50 km. Hence, using a refinement ratio of two, we need about three grid refinements from the bottom of the computational domain to the surface to keep the local grid size in approximate parity with the local wavelengths. The challenge of the composite grid approach is to find a stable and accurate method for coupling the solution across the grid refinement interface. Of particular importance is the treatment of the solution at the hanging nodes, i.e., the fine grid points which are located in between coarse grid points. WPP implements a new, energy conserving, coupling procedure for the elastic wave equation at grid refinement interfaces. When used together with our single grid finite difference scheme, it results in a method which is provably stable, without artificial dissipation, for arbitrary heterogeneous isotropic elastic materials. The new coupling procedure is based on satisfying the summation-by-parts principle across refinement interfaces. From a practical standpoint, an important advantage of the proposed method is the absence of tunable numerical parameters, which seldom are appreciated by application experts. In WPP, the composite grid discretization is combined with a curvilinear grid approach that enables accurate modeling of free surfaces on realistic (non-planar) topography. The overall method satisfies the summation-by-parts principle and is stable under a CFL time step restriction. A feature of great practical importance is that WPP automatically generates the composite grid based on the user-provided topography and the depths of the grid refinement interfaces. The WPP code has been verified extensively, for example using the method of manufactured solutions, by solving Lamb's problem, by solving various layer-over-half-space problems and comparing to semi-analytic (FK) results, and by simulating scenario earthquakes where results from other seismic simulation codes are available. WPP has also been validated against seismographic recordings of moderate earthquakes.
WPP performs well on large parallel computers and has been run on up to 32,768 processors using about 26 billion grid points (78 billion DOF) and 41,000 time steps. WPP is an open source code that is available under the GNU General Public License.
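
    The sizing argument behind the composite grid can be checked with back-of-the-envelope arithmetic from h ≤ L/P. The wavelengths, depth bands, and P below are illustrative assumptions, not values from the WPP papers:

```python
P = 10.0   # grid points per shortest wavelength (assumed)
# hypothetical minimum shear wavelengths (m) in three depth bands
bands = [(0e3, 5e3, 1500.0), (5e3, 15e3, 3000.0), (15e3, 50e3, 6000.0)]

h_uniform = min(L for _, _, L in bands) / P
print(f"uniform grid would need h <= {h_uniform:.0f} m everywhere")
for top, bot, L in bands:
    print(f"depth {top/1e3:4.0f}-{bot/1e3:<4.0f} km: "
          f"L = {L:5.0f} m -> h <= {L / P:.0f} m")
# Doubling h across each interface (refinement ratio 2) keeps spacing
# in rough parity with wavelength, avoiding over-resolution at depth.
```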

  18. LOOPREF: A Fluid Code for the Simulation of Coronal Loops

    NASA Technical Reports Server (NTRS)

    deFainchtein, Rosalinda; Antiochos, Spiro; Spicer, Daniel

    1998-01-01

    This report documents the code LOOPREF. LOOPREF is a semi-one-dimensional finite element code that is especially well suited to simulating coronal-loop phenomena. It has a full implementation of adaptive mesh refinement (AMR), which is crucial for this type of simulation. The AMR routines are an improved version of AMR1D. LOOPREF's versatility makes it suitable for simulating a wide variety of problems. In addition to efficiently providing very high resolution in rapidly changing regions of the domain, it is equipped to treat loops of variable cross section, any nonlinear form of heat conduction, shocks, gravitational effects, and radiative loss.

  19. More About Vector Adaptive/Predictive Coding Of Speech

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas C.; Gersho, Allen

    1992-01-01

    Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.

  20. Implementation of Hydrodynamic Simulation Code in Shock Experiment Design for Alkali Metals

    NASA Astrophysics Data System (ADS)

    Coleman, A. L.; Briggs, R.; Gorman, M. G.; Ali, S.; Lazicki, A.; Swift, D. C.; Stubley, P. G.; McBride, E. E.; Collins, G.; Wark, J. S.; McMahon, M. I.

    2017-10-01

    Shock compression techniques enable the investigation of extreme P-T states. In order to probe off-Hugoniot regions of P-T space, target makeup and laser pulse parameters must be carefully designed. HYADES is a hydrodynamic simulation code which has been successfully utilised to simulate shock compression events and refine the experimental parameters required in order to explore new P-T states in alkali metals. Here we describe simulations and experiments on potassium, along with the techniques required to access off-Hugoniot states.

  1. Cosmological Origins of Water

    NASA Astrophysics Data System (ADS)

    Gagliano, Alexander; Taylor, Morgan; Black, William; Smidt, Joseph; Wiggins, Brandon K.

    2018-01-01

    Recent models indicate that the Sun's protoplanetary disk provided insufficient pathways for water formation, as evidenced by [D/H]H2O measurements in asteroids and Earth's oceans. It is therefore likely that the early universe contained sites conducive to water chemistry. This research tracks the timeline and abundance of water using cosmological simulations in Enzo. A 64 Mpc cube of space is evolved from z = 200 to z = 2. Simulations are then centered on a massive halo, and a 26-species reaction network is applied using operator splitting to track water formation rates. Density projection plots with metallicity contours predict regions of water formation, which are then compared to simulated abundances at both galactic and extragalactic scales. Observational signatures of formation sites are further discussed, allowing for additional validation of the simulations used.

  2. Investigating a method of producing "red and dead" galaxies

    NASA Astrophysics Data System (ADS)

    Skory, Stephen

    2010-08-01

    In optical wavelengths, galaxies are observed to be either red or blue. The overall color of a galaxy is due to the distribution of the ages of its stellar population. Galaxies with currently active star formation appear blue, while those with no recent star formation at all (greater than about a Gyr) have only old, red stars. This strong bimodality has led to the idea of star formation quenching, and various proposed physical mechanisms. In this dissertation, I attempt to reproduce with Enzo the results of Naab et al. (2007), in which red and dead galaxies are formed using gravitational quenching, rather than with one of the more typical methods of quenching. My initial attempts are unsuccessful, and I explore the reasons why I think they failed. Then, using simpler methods better suited to Enzo + AMR, I am successful in producing a galaxy that appears to be similar in color and formation history to those in Naab et al. However, quenching is achieved using unphysically high star formation efficiencies, which is a different mechanism than Naab et al. suggest. Preliminary results of a much higher resolution, follow-on simulation of the above show some possible contradiction with the results of Naab et al. Cold gas is streaming into the galaxy to fuel starbursts, while at a similar epoch the galaxies in Naab et al. have largely already ceased forming stars in the galaxy. On the other hand, the results of the high resolution simulation are qualitatively similar to other works in the literature that show a somewhat different gravitational quenching mechanism than Naab et al. I also discuss my work using halo finders to analyze simulated cosmological data, and my work improving the Enzo/AMR analysis tool "yt". This includes two parallelizations of the halo finder HOP (Eisenstein and Hut, 1998), which allow analysis of very large cosmological datasets on parallel machines. The first version is "yt-HOP," which works well for datasets between about 256^3 and 512^3 particles, but has memory bottlenecks as the datasets get larger. These bottlenecks inspired the second version, "Parallel HOP," which is a fully parallelized method and implementation of HOP that has worked on datasets with more than 2048^3 particles on hundreds of processing cores. Both methods are described in detail, as are the various effects of performance-related runtime options. Additionally, both halo finders are subjected to a full suite of performance benchmarks varying both dataset sizes and computational resources used. I conclude with descriptions of four new tools I added to yt. A Parallel Structure Function Generator allows analysis of two-point functions, such as correlation functions, using memory- and workload-parallelism. A Parallel Merger Tree Generator leverages the parallel halo finders in yt, such as Parallel HOP, to build the merger tree of halos in a cosmological simulation, and outputs the result to a SQLite database for simple and powerful data extraction. A Star Particle Analysis toolkit takes a group of star particles and can output the rate of formation as a function of time, and/or a synthetic Spectral Energy Distribution (S.E.D.) using the Bruzual and Charlot (2003) data tables. Finally, a Halo Mass Function toolkit takes as input a list of halo masses and can output the halo mass function for those halos, as well as an analytical fit for those halos using several previously published fits.
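
    For a flavor of what the Halo Mass Function toolkit computes, here is a minimal, self-contained sketch: the number density of halos per logarithmic mass bin from a list of halo masses. The masses and box volume are synthetic placeholders, and no yt API is used:

```python
import numpy as np

rng = np.random.default_rng(1)
masses = 10 ** rng.uniform(9, 14, 20000)   # synthetic halo masses (Msun)
volume = 100.0 ** 3                        # synthetic box volume (Mpc^3)

bins = np.logspace(9, 14, 21)              # 20 logarithmic mass bins
counts, edges = np.histogram(masses, bins=bins)
dn_dlogM = counts / (volume * np.diff(np.log10(edges)))

for lo, hi, n in zip(edges[:-1], edges[1:], dn_dlogM):
    print(f"{lo:9.2e}-{hi:9.2e} Msun: {n:10.3e} halos/Mpc^3/dex")
```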

  3. 40 CFR 261.4 - Exclusions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... exclusion from the definition of solid waste. (18) Petrochemical recovered oil from an associated organic... for benzene (§ 261.24, waste code D018); and (ii) The oil generated by the organic chemical... petroleum refining process. An “associated organic chemical manufacturing facility” is a facility where the...

  4. Albany v. 3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salinger, Andrew; Phipps, Eric; Ostien, Jakob

    2016-01-13

    The Albany code is a general-purpose finite element code for solving partial differential equations (PDEs). Albany is a research code that demonstrates how a PDE code can be built by interfacing many of the open-source software libraries that are released under Sandia's Trilinos project. Part of the mission of Albany is to be a testbed for new Trilinos libraries, to refine their methods, usability, and interfaces. Albany includes hooks to optimization and uncertainty quantification algorithms, including those in Trilinos as well as those in the Dakota toolkit. Because of this, Albany is a desirable starting point for new code development efforts that wish to make heavy use of Trilinos. Albany is both a framework and the host for specific finite element applications. These applications have project names, and can be controlled by configuration options when the code is compiled, but are all developed and released as part of the single Albany code base. These include the LCM, QCAD, FELIX, Aeras, and ATO applications.

  5. Refining the accuracy of validated target identification through coding variant fine-mapping in type 2 diabetes.

    PubMed

    Mahajan, Anubha; Wessel, Jennifer; Willems, Sara M; Zhao, Wei; Robertson, Neil R; Chu, Audrey Y; Gan, Wei; Kitajima, Hidetoshi; Taliun, Daniel; Rayner, N William; Guo, Xiuqing; Lu, Yingchang; Li, Man; Jensen, Richard A; Hu, Yao; Huo, Shaofeng; Lohman, Kurt K; Zhang, Weihua; Cook, James P; Prins, Bram Peter; Flannick, Jason; Grarup, Niels; Trubetskoy, Vassily Vladimirovich; Kravic, Jasmina; Kim, Young Jin; Rybin, Denis V; Yaghootkar, Hanieh; Müller-Nurasyid, Martina; Meidtner, Karina; Li-Gao, Ruifang; Varga, Tibor V; Marten, Jonathan; Li, Jin; Smith, Albert Vernon; An, Ping; Ligthart, Symen; Gustafsson, Stefan; Malerba, Giovanni; Demirkan, Ayse; Tajes, Juan Fernandez; Steinthorsdottir, Valgerdur; Wuttke, Matthias; Lecoeur, Cécile; Preuss, Michael; Bielak, Lawrence F; Graff, Marielisa; Highland, Heather M; Justice, Anne E; Liu, Dajiang J; Marouli, Eirini; Peloso, Gina Marie; Warren, Helen R; Afaq, Saima; Afzal, Shoaib; Ahlqvist, Emma; Almgren, Peter; Amin, Najaf; Bang, Lia B; Bertoni, Alain G; Bombieri, Cristina; Bork-Jensen, Jette; Brandslund, Ivan; Brody, Jennifer A; Burtt, Noël P; Canouil, Mickaël; Chen, Yii-Der Ida; Cho, Yoon Shin; Christensen, Cramer; Eastwood, Sophie V; Eckardt, Kai-Uwe; Fischer, Krista; Gambaro, Giovanni; Giedraitis, Vilmantas; Grove, Megan L; de Haan, Hugoline G; Hackinger, Sophie; Hai, Yang; Han, Sohee; Tybjærg-Hansen, Anne; Hivert, Marie-France; Isomaa, Bo; Jäger, Susanne; Jørgensen, Marit E; Jørgensen, Torben; Käräjämäki, Annemari; Kim, Bong-Jo; Kim, Sung Soo; Koistinen, Heikki A; Kovacs, Peter; Kriebel, Jennifer; Kronenberg, Florian; Läll, Kristi; Lange, Leslie A; Lee, Jung-Jin; Lehne, Benjamin; Li, Huaixing; Lin, Keng-Hung; Linneberg, Allan; Liu, Ching-Ti; Liu, Jun; Loh, Marie; Mägi, Reedik; Mamakou, Vasiliki; McKean-Cowdin, Roberta; Nadkarni, Girish; Neville, Matt; Nielsen, Sune F; Ntalla, Ioanna; Peyser, Patricia A; Rathmann, Wolfgang; Rice, Kenneth; Rich, Stephen S; Rode, Line; Rolandsson, Olov; Schönherr, Sebastian; Selvin, Elizabeth; Small, Kerrin S; Stančáková, Alena; Surendran, Praveen; Taylor, Kent D; Teslovich, Tanya M; Thorand, Barbara; Thorleifsson, Gudmar; Tin, Adrienne; Tönjes, Anke; Varbo, Anette; Witte, Daniel R; Wood, Andrew R; Yajnik, Pranav; Yao, Jie; Yengo, Loïc; Young, Robin; Amouyel, Philippe; Boeing, Heiner; Boerwinkle, Eric; Bottinger, Erwin P; Chowdhury, Rajiv; Collins, Francis S; Dedoussis, George; Dehghan, Abbas; Deloukas, Panos; Ferrario, Marco M; Ferrières, Jean; Florez, Jose C; Frossard, Philippe; Gudnason, Vilmundur; Harris, Tamara B; Heckbert, Susan R; Howson, Joanna M M; Ingelsson, Martin; Kathiresan, Sekar; Kee, Frank; Kuusisto, Johanna; Langenberg, Claudia; Launer, Lenore J; Lindgren, Cecilia M; Männistö, Satu; Meitinger, Thomas; Melander, Olle; Mohlke, Karen L; Moitry, Marie; Morris, Andrew D; Murray, Alison D; de Mutsert, Renée; Orho-Melander, Marju; Owen, Katharine R; Perola, Markus; Peters, Annette; Province, Michael A; Rasheed, Asif; Ridker, Paul M; Rivadineira, Fernando; Rosendaal, Frits R; Rosengren, Anders H; Salomaa, Veikko; Sheu, Wayne H-H; Sladek, Rob; Smith, Blair H; Strauch, Konstantin; Uitterlinden, André G; Varma, Rohit; Willer, Cristen J; Blüher, Matthias; Butterworth, Adam S; Chambers, John Campbell; Chasman, Daniel I; Danesh, John; van Duijn, Cornelia; Dupuis, Josée; Franco, Oscar H; Franks, Paul W; Froguel, Philippe; Grallert, Harald; Groop, Leif; Han, Bok-Ghee; Hansen, Torben; Hattersley, Andrew T; Hayward, Caroline; Ingelsson, Erik; Kardia, Sharon L R; Karpe, Fredrik; Kooner, Jaspal Singh; Köttgen, Anna; 
Kuulasmaa, Kari; Laakso, Markku; Lin, Xu; Lind, Lars; Liu, Yongmei; Loos, Ruth J F; Marchini, Jonathan; Metspalu, Andres; Mook-Kanamori, Dennis; Nordestgaard, Børge G; Palmer, Colin N A; Pankow, James S; Pedersen, Oluf; Psaty, Bruce M; Rauramaa, Rainer; Sattar, Naveed; Schulze, Matthias B; Soranzo, Nicole; Spector, Timothy D; Stefansson, Kari; Stumvoll, Michael; Thorsteinsdottir, Unnur; Tuomi, Tiinamaija; Tuomilehto, Jaakko; Wareham, Nicholas J; Wilson, James G; Zeggini, Eleftheria; Scott, Robert A; Barroso, Inês; Frayling, Timothy M; Goodarzi, Mark O; Meigs, James B; Boehnke, Michael; Saleheen, Danish; Morris, Andrew P; Rotter, Jerome I; McCarthy, Mark I

    2018-04-01

    We aggregated coding variant data for 81,412 type 2 diabetes cases and 370,832 controls of diverse ancestry, identifying 40 coding variant association signals (P < 2.2 × 10⁻⁷); of these, 16 map outside known risk-associated loci. We make two important observations. First, only five of these signals are driven by low-frequency variants: even for these, effect sizes are modest (odds ratio ≤1.29). Second, when we used large-scale genome-wide association data to fine-map the associated variants in their regional context, accounting for the global enrichment of complex trait associations in coding sequence, compelling evidence for coding variant causality was obtained for only 16 signals. At 13 others, the associated coding variants clearly represent 'false leads' with potential to generate erroneous mechanistic inference. Coding variant associations offer a direct route to biological insight for complex diseases and identification of validated therapeutic targets; however, appropriate mechanistic inference requires careful specification of their causal contribution to disease predisposition.
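
    The fine-mapping step summarized above can be illustrated in miniature. Below is a hedged sketch of Wakefield-style approximate Bayes factors and a 99% credible set, assuming only per-variant effect estimates and standard errors; the prior variance W and the example numbers are hypothetical, and the published analysis is far more elaborate (it works in the regional LD context with a coding-enrichment prior).

        import numpy as np

        def log_abf(beta, se, W=0.04):
            # Wakefield-style approximate Bayes factor (log scale) per variant:
            # ABF = sqrt(V/(V+W)) * exp(0.5 * z^2 * W/(V+W)),  V = se^2, z = beta/se
            beta, se = np.asarray(beta), np.asarray(se)
            V = se ** 2
            return 0.5 * np.log(V / (V + W)) + 0.5 * (beta / se) ** 2 * W / (V + W)

        def credible_set(labf, level=0.99):
            # Posterior inclusion probabilities under equal priors, then the
            # smallest set of variants whose probabilities sum past `level`.
            w = np.exp(labf - labf.max())
            pip = w / w.sum()
            order = np.argsort(pip)[::-1]
            n = int(np.searchsorted(np.cumsum(pip[order]), level)) + 1
            return pip, order[:n].tolist()

        pip, cs = credible_set(log_abf([0.25, 0.05, 0.02], [0.04, 0.04, 0.05]))
        print(pip.round(3), cs)    # nearly all posterior mass on the first variant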

  6. FUN3D Grid Refinement and Adaptation Studies for the Ares Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.; Vatsa, Veer; Carlson, Jan-Renee; Park, Mike; Mineck, Raymond E.

    2010-01-01

    This paper presents grid refinement and adaptation studies performed in conjunction with computational aeroelastic analyses of the Ares crew launch vehicle (CLV). The unstructured grids used in this analysis were created with GridTool and VGRID, while the adaptation was performed using the computational fluid dynamics (CFD) code FUN3D with a feature-based adaptation software tool. GridTool was developed by ViGYAN, Inc., while the last three software suites were developed by NASA Langley Research Center. The feature-based adaptation software used here operates by aligning control volumes with shock and Mach line structures and by refining/de-refining where necessary. It does not redistribute node points on the surface. This paper assesses the sensitivity of the complex flow field about a launch vehicle to grid refinement. It also assesses the potential of feature-based grid adaptation to improve the accuracy of CFD analysis for a complex launch vehicle configuration. The feature-based adaptation shows the potential to improve the resolution of shocks and shear layers. Further development of the capability to adapt the boundary layer and surface grids of a tetrahedral grid is required for significant improvements in modeling the flow field.

  7. Structures of Neural Correlation and How They Favor Coding

    PubMed Central

    Franke, Felix; Fiscella, Michele; Sevelev, Maksim; Roska, Botond; Hierlemann, Andreas; da Silveira, Rava Azeredo

    2017-01-01

    The neural representation of information suffers from “noise”—the trial-to-trial variability in the response of neurons. The impact of correlated noise upon population coding has been debated, but a direct connection between theory and experiment remains tenuous. Here, we substantiate this connection and propose a refined theoretical picture. Using simultaneous recordings from a population of direction-selective retinal ganglion cells, we demonstrate that coding benefits from noise correlations. The effect is appreciable already in small populations, yet it is a collective phenomenon. Furthermore, the stimulus-dependent structure of correlation is key. We develop simple functional models that capture the stimulus-dependent statistics. We then use them to quantify the performance of population coding, which depends upon interplays of feature sensitivities and noise correlations in the population. Because favorable structures of correlation emerge robustly in circuits with noisy, nonlinear elements, they will arise and benefit coding beyond the confines of retina. PMID:26796692
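
    How the structure of noise correlations helps or hurts a population code can be made concrete with the standard linear Fisher information, I = f'^T C^-1 f'. The two-cell toy below is a generic illustration under assumed tuning slopes and correlation values, not the authors' functional models.

        import numpy as np

        def linear_fisher_information(fprime, cov):
            # I = f'^T C^{-1} f': discriminability of a small stimulus change
            # given tuning-curve slopes fprime and noise covariance cov.
            return float(fprime @ np.linalg.solve(cov, fprime))

        fprime = np.array([1.0, 0.9])          # two cells with similar tuning slopes
        for rho in (0.0, 0.4, -0.4):           # assumed noise correlation coefficients
            cov = np.array([[1.0, rho], [rho, 1.0]])
            print(f"rho={rho:+.1f}  I={linear_fisher_information(fprime, cov):.2f}")

    For similarly tuned cells, positive correlations reduce the information while negative ones increase it; with heterogeneous tuning the sign of the effect can reverse, which is why the stimulus-dependent correlation structure matters.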

  8. The feasibility of adapting a population-based asthma-specific job exposure matrix (JEM) to NHANES.

    PubMed

    McHugh, Michelle K; Symanski, Elaine; Pompeii, Lisa A; Delclos, George L

    2010-12-01

    To determine the feasibility of applying a job exposure matrix (JEM) for classifying exposures to 18 asthmagens in the National Health and Nutrition Examination Survey (NHANES), 1999-2004, we cross-referenced 490 National Center for Health Statistics job codes used to develop the 40 NHANES occupation groups with 506 JEM job titles and assessed homogeneity in asthmagen exposure across job codes within each occupation group. In total, 399 job codes corresponded to one JEM job title, 32 to more than one job title, and 59 were not in the JEM. Three occupation groups had the same asthmagen exposure across job codes, 11 had no asthmagen exposure, and 26 groups had heterogeneous exposures across job codes. The NHANES classification of occupations limits the use of the JEM to evaluate the association between workplace exposures and asthma, and more refined occupational data are needed to enhance work-related injury/illness surveillance efforts.
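
    The cross-referencing and homogeneity assessment can be sketched as a simple table join. All job codes, titles, and exposures below are hypothetical stand-ins, not the actual NCHS or JEM entries.

        # JEM job title -> set of asthmagen exposures (hypothetical)
        jem_exposures = {"baker": {"flour dust"},
                         "nurse": {"latex", "cleaning agents"},
                         "clerk": set()}
        # NCHS job code -> matching JEM title(s); "999" has no JEM counterpart
        jobcode_to_titles = {"101": ["baker"], "205": ["nurse"],
                             "310": ["clerk"], "999": []}
        occupation_groups = {"food prep": ["101"], "health care": ["205", "999"]}

        for group, codes in occupation_groups.items():
            profiles = []
            for code in codes:
                titles = jobcode_to_titles.get(code, [])
                if titles:
                    exposures = set()
                    for t in titles:
                        exposures |= jem_exposures[t]
                    profiles.append(frozenset(exposures))
                else:
                    profiles.append(None)          # job code absent from the JEM
            known = [p for p in profiles if p is not None]
            if not known:
                status = "not in JEM"
            elif None in profiles or len(set(known)) > 1:
                status = "heterogeneous or incomplete exposure"
            else:
                status = "homogeneous exposure"
            print(f"{group}: {status}")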

  9. Development of structured ICD-10 and its application to computer-assisted ICD coding.

    PubMed

    Imai, Takeshi; Kajino, Masayuki; Sato, Megumi; Ohe, Kazuhiko

    2010-01-01

    This paper presents: (1) a framework for the formal representation of ICD-10, which functions as a bridge between ontological information and natural language expressions; and (2) a methodology for using the formally described ICD-10 for computer-assisted ICD coding. First, we analyzed and structured the meanings of categories in 15 chapters of ICD-10. Then we expanded the structured ICD-10 (S-ICD10) by adding subordinate concepts and labels derived from Japanese Standard Disease Names. The information model describing the formal representation was refined repeatedly; the resultant model includes 74 types of semantic links. We also developed an ICD coding module based on S-ICD10 and a 'Coding Principle,' which achieved high accuracy (>70%) for four chapters. These results not only demonstrate the basic feasibility of our coding framework but might also inform the development of the information model for the formal description framework in the ICD-11 revision.

  10. Transformation of Graphical ECA Policies into Executable PonderTalk Code

    NASA Astrophysics Data System (ADS)

    Romeikat, Raphael; Sinsel, Markus; Bauer, Bernhard

    Rules are becoming more and more important in business modeling and systems engineering and are recognized as a high-level programming paradigm. For effective rule development it is desirable to start at a high level, e.g. with graphical rules, and to refine them later into code of a particular rule language for implementation purposes. A model-driven approach is presented in this paper to transform graphical rules into executable code in a fully automated way. The focus is on event-condition-action policies as a special rule type. These are modeled graphically and translated into the PonderTalk language. The approach may be extended to integrate other rule types and languages as well.
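
    The transformation idea can be sketched as a small code generator. The rule fields and the emitted PonderTalk-like statements below are schematic assumptions, not the exact PonderTalk grammar or the authors' model-driven toolchain.

        from dataclasses import dataclass

        @dataclass
        class ECAPolicy:
            name: str
            event: str          # e.g. a managed-object event
            condition: str      # a boolean guard expression
            action: str         # the operation to invoke

        def to_pondertalk(rule: ECAPolicy) -> str:
            # Emit a PonderTalk-flavoured policy definition (schematic syntax).
            return (f'policy := root/factory/ecapolicy create: "{rule.name}".\n'
                    f'policy event: "{rule.event}";\n'
                    f'       condition: [ {rule.condition} ];\n'
                    f'       action: [ {rule.action} ].\n'
                    f'policy active: true.')

        print(to_pondertalk(ECAPolicy("failover", "linkFailure",
                                      "severity > 3", "rerouteTraffic")))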

  11. CRAC2 model description

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ritchie, L.T.; Alpert, D.J.; Burke, R.P.

    1984-03-01

    The CRAC2 computer code is a revised version of CRAC (Calculation of Reactor Accident Consequences) which was developed for the Reactor Safety Study. This document provides an overview of the CRAC2 code and a description of each of the models used. Significant improvements incorporated into CRAC2 include an improved weather sequence sampling technique, a new evacuation model, and new output capabilities. In addition, refinements have been made to the atmospheric transport and deposition model. Details of the modeling differences between CRAC2 and CRAC are emphasized in the model descriptions.

  12. Reliability of SNOMED-CT Coding by Three Physicians using Two Terminology Browsers

    PubMed Central

    Chiang, Michael F.; Hwang, John C.; Yu, Alexander C.; Casper, Daniel S.; Cimino, James J.; Starren, Justin

    2006-01-01

    SNOMED-CT has been promoted as a reference terminology for electronic health record (EHR) systems. Many important EHR functions are based on the assumption that medical concepts will be coded consistently by different users. This study is designed to measure agreement among three physicians using two SNOMED-CT terminology browsers to encode 242 concepts from five ophthalmology case presentations in a publicly-available clinical journal. Inter-coder reliability, based on exact coding match by each physician, was 44% using one browser and 53% using the other. Intra-coder reliability testing revealed that a different SNOMED-CT code was obtained up to 55% of the time when the two browsers were used by one user to encode the same concept. These results suggest that the reliability of SNOMED-CT coding is imperfect, and may be a function of browsing methodology. A combination of physician training, terminology refinement, and browser improvement may help increase the reproducibility of SNOMED-CT coding. PMID:17238317

  13. Parallel implementation of an adaptive scheme for 3D unstructured grids on the SP2

    NASA Technical Reports Server (NTRS)

    Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10 percent of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all the mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.

  15. Neural correlations enable invariant coding and perception of natural stimuli in weakly electric fish

    PubMed Central

    Metzen, Michael G; Hofmann, Volker; Chacron, Maurice J

    2016-01-01

    Neural representations of behaviorally relevant stimulus features displaying invariance with respect to different contexts are essential for perception. However, the mechanisms mediating their emergence and subsequent refinement remain poorly understood in general. Here, we demonstrate that correlated neural activity allows for the emergence of an invariant representation of natural communication stimuli that is further refined across successive stages of processing in the weakly electric fish Apteronotus leptorhynchus. Importantly, different patterns of input resulting from the same natural communication stimulus occurring in different contexts all gave rise to similar behavioral responses. Our results thus reveal how a generic neural circuit performs an elegant computation that mediates the emergence and refinement of an invariant neural representation of natural stimuli that most likely constitutes a neural correlate of perception. DOI: http://dx.doi.org/10.7554/eLife.12993.001 PMID:27128376

  16. A refined finite element method for bending analysis of laminated plates integrated with piezoelectric fiber-reinforced composite actuators

    NASA Astrophysics Data System (ADS)

    Rouzegar, J.; Abbasi, A.

    2018-03-01

    This research presents a finite element formulation based on four-variable refined plate theory for bending analysis of cross-ply and angle-ply laminated composite plates integrated with a piezoelectric fiber-reinforced composite actuator under electromechanical loading. The four-variable refined plate theory is a simple and efficient higher-order shear deformation theory, which predicts parabolic variation of transverse shear stresses across the plate thickness and satisfies zero traction conditions on the plate free surfaces. The weak form of governing equations is derived using the principle of minimum potential energy, and a 4-node non-conforming rectangular plate element with 8 degrees of freedom per node is introduced for discretizing the domain. Several benchmark problems are solved by the developed MATLAB code and the obtained results are compared with those from exact and other numerical solutions, showing good agreement.

  17. Cartesian Off-Body Grid Adaption for Viscous Time-Accurate Flow Simulation

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2011-01-01

    An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
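
    The undivided second-difference sensor mentioned above is easy to state in one dimension. A minimal sketch, assuming a single flow variable on a uniform grid and a hypothetical threshold; OVERFLOW applies the idea per variable on 3D overset grids.

        import numpy as np

        def refine_flags(q, rel_threshold=0.01):
            # Undivided second difference |q[i-1] - 2 q[i] + q[i+1]| (no dx**2
            # division), compared against a fraction of the field's magnitude.
            d2 = np.abs(q[2:] - 2.0 * q[1:-1] + q[:-2])
            flags = np.zeros(q.shape, dtype=bool)
            flags[1:-1] = d2 > rel_threshold * np.max(np.abs(q))
            return flags

        x = np.linspace(0.0, 1.0, 201)
        q = np.tanh((x - 0.5) / 0.02)            # a smeared, shock-like profile
        print(np.where(refine_flags(q))[0])      # flagged cells cluster at the front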

  18. MUSIC: MUlti-Scale Initial Conditions

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Abel, Tom

    2013-11-01

    MUSIC generates multi-scale initial conditions with multiple levels of refinements for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). MUSIC achieves rms relative errors of the order of 10⁻⁴ for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier space-induced interference ringing.
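
    A single-level, periodic-box version of the first-order step can be sketched briefly: shape Gaussian white noise with a transfer function in Fourier space, then obtain 1LPT (Zel'dovich) displacements from the Poisson solve. The toy spectrum below is an arbitrary assumption, and none of MUSIC's multi-scale, adaptive-convolution machinery is represented.

        import numpy as np

        def zeldovich_1lpt(n=32, box=100.0, seed=7):
            # delta_k: white noise shaped by a toy transfer function; displacements
            # psi_k = i k delta_k / k^2  (psi = -grad phi with lap phi = delta)
            rng = np.random.default_rng(seed)
            k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
            kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
            k2 = kx ** 2 + ky ** 2 + kz ** 2
            k2[0, 0, 0] = 1.0                        # avoid 0/0; zero mode removed below
            delta_k = np.fft.fftn(rng.standard_normal((n, n, n)))
            delta_k *= np.sqrt(k2) ** -1.5           # toy spectrum shaping (assumption)
            delta_k[0, 0, 0] = 0.0
            return [np.real(np.fft.ifftn(1j * kc * delta_k / k2)) for kc in (kx, ky, kz)]

        psx, psy, psz = zeldovich_1lpt()
        print(psx.std(), psy.std(), psz.std())       # statistically isotropic displacements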

  19. The Magnetic Reconnection Code: an AMR-based fully implicit simulation suite

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Bhattacharjee, A.; Ng, C.-S.

    2006-12-01

    Extended MHD models, which incorporate two-fluid effects, are promising candidates to enhance understanding of collisionless reconnection phenomena in laboratory, space and astrophysical plasma physics. In this paper, we introduce two simulation codes in the Magnetic Reconnection Code suite which integrate reduced and full extended MHD models. Numerical integration of these models comes with two challenges. First, small-scale spatial structures, e.g. thin current sheets, develop and must be well resolved by the code; adaptive mesh refinement (AMR) is employed to provide high resolution where needed while maintaining good performance. Second, the two-fluid effects in extended MHD give rise to dispersive waves, which lead to a very stringent CFL condition for explicit codes, while reconnection happens on a much slower time scale. We therefore use a fully implicit Crank-Nicolson time-stepping algorithm. Since no efficient preconditioners are available for our system of equations, we instead use a direct solver to handle the inner linear solves. This requires us to actually compute the Jacobian matrix, which is handled by a code generator that calculates the derivative symbolically and then outputs code to calculate it.
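
    The symbolic-Jacobian strategy can be mimicked in a few lines with sympy on a toy stiff ODE system; the actual suite generates Jacobian code for its extended-MHD discretization, which this sketch does not attempt.

        import numpy as np
        import sympy as sp

        # Toy stiff system u' = f(u); derive J = df/du symbolically once and
        # "generate" fast numerical routines for the Newton solves via lambdify.
        u1, u2 = sp.symbols("u1 u2")
        f_sym = sp.Matrix([-1000.0 * u1 + u2, u1 - u2])
        f = sp.lambdify((u1, u2), f_sym, "numpy")
        J = sp.lambdify((u1, u2), f_sym.jacobian([u1, u2]), "numpy")

        def crank_nicolson_step(u, dt, iters=5):
            # Solve  u_new - u - dt/2 (f(u_new) + f(u)) = 0  by Newton iteration,
            # with a direct linear solve in place of a preconditioned Krylov method.
            fu = np.asarray(f(*u), dtype=float).ravel()
            un = u.copy()
            for _ in range(iters):
                res = un - u - 0.5 * dt * (np.asarray(f(*un), dtype=float).ravel() + fu)
                A = np.eye(len(u)) - 0.5 * dt * np.asarray(J(*un), dtype=float)
                un = un - np.linalg.solve(A, res)
            return un

        u = np.array([1.0, 0.0])
        for _ in range(10):
            u = crank_nicolson_step(u, dt=0.01)   # stable despite the stiff -1000 mode
        print(u)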

  20. Tree-based solvers for adaptive mesh refinement code FLASH - I: gravity and optical depths

    NASA Astrophysics Data System (ADS)

    Wünsch, R.; Walch, S.; Dinnbier, F.; Whitworth, A.

    2018-04-01

    We describe an OctTree algorithm for the MPI parallel, adaptive mesh refinement code FLASH, which can be used to calculate the gas self-gravity, and also the angle-averaged local optical depth, for treating ambient diffuse radiation. The algorithm communicates to the different processors only those parts of the tree that are needed to perform the tree-walk locally. The advantage of this approach is a relatively low memory requirement, important in particular for the optical depth calculation, which needs to process information from many different directions. This feature also enables a general tree-based radiation transport algorithm that will be described in a subsequent paper, and delivers excellent scaling up to at least 1500 cores. Boundary conditions for gravity can be either isolated or periodic, and they can be specified in each direction independently, using a newly developed generalization of the Ewald method. The gravity calculation can be accelerated with the adaptive block update technique by partially re-using the solution from the previous time-step. Comparison with the FLASH internal multigrid gravity solver shows that tree-based methods provide a competitive alternative, particularly for problems with isolated or mixed boundary conditions. We evaluate several multipole acceptance criteria (MACs) and identify a relatively simple approximate partial error MAC which provides high accuracy at low computational cost. The optical depth estimates are found to agree very well with those of the RADMC-3D radiation transport code, with the tree-solver being much faster. Our algorithm is available in the standard release of the FLASH code in version 4.0 and later.
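
    The essentials of such a tree solver fit in a short sketch: an octree with monopole moments and a geometric multipole acceptance criterion (MAC). This generic Barnes-Hut-style toy omits everything that makes the FLASH solver distinctive (MPI communication of tree sections, Ewald-based periodic and mixed boundaries, optical-depth sums, adaptive block updates).

        import numpy as np

        def build(pts, ms, centre, half, leaf=1):
            # Octree node: total mass, centre of mass, half-width, children.
            node = {"mass": ms.sum(), "com": (pts * ms[:, None]).sum(0) / ms.sum(),
                    "half": half, "children": None, "pts": None}
            if len(pts) <= leaf:
                node["pts"] = (pts, ms)
                return node
            node["children"] = []
            for o in range(8):
                sgn = np.array([(o >> d) & 1 for d in range(3)]) * 2 - 1
                c = centre + sgn * half / 2.0
                inside = np.all((pts >= c - half / 2.0) & (pts < c + half / 2.0), axis=1)
                if inside.any():
                    node["children"].append(build(pts[inside], ms[inside], c, half / 2.0, leaf))
            return node

        def accel(node, x, theta=0.5, eps=1e-3):
            if node["children"] is None:              # leaf: direct sum over particles
                p, m = node["pts"]
                d = p - x
                r = np.sqrt((d ** 2).sum(1)) + eps    # softening also suppresses self-force
                return (m[:, None] * d / r[:, None] ** 3).sum(0)
            d = node["com"] - x
            r = np.linalg.norm(d) + eps
            if 2.0 * node["half"] / r < theta:        # geometric MAC: cell is far enough
                return node["mass"] * d / r ** 3      # accept the monopole approximation
            return sum(accel(c, x, theta, eps) for c in node["children"])

        rng = np.random.default_rng(0)
        pts, ms = rng.random((200, 3)), np.full(200, 1.0 / 200)
        root = build(pts, ms, np.full(3, 0.5), 0.5)
        print(accel(root, np.array([0.5, 0.5, 0.5])))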

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burton, D.E.; Miller, D.S.; Palmer, T.

    The authors describe FLAG, a 3D adaptive free-Lagrange method for unstructured grids. The grid elements are 3D polygons, which move with the flow and are refined or reconnected as necessary to achieve uniform accuracy. The authors stress that they were able to construct a 3D hydro version of this code in 3 months, using an object-oriented FORTRAN approach.

  2. Performance of a Block Structured, Hierarchical Adaptive Mesh Refinement Code on the 64k Node IBM BlueGene/L Computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenough, Jeffrey A.; de Supinski, Bronis R.; Yates, Robert K.

    2005-04-25

    We describe the performance of the block-structured Adaptive Mesh Refinement (AMR) code Raptor on the 32k node IBM BlueGene/L computer. This machine represents a significant step forward towards petascale computing. As such, it presents Raptor with many challenges for utilizing the hardware efficiently. In terms of performance, Raptor shows excellent weak and strong scaling when running in single level mode (no adaptivity). Hardware performance monitors show Raptor achieves an aggregate performance of 3.0 Tflops in the main integration kernel on the 32k system. Results from preliminary AMR runs on a prototype astrophysical problem demonstrate the efficiency of the current software when running at large scale. The BG/L system is enabling a physics problem to be considered that represents a factor of 64 increase in overall size compared to the largest ones of this type computed to date. Finally, we provide a description of the development work currently underway to address our inefficiencies.

  3. A Multi-Scale, Multi-Physics Optimization Framework for Additively Manufactured Structural Components

    NASA Astrophysics Data System (ADS)

    El-Wardany, Tahany; Lynch, Mathew; Gu, Wenjiong; Hsu, Arthur; Klecka, Michael; Nardi, Aaron; Viens, Daniel

    This paper proposes an optimization framework enabling the integration of multi-scale/multi-physics simulation codes to perform structural design optimization for additively manufactured components. Cold spray was selected as the additive manufacturing (AM) process and its constraints were identified and included in the optimization scheme. The developed framework first utilizes topology optimization to maximize stiffness for the conceptual design. The subsequent step applies shape optimization to refine the design for stress-life fatigue. The component weight was reduced by 20%, while stresses were reduced by 75% and the rigidity was improved by 37%. The framework and analysis codes were implemented using Altair software as well as an in-house loading code. The optimized design was subsequently produced by the cold spray process.

  4. Advances in Monte-Carlo code TRIPOLI-4®'s treatment of the electromagnetic cascade

    NASA Astrophysics Data System (ADS)

    Mancusi, Davide; Bonin, Alice; Hugot, François-Xavier; Malouch, Fadhel

    2018-01-01

    TRIPOLI-4® is a Monte-Carlo particle-transport code developed at CEA-Saclay (France) that is employed in the domains of nuclear-reactor physics, criticality-safety, shielding/radiation protection and nuclear instrumentation. The goal of this paper is to report on current developments, validation and verification made in TRIPOLI-4 in the electron/positron/photon sector. The new capabilities and improvements concern refinements to the electron transport algorithm, the introduction of a charge-deposition score, the new thick-target bremsstrahlung option, the upgrade of the bremsstrahlung model and the improvement of electron angular straggling at low energy. The importance of each of the developments above is illustrated by comparisons with calculations performed with other codes and with experimental data.

  5. Global magnetosphere simulations using constrained-transport Hall-MHD with CWENO reconstruction

    NASA Astrophysics Data System (ADS)

    Lin, L.; Germaschewski, K.; Maynard, K. M.; Abbott, S.; Bhattacharjee, A.; Raeder, J.

    2013-12-01

    We present a new CWENO (Centrally-Weighted Essentially Non-Oscillatory) reconstruction-based MHD solver for the OpenGGCM global magnetosphere code. The solver was built using libMRC, a library for creating efficient parallel PDE solvers on structured grids. The use of libMRC gives us access to its core functionality: an automated code-generation framework that takes a user-provided PDE right-hand side in symbolic form and generates efficient, architecture-specific parallel code. libMRC also supports block-structured adaptive mesh refinement and implicit time stepping through integration with the PETSc library. We validate the new CWENO Hall-MHD solver against existing solvers both in standard test problems and in global magnetosphere simulations.

  6. Proceedings of the First NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)

    2009-01-01

    Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.

  7. The Clawpack Community of Codes

    NASA Astrophysics Data System (ADS)

    Mandli, K. T.; LeVeque, R. J.; Ketcheson, D.; Ahmadia, A. J.

    2014-12-01

    Clawpack, the Conservation Laws Package, has long been one of the standards for solving hyperbolic conservation laws, but over the years it has extended well beyond this role. Today a community of open-source codes has been developed that addresses a multitude of different needs, including non-conservative balance laws, high-order accurate methods, and parallelism, while remaining extensible and easy to use, largely by the judicious use of Python and the original Fortran codes that it wraps. This talk will present some of the recent developments in projects under the Clawpack umbrella, notably the GeoClaw and PyClaw projects. GeoClaw was originally developed as a tool for simulating tsunamis using adaptive mesh refinement but has since encompassed a large number of other geophysically relevant flows, including storm surge and debris flows. PyClaw originated as a Python version of the original Clawpack algorithms but has since become both a testing ground for new algorithmic advances in the Clawpack framework and an easily extensible framework for solving hyperbolic balance laws. Some of these extensions include the addition of WENO high-order methods, massively parallel capabilities, and adaptive mesh refinement technologies, made possible largely by the flexibility of the Python language and community libraries such as NumPy and PETSc. Because of the tight integration with Python technologies, both packages have also benefited from the focus on reproducibility in the Python community, notably IPython notebooks.

  8. Refined modeling of Seattle basin amplification

    NASA Astrophysics Data System (ADS)

    Vidale, J. E.; Wirth, E. A.; Frankel, A. D.; Baker, B.; Thompson, M.; Han, J.; Nasser, M.; Stephenson, W. J.

    2016-12-01

    The Seattle Basin has long been recognized to modulate shaking in western Washington earthquakes (e.g., Frankel, 2007 USGS OFR). The amplification of shaking in such deep sedimentary basins is a challenge to estimate and incorporate into mitigation plans. This project aims to (1) study the influence of basin edges on trapping and amplifying seismic waves, and (2) use the latest earthquake data to refine our models of basin structure. To interrogate the influence of basin edges on ground motion, we use the numerical codes SpecFEM3D and Disfd (a finite-difference code from Pengcheng Liu), and an update of the basin model of Stephenson et al. (2007), to calculate synthetic ground motions at frequencies up to 1 Hz. As an example, we compute the amplification relative to a simple 1/r amplitude decay for four sources around the Seattle Basin, each with an EW-striking, 45°-dipping thrust mechanism at 10 km depth. We test the difficulty of simulating motions in the presence of slow materials near the basin edge. Running SpecFEM3D with attenuation is about a third as fast as the finite-difference code, and it cannot represent sub-element structure (e.g., slow surficial materials) in comparable detail to the finer FD grid, but it has the advantage of being able to incorporate topography and water. Modeling 1 Hz energy in the presence of shear-wave velocities with a floor of 600 m/s, factor of 2 to 3 velocity contrasts, and sharp basin edges is fraught, both in calculating synthetics and in estimating the real structure. We plan to incorporate interpretations of local recordings, including basin-bottom S-to-P conversions, noise-correlation waveforms, and teleseismic P-wave reverberations, to refine the basin model. Our long-term goal is to reassess with greater accuracy and resolution the spatial pattern of hazard across the Seattle Basin, which includes several quite vulnerable neighborhoods.

  9. Children's Behavior in the Postanesthesia Care Unit: The Development of the Child Behavior Coding System-PACU (CBCS-P)

    PubMed Central

    Tan, Edwin T.; Martin, Sarah R.; Fortier, Michelle A.; Kain, Zeev N.

    2012-01-01

    Objective: To develop and validate a behavioral coding measure, the Children's Behavior Coding System-PACU (CBCS-P), for children's distress and nondistress behaviors while in the postanesthesia recovery unit. Methods: A multidisciplinary team examined videotapes of children in the PACU and developed a coding scheme that subsequently underwent a refinement process (CBCS-P). To examine the reliability and validity of the coding system, 121 children and their parents were videotaped during their stay in the PACU. Participants were healthy children undergoing elective, outpatient surgery and general anesthesia. The CBCS-P was utilized and objective data from medical charts (analgesic consumption and pain scores) were extracted to establish validity. Results: Kappa values indicated good-to-excellent (κ > .65) interrater reliability of the individual codes. The CBCS-P had good criterion validity when compared to children's analgesic consumption and pain scores. Conclusions: The CBCS-P is a reliable, observational coding method that captures children's distress and nondistress postoperative behaviors. These findings highlight the importance of considering context in both the development and application of observational coding schemes. PMID:22167123
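
    The kappa statistic used here is straightforward to compute. A minimal sketch for two coders and hypothetical behavior codes (the CBCS-P categories and data are not reproduced):

        from collections import Counter

        def cohens_kappa(coder_a, coder_b):
            # Chance-corrected agreement for two coders labelling the same events.
            n = len(coder_a)
            observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
            ca, cb = Counter(coder_a), Counter(coder_b)
            expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n ** 2
            return (observed - expected) / (1.0 - expected)

        a = ["distress", "calm", "calm", "cry", "calm", "distress"]
        b = ["distress", "calm", "cry", "cry", "calm", "calm"]
        print(round(cohens_kappa(a, b), 2))      # 0.48 for this made-up pair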

  10. Three-dimensional local grid refinement for block-centered finite-difference groundwater models using iteratively coupled shared nodes: A new method of interpolation and analysis of errors

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2004-01-01

    This paper describes work that extends to three dimensions the two-dimensional local-grid refinement method for block-centered finite-difference groundwater models of Mehl and Hill [Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes. Adv Water Resour 2002;25(5):497-511]. In this approach, the (parent) finite-difference grid is discretized more finely within a (child) sub-region. The grid refinement method sequentially solves each grid and uses specified flux (parent) and specified head (child) boundary conditions to couple the grids. Iteration achieves convergence between heads and fluxes of both grids. Of most concern is how to interpolate heads onto the boundary of the child grid such that the physics of the parent-grid flow is retained in three dimensions. We develop a new two-step, "cage-shell" interpolation method based on the solution of the flow equation on the boundary of the child between nodes shared with the parent grid. Error analysis using a test case indicates that the shared-node local grid refinement method with cage-shell boundary head interpolation is accurate and robust, and the resulting code is used to investigate three-dimensional local grid refinement of stream-aquifer interactions. Results reveal that (1) the parent and child grids interact to shift the true head and flux solution to a different solution where the heads and fluxes of both grids are in equilibrium, (2) the locally refined model provided a solution for both heads and fluxes in the region of the refinement that was more accurate than a model without refinement only if iterations are performed so that both heads and fluxes are in equilibrium, and (3) the accuracy of the coupling is limited by the parent-grid size: a coarse parent grid limits correct representation of the hydraulics in the feedback from the child grid.
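
    The iterate-to-equilibrium coupling loop has a compact one-dimensional analogue. The sketch below is a hypothetical steady 1D flow problem (a well inside the child interval, fixed heads at the outer parent boundaries), not the block-centered MODFLOW implementation; it shows the specified-head (child) and specified-flux (parent) exchange converging.

        T, Q = 1.0, 1.0                   # transmissivity; well withdrawal (assumed units)
        hL, hR = 10.0, 10.0               # fixed parent heads at x = 0 and x = 1
        a, b, xw = 0.3, 0.7, 0.5          # child interval [a, b] and well location

        h_a, h_b = hL, hR                 # interface heads: initial guess from parent
        for sweep in range(20):
            # Child solve: piecewise-linear head with slopes s1, s2 on either
            # side of the well; the flux jump at the well equals the withdrawal Q.
            s1 = (h_b - h_a - Q / T * (b - xw)) / (b - a)
            s2 = s1 + Q / T
            # Parent solve: sourceless outer subdomains carry the child's
            # interface fluxes (-T*s1 at a, -T*s2 at b) out to the fixed heads.
            h_a_new = hL + s1 * a
            h_b_new = hR - s2 * (1.0 - b)
            if abs(h_a_new - h_a) + abs(h_b_new - h_b) < 1e-12:
                break
            h_a, h_b = h_a_new, h_b_new
        print(sweep, h_a, h_b)            # converged, flux-consistent interface heads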

  11. Simulations of Laboratory Astrophysics Experiments using the CRASH code

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Kuranz, Carolyn; Manuel, Mario; Keiter, Paul; Drake, R. P.

    2014-10-01

    Computer simulations can assist in the design and analysis of laboratory astrophysics experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport, electron heat conduction, and laser ray tracing. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze, including Kelvin-Helmholtz, Rayleigh-Taylor, imploding bubble, and interacting jet experiments. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via Grant DEFC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, Grant Number DE-NA0001840, and by the National Laser User Facility Program, Grant Number DE-NA0000850.

  12. Reference Solutions for Benchmark Turbulent Flows in Three Dimensions

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Pandya, Mohagna J.; Rumsey, Christopher L.

    2016-01-01

    A grid convergence study is performed to establish benchmark solutions for turbulent flows in three dimensions (3D) in support of the turbulence-model verification campaign at the Turbulence Modeling Resource (TMR) website. The three benchmark cases are subsonic flows around a 3D bump and a hemisphere-cylinder configuration, and a supersonic internal flow through a square duct. Reference solutions are computed for the Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras turbulence model, using a linear eddy-viscosity model for the external flows and a nonlinear eddy-viscosity model based on a quadratic constitutive relation for the internal flow. The study involves three widely used practical computational fluid dynamics codes developed and supported at NASA Langley Research Center: FUN3D, USM3D, and CFL3D. Reference steady-state solutions computed with these three codes on families of consistently refined grids are presented. Grid-to-grid and code-to-code variations are described in detail.
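
    For scalar outputs, grid-to-grid variation on consistently refined grids is usually summarized by the observed order of accuracy and a Richardson estimate of the grid-converged value. A generic sketch with hypothetical drag-coefficient numbers (not TMR data):

        import numpy as np

        def observed_order(f_coarse, f_medium, f_fine, r=2.0):
            # Observed order p from three solutions with constant refinement ratio r.
            return np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)

        def richardson(f_medium, f_fine, p, r=2.0):
            # Extrapolated estimate of the infinite-grid value.
            return f_fine + (f_fine - f_medium) / (r ** p - 1.0)

        cd = (0.02834, 0.02815, 0.02810)         # hypothetical coarse/medium/fine values
        p = observed_order(*cd)
        print(f"p = {p:.2f}, extrapolated = {richardson(cd[1], cd[2], p):.6f}")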

  13. Neoclassical orbit calculations with a full-f code for tokamak edge plasmas

    NASA Astrophysics Data System (ADS)

    Rognlien, T. D.; Cohen, R. H.; Dorr, M.; Hittinger, J.; Xu, X. Q.; Colella, P.; Martin, D.

    2008-11-01

    Ion distribution function modifications are considered for the case of neoclassical orbit widths comparable to plasma radial-gradient scale-lengths. Implementation of proper boundary conditions at divertor plates in the continuum TEMPEST code, including the effect of drifts in determining the direction of total flow, enables such calculations in single-null divertor geometry, with and without an electrostatic potential. The resultant poloidal asymmetries in densities, temperatures, and flows are discussed. For long-time simulations, a slow numerical instability develops, even in simplified (circular) geometry with no endloss, which aids identification of the mixed treatment of parallel and radial convection terms as the cause. The new Edge Simulation Laboratory code, expected to be operational, has algorithmic refinements that should address the instability. We will present any available results from the new code on this problem as well as geodesic acoustic mode tests.

  14. Reynolds-Averaged Navier-Stokes Solutions to Flat Plate Film Cooling Scenarios

    NASA Technical Reports Server (NTRS)

    Johnson, Perry L.; Shyam, Vikram; Hah, Chunill

    2011-01-01

    The predictions of several Reynolds-averaged Navier-Stokes solutions for a baseline film cooling geometry are analyzed and compared with experimental data. The Fluent finite volume code was used to perform the computations with the realizable k-epsilon turbulence model. The film hole was angled at 35° to the crossflow with a Reynolds number of 17,400. Multiple length-to-diameter ratios (1.75 and 3.5) as well as momentum flux ratios (0.125 and 0.5) were simulated with various domains, boundary conditions, and grid refinements. The coolant-to-mainstream density ratio was maintained at 2.0 for all scenarios. Computational domain and boundary condition variations show the ability to reduce the computational cost as compared to previous studies. A number of grid refinement and coarsening variations are compared for further insights into the reduction of computational cost. Liberal refinement in the near-hole region is valuable, especially for higher momentum jets that tend to lift off and create a recirculating flow. A lack of proper refinement in the near-hole region can severely diminish the accuracy of the solution, even in the far region. The effects of momentum ratio and hole length-to-diameter ratio are also discussed.

  15. A Methodology for Optimizing the Training and Utilization of Physical Therapy Personnel.

    ERIC Educational Resources Information Center

    Dumas, Neil S.; Muthard, John E.

    A method for analyzing the work in a department of physical therapy was devised and applied in a teaching hospital. Physical therapists, trained as observer-investigators, helped refine the coding system and were able to reliably record job behavior in the physical therapy department. The nature of the therapist's and aide's job was described and…

  16. Refinement of the X-linked cataract locus (CXN) and gene analysis for CXN and Nance-Horan syndrome (NHS).

    PubMed

    Brooks, Simon; Ebenezer, Neil; Poopalasundaram, Subathra; Maher, Eamonn; Francis, Peter; Moore, Anthony; Hardcastle, Alison

    2004-06-01

    The X-linked congenital cataract (CXN) locus has been mapped to a 3-cM (approximately 3.5 Mb) interval on chromosome Xp22.13, which is syntenic to the mouse cataract disease locus Xcat and encompasses the recently refined Nance-Horan syndrome (NHS) locus. A positional cloning strategy has been adopted to identify the causative gene. In an attempt to refine the CXN locus, seven microsatellites were analysed within 21 individuals of a CXN family. Haplotypes were reconstructed confirming disease segregation with markers on Xp22.13. In addition, a proximal cross-over was observed between markers S3 and S4, thereby refining the CXN disease interval by approximately 400 kb to 3.2 Mb, flanked by markers DXS9902 and S4. Two known genes (RAI2 and RBBP7) and a novel gene (TL1) were screened for mutations within an affected male from the CXN family and an NHS family by direct sequencing of coding exons and intron-exon splice sites. No mutations or polymorphisms were identified, therefore excluding them as disease-causative in CXN and NHS. In conclusion, the CXN locus has been successfully refined and excludes PPEF1 as a candidate gene. A further three candidates were excluded based on sequence analysis. Future positional cloning efforts will focus on the region of overlap between CXN, Xcat, and NHS.

  17. GWM-2005 - A Groundwater-Management Process for MODFLOW-2005 with Local Grid Refinement (LGR) Capability

    USGS Publications Warehouse

    Ahlfeld, David P.; Baker, Kristine M.; Barlow, Paul M.

    2009-01-01

    This report describes the Groundwater-Management (GWM) Process for MODFLOW-2005, the 2005 version of the U.S. Geological Survey modular three-dimensional groundwater model. GWM can solve a broad range of groundwater-management problems by combined use of simulation- and optimization-modeling techniques. These problems include limiting groundwater-level declines or streamflow depletions, managing groundwater withdrawals, and conjunctively using groundwater and surface-water resources. GWM was initially released for the 2000 version of MODFLOW. Several modifications and enhancements have been made to GWM since its initial release to increase the scope of the program's capabilities and to improve its operation and reporting of results. The new code, which is called GWM-2005, also was designed to support the local grid refinement capability of MODFLOW-2005. Local grid refinement allows for the simulation of one or more higher resolution local grids (referred to as child models) within a coarser grid parent model. Local grid refinement is often needed to improve simulation accuracy in regions where hydraulic gradients change substantially over short distances or in areas requiring detailed representation of aquifer heterogeneity. GWM-2005 can be used to formulate and solve groundwater-management problems that include components in both parent and child models. Although local grid refinement increases simulation accuracy, it can also substantially increase simulation run times.

  18. Consistency, Verification, and Validation of Turbulence Models for Reynolds-Averaged Navier-Stokes Applications

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2009-01-01

    In current practice, it is often difficult to draw firm conclusions about turbulence model accuracy when performing multi-code CFD studies ostensibly using the same model because of inconsistencies in model formulation or implementation in different codes. This paper describes an effort to improve the consistency, verification, and validation of turbulence models within the aerospace community through a website database of verification and validation cases. Some of the variants of two widely-used turbulence models are described, and two independent computer codes (one structured and one unstructured) are used in conjunction with two specific versions of these models to demonstrate consistency with grid refinement for several representative problems. Naming conventions, implementation consistency, and thorough grid resolution studies are key factors necessary for success.

  19. Vector Potential Generation for Numerical Relativity Simulations

    NASA Astrophysics Data System (ADS)

    Silberman, Zachary; Faber, Joshua; Adams, Thomas; Etienne, Zachariah; Ruchlin, Ian

    2017-01-01

    Many different numerical codes are employed in studies of highly relativistic magnetized accretion flows around black holes. Depending on the formalism each uses, some codes evolve the magnetic field vector B, while others evolve the magnetic vector potential A, the two being related by the curl: B = curl(A). Here, we discuss how to generate vector potentials corresponding to specified magnetic fields on staggered grids, a surprisingly difficult task on finite cubic domains. The code we have developed solves this problem in two ways: a brute-force method, whose scaling is nearly linear in the number of grid cells, and a direct linear algebra approach. We discuss the success both algorithms have in generating smooth vector potential configurations and how both may be extended to more complicated cases involving multiple mesh-refinement levels. NSF ACI-1550436
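
    For a fully periodic domain (no boundaries), the inversion is simple in Fourier space under the Coulomb gauge, A_k = i k x B_k / k^2; the finite staggered-grid case that motivates the brute-force and direct linear-algebra solvers above is precisely where this shortcut fails. A minimal periodic sketch:

        import numpy as np

        def vector_potential_from_B(Bx, By, Bz, L=1.0):
            # Invert B = curl(A) spectrally (Coulomb gauge); assumes div(B) = 0
            # and periodic, cell-centred fields, unlike the staggered-grid problem.
            n = Bx.shape[0]
            k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
            kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
            k2 = kx ** 2 + ky ** 2 + kz ** 2
            k2[0, 0, 0] = 1.0                       # the k = 0 mode has no curl part
            Bkx, Bky, Bkz = (np.fft.fftn(c) for c in (Bx, By, Bz))
            Akx = 1j * (ky * Bkz - kz * Bky) / k2
            Aky = 1j * (kz * Bkx - kx * Bkz) / k2
            Akz = 1j * (kx * Bky - ky * Bkx) / k2
            return tuple(np.real(np.fft.ifftn(c)) for c in (Akx, Aky, Akz))

    One can verify the construction by taking a numerical curl of the returned components, which reproduces the divergence-free part of the input field.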

  20. Introduction of the ASGARD Code

    NASA Technical Reports Server (NTRS)

    Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv; Fayock, Brian

    2017-01-01

    ASGARD stands for 'Automated Selection and Grouping of events in AIA Regional Data'. The code is a refinement of the event detection method in Ugarte-Urra & Warren (2014). It is intended to automatically detect and group brightenings ('events') in the AIA EUV channels, to record event parameters, and to find related events over multiple channels. Ultimately, the goal is to automatically determine heating and cooling timescales in the corona and to significantly increase statistics in this respect. The code is written in IDL and requires the SolarSoft library. It is parallelized and can run with multiple CPUs. Input files are regions of interest (ROIs) in time series of AIA images from the JSOC cutout service (http://jsoc.stanford.edu/ajax/exportdata.html). The ROIs need to be tracked, co-registered, and limited in time (typically 12 hours).

  1. High Performance Fortran for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Zima, Hans; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    This paper focuses on the use of High Performance Fortran (HPF) for important classes of algorithms employed in aerospace applications. HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications, while delegating to the compiler/runtime system the task of generating explicitly parallel message-passing programs. We begin by providing a short overview of the HPF language. This is followed by a detailed discussion of the efficient use of HPF for applications involving multiple structured grids such as multiblock and adaptive mesh refinement (AMR) codes as well as unstructured grid codes. We focus on the data structures and computational structures used in these codes and on the high-level strategies that can be expressed in HPF to optimally exploit the parallelism in these algorithms.

  2. Tsunami modelling with adaptively refined finite volume methods

    USGS Publications Warehouse

    LeVeque, R.J.; George, D.L.; Berger, M.J.

    2011-01-01

    Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows.

  3. Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods

    NASA Astrophysics Data System (ADS)

    Kozdon, J. E.; Wilcox, L.

    2013-12-01

    Our goal is to develop scalable and adaptive (spatial and temporal) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite-volume-based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite-volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library and temporal adaptivity will be accomplished through local time stepping. In this presentation we will present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.

  4. ColDICE: A parallel Vlasov–Poisson solver using moving adaptive simplicial tessellation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sousbie, Thierry, E-mail: tsousbie@gmail.com; Department of Physics, The University of Tokyo, Tokyo 113-0033; Research Center for the Early Universe, School of Science, The University of Tokyo, Tokyo 113-0033

    2016-09-15

    Resolving numerically Vlasov–Poisson equations for initially cold systems can be reduced to following the evolution of a three-dimensional sheet evolving in six-dimensional phase-space. We describe a public parallel numerical algorithm that represents the phase-space sheet with a conforming, self-adaptive simplicial tessellation whose vertices follow the Lagrangian equations of motion. The algorithm is implemented in both six- and four-dimensional phase-space. Refinement of the tessellation mesh is performed using the bisection method and a local representation of the phase-space sheet at second order, relying on additional tracers created when needed at runtime. In order to preserve the Hamiltonian nature of the system as well as possible, refinement is anisotropic and constrained by measurements of local Poincaré invariants. Resolution of the Poisson equation is performed using the fast Fourier method on a regular rectangular grid, similarly to particle-in-cell codes. To compute the density projected onto this grid, the intersection of the tessellation and the grid is calculated using the method of Franklin and Kankanhalli [65–67] generalised to linear order. As preliminary tests of the code, we study in four-dimensional phase-space the evolution of an initially small patch in a chaotic potential and the cosmological collapse of a fluctuation composed of two sinusoidal waves. We also perform a "warm" dark matter simulation in six-dimensional phase-space that we use to check the parallel scaling of the code.

  5. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mignone, A.; Tzeferacos, P.; Zanni, C.

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potential of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  6. Application of FUN3D Solver for Aeroacoustics Simulation of a Nose Landing Gear Configuration

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Lockard, David P.; Khorrami, Mehdi R.

    2011-01-01

    Numerical simulations have been performed for a nose landing gear configuration corresponding to the experimental tests conducted in the Basic Aerodynamic Research Tunnel at NASA Langley Research Center. A widely used unstructured grid code, FUN3D, is examined for solving the unsteady flow field associated with this configuration. A series of successively finer unstructured grids has been generated to assess the effect of grid refinement. Solutions have been obtained on purely tetrahedral grids as well as mixed element grids using hybrid RANS/LES turbulence models. The agreement of FUN3D solutions with experimental data on the same size mesh is better on mixed element grids compared to pure tetrahedral grids, and in general improves with grid refinement.

  7. Coping with changing controlled vocabularies.

    PubMed Central

    Cimino, J. J.; Clayton, P. D.

    1994-01-01

    For the foreseeable future, controlled medical vocabularies will be in a constant state of development, expansion and refinement. Changes in controlled vocabularies must be reconciled with historical patient information which is coded using those vocabularies and stored in clinical databases. This paper explores the kinds of changes that can occur in controlled vocabularies, including adding terms (simple additions, refinements, redundancy and disambiguation), deleting terms, changing terms (major and minor name changes), and other special situations (obsolescence, discovering redundancy, and precoordination). Examples are drawn from actual changes appearing in the 1993 update to the International Classification of Diseases (ICD9-CM). The methods being used at Columbia-Presbyterian Medical Center to reconcile its Medical Entities Dictionary and its clinical database are discussed. PMID:7949906
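
    A first step in any such reconciliation is classifying the term-level differences between vocabulary versions. A minimal sketch with hypothetical entries (real change detection must also handle code reuse, merges, and meaning shifts, as the paper discusses):

        def classify_changes(old, new):
            # old, new: dicts mapping code -> preferred name in two versions.
            added   = {c: new[c] for c in new.keys() - old.keys()}
            deleted = {c: old[c] for c in old.keys() - new.keys()}
            renamed = {c: (old[c], new[c])
                       for c in old.keys() & new.keys() if old[c] != new[c]}
            return added, deleted, renamed

        v_old = {"428.0": "Congestive heart failure", "250.0": "Diabetes mellitus"}
        v_new = {"428.0": "Congestive heart failure, unspecified",
                 "250.00": "Diabetes mellitus without complication"}
        print(classify_changes(v_old, v_new))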

  8. Adaptive computational methods for aerothermal heating analysis

    NASA Technical Reports Server (NTRS)

    Price, John M.; Oden, J. Tinsley

    1988-01-01

    The development of adaptive gridding techniques for finite-element analysis of fluid dynamics equations is described. The developmental work was done with the Euler equations, concentrating on capturing shocks and inviscid flow-field features. Ultimately this methodology is to be applied to a viscous analysis for the purpose of predicting accurate aerothermal loads on complex shapes subjected to high-speed flow environments. The development of local error estimate strategies as a basis for refinement strategies is discussed, as well as the refinement strategies themselves. The application of the strategies to triangular elements and a finite-element flux-corrected-transport numerical scheme is presented. The implementation of these strategies in the GIM/PAGE code for 2-D and 3-D applications is documented and demonstrated.

  9. Spontaneous mutual ordering of nucleic acids and proteins.

    PubMed

    Wills, Peter R

    2014-12-01

    It is proposed that the prebiotic ordering of nucleic acid and peptide sequences was a cooperative process in which nearly random populations of both kinds of polymers went through a codependent series of self-organisation events that simultaneously refined not only the accuracy of genetic replication and coding but also the functional specificity of protein catalysts, especially nascent aminoacyl-tRNA synthetase "urzymes".

  10. Grid Work

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Pointwise, Inc.'s Gridgen software is a system for the generation of 3D (three-dimensional) multiple-block, structured grids. Gridgen is a visually oriented, graphics-based interactive code used to decompose a 3D domain into blocks, distribute grid points on curves, initialize and refine grid points on surfaces, and initialize volume grid points. Gridgen is available to U.S. citizens and American-owned companies by license.

  11. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  12. Software Model Checking Without Source Code

    NASA Technical Reports Server (NTRS)

    Chaki, Sagar; Ivers, James

    2009-01-01

    We present a framework, called AIR, for verifying safety properties of assembly language programs via software model checking. AIR extends the applicability of predicate abstraction and counterexample guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, AIR allows verification of programs for which source code is unavailable (such as legacy and COTS software) and programs that use features (such as pointers, structures, and object orientation) that are problematic for source-level software verification tools. In addition, AIR makes no assumptions about the underlying compiler technology. We have implemented a prototype of AIR and present encouraging results on several non-trivial examples.
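
    Counterexample-guided abstraction refinement, which AIR extends to assembly-level programs, follows a standard loop: abstract the program, model-check the abstraction, test whether a counterexample is real, and refine if it is spurious. A schematic sketch of that control flow follows; the four heavyweight operations are placeholder callables, and this is not AIR's implementation.

    ```python
    def cegar(program, prop, abstract, model_check, is_spurious, refine):
        """Generic counterexample-guided abstraction refinement (CEGAR) loop.
        The callables stand in for the heavy machinery: predicate abstraction,
        model checking, counterexample replay, and predicate discovery."""
        predicates = set()
        while True:
            model = abstract(program, predicates)       # over-approximate program
            cex = model_check(model, prop)
            if cex is None:
                return "property holds"                 # safe abstraction => safe program
            if not is_spurious(program, cex):
                return ("property violated", cex)       # real counterexample
            # The counterexample is an artifact of over-approximation: learn
            # predicates that rule it out and retry with a finer abstraction.
            predicates |= refine(program, cex)
    ```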

  13. Modeling Turbulent Combustion for Variable Prandtl and Schmidt Number

    NASA Technical Reports Server (NTRS)

    Hassan, H. A.

    2004-01-01

    This report consists of two abstracts submitted for possible presentation at the AIAA Aerospace Sciences Meeting to be held in January 2005. Since the submittal of these abstracts, we have continued refining the model coefficients derived for the case of a variable turbulent Prandtl number. The test cases being investigated are a Mach 9.2 flow over a degree ramp and a Mach 8.2 3-D calculation of crossing shocks. We have developed an axisymmetric code for treating axisymmetric flows. In addition, the variable Schmidt number formulation was incorporated into the code, and we are in the process of determining the model constants.

  14. Buckling Load Calculations of the Isotropic Shell A-8 Using a High-Fidelity Hierarchical Approach

    NASA Technical Reports Server (NTRS)

    Arbocz, Johann; Starnes, James H.

    2002-01-01

    As a step towards developing a new design philosophy, one that moves away from the traditional empirical approach used in design today towards a science-based design technology approach, a test series of 7 isotropic shells carried out by Arbocz and Babcock at Caltech is used. It is shown how the hierarchical approach to buckling load calculations proposed by Arbocz et al. can be used to perform an approach often called 'high-fidelity analysis', where the uncertainties involved in a design are simulated by refined and accurate numerical methods. The Delft Interactive Shell DEsign COde (DISDECO, for short) is employed for this hierarchical analysis to provide an accurate prediction of the critical buckling load of the given shell structure. This value is used later as a reference to establish the accuracy of the Level-3 buckling load predictions. As a final step in the hierarchical analysis approach, the critical buckling load and the estimated imperfection sensitivity of the shell are verified by conducting an analysis using a sufficiently refined finite element model with one of the current generation of two-dimensional shell analysis codes with the advanced capabilities needed to represent both geometric and material nonlinearities.

  15. Verification of fluid-structure-interaction algorithms through the method of manufactured solutions for actuator-line applications

    NASA Astrophysics Data System (ADS)

    Vijayakumar, Ganesh; Sprague, Michael

    2017-11-01

    Demonstrating expected convergence rates with spatial- and temporal-grid refinement is the ``gold standard'' of code and algorithm verification. However, the lack of analytical solutions and the difficulty of generating manufactured solutions present challenges for verifying codes for complex systems. The application of the method of manufactured solutions (MMS) to the verification of coupled multi-physics phenomena like fluid-structure interaction (FSI) has only recently been investigated. While many FSI algorithms for aeroelastic phenomena have focused on boundary-resolved CFD simulations, the actuator-line representation of the structure is widely used for FSI simulations in wind-energy research. In this work, we demonstrate the verification of an FSI algorithm using MMS for actuator-line CFD simulations with a simplified structural model. We use a manufactured solution for the fluid velocity field and the displacement of the spring-mass-damper (SMD) system. We demonstrate the convergence of both the fluid and structural solvers to second-order accuracy with grid and time-step refinement. This work was funded by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Wind Energy Technologies Office, under Contract No. DE-AC36-08-GO28308 with the National Renewable Energy Laboratory.
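
    The convergence check at the heart of this kind of verification reduces to one formula: with errors e measured against the manufactured solution on grids refined by a ratio r, the observed order of accuracy is p = log(e_h / e_{h/r}) / log(r), which should approach the scheme's design order (here, 2). A minimal sketch:

    ```python
    import numpy as np

    def observed_order(errors, refinement_ratio=2.0):
        """Observed order of accuracy from errors on successively refined grids:
        p = log(e_h / e_{h/r}) / log(r)."""
        e = np.asarray(errors, dtype=float)
        return np.log(e[:-1] / e[1:]) / np.log(refinement_ratio)

    # Illustrative numbers: errors dropping 4x per refinement indicate order 2.
    print(observed_order([1.0e-2, 2.5e-3, 6.25e-4]))   # -> [2. 2.]
    ```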

  16. TranAir: A full-potential, solution-adaptive, rectangular grid code for predicting subsonic, transonic, and supersonic flows about arbitrary configurations. Theory document

    NASA Technical Reports Server (NTRS)

    Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.

    1992-01-01

    A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.
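
    The nonlinear solution strategy named here, a Krylov method embedded in an inexact Newton iteration, can be sketched generically: each Newton step solves the linearized system only loosely, with a forcing tolerance tied to the current residual. The toy below is illustrative and is not TranAir's solver; it assumes SciPy 1.12 or later, where gmres takes an rtol keyword (older releases call it tol).

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def inexact_newton(F, J, x0, tol=1e-10, max_iter=50):
        """Inexact Newton: solve J(x) dx = -F(x) only loosely with GMRES,
        tightening the forcing term as the residual shrinks."""
        x = x0.copy()
        for _ in range(max_iter):
            r = F(x)
            if np.linalg.norm(r) < tol:
                break
            eta = min(0.1, np.sqrt(np.linalg.norm(r)))   # forcing term
            A = LinearOperator((x.size, x.size), matvec=lambda v: J(x) @ v,
                               dtype=float)
            dx, _ = gmres(A, -r, rtol=eta)
            x += dx
        return x

    # Toy system: F(x) = x**3 - 1 component-wise, with a diagonal Jacobian.
    F = lambda x: x**3 - 1.0
    J = lambda x: np.diag(3.0 * x**2)
    print(inexact_newton(F, J, np.full(3, 2.0)))         # -> [1. 1. 1.]
    ```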

  17. On a High-Fidelity Hierarchical Approach to Buckling Load Calculations

    NASA Technical Reports Server (NTRS)

    Arbocz, Johann; Starnes, James H.; Nemeth, Michael P.

    2001-01-01

    As a step towards developing a new design philosophy, one that moves away from the traditional empirical approach used in design today towards a science-based design technology approach, a recent test series of 5 composite shells carried out by Waters at NASA Langley Research Center is used. It is shown how the hierarchical approach to buckling load calculations proposed by Arbocz et al. can be used to perform an approach often called "high-fidelity analysis", where the uncertainties involved in a design are simulated by refined and accurate numerical methods. The Delft Interactive Shell DEsign COde (DISDECO, for short) is employed for this hierarchical analysis to provide an accurate prediction of the critical buckling load of the given shell structure. This value is used later as a reference to establish the accuracy of the Level-3 buckling load predictions. As a final step in the hierarchical analysis approach, the critical buckling load and the estimated imperfection sensitivity of the shell are verified by conducting an analysis using a sufficiently refined finite element model with one of the current generation of two-dimensional shell analysis codes with the advanced capabilities needed to represent both geometric and material nonlinearities.

  18. Testing and refining the Science in Risk Assessment and Policy (SciRAP) web-based platform for evaluating the reliability and relevance of in vivo toxicity studies.

    PubMed

    Beronius, Anna; Molander, Linda; Zilliacus, Johanna; Rudén, Christina; Hanberg, Annika

    2018-05-28

    The Science in Risk Assessment and Policy (SciRAP) web-based platform was developed to promote and facilitate structure and transparency in the evaluation of ecotoxicity and toxicity studies for hazard and risk assessment of chemicals. The platform includes sets of criteria and a colour-coding tool for evaluating the reliability and relevance of individual studies. The SciRAP method for evaluating in vivo toxicity studies was first published in 2014 and the aim of the work presented here was to evaluate and develop that method further. Toxicologists and risk assessors from different sectors and geographical areas were invited to test the SciRAP criteria and tool on a specific set of in vivo toxicity studies and to provide feedback concerning the scientific soundness and user-friendliness of the SciRAP approach. The results of this expert assessment were used to refine and improve both the evaluation criteria and the colour-coding tool. It is expected that the SciRAP web-based platform will continue to be developed and enhanced to keep up to date with the needs of end-users. Copyright © 2018 John Wiley & Sons, Ltd.

  19. Simplifying healthful choices: a qualitative study of a physical activity based nutrition label format

    PubMed Central

    2013-01-01

    Background This study used focus groups to pilot and evaluate a new nutrition label format and refine the label design. Physical activity equivalent labels present calorie information in terms of the amount of physical activity that would be required to expend the calories in a specified food item. Methods Three focus groups with a total of twenty participants discussed food choices and nutrition labeling. They provided information on comprehension, usability and acceptability of the label. A systematic coding process was used to apply descriptive codes to the data and to identify emerging themes and attitudes. Results Participants in all three groups were able to comprehend the label format. Discussion about label format focused on issues including gender of the depicted figure, physical fitness of the figure, preference for walking or running labels, and preference for information in miles or minutes. Feedback from earlier focus groups was used to refine the labels in an iterative process. Conclusions In contrast to calorie labels, participants shown physical activity labels asked and answered, “How does this label apply to me?” This shift toward personalized understanding may indicate that physical activity labels offer an advantage over currently available nutrition labels. PMID:23742678

  20. Modeling Laboratory Astrophysics Experiments using the CRASH code

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Drake, R. P.; Grosskopf, Michael; Bauerle, Matthew; Kuranz, Carolyn; Keiter, Paul; Malamud, Guy; Crash Team

    2013-10-01

    The understanding of high energy density systems can be advanced by laboratory astrophysics experiments. Computer simulations can assist in the design and analysis of these experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport and electron heat conduction. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze, including radiative shock, Kelvin-Helmholtz, Rayleigh-Taylor, plasma-sheet, and interacting-jet experiments. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DE-FC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  1. Aerodynamic analysis of three advanced configurations using the TranAir full-potential code

    NASA Technical Reports Server (NTRS)

    Madson, M. D.; Carmichael, R. L.; Mendoza, J. P.

    1989-01-01

    Computational results are presented for three advanced configurations: the F-16A with wing tip missiles and under wing fuel tanks, the Oblique Wing Research Aircraft, and an Advanced Turboprop research model. These results were generated by the latest version of the TranAir full potential code, which solves for transonic flow over complex configurations. TranAir embeds a surface paneled geometry definition in a uniform rectangular flow field grid, thus avoiding the use of surface conforming grids, and decoupling the grid generation process from the definition of the configuration. The new version of the code locally refines the uniform grid near the surface of the geometry, based on local panel size and/or user input. This method distributes the flow field grid points much more efficiently than the previous version of the code, which solved for a grid that was uniform everywhere in the flow field. TranAir results are presented for the three configurations and are compared with wind tunnel data.

  2. FUN3D and CFL3D Computations for the First High Lift Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Lee-Rausch, Elizabeth M.; Rumsey, Christopher L.

    2011-01-01

    Two Reynolds-averaged Navier-Stokes codes were used to compute flow over the NASA Trapezoidal Wing at high lift conditions for the 1st AIAA CFD High Lift Prediction Workshop, held in Chicago in June 2010. The unstructured-grid code FUN3D and the structured-grid code CFL3D were applied to several different grid systems. The effects of code, grid system, turbulence model, viscous term treatment, and brackets were studied. The SST model on this configuration predicted lower lift than the Spalart-Allmaras model at high angles of attack; the Spalart-Allmaras model agreed better with experiment. Neglecting viscous cross-derivative terms caused poorer prediction in the wing tip vortex region. Output-based grid adaptation was applied to the unstructured-grid solutions. The adapted grids better resolved wake structures and reduced flap flow separation, which was also observed in uniform grid refinement studies. Limitations of the adaptation method as well as areas for future improvement were identified.

  3. Hypersonic vehicle simulation model: Winged-cone configuration

    NASA Technical Reports Server (NTRS)

    Shaughnessy, John D.; Pinckney, S. Zane; Mcminn, John D.; Cruz, Christopher I.; Kelley, Marie-Louise

    1990-01-01

    Aerodynamic, propulsion, and mass models for a generic, horizontal-takeoff, single-stage-to-orbit (SSTO) configuration are presented which are suitable for use in point mass as well as batch and real-time six degree-of-freedom simulations. The simulations can be used to investigate ascent performance issues and to allow research, refinement, and evaluation of integrated guidance/flight/propulsion/thermal control systems, design concepts, and methodologies for SSTO missions. Aerodynamic force and moment coefficients are given as functions of angle of attack, Mach number, and control surface deflections. The model data were estimated by using a subsonic/supersonic panel code and a hypersonic local surface inclination code. Thrust coefficient and engine specific impulse were estimated using a two-dimensional forebody, inlet, nozzle code and a one-dimensional combustor code and are given as functions of Mach number, dynamic pressure, and fuel equivalence ratio. Rigid-body mass moments of inertia and center of gravity location are functions of vehicle weight which is in turn a function of fuel flow.

  4. Disruption of Giant Molecular Clouds by Massive Star Clusters

    NASA Astrophysics Data System (ADS)

    Harper-Clark, Elizabeth

    The lifetime of a Giant Molecular Cloud (GMC) and the total mass of stars that form within it are crucial to the understanding of star formation rates across a whole galaxy. In particular, the stars within a GMC may dictate its disruption and the quenching of further star formation. Indeed, observations show that the Milky Way contains GMCs with extensive expanding bubbles while the most massive stars are still alive. Simulating entire GMCs is challenging due to the large variety of physics that needs to be included and the computational power required to accurately simulate a GMC over tens of millions of years. Using the radiative-magneto-hydrodynamic code Enzo, I have run many simulations of GMCs. I obtain robust results for the fraction of gas converted into stars and the lifetimes of the GMCs: (A) In simulations with no stellar outputs (or "feedback''), clusters form at a rate of 30% of GMC mass per free-fall time; the GMCs were not disrupted but contained forming stars. (B) When ionization gas pressure or radiation pressure was included in the simulations, separately or together, star formation was quenched at between 5% and 21% of the original GMC mass. The clouds were fully disrupted within two dynamical times after the first cluster formed. The radiation pressure contributed the most to the disruption of the GMC and fully quenched star formation even without ionization. (C) Simulations that included supernovae showed that they are not dynamically important to GMC disruption and have only minor effects on subsequent star formation. (D) The inclusion of a magnetic field of a few microgauss across the cloud slightly reduced the star formation rate but accelerated GMC disruption by reducing bubble-shell disruption and leaking. These simulations show that newborn stars quench further star formation and completely disrupt the parent GMC. The low star formation rate and the short lifetimes of GMCs shown here can explain the low star formation rate across the whole galaxy.

  5. Postnatal reduction of BDNF regulates the developmental remodeling of taste bud innervation

    PubMed Central

    Huang, Tao; Ma, Liqun; Krimm, Robin F

    2015-01-01

    The refinement of innervation is a common developmental mechanism that serves to increase the specificity of connections following initial innervation. In the peripheral gustatory system, the extent to which innervation is refined and how refinement might be regulated is unclear. The initial innervation of taste buds is controlled by brain-derived neurotrophic factor (BDNF). Following initial innervation, taste receptor cells are added and become newly innervated. The connections between the taste receptor cells and nerve fibers are likely to be specific in order to retain peripheral coding mechanisms. Here, we explored the possibility that the down-regulation of BDNF regulates the refinement of taste bud innervation during postnatal development. An analysis of BDNF expression in Bdnf(lacZ/+) mice and real-time reverse transcription polymerase chain reaction (RT-PCR) revealed that BDNF was down-regulated between postnatal day (P) 5 and P10. This reduction in BDNF expression was due to a loss of precursor/progenitor cells that express BDNF, while the expression of BDNF in the subpopulations of taste receptor cells did not change. Gustatory innervation, which was identified by P2X3 immunohistochemistry, was lost around the perimeter where most progenitor/precursor cells are located. In addition, the density of innervation in the taste bud was reduced between P5 and P10, because taste buds increase in size without increasing innervation. This reduction of innervation density was blocked by the overexpression of BDNF in the precursor/progenitor population of taste bud cells. Together, these findings indicate that the restriction of BDNF to a subpopulation of taste receptor cells between P5 and P10 results in a refinement of gustatory innervation. We speculate that this refinement results in an increased specificity of connections between neurons and taste receptor cells during development. PMID:26164656

  6. Structural Genomics of Bacterial Virulence Factors

    DTIC Science & Technology

    2006-05-01

    ... positioned in the unit cell by Molecular Replacement (Protein Data Bank (PDB) ID code 1acc) using MOLREP, and refined with REFMAC version 5.0 (ref. 24) ... increase our understanding of the molecular mechanisms of pathogenicity, putting us in a stronger position to anticipate and react to emerging ... term, the accumulated structural information will generate important and testable hypotheses that will increase our understanding of the molecular ...

  7. Model systems: how chemical biologists study RNA

    PubMed Central

    Rios, Andro C.; Tor, Yitzhak

    2009-01-01

    Ribonucleic acids are structurally and functionally sophisticated biomolecules and the use of models, frequently truncated or modified sequences representing functional domains of the natural systems, is essential to their exploration. Functional non-coding RNAs such as miRNAs, riboswitches, and, in particular, ribozymes, have changed the view of RNA’s role in biology and its catalytic potential. The well-known truncated hammerhead model has recently been refined and new data provide a clearer molecular picture of the elements responsible for its catalytic power. A model for the spliceosome, a massive and highly intricate ribonucleoprotein, is also emerging, although its true utility is yet to be cemented. Such catalytic model systems could also serve as “chemo-paleontological” tools, further refining the RNA world hypothesis and its relevance to the origin and evolution of life. PMID:19879179

  8. The GeoClaw software for depth-averaged flows with adaptive refinement

    USGS Publications Warehouse

    Berger, M.J.; George, D.L.; LeVeque, R.J.; Mandli, Kyle T.

    2011-01-01

    Many geophysical flow or wave propagation problems can be modeled with two-dimensional depth-averaged equations, of which the shallow water equations are the simplest example. We describe the GeoClaw software that has been designed to solve problems of this nature, consisting of open source Fortran programs together with Python tools for the user interface and flow visualization. This software uses high-resolution shock-capturing finite volume methods on logically rectangular grids, including latitude-longitude grids on the sphere. Dry states are handled automatically to model inundation. The code incorporates adaptive mesh refinement to allow the efficient solution of large-scale geophysical problems. Examples are given illustrating its use for modeling tsunamis and dam-break flooding problems. Documentation and download information are available at www.clawpack.org/geoclaw.
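
    For orientation, the simplest member of this equation family can be stepped with a few lines of code. The sketch below advances the 1D shallow water equations with the diffusive Lax-Friedrichs flux; GeoClaw itself uses high-resolution wave-propagation Riemann solvers with adaptive refinement, so this is only a minimal illustration of the same equations.

    ```python
    import numpy as np

    g = 9.81  # gravitational acceleration (m/s^2)

    def swe_flux(q):
        """Physical flux of the 1D shallow water equations, q = (h, hu)."""
        h, hu = q
        return np.array([hu, hu**2 / h + 0.5 * g * h**2])

    def lax_friedrichs_step(q, dx, dt):
        """One conservative finite-volume update with the Lax-Friedrichs flux."""
        qL, qR = q[:, :-1], q[:, 1:]
        f = 0.5 * (swe_flux(qL) + swe_flux(qR)) - 0.5 * (dx / dt) * (qR - qL)
        qnew = q.copy()
        qnew[:, 1:-1] -= (dt / dx) * (f[:, 1:] - f[:, :-1])
        return qnew  # boundary cells left unchanged (crude outflow)

    # Dam-break initial condition: deep water on the left, shallow on the right.
    x = np.linspace(0.0, 1.0, 201)
    q = np.vstack([np.where(x < 0.5, 2.0, 1.0), np.zeros_like(x)])
    for _ in range(100):
        q = lax_friedrichs_step(q, dx=x[1] - x[0], dt=1e-3)
    ```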

  9. CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. II. GRAY RADIATION HYDRODYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, W.; Almgren, A.; Bell, J.

    We describe the development of a flux-limited gray radiation solver for the compressible astrophysics code, CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. The gray radiation solver is based on a mixed-frame formulation of radiation hydrodynamics. In our approach, the system is split into two parts, one part that couples the radiation and fluid in a hyperbolic subsystem, and another parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem is solved explicitly with a high-order Godunov scheme, whereas the parabolic part is solved implicitly with a first-order backward Euler method.
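
    The split described above can be illustrated in one dimension: an explicit update for the hyperbolic part, then a backward Euler solve, (I - dt D L) E_new = E*, for the diffusion part, with L a discrete Laplacian. The sketch below is a generic toy, not CASTRO's gray solver; the explicit operator is left as a placeholder.

    ```python
    import numpy as np
    from scipy.sparse import diags, identity
    from scipy.sparse.linalg import spsolve

    def split_step(E, explicit_update, D, dx, dt):
        """One hyperbolic/parabolic split step: explicit sub-step, then
        backward Euler for diffusion, (I - dt*D*L) E_new = E_star."""
        E_star = explicit_update(E, dt)                # explicit hyperbolic part
        n = E.size
        L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
        A = identity(n) - dt * D * L
        return spsolve(A.tocsc(), E_star)              # implicit parabolic part

    # Sharp radiation pulse, diffused one step (placeholder explicit operator).
    E = np.exp(-np.linspace(-1, 1, 101) ** 2 / 0.01)
    E = split_step(E, explicit_update=lambda E, dt: E, D=1.0, dx=0.02, dt=1e-3)
    ```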

  10. Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations

    NASA Astrophysics Data System (ADS)

    Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.

    2012-09-01

    Computer simulations are important in current cosmological research. Those simulations run in parallel on thousands of processors, and produce huge amount of data. Adaptive mesh refinement is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses Octree adaptive mesh refinement. Compared to grid based AMR, the Octree AMR has the advantage to fit very precisely the adaptive resolution of the grid to the local problem complexity. However, this specific octree data type need some specific software to be visualized, as generic visualization tools works on Cartesian grid data type. This is why the PYMSES software has been also developed by our team. It relies on the python scripting language to ensure a modular and easy access to explore those specific data. In order to take advantage of the High Performance Computer which runs the RAMSES simulation, it also uses MPI and multiprocessing to run some parallel code. We would like to present with more details our PYMSES software with some performance benchmarks. PYMSES has currently two visualization techniques which work directly on the AMR. The first one is a splatting technique, and the second one is a custom ray tracing technique. Both have their own advantages and drawbacks. We have also compared two parallel programming techniques with the python multiprocessing library versus the use of MPI run. The load balancing strategy has to be smartly defined in order to achieve a good speed up in our computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.

  11. Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lomov, I; Pember, R; Greenough, J

    2005-10-18

    We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses a hierarchical, structured grid approach first developed by Berger and Oliger (1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors during the separate advances. The methodology presented here is based on a single grid algorithm developed for multimaterial gas dynamics by Colella et al. (1993), refined by Greenough et al. (1995), and extended to the solution of solid mechanics problems with significant strength by Lomov and Rubin (2003). The single grid algorithm uses a second-order Godunov scheme with an approximate single fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified due to the fact that highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion will be solely concerned with hydrodynamics modeling. Results from a number of simulations for flows with and without strength will be presented.
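
    The recursive coarse-then-fine-then-synchronize procedure described above has a compact control flow, sketched schematically below; the grid operations are supplied by a hypothetical `hierarchy` object, and the sketch is not the Geodyn/Raptor implementation.

    ```python
    def advance(level, t, dt, hierarchy, refinement_ratio=2):
        """Schematic Berger-Oliger-style advance: step this level once, step the
        next finer level refinement_ratio times to reach the same time, then
        synchronize to remove conservation errors at coarse-fine boundaries.
        `hierarchy` is a hypothetical object supplying the grid operations."""
        hierarchy.step_grids(level, t, dt)             # advance grids on this level
        if level + 1 < hierarchy.num_levels():
            fine_dt = dt / refinement_ratio
            for k in range(refinement_ratio):          # subcycle the finer level
                advance(level + 1, t + k * fine_dt, fine_dt, hierarchy)
            hierarchy.synchronize(level)               # e.g. reflux, average down
        return t + dt
    ```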

  12. Planned updates and refinements to the central valley hydrologic model, with an emphasis on improving the simulation of land subsidence in the San Joaquin Valley

    USGS Publications Warehouse

    Faunt, C.C.; Hanson, R.T.; Martin, P.; Schmid, W.

    2011-01-01

    California's Central Valley has been one of the most productive agricultural regions in the world for more than 50 years. To better understand the groundwater availability in the valley, the U.S. Geological Survey (USGS) developed the Central Valley hydrologic model (CVHM). Because of recent water-level declines and renewed subsidence, the CVHM is being updated to better simulate the geohydrologic system. The CVHM updates and refinements can be grouped into two general categories: (1) model code changes and (2) data updates. The CVHM updates and refinements will require that the model be recalibrated. The updated CVHM will provide a detailed transient analysis of changes in groundwater availability and flow paths in relation to climatic variability, urbanization, stream flow, and changes in irrigated agricultural practices and crops. The updated CVHM is particularly focused on more accurately simulating the locations and magnitudes of land subsidence. The intent of the updated CVHM is to help scientists better understand the availability and sustainability of water resources and the interaction of groundwater levels with land subsidence.

  13. Planned updates and refinements to the Central Valley hydrologic model with an emphasis on improving the simulation of land subsidence in the San Joaquin Valley

    USGS Publications Warehouse

    Faunt, Claudia C.; Hanson, Randall T.; Martin, Peter; Schmid, Wolfgang

    2011-01-01

    California's Central Valley has been one of the most productive agricultural regions in the world for more than 50 years. To better understand the groundwater availability in the valley, the U.S. Geological Survey (USGS) developed the Central Valley hydrologic model (CVHM). Because of recent water-level declines and renewed subsidence, the CVHM is being updated to better simulate the geohydrologic system. The CVHM updates and refinements can be grouped into two general categories: (1) model code changes and (2) data updates. The CVHM updates and refinements will require that the model be recalibrated. The updated CVHM will provide a detailed transient analysis of changes in groundwater availability and flow paths in relation to climatic variability, urbanization, stream flow, and changes in irrigated agricultural practices and crops. The updated CVHM is particularly focused on more accurately simulating the locations and magnitudes of land subsidence. The intent of the updated CVHM is to help scientists better understand the availability and sustainability of water resources and the interaction of groundwater levels with land subsidence.

  14. Lifting the weight of a diagnosis-related groups family change: a comparison between refined and non-refined DRG systems for top-down cost accounting and efficiency indicators.

    PubMed

    Zlotnik, Alexander; Cuchi, Miguel Alfaro; Pérez Pérez, Maria Carmen

    Public healthcare providers in all Spanish Regions - Autonomous Communities (ACs) use All Patients Diagnosis-Related Groups (AP-DRGs) for billing non-insured patients, cost accounting and inpatient efficiency indicators. A national migration to All Patients Refined Diagnosis-Related Groups (APR-DRGs) has been scheduled for 2016. The analysis was performed on 202,912 inpatient care episodes ranging from 2005 to 2010. All episodes were grouped using AP-DRG v25.0 and APR-DRG v24.0. Normalised DRG weight variations for an AP-DRG to APR-DRG migration scenario were calculated and compared. Major differences exist between normalised weights for inpatient episodes depending on the DRGs family used. The usage of the APR-DRG system in Spain without any adjustments, as it was developed in the United States, should be approached with care. In order to avoid reverse incentives and provider financial risks, coding practices should be reviewed and structural differences between DRG families taken into account.
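
    At its core, a normalised DRG cost weight is the mean cost of a group's episodes divided by the mean cost over all episodes, so a weight of 1.0 marks an average-cost episode. A hedged sketch of comparing weights under the two groupings follows; the DRG codes and costs are invented for illustration.

    ```python
    import pandas as pd

    def normalised_weights(episodes: pd.DataFrame, group_col: str) -> pd.Series:
        """Normalised cost weight per DRG: group mean cost / overall mean cost."""
        return episodes.groupby(group_col)["cost"].mean() / episodes["cost"].mean()

    # Invented episodes coded under both families (codes and costs are made up).
    df = pd.DataFrame({
        "ap_drg":  ["127", "127", "541", "541"],
        "apr_drg": ["194-2", "194-3", "194-3", "194-4"],
        "cost":    [3000.0, 4200.0, 5600.0, 9100.0],
    })
    print(normalised_weights(df, "ap_drg"))   # weights under the old grouping
    print(normalised_weights(df, "apr_drg"))  # weights under the refined grouping
    ```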

  15. Synthesizing Safety Conditions for Code Certification Using Meta-Level Programming

    NASA Technical Reports Server (NTRS)

    Eusterbrock, Jutta

    2004-01-01

    In code certification the code consumer publishes a safety policy and the code producer generates a proof that the produced code is in compliance with the published safety policy. In this paper, a novel viewpoint is taken towards a re-use-oriented implementation framework for code certification. It adopts ingredients from Necula's approach for proof-carrying code, but in this work safety properties can be analyzed at a higher code level than assembly language instructions. The framework consists of three parts: (1) The specification language is extended to include generic pre-conditions that shall ensure safety at all states that can be reached during program execution. Actual safety requirements can be expressed by providing domain-specific definitions for the generic predicates, which act as the interface to the environment. (2) The Floyd-Hoare inductive assertion method is refined to obtain proof rules that allow the derivation of the proof obligations in terms of the generic safety predicates. (3) A meta-interpreter is designed and experimentally implemented that enables automatic synthesis of proof obligations for submitted programs by applying the modified Floyd-Hoare rules. The proof obligations have two separate conjuncts, one for functional correctness and another for the generic safety obligations. Proof of the generic obligations, with the actual safety definitions provided as context, ensures domain-specific safety of program execution in a particular environment and is simpler than full program verification.

  16. Maestro and Castro: Simulation Codes for Astrophysical Flows

    NASA Astrophysics Data System (ADS)

    Zingale, Michael; Almgren, Ann; Beckner, Vince; Bell, John; Friesen, Brian; Jacobs, Adam; Katz, Maximilian P.; Malone, Christopher; Nonaka, Andrew; Zhang, Weiqun

    2017-01-01

    Stellar explosions are multiphysics problems—modeling them requires the coordinated input of gravity solvers, reaction networks, radiation transport, and hydrodynamics together with microphysics recipes to describe the physics of matter under extreme conditions. Furthermore, these models involve following a wide range of spatial and temporal scales, which puts tough demands on simulation codes. We developed the codes Maestro and Castro to meet the computational challenges of these problems. Maestro uses a low Mach number formulation of the hydrodynamics to efficiently model convection. Castro solves the fully compressible radiation hydrodynamics equations to capture the explosive phases of stellar phenomena. Both codes are built upon the BoxLib adaptive mesh refinement library, which prepares them for next-generation exascale computers. Common microphysics shared between the codes allows us to transfer a problem from the low Mach number regime in Maestro to the explosive regime in Castro. Importantly, both codes are freely available (https://github.com/BoxLib-Codes). We will describe the design of the codes and some of their science applications, as well as future development directions.Support for development was provided by NSF award AST-1211563 and DOE/Office of Nuclear Physics grant DE-FG02-87ER40317 to Stony Brook and by the Applied Mathematics Program of the DOE Office of Advance Scientific Computing Research under US DOE contract DE-AC02-05CH11231 to LBNL.

  17. New technologies for advanced three-dimensional optimum shape design in aeronautics

    NASA Astrophysics Data System (ADS)

    Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno

    1999-05-01

    The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. To obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes into a shape optimization loop: most previous work in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimization methods are gradient-based, the more complex the flow solver, the more difficult it is to compute the sensitivity code. However, emerging technologies are helping to make such an ambitious project, including a state-of-the-art flow analysis code in an optimisation loop, feasible. Among those technologies, there are three important issues that this paper addresses: shape parametrization, automated differentiation, and parallel computing. Shape parametrization allows faster optimization by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be rather fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimization software to run on increasingly large geometries.

  18. Constructing binary black hole initial data with high mass ratios and spins

    NASA Astrophysics Data System (ADS)

    Ossokine, Serguei; Foucart, Francois; Pfeiffer, Harald; Szilagyi, Bela; Simulating Extreme Spacetimes Collaboration

    2015-04-01

    Binary black hole systems have now been successfully modelled in full numerical relativity by many groups. In order to explore high-mass-ratio (larger than 1:10), high-spin systems (above 0.9 of the maximal BH spin), we revisit the initial-data problem for binary black holes. The initial-data solver in the Spectral Einstein Code (SpEC) was not able to solve for such initial data reliably and robustly. I will present recent improvements to this solver, among them adaptive mesh refinement and control of motion of the center of mass of the binary, and will discuss the much larger region of parameter space this code can now address.

  19. A new art code for tomographic interferometry

    NASA Technical Reports Server (NTRS)

    Tan, H.; Modarress, D.

    1987-01-01

    A new algebraic reconstruction technique (ART) code, based on the iterative refinement method for least-squares solutions, for tomographic reconstruction is presented. The accuracy and convergence of the technique are evaluated through the application of numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to that of other reported techniques. The iterative method unconditionally converged to a solution for which the residual was minimal. The effects of increased data were studied. The inversion error was found to be a function of the input data error only. The convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
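
    The core of any ART solver is a sequence of Kaczmarz row projections; the variant described above wraps such sweeps in an iterative refinement of the least-squares solution. The sketch below shows only the standard row-action iteration and is illustrative, not the authors' code.

    ```python
    import numpy as np

    def art(A, b, sweeps=50, relax=1.0):
        """Algebraic reconstruction technique (Kaczmarz row projections):
        successively project the estimate onto each ray equation a_i . x = b_i."""
        x = np.zeros(A.shape[1])
        row_norms = (A * A).sum(axis=1)
        for _ in range(sweeps):
            for i in range(A.shape[0]):
                if row_norms[i] > 0.0:
                    x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        return x

    # Tiny consistent system: the iteration recovers the exact solution.
    A = np.array([[1.0, 1.0], [1.0, -1.0], [2.0, 1.0]])
    x_true = np.array([0.7, 0.3])
    print(art(A, A @ x_true))   # ~ [0.7, 0.3]
    ```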

  20. New Directions in Giant Planet Formation

    NASA Astrophysics Data System (ADS)

    Youdin, Andrew

    The proposed research will explore the limits of the core accretion mechanism for forming giant planets, both in terms of timescale and orbital distance. This theoretical research will be useful in interpreting the results of ongoing exoplanet searches. The effects of radiogenic heating and aerodynamic accretion of pebbles and boulders will be included in time-dependent models of atmospheric structure and growth. To investigate these issues, we will develop and publicly share a protoplanet atmospheric evolution code as an extension of the MESA stellar evolution code. By focusing on relevant processes in the early stages of giant planet formation, we can refine model predictions for exoplanet searches at a wide range of stellar ages and distances from the host star.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clough, Katy; Figueras, Pau; Finkel, Hal

    In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block structured Berger-Rigoutsos grid generation. The code supports non-trivial 'many-boxes-in-many-boxes' mesh hierarchies and massive parallelism through the message passing interface. GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3 + 1 setting, while also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.

  2. A Computational Study of an Oscillating VR-12 Airfoil with a Gurney Flap

    NASA Technical Reports Server (NTRS)

    Rhee, Myung

    2004-01-01

    Computations of the flow over an oscillating airfoil with a Gurney flap are performed using a Reynolds-averaged Navier-Stokes code and compared with recent experimental data. The experimental results have been generated for different sizes of the Gurney flaps. The computations are focused mainly on one configuration. The baseline airfoil without a Gurney flap is computed and compared with the experiments in both steady and unsteady cases for the purpose of initial testing of the code performance. The computations are carried out with different turbulence models. Effects of grid refinement are also examined in steady and unsteady cases, in addition to the assessment of solver effects. The results of the comparisons of steady lift and drag computations indicate that the code is reasonably accurate for attached flow in the steady condition but largely overpredicts the lift and underpredicts the drag in higher-angle steady flow.

  3. Taste buds: cells, signals and synapses

    PubMed Central

    Roper, Stephen D.; Chaudhari, Nirupa

    2018-01-01

    The past decade has witnessed a consolidation and refinement of the extraordinary progress made in taste research. This Review describes recent advances in our understanding of taste receptors, taste buds, and the connections between taste buds and sensory afferent fibres. The article discusses new findings regarding the cellular mechanisms for detecting tastes, new data on the transmitters involved in taste processing and new studies that address longstanding arguments about taste coding. PMID:28655883

  4. Updates to Simulation of a Single-Element Lean-Direct Injection Combustor Using Arbitrary Polyhedral Meshes

    NASA Technical Reports Server (NTRS)

    Wey, Thomas; Liu, Nan-Suey

    2015-01-01

    This paper summarizes the procedures of (1) generating control volumes anchored at the nodes of a mesh and (2) generating staggered control volumes via mesh reconstruction, in terms of either mesh realignment or mesh refinement, and presents sample results from their application to the numerical solution of a single-element LDI combustor using a releasable edition of the National Combustion Code (NCC).

  5. Taste buds: cells, signals and synapses.

    PubMed

    Roper, Stephen D; Chaudhari, Nirupa

    2017-08-01

    The past decade has witnessed a consolidation and refinement of the extraordinary progress made in taste research. This Review describes recent advances in our understanding of taste receptors, taste buds, and the connections between taste buds and sensory afferent fibres. The article discusses new findings regarding the cellular mechanisms for detecting tastes, new data on the transmitters involved in taste processing and new studies that address longstanding arguments about taste coding.

  6. Network Policy Languages: A Survey and a New Approach

    DTIC Science & Technology

    2000-08-01

    ... go, throwing a community into economic chaos. These events could result from discontinued funding from Silicon Valley investors who became aware of ... prevented. C. POLICY HIERARCHIES FOR DISTRIBUTED SYSTEMS MANAGEMENT: In [23], Moffett and Sloman form a policy hierarchy by refining general high-... management behavior of a system, without coding the behavior into the manager agents. Lupu and Sloman focus on techniques and tool support for off-line ...

  7. Enabling Incremental Iterative Development at Scale: Quality Attribute Refinement and Allocation in Practice

    DTIC Science & Technology

    2015-06-01

    ... abstract constraints along six dimensions for expansion: user, actions, data, business rules, interfaces, and quality attributes [Gottesdiener 2010] ... relevant open source systems. For example, the CONNECT and HADOOP Distributed File System (HDFS) projects have many user stories that deal with ... Iteration Zero involves architecture planning before writing any code. An overly long Iteration Zero is equivalent to the dysfunctional “Big Up-Front ...

  8. Code Development of Three-Dimensional General Relativistic Hydrodynamics with AMR (Adaptive-Mesh Refinement) and Results from Special and General Relativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Dönmez, Orhan

    2004-09-01

    In this paper, the general procedure to solve the general relativistic hydrodynamical (GRH) equations with adaptive mesh refinement (AMR) is presented. To achieve this, the GRH equations are written in conservation form to exploit their hyperbolic character. The numerical solutions of the GRH equations are obtained by high-resolution shock-capturing (HRSC) schemes, specifically designed to solve nonlinear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. The Marquina fluxes with MUSCL left and right states are used to solve the GRH equations. First, different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations are carried out to verify the second-order convergence of the code in one, two, and three dimensions. Results from the uniform and AMR grids are compared. It is found that the adaptive grid does a better job as the resolution is increased. Second, the GRH equations are tested using two different test problems, geodesic flow and circular motion of a particle. To do this, the flux part of the GRH equations is coupled with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time.
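
    Strang splitting, used above to couple the flux and source parts, is a general pattern: a half step of one operator, a full step of the other, then the remaining half step, which is second-order accurate in time. A generic sketch (the two sub-step operators are placeholders):

    ```python
    import math

    def strang_step(q, dt, flux_update, source_update):
        """One Strang-split step: half source, full flux, half source."""
        q = source_update(q, 0.5 * dt)
        q = flux_update(q, dt)
        return source_update(q, 0.5 * dt)

    # Toy check with commuting scalar operators (splitting is exact here):
    # dq/dt = -q from the "flux" part plus 2q from the "source" part.
    q = 1.0
    for _ in range(100):
        q = strang_step(q, 0.01,
                        flux_update=lambda q, dt: q * math.exp(-dt),
                        source_update=lambda q, dt: q * math.exp(2.0 * dt))
    print(q, "vs", math.exp(1.0))
    ```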

  9. GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid

    NASA Astrophysics Data System (ADS)

    Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua

    2016-10-01

    A GPU accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, the cell-based adaptive mesh refinement (AMR) is fully implemented on GPU for the unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and null memory recycling is realized to improve the efficiency of memory utilization. It is found that results obtained by GPUs agree very well with the exact or experimental results in literature. An acceleration ratio of 4 is obtained between the parallel code running on the old GPU GT9800 and the serial code running on E3-1230 V2. With the optimization of configuring a larger L1 cache and adopting Shared Memory based atomic operations on the newer GPU C2050, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes have achieved 2x speedup on GT9800 and 18x on Tesla C2050, which demonstrates that parallel running of the cell-based AMR method on GPU is feasible and efficient. Our results also indicate that the new development of GPU architecture benefits the fluid dynamics computing significantly.

  10. Development of a Shared Decision Making coding system for analysis of patient-healthcare provider encounters

    PubMed Central

    Clayman, Marla L.; Makoul, Gregory; Harper, Maya M.; Koby, Danielle G.; Williams, Adam R.

    2012-01-01

    Objectives: Describe the development and refinement of a scheme, Detail of Essential Elements and Participants in Shared Decision Making (DEEP-SDM), for coding Shared Decision Making (SDM), while reporting on the characteristics of decisions in a sample of patients with metastatic breast cancer. Methods: The Evidence-Based Patient Choice instrument was modified to reflect Makoul and Clayman’s Integrative Model of SDM. Coding was conducted on video recordings of 20 women at the first visit with their medical oncologists after suspicion of disease progression. Noldus Observer XT v.8, a video coding software platform, was used for coding. Results: The sample contained 80 decisions (range: 1-11), divided into 150 decision-making segments. Most decisions were physician-led, although patients and physicians initiated similar numbers of decision-making conversations. Conclusion: DEEP-SDM facilitates content analysis of encounters between women with metastatic breast cancer and their medical oncologists. Despite the fractured nature of decision making, it is possible to identify decision points and to code each of the Essential Elements of Shared Decision Making. Further work should include application of DEEP-SDM to non-cancer encounters. Practice Implications: A better understanding of how decisions unfold in the medical encounter can help inform the relationship of SDM to patient-reported outcomes. PMID:22784391

  11. Residential building codes, affordability, and health protection: a risk-tradeoff approach.

    PubMed

    Hammitt, J K; Belsky, E S; Levy, J I; Graham, J D

    1999-12-01

    Residential building codes intended to promote health and safety may produce unintended countervailing risks by adding to the cost of construction. Higher construction costs increase the price of new homes and may increase health and safety risks through "income" and "stock" effects. The income effect arises because households that purchase a new home have less income remaining for spending on other goods that contribute to health and safety. The stock effect arises because suppression of new-home construction leads to slower replacement of less safe housing units. These countervailing risks are not presently considered in code debates. We demonstrate the feasibility of estimating the approximate magnitude of countervailing risks by combining the income effect with three relatively well understood and significant home-health risks. We estimate that a code change that increases the nationwide cost of constructing and maintaining homes by $150 (0.1% of the average cost to build a single-family home) would induce offsetting risks yielding between 2 and 60 premature fatalities or, including morbidity effects, between 20 and 800 lost quality-adjusted life years (both discounted at 3%) each year the code provision remains in effect. To provide a net health benefit, the code change would need to reduce risk by at least this amount. Future research should refine these estimates, incorporate quantitative uncertainty analysis, and apply a full risk-tradeoff approach to real-world case studies of proposed code changes.
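
    The income-effect arithmetic behind estimates of this kind can be reproduced at back-of-envelope level. In the sketch below, every number except the $150 per-home cost increase analyzed in the abstract is an assumed, illustrative value; in particular, the income-risk coefficient (dollars of lost income per induced statistical death) varies widely in the literature.

    ```python
    # Back-of-envelope income-effect sketch; all inputs besides the $150 cost
    # increase are illustrative assumptions, not figures from the paper.

    cost_per_home = 150.0             # added construction/maintenance cost (USD)
    homes_affected_per_year = 1.5e6   # assumed homes built/maintained per year
    income_loss = cost_per_home * homes_affected_per_year

    # Assumed income-risk coefficient: income loss per induced statistical death.
    usd_per_induced_death = 1.0e7

    induced_deaths = income_loss / usd_per_induced_death
    print(f"~{induced_deaths:.0f} induced statistical deaths per year")
    # A code change would need to avert at least this many deaths (plus any
    # stock effects, not modeled here) to deliver a net health benefit.
    ```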

  12. Near-Body Grid Adaption for Overset Grids

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.
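
    The refinement step described above, inserting points by parametric cubic interpolation in the computational space of the original grid, can be sketched generically. The code below refines flagged intervals of a single curvilinear grid line with a cubic spline in the index coordinate; it illustrates the idea and is not OVERFLOW's implementation.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def refine_grid_line(xy, refine_mask):
        """Insert midpoints along a curvilinear grid line via a parametric
        cubic fit in the computational coordinate s = 0, 1, 2, ...;
        refine_mask[i] flags the interval (i, i+1) for refinement."""
        s = np.arange(len(xy), dtype=float)        # computational coordinate
        spline = CubicSpline(s, xy, axis=0)
        s_new = [s[0]]
        for i, flagged in enumerate(refine_mask):
            if flagged:
                s_new.append(s[i] + 0.5)           # midpoint in index space
            s_new.append(s[i + 1])
        return spline(np.array(s_new))

    # Quarter circle sampled coarsely; refine two flagged intervals.
    theta = np.linspace(0.0, np.pi / 2, 6)
    xy = np.column_stack([np.cos(theta), np.sin(theta)])
    fine = refine_grid_line(xy, refine_mask=[False, True, True, False, False])
    print(fine.shape)   # (8, 2): six original points plus two inserted
    ```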

  13. Simulations of inspiraling and merging double neutron stars using the Spectral Einstein Code

    NASA Astrophysics Data System (ADS)

    Haas, Roland; Ott, Christian D.; Szilagyi, Bela; Kaplan, Jeffrey D.; Lippuner, Jonas; Scheel, Mark A.; Barkett, Kevin; Muhlberger, Curran D.; Dietrich, Tim; Duez, Matthew D.; Foucart, Francois; Pfeiffer, Harald P.; Kidder, Lawrence E.; Teukolsky, Saul A.

    2016-06-01

    We present results on the inspiral, merger, and postmerger evolution of a neutron star-neutron star (NSNS) system. Our results are obtained using the hybrid pseudospectral-finite volume Spectral Einstein Code (SpEC). To test our numerical methods, we evolve an equal-mass system for ≈22 orbits before merger. This waveform is the longest waveform obtained from fully general-relativistic simulations for NSNSs to date. Such long (and accurate) numerical waveforms are required to further improve semianalytical models used in gravitational wave data analysis, for example, the effective one body models. We discuss in detail the improvements to SpEC's ability to simulate NSNS mergers, in particular mesh refined grids to better resolve the merger and postmerger phases. We provide a set of consistency checks and compare our results to NSNS merger simulations with the independent bam code. We find agreement between them, which increases confidence in results obtained with either code. This work paves the way for future studies using long waveforms and more complex microphysical descriptions of neutron star matter in SpEC.

  14. Cooperative solutions coupling a geometry engine and adaptive solver codes

    NASA Technical Reports Server (NTRS)

    Dickens, Thomas P.

    1995-01-01

    Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.

  15. Simulation and Analysis of Converging Shock Wave Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramsey, Scott D.; Shashkov, Mikhail J.

    2012-06-21

    Results and analysis pertaining to the simulation of the Guderley converging shock wave test problem (and associated code verification hydrodynamics test problems involving converging shock waves) in the LANL ASC radiation-hydrodynamics code xRAGE are presented. One-dimensional (1D) spherical and two-dimensional (2D) axisymmetric geometric setups are utilized and evaluated in this study, as is an instantiation of the xRAGE adaptive mesh refinement capability. For the 2D simulations, a 'Surrogate Guderley' test problem is developed and used to obviate subtleties inherent to the true Guderley solution's initialization on a square grid, while still maintaining a high degree of fidelity to the original problem and minimally straining the general credibility of the associated analysis and conclusions.

  16. Parallel Adaptive Simulation of Detonation Waves Using a Weighted Essentially Non-Oscillatory Scheme

    NASA Astrophysics Data System (ADS)

    McMahon, Sean

    The purpose of this thesis was to develop a code for building a better understanding of the physics of detonation waves. First, a detonation was simulated in one dimension using ZND theory. Then, using the 1D solution as an initial condition, a detonation was simulated in two dimensions using a weighted essentially non-oscillatory (WENO) scheme on an adaptive mesh with the smallest length scales equal to 2-3 flamelet lengths. The code development linking Chemkin, for chemical kinetics, to the adaptive mesh refinement flow solver was completed. The detonation evolved in a way that qualitatively matched the experimental observations; however, the simulation was unable to progress past the formation of the triple point.
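
    The record does not give the thesis' exact discretization, but the standard fifth-order WENO-JS reconstruction such schemes build on can be sketched as follows (classic Jiang-Shu coefficients; a left-biased interface value from five cell averages).

      import numpy as np

      def weno5_left(f):
          """Fifth-order WENO-JS reconstruction of the left state at i+1/2,
          given the five cell averages f = [f_{i-2}, ..., f_{i+2}]."""
          eps = 1e-6
          # Candidate third-order stencil reconstructions
          p0 = (2*f[0] - 7*f[1] + 11*f[2]) / 6.0
          p1 = (-f[1] + 5*f[2] + 2*f[3]) / 6.0
          p2 = (2*f[2] + 5*f[3] - f[4]) / 6.0
          # Jiang-Shu smoothness indicators
          b0 = 13/12*(f[0]-2*f[1]+f[2])**2 + 0.25*(f[0]-4*f[1]+3*f[2])**2
          b1 = 13/12*(f[1]-2*f[2]+f[3])**2 + 0.25*(f[1]-f[3])**2
          b2 = 13/12*(f[2]-2*f[3]+f[4])**2 + 0.25*(3*f[2]-4*f[3]+f[4])**2
          d = np.array([0.1, 0.6, 0.3])                # optimal linear weights
          a = d / (eps + np.array([b0, b1, b2]))**2
          w = a / a.sum()                              # nonlinear weights
          return w[0]*p0 + w[1]*p1 + w[2]*p2

      print(weno5_left(np.array([1.0, 1.0, 1.0, 1.0, 1.0])))   # constant data -> 1.0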

  17. Methods for evaluating the predictive accuracy of structural dynamic models

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, Jon D.

    1990-01-01

    Uncertainty of frequency response using the fuzzy set method, and on-orbit response prediction using laboratory test data to refine an analytical model, are emphasized with respect to large space structures. Two aspects of the fuzzy set approach were investigated relative to its application to large structural dynamics problems: (1) minimizing the number of parameters involved in computing possible intervals; and (2) the treatment of extrema which may occur in the parameter space enclosed by all possible combinations of the important parameters of the model. Extensive printer graphics were added to the SSID code to facilitate model verification, and an application of this code to the LaRC Ten Bay Truss is included in the appendix to illustrate this graphics capability.

  18. Development of an hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1993-01-01

    The purpose of this research effort is to develop a means to use, and to ultimately implement, hp-version finite elements in the numerical solution of optimal control problems. The hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element.

  19. Trinity Phase 2 Open Science: CTH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruggirello, Kevin Patrick; Vogler, Tracy

    CTH is an Eulerian hydrocode developed by Sandia National Laboratories (SNL) to solve a wide range of shock wave propagation and material deformation problems. Adaptive mesh refinement is also used to improve efficiency for problems with a wide range of spatial scales. The code has a history of running on a variety of computing platforms ranging from desktops to massively parallel distributed-data systems. For the Trinity Phase 2 Open Science campaign, CTH was used to study mesoscale simulations of the hypervelocity penetration of granular SiC powders. The simulations were compared to experimental data. A scaling study of CTH up to 8192 KNL nodes was also performed, and several improvements were made to the code to improve its scalability.

  20. Electron-cloud updated simulation results for the PSR, and recent results for the SNS

    NASA Astrophysics Data System (ADS)

    Pivi, M.; Furman, M. A.

    2002-05-01

    Recent simulation results for the main features of the electron cloud in the storage ring of the Spallation Neutron Source (SNS) at Oak Ridge, and updated results for the Proton Storage Ring (PSR) at Los Alamos, are presented in this paper. A refined model for the secondary emission process, including the so-called true-secondary, rediffused, and backscattered electrons, has recently been included in the electron-cloud code.

  1. China’s Comprehensive Approach: Refining the U.S. Targeting Process to Inform U.S. Strategy

    DTIC Science & Technology

    2018-04-20

    …control demonstrated by China, the subject matter expertise required to generate a comprehensive approach like China's does exist. However, due to a vast… National Defense University, Joint Forces Staff College, Joint Advanced Warfighting School.

  2. Automating the design of scientific computing software

    NASA Technical Reports Server (NTRS)

    Kant, Elaine

    1992-01-01

    SINAPSE is a domain-specific software design system that generates code from specifications of equations and algorithm methods. This paper describes the system's design techniques (planning in a space of knowledge-based refinement and optimization rules), user interaction style (user has option to control decision making), and representation of knowledge (rules and objects). It also summarizes how the system knowledge has evolved over time and suggests some issues in building software design systems to facilitate reuse.

  3. Sensory Information Processing and Symbolic Computation

    DTIC Science & Technology

    1973-12-31

    …plague all image deblurring methods when working with high signal-to-noise ratios, is that of a ringing or ghost image phenomenon which surrounds high… [Figures: the impulse response of an all-pass random phase filter; unsmoothed log spectra of the sentence "The pipe began to…"] …of automatic deblurring of images, linear predictive coding of speech, and the refinement and application of mathematical models of human vision and…

  4. Hypersonic, nonequilibrium flow over the FIRE 2 forebody at 1634 sec

    NASA Technical Reports Server (NTRS)

    Chambers, Lin Hartung

    1994-01-01

    The numerical simulation of hypersonic flow in thermochemical nonequilibrium over the forebody of the FIRE 2 vehicle at 1634 sec in its trajectory is described. The simulation was executed on a Cray C90 with the program Langley Aerodynamic Upwind Relaxation Algorithm (LAURA) 4.0.2. Code setup procedures and sample results, including grid refinement studies, are discussed. This simulation relates to a study of radiative heating predictions on aerobrake type vehicles.

  5. Hazards and Possibilities of Optical Breakdown Effects Below the Threshold for Shockwave and Bubble Formation

    DTIC Science & Technology

    2006-07-01

    …precision of the determination of Rmax, we established a refined method based on the model of bubble formation described above in section 3.6.1 and the… …development can be modeled by hydrodynamic codes based on tabulated equation-of-state data. This has previously been demonstrated for ps optical breakdown…

  6. On Calculation Methods and Results for Straight Cylindrical Roller Bearing Deflection, Stiffness, and Stress

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2011-01-01

    The purpose of this study was to assess some calculation methods for quantifying the relationships of bearing geometry, material properties, load, deflection, stiffness, and stress. The scope of the work was limited to two-dimensional modeling of straight cylindrical roller bearings. Preparations for studies of dynamic response of bearings with damaged surfaces motivated this work. Studies were selected to exercise and build confidence in the numerical tools. Three calculation methods were used in this work. Two of the methods were numerical solutions of the Hertz contact approach. The third method used was a combined finite element surface integral method. Example calculations were done for a single roller loaded between an inner and outer raceway for code verification. Next, a bearing with 13 rollers and all-steel construction was used as an example to do additional code verification, including an assessment of the leading order of accuracy of the finite element and surface integral method. Results from that study show that the method is at least first-order accurate. Those results also show that the contact grid refinement has a more significant influence on precision as compared to the finite element grid refinement. To explore the influence of material properties, the 13-roller bearing was modeled as made from Nitinol 60, a material with very different properties from steel and showing some potential for bearing applications. The codes were exercised to compare contact areas and stress levels for steel and Nitinol 60 bearings operating at equivalent power density. As a step toward modeling the dynamic response of bearings having surface damage, static analyses were completed to simulate a bearing with a spall or similar damage.
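
    The leading-order-of-accuracy assessment mentioned above is typically done with a Richardson-type estimate from three systematically refined grids; a minimal sketch follows (the sample values are made up, not from the study).

      import math

      def observed_order(f_coarse, f_medium, f_fine, r=2.0):
          """Observed order of accuracy from solutions on three grids
          refined by a constant factor r (standard Richardson estimate)."""
          return math.log(abs(f_medium - f_coarse) / abs(f_fine - f_medium)) / math.log(r)

      # e.g. a peak contact stress on three grids (hypothetical numbers):
      p = observed_order(1.250, 1.210, 1.190)   # p ~ 1 indicates first-order accuracy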

  7. Evaluation of the Components of the North Carolina Syndromic Surveillance System Heat Syndrome Case Definition.

    PubMed

    Harduar Morano, Laurel; Waller, Anna E

    To improve heat-related illness surveillance, we evaluated and refined North Carolina's heat syndrome case definition. We analyzed North Carolina emergency department (ED) visits during 2012-2014. We evaluated the current heat syndrome case definition (ie, keywords in chief complaint/triage notes or International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM] codes) and additional heat-related inclusion and exclusion keywords. We calculated the positive predictive value and sensitivity of keyword-identified ED visits and manually reviewed ED visits to identify true positives and false positives. The current heat syndrome case definition identified 8928 ED visits; additional inclusion keywords identified another 598 ED visits. Of 4006 keyword-identified ED visits, 3216 (80.3%) were captured by 4 phrases: "heat ex" (n = 1674, 41.8%), "overheat" (n = 646, 16.1%), "too hot" (n = 594, 14.8%), and "heatstroke" (n = 302, 7.5%). Among the 267 ED visits identified by keyword only, a burn diagnosis or the following keywords resulted in a false-positive rate >95%: "burn," "grease," "liquid," "oil," "radiator," "antifreeze," "hot tub," "hot spring," and "sauna." After applying the revised inclusion and exclusion criteria, we identified 9132 heat-related ED visits: 2157 by keyword only, 5493 by ICD-9-CM code only, and 1482 by both (sensitivity = 27.0%, positive predictive value = 40.7%). Cases identified by keywords were strongly correlated with cases identified by ICD-9-CM codes (rho = .94, P < .001). Revising the heat syndrome case definition through the use of additional inclusion and exclusion criteria substantially improved the accuracy of the surveillance system. Other jurisdictions may benefit from refining their heat syndrome case definition.
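
    A minimal sketch of this kind of keyword-plus-code case definition and its evaluation; the keyword lists are taken from the abstract, while everything else (function names, data layout) is illustrative.

      # Keyword lists quoted in the abstract; all other names are hypothetical.
      INCLUDE = ("heat ex", "overheat", "too hot", "heatstroke")
      EXCLUDE = ("burn", "grease", "liquid", "oil", "radiator",
                 "antifreeze", "hot tub", "hot spring", "sauna")

      def keyword_positive(chief_complaint: str) -> bool:
          text = chief_complaint.lower()
          return any(k in text for k in INCLUDE) and not any(k in text for k in EXCLUDE)

      def ppv_and_sensitivity(visits, truth):
          """visits: iterable of (free text, has_heat_icd_code) pairs;
          truth: set of visit indices confirmed heat-related by review."""
          flagged = {i for i, (text, icd) in enumerate(visits)
                     if keyword_positive(text) or icd}
          tp = len(flagged & truth)
          ppv = tp / len(flagged) if flagged else 0.0
          sens = tp / len(truth) if truth else 0.0
          return ppv, sens

      visits = [("pt overheated mowing lawn", False), ("grease burn to hand", False)]
      print(ppv_and_sensitivity(visits, truth={0}))   # (1.0, 1.0)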

  8. Toward Effective Shell Modeling of Wrinkled Thin-Film Membranes Exhibiting Stress Concentrations

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Sleight, David W.

    2004-01-01

    Geometrically nonlinear shell finite element analysis has recently been applied to solar-sail membrane problems in order to model the out-of-plane deformations due to structural wrinkling. Whereas certain problems lend themselves to achieving converged nonlinear solutions that compare favorably with experimental observations, solutions to tensioned membranes exhibiting high stress concentrations have been difficult to obtain even with the best nonlinear finite element codes and advanced shell element technology. In this paper, two numerical studies are presented that pave the way to improving the modeling of this class of nonlinear problems. The studies address the issues of mesh refinement and stress-concentration alleviation, and the effects of these modeling strategies on the ability to attain converged nonlinear deformations due to wrinkling. The numerical studies demonstrate that excessive mesh refinement in the regions of stress concentration may be disadvantageous to achieving wrinkled equilibrium states, causing the nonlinear solution to lock in the membrane response mode, while totally discarding the very low-energy bending response that is necessary to cause wrinkling deformation patterns. An element-level, strain-energy density criterion is suggested for facilitating automated, adaptive mesh refinements specifically aimed at the modeling of thin-film membranes undergoing wrinkling deformations.
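
    The suggested element-level, strain-energy-density criterion could look roughly like this: a sketch assuming per-element strain energies and areas are available, with an arbitrary quantile threshold (not the paper's calibration).

      import numpy as np

      def flag_for_refinement(strain_energy, areas, fraction=0.1):
          """Flag the elements whose strain-energy density lies in the top
          `fraction` quantile; returns a boolean refine mask."""
          density = strain_energy / areas                  # energy per unit area
          cutoff = np.quantile(density, 1.0 - fraction)
          return density >= cutoff

      mask = flag_for_refinement(np.random.rand(200), np.full(200, 0.01))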

  9. Adaptive mesh refinement for characteristic grids

    NASA Astrophysics Data System (ADS)

    Thornburg, Jonathan

    2011-05-01

    I consider techniques for Berger-Oliger adaptive mesh refinement (AMR) when numerically solving partial differential equations with wave-like solutions, using characteristic (double-null) grids. Such AMR algorithms are naturally recursive, and the best-known past Berger-Oliger characteristic AMR algorithm, that of Pretorius and Lehner (J Comp Phys 198:10, 2004), recurses on individual "diamond" characteristic grid cells. This leads to the use of fine-grained memory management, with individual grid cells kept in two-dimensional linked lists at each refinement level. This complicates the implementation and adds overhead in both space and time. Here I describe a Berger-Oliger characteristic AMR algorithm which instead recurses on null slices. This algorithm is very similar to the usual Cauchy Berger-Oliger algorithm, and uses relatively coarse-grained memory management, allowing entire null slices to be stored in contiguous arrays in memory. The algorithm is very efficient in both space and time. I describe discretizations yielding both second and fourth order global accuracy. My code implementing the algorithm described here is included in the electronic supplementary materials accompanying this paper, and is freely available to other researchers under the terms of the GNU general public license.
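
    A minimal sketch of the slice-based Berger-Oliger recursion the paper describes: one step at the current level, subcycled recursion on the finer level, then injection back to the coarse slice. The stub integrator and injection are placeholders, not the paper's discretization.

      def take_step(slice_data, dt):
          # Placeholder integrator: a real code would evolve the null slice.
          slice_data["t"] += dt

      def inject(fine, coarse):
          # Placeholder restriction of fine-slice data onto the coarse slice.
          coarse["t"] = fine["t"]

      def advance(level, dt, grids, refine_ratio=2):
          """Berger-Oliger recursion on whole slices rather than single cells:
          grids[level] is assumed to be one contiguous slice per level."""
          take_step(grids[level], dt)
          if level + 1 < len(grids):
              for _ in range(refine_ratio):
                  advance(level + 1, dt / refine_ratio, grids, refine_ratio)
              inject(grids[level + 1], grids[level])

      grids = [{"t": 0.0}, {"t": 0.0}, {"t": 0.0}]   # three refinement levels
      advance(0, 1.0, grids)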

  10. Postnatal reduction of BDNF regulates the developmental remodeling of taste bud innervation.

    PubMed

    Huang, Tao; Ma, Liqun; Krimm, Robin F

    2015-09-15

    The refinement of innervation is a common developmental mechanism that serves to increase the specificity of connections following initial innervation. In the peripheral gustatory system, the extent to which innervation is refined and how refinement might be regulated is unclear. The initial innervation of taste buds is controlled by brain-derived neurotrophic factor (BDNF). Following initial innervation, taste receptor cells are added and become newly innervated. The connections between the taste receptor cells and nerve fibers are likely to be specific in order to retain peripheral coding mechanisms. Here, we explored the possibility that the down-regulation of BDNF regulates the refinement of taste bud innervation during postnatal development. An analysis of BDNF expression in Bdnf(lacZ/+) mice and real-time reverse transcription polymerase chain reaction (RT-PCR) revealed that BDNF was down-regulated between postnatal day (P) 5 and P10. This reduction in BDNF expression was due to a loss of precursor/progenitor cells that express BDNF, while the expression of BDNF in the subpopulations of taste receptor cells did not change. Gustatory innervation, which was identified by P2X3 immunohistochemistry, was lost around the perimeter where most progenitor/precursor cells are located. In addition, the density of innervation in the taste bud was reduced between P5 and P10, because taste buds increase in size without increasing innervation. This reduction of innervation density was blocked by the overexpression of BDNF in the precursor/progenitor population of taste bud cells. Together, these findings indicate that the restriction of BDNF to a subpopulation of taste receptor cells between P5 and P10 results in a refinement of gustatory innervation. We speculate that this refinement results in an increased specificity of connections between neurons and taste receptor cells during development.

  11. Simulation of the shallow groundwater-flow system near the Hayward Airport, Sawyer County, Wisconsin

    USGS Publications Warehouse

    Hunt, Randall J.; Juckem, Paul F.; Dunning, Charles P.

    2010-01-01

    There are concerns that removal and trimming of vegetation during expansion of the Hayward Airport in Sawyer County, Wisconsin, could appreciably change the character of a nearby cold-water stream and its adjacent environs. In cooperation with the Wisconsin Department of Transportation, a two-dimensional, steady-state groundwater-flow model of the shallow groundwater-flow system near the Hayward Airport was refined from a regional model of the area. The parameter-estimation code PEST was used to obtain a best fit of the model to additional field data collected in February 2007 as part of this study. The additional data were collected during an extended period of low runoff and consisted of water levels and streamflows near the Hayward Airport. Refinements to the regional model included one additional hydraulic-conductivity zone for the airport area, and three additional parameters for streambed resistance in a northern tributary to the Namekagon River and in the main stem of the Namekagon River. In the refined Hayward Airport area model, the calibrated hydraulic conductivity was 11.2 feet per day, which is within the 7.9 to 58.2 feet per day range reported for the regional glacial and sandstone aquifer, and is consistent with a silty soil texture for the area. The calibrated refined model had a best fit of 8.6 days for the streambed resistance of the Namekagon River and between 0.6 and 1.6 days for the northern tributary stream. The previously reported regional groundwater-recharge rate of 10.1 inches per year was adjusted during calibration of the refined model in order to match streamflows measured during the period of extended low runoff; this resulted in an optimal groundwater-recharge rate of 7.1 inches per year during this period. The refined model was then used to simulate the capture zone of the northern tributary to the Namekagon River.

  12. Adaptive-Mesh-Refinement for hyperbolic systems of conservation laws based on a posteriori stabilized high order polynomial reconstructions

    NASA Astrophysics Data System (ADS)

    Semplice, Matteo; Loubère, Raphaël

    2018-02-01

    In this paper we propose a third order accurate finite volume scheme based on a posteriori limiting of polynomial reconstructions within an Adaptive-Mesh-Refinement (AMR) simulation code for hydrodynamics equations in 2D. The a posteriori limiting is based on the detection of problematic cells on a so-called candidate solution computed at each stage of a third order Runge-Kutta scheme. Such detection may include different properties, derived from physics, such as positivity, from numerics, such as a non-oscillatory behavior, or from computer requirements such as the absence of NaNs. Troubled cell values are discarded and re-computed starting again from the previous time-step using a more dissipative scheme, but only locally, close to these cells. By locally decrementing the degree of the polynomial reconstructions from 2 to 0 we switch from a third-order to a first-order accurate but more stable scheme. The entropy indicator sensor is used to refine/coarsen the mesh. This sensor is also employed in an a posteriori manner, because if some refinement is needed at the end of a time step, then the current time-step is recomputed with the refined mesh, but only locally, close to the new cells. We show on a large set of numerical tests that this a posteriori limiting procedure coupled with the entropy-based AMR technology can maintain not only optimal accuracy on smooth flows but also stability on discontinuous profiles such as shock waves, contacts, interfaces, etc. Moreover, numerical evidence shows that this approach is at least comparable in terms of accuracy and cost to a more classical CWENO approach within the same AMR context.
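
    A rough sketch of this detect-and-redo logic (MOOD-style a posteriori limiting): compute a candidate update, flag cells that are non-finite, non-positive, or newly oscillatory, and locally recompute them with the more dissipative scheme. The detection tests and names are illustrative, not the authors' exact criteria.

      import numpy as np

      def is_troubled(candidate, old):
          """Flag problematic cells in the candidate solution: NaNs,
          negativity, or new extrema (a crude non-oscillation test)."""
          bad = ~np.isfinite(candidate) | (candidate <= 0.0)
          lo = np.minimum(np.roll(old, 1), np.roll(old, -1))
          hi = np.maximum(np.roll(old, 1), np.roll(old, -1))
          bad |= (candidate < np.minimum(lo, old)) | (candidate > np.maximum(hi, old))
          return bad

      def mood_step(u, high_order_step, first_order_step):
          """Accept the high-order candidate where it is sound; locally redo
          troubled cells with the first-order (more dissipative) scheme."""
          cand = high_order_step(u)
          bad = is_troubled(cand, u)
          if bad.any():
              cand[bad] = first_order_step(u)[bad]
          return cand

      u = np.linspace(1.0, 2.0, 16)
      u_new = mood_step(u, lambda v: v + 0.01, lambda v: v)   # toy steppers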

  13. Three-dimensional elliptic grid generation for an F-16

    NASA Technical Reports Server (NTRS)

    Sorenson, Reese L.

    1988-01-01

    A case history depicting the effort to generate a computational grid for the simulation of transonic flow about an F-16 aircraft at realistic flight conditions is presented. The flow solver for which this grid is designed is a zonal one, using the Reynolds-averaged Navier-Stokes equations near the surface of the aircraft and the Euler equations in regions removed from the aircraft. A body-conforming global grid, suitable for the Euler equations, is first generated using 3-D Poisson equations having inhomogeneous terms modeled after the 2-D GRAPE code. Regions of the global grid are then designated for zonal refinement as appropriate to accurately model the flow physics. Grid spacing suitable for solution of the Navier-Stokes equations is generated in the refinement zones by simple subdivision of the given coarse grid intervals. That grid generation project is described, with particular emphasis on the global coarse grid.

  14. Three-dimensional structure of Erwinia carotovora L-asparaginase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kislitsyn, Yu. A.; Kravchenko, O. V.; Nikonov, S. V.

    2006-10-15

    The three-dimensional structure of Erwinia carotovora L-asparaginase, which has antitumor activity and is used for the treatment of acute lymphoblastic leukemia, was solved at 3 Å resolution and refined to R_cryst = 20% and R_free = 28%. Crystals of recombinant Erwinia carotovora L-asparaginase were grown by the hanging-drop vapor-diffusion method from protein solutions in a HEPES buffer (pH 6.5) and PEG MME 5000 solutions in a cacodylate buffer (pH 6.5) as the precipitant. Three-dimensional X-ray diffraction data were collected up to 3 Å resolution from one crystal at room temperature. The structure was solved by the molecular replacement method using the coordinates of Erwinia chrysanthemi L-asparaginase as the starting model. The coordinates, refined with the use of the CNS program package, were deposited in the Protein Data Bank (PDB code 1ZCF).

  15. MUSCLE: multiple sequence alignment with high accuracy and high throughput.

    PubMed

    Edgar, Robert C

    2004-01-01

    We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
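
    A toy version of the k-mer counting idea behind MUSCLE's fast distance estimation; this simplified distance is only illustrative, as MUSCLE's actual kmer statistic differs in detail.

      from collections import Counter

      def kmer_distance(a: str, b: str, k: int = 4) -> float:
          """Crude k-mer distance: 1 minus the fraction of shared k-mers,
          normalized by the shorter sequence's k-mer count."""
          ka = Counter(a[i:i+k] for i in range(len(a) - k + 1))
          kb = Counter(b[i:i+k] for i in range(len(b) - k + 1))
          shared = sum((ka & kb).values())       # multiset intersection
          denom = min(sum(ka.values()), sum(kb.values()))
          return 1.0 - shared / denom if denom else 1.0

      print(kmer_distance("MKVLITGAGSGIG", "MKVLITGGGSGIG"))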

  16. Automatic mesh refinement and parallel load balancing for Fokker-Planck-DSMC algorithm

    NASA Astrophysics Data System (ADS)

    Küchlin, Stephan; Jenny, Patrick

    2018-06-01

    Recently, a parallel Fokker-Planck-DSMC algorithm for rarefied gas flow simulation in complex domains at all Knudsen numbers was developed by the authors. Fokker-Planck-DSMC (FP-DSMC) is an augmentation of the classical DSMC algorithm, which mitigates the near-continuum deficiencies in terms of computational cost of pure DSMC. At each time step, based on a local Knudsen number criterion, the discrete DSMC collision operator is dynamically switched to the Fokker-Planck operator, which is based on the integration of continuous stochastic processes in time, and has fixed computational cost per particle, rather than per collision. In this contribution, we present an extension of the previous implementation with automatic local mesh refinement and parallel load-balancing. In particular, we show how the properties of discrete approximations to space-filling curves enable an efficient implementation. Exemplary numerical studies highlight the capabilities of the new code.
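
    A minimal sketch of the space-filling-curve load balancing the abstract alludes to: order cells by a Morton (Z-order) key, then cut the curve into contiguous pieces of roughly equal work. The 2D key and the weighting are illustrative, not the authors' implementation.

      import numpy as np

      def morton2d(ix, iy, bits=16):
          """Interleave the bits of integer cell coordinates to form a
          Morton key, one discrete approximation to a space-filling curve."""
          key = 0
          for b in range(bits):
              key |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
          return key

      def partition(cells, weights, nranks):
          """Sort cells along the curve, then split into contiguous chunks
          of roughly equal particle weight."""
          order = sorted(range(len(cells)), key=lambda i: morton2d(*cells[i]))
          cum = np.cumsum([weights[i] for i in order])
          bounds = np.searchsorted(cum, cum[-1] * np.arange(1, nranks) / nranks)
          return np.split(np.array(order), bounds)

      cells = [(i, j) for i in range(8) for j in range(8)]
      parts = partition(cells, np.random.rand(64), nranks=4)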

  17. A Study of Upgraded Phenolic Curing for RSRM Nozzle Rings

    NASA Technical Reports Server (NTRS)

    Smartt, Ziba

    2000-01-01

    A thermochemical cure model for predicting temperature and degree of cure profiles in curing phenolic parts was developed, validated and refined over several years. The model supports optimization of cure cycles and allows input of properties based upon the types of material and the process by which these materials are used to make nozzle components. The model has been refined to use sophisticated computer graphics to demonstrate the changes in temperature and degree of cure during the curing process. The effort discussed in the paper will be the conversion from an outdated solid modeling input program and SINDA analysis code to an integrated solid modeling and analysis package (I-DEAS solid model and TMG). Also discussed will be the incorporation of updated material properties obtained during full scale curing tests into the cure models and the results for all the Reusable Solid Rocket Motor (RSRM) nozzle rings.

  18. Toward Automatic Verification of Goal-Oriented Flow Simulations

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2014-01-01

    We demonstrate the power of adaptive mesh refinement with adjoint-based error estimates in verification of simulations governed by the steady Euler equations. The flow equations are discretized using a finite volume scheme on a Cartesian mesh with cut cells at the wall boundaries. The discretization error in selected simulation outputs is estimated using the method of adjoint-weighted residuals. Practical aspects of the implementation are emphasized, particularly in the formulation of the refinement criterion and the mesh adaptation strategy. Following a thorough code verification example, we demonstrate simulation verification of two- and three-dimensional problems. These involve an airfoil performance database, a pressure signature of a body in supersonic flow and a launch abort with strong jet interactions. The results show reliable estimates and automatic control of discretization error in all simulations at an affordable computational cost. Moreover, the approach remains effective even when theoretical assumptions, e.g., steady-state and solution smoothness, are relaxed.
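
    The adjoint-weighted residual estimate has a compact generic form: the output error is approximated by the inner product of the adjoint with the residual, delta_J ~ -psi^T R(u_H), and its per-cell contributions drive refinement. A sketch under these standard assumptions, not the authors' implementation.

      import numpy as np

      def adjoint_error_estimate(residual, adjoint):
          """Per-cell error indicators and total output-error estimate from
          the adjoint-weighted residual of the coarse solution."""
          local = -adjoint * residual          # elementwise contribution per cell
          return local, local.sum()

      indicators, total = adjoint_error_estimate(np.random.rand(100) * 1e-3,
                                                 np.random.rand(100))
      refine_mask = np.abs(indicators) > np.abs(indicators).mean()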

  19. Microcomputer pollution model for civilian airports and Air Force bases. Model description

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Segal, H.M.; Hamilton, P.L.

    1988-08-01

    This is one of three reports describing the Emissions and Dispersion Modeling System (EDMS). EDMS is a complex source emissions/dispersion model for use at civilian airports and Air Force bases. It operates in both a refined and a screening mode and is programmed for an IBM-XT (or compatible) computer. This report, MODEL DESCRIPTION, provides the technical description of the model. It first identifies the key design features of both the emissions (EMISSMOD) and dispersion (GIMM) portions of EDMS. It then describes the type of meteorological information the dispersion model can accept and identifies the manner in which it preprocesses National Climatic Center (NCC) data prior to a refined-model run. The report presents the results of running EDMS on a number of different microcomputers and compares EDMS results with those of comparable models. The appendices elaborate on the information noted above and list the source code.

  20. Statistical Methodologies to Integrate Experimental and Computational Research

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; Johnson, R. T.; Montgomery, D. C.

    2008-01-01

    Development of advanced algorithms for simulating engine flow paths requires the integration of fundamental experiments with the validation of enhanced mathematical models. In this paper, we provide an overview of statistical methods to strategically and efficiently conduct experiments and computational model refinement. Moreover, the integration of experimental and computational research efforts is emphasized. With a statistical engineering perspective, scientific and engineering expertise is combined with statistical sciences to gain deeper insights into experimental phenomena and code development performance, supporting the overall research objectives. The particular statistical methods discussed are design of experiments, response surface methodology, and uncertainty analysis and planning. Their application is illustrated with a coaxial free jet experiment and a turbulence model refinement investigation. Our goal is to provide an overview, focusing on concepts rather than practice, to demonstrate the benefits of using statistical methods in research and development, thereby encouraging their broader and more systematic application.

  1. Value-Based Requirements Traceability: Lessons Learned

    NASA Astrophysics Data System (ADS)

    Egyed, Alexander; Grünbacher, Paul; Heindl, Matthias; Biffl, Stefan

    Traceability from requirements to code is mandated by numerous software development standards. These standards, however, are not explicit about the appropriate level of quality of trace links. From a technical perspective, trace quality should meet the needs of the intended trace utilizations. Unfortunately, long-term trace utilizations are typically unknown at the time of trace acquisition which represents a dilemma for many companies. This chapter suggests ways to balance the cost and benefits of requirements traceability. We present data from three case studies demonstrating that trace acquisition requires broad coverage but can tolerate imprecision. With this trade-off our lessons learned suggest a traceability strategy that (1) provides trace links more quickly, (2) refines trace links according to user-defined value considerations, and (3) supports the later refinement of trace links in case the initial value consideration has changed over time. The scope of our work considers the entire life cycle of traceability instead of just the creation of trace links.

  2. Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd

    2004-01-01

    Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. The AutoBayes schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach with symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems, which use numerical approximations even in cases where closed-form solutions exist. AutoBayes is implemented in Prolog and comprises approximately 75,000 lines of code. In this paper, we take one typical scientific data analysis problem, analyzing planetary nebulae images taken by the Hubble Space Telescope, and show how AutoBayes can be used to automate the implementation of the necessary analysis programs. We initially follow the analysis described by Knuth and Hajian [KH02] and use AutoBayes to derive code for the published models. We show the details of the code derivation process, including the symbolic computations and automatic integration of library procedures, and compare the results of the automatically generated and manually implemented code. We then go beyond the original analysis and use AutoBayes to derive code for a simple image segmentation procedure based on a mixture model, which can be used to automate a manual preprocessing step. Finally, we combine the original approach with the simple segmentation, which yields a more detailed analysis. This also demonstrates that AutoBayes makes it easy to combine different aspects of data analysis.

  3. Parallel three-dimensional magnetotelluric inversion using adaptive finite-element method. Part I: theory and synthetic study

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.

    2015-07-01

    This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for the forward and inverse problems were decoupled. For calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gain, EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems, based on the linearized model resolution matrix, was developed. To make this algorithm suitable for large-scale problems, a low-rank approximation of the linearized model resolution matrix is used. In order to fill the gap between initial and true model complexities and to better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighbourhoods of the points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content, and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit a dependency on the initial model guess. Additionally, it is demonstrated that adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemisphere object with sufficient resolution, starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.

  4. Numerical study of 3D flow structure near a cylinder piercing turbulent free-convection boundary layer on a vertical plate

    NASA Astrophysics Data System (ADS)

    Levchenya, A. M.; Smirnov, E. M.; Zhukovskaya, V. D.

    2018-05-01

    The present contribution covers RANS-based simulation of the 3D flow near a cylinder introduced into a turbulent vertical-plate free-convection boundary layer. Numerical solutions were obtained with a finite-volume Navier-Stokes code of second-order accuracy using refined grids. Peculiarities of the flow disturbed by the obstacle are analyzed. The effect of the cylinder diameter on the size and position of the horseshoe vortex is evaluated.

  5. Development of a novel coding scheme (SABICS) to record nurse-child interactive behaviours in a community dental preventive intervention.

    PubMed

    Zhou, Yuefang; Cameron, Elaine; Forbes, Gillian; Humphris, Gerry

    2012-08-01

    To develop and validate the St Andrews Behavioural Interaction Coding Scheme (SABICS): a tool to record nurse-child interactive behaviours. The SABICS was developed primarily from observation of video-recorded interactions and refined through an iterative process of applying the scheme to new data sets. Its practical applicability was assessed via implementation of the scheme in specialised behavioural coding software. Reliability was calculated using Cohen's kappa. Discriminant validity was assessed using logistic regression. The SABICS contains 48 codes. Fifty-five nurse-child interactions were successfully coded by administering the scheme on The Observer XT 8.0 system. Two visualization results of interaction patterns demonstrated the scheme's capability to capture complex interaction processes. Cohen's kappa was 0.66 (inter-coder) and 0.88 and 0.78 (two intra-coders). The frequency of nurse behaviours such as "instruction" (OR = 1.32, p = 0.027) and "praise" (OR = 2.04, p = 0.027) predicted a child receiving the intervention. The SABICS is a unique system to record interactions between dental nurses and 3- to 5-year-old children. It records and displays complex nurse-child interactive behaviours. It is easily administered and demonstrates reasonable psychometric properties. The SABICS has potential for other paediatric settings, and its development procedure may be helpful for the development of other similar coding schemes.
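
    For reference, the inter- and intra-coder reliability figures quoted above are Cohen's kappa values, computable as follows (a generic implementation, not tied to The Observer XT).

      from collections import Counter

      def cohens_kappa(codes_a, codes_b):
          """Cohen's kappa for two coders' label sequences: observed
          agreement corrected for chance agreement."""
          n = len(codes_a)
          po = sum(a == b for a, b in zip(codes_a, codes_b)) / n
          ca, cb = Counter(codes_a), Counter(codes_b)
          pe = sum(ca[k] * cb[k] for k in ca) / n**2   # chance agreement
          return (po - pe) / (1 - pe)

      print(cohens_kappa(list("AABBC"), list("ABBBC")))   # ~0.69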

  6. Hybrid reduced order modeling for assembly calculations

    DOE PAGES

    Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; ...

    2015-08-14

    While the accuracy of assembly calculations has greatly improved due to the increase in computer power, which enables a more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, limiting their full utilization for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in the small computing environments often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
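
    One common way to "capture the most dominant underlying relationships" is a proper-orthogonal-decomposition (POD) basis built from snapshots of full-model runs; the paper's exact reduction is not specified in the record, so this is only an indicative sketch.

      import numpy as np

      def build_rom_basis(snapshots, rank):
          """POD basis from solution snapshots: keep the leading left
          singular vectors, which carry the dominant variability."""
          U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
          return U[:, :rank]                   # reduced basis, n x rank

      X = np.random.rand(500, 40)              # 40 hypothetical full-model runs
      Phi = build_rom_basis(X, rank=5)
      x_reduced = Phi.T @ X[:, 0]              # project a state into ROM space
      x_approx = Phi @ x_reduced               # lift back to full space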

  7. Warp-X: A new exascale computing platform for beam–plasma simulations

    DOE PAGES

    Vay, J. -L.; Almgren, A.; Bell, J.; ...

    2018-01-31

    Turning the current experimental plasma accelerator state-of-the-art from a promising technology into mainstream scientific tools depends critically on high-performance, high-fidelity modeling of complex processes that develop over a wide range of space and time scales. As part of the U.S. Department of Energy's Exascale Computing Project, a team from Lawrence Berkeley National Laboratory, in collaboration with teams from SLAC National Accelerator Laboratory and Lawrence Livermore National Laboratory, is developing a new plasma accelerator simulation tool that will harness the power of future exascale supercomputers for high-performance modeling of plasma accelerators. We present the various components of the codes, such as the new Particle-In-Cell Scalable Application Resource (PICSAR) and the redesigned adaptive mesh refinement library AMReX, which are combined with redesigned elements of the Warp code in the new WarpX software. Lastly, the code structure, status, early examples of applications, and plans are discussed.

  8. Shell stability analysis in a computer aided engineering (CAE) environment

    NASA Technical Reports Server (NTRS)

    Arbocz, J.; Hol, J. M. A. M.

    1993-01-01

    The development of DISDECO, the Delft Interactive Shell DEsign COde, is described. The purpose of this project is to make the accumulated theoretical, numerical, and practical knowledge of the last 25 years or so readily accessible to users interested in the analysis of buckling-sensitive structures. With this open-ended, hierarchical, interactive computer code the user can access, from a workstation, successively more complex programs. The computational modules currently operational in DISDECO provide the prospective user with facilities to calculate the critical buckling loads of stiffened anisotropic shells under combined loading, to investigate the effects the various types of boundary conditions will have on the critical load, and to get a complete picture of the degrading effects the different shapes of possible initial imperfections might cause, all in one interactive session. Once a design is finalized, its collapse load can be verified by running a large refined model remotely from the workstation with one of the current generation of two-dimensional codes, with advanced capabilities to handle both geometric and material nonlinearities.

  9. Evaluation of a color-coded Landsat 5/6 ratio image for mapping lithologic differences in western South Dakota

    USGS Publications Warehouse

    Raines, Gary L.; Bretz, R.F.; Shurr, George W.

    1979-01-01

    From analysis of a color-coded Landsat 5/6 ratio image, Raines has produced a map of the vegetation density distribution over 25,000 sq km of western South Dakota. The 5/6 ratio image is produced by digitally calculating the ratio of Landsat bands 5 and 6 and then color coding these ratios in an image. Bretz and Shurr compared this vegetation density map with published and unpublished data, primarily of the U.S. Geological Survey and the South Dakota Geological Survey; good correspondence is seen between this map and existing geologic maps, especially the soils map. We believe that this Landsat ratio image can be used as a tool to refine existing maps of surficial geology and bedrock, where bedrock is exposed, and to improve mapping accuracy in areas of poor exposure common in South Dakota. In addition, this type of image could be a useful additional tool in mapping areas that are unmapped.
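
    The digital band-ratio-and-color-code step is simple to sketch; the band arrays and the quantile class binning here are hypothetical, since the original processing pipeline is not described in detail in the record.

      import numpy as np

      def ratio_image(band5, band6, n_classes=8):
          """Divide band 5 by band 6 pixelwise, then bin the ratios into
          discrete classes suitable for color coding."""
          ratio = band5 / np.maximum(band6, 1e-6)          # avoid division by zero
          edges = np.quantile(ratio, np.linspace(0, 1, n_classes + 1)[1:-1])
          return np.digitize(ratio, edges)                 # integer class per pixel

      classes = ratio_image(np.random.rand(64, 64), np.random.rand(64, 64))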

  10. Insight from Fukushima Daiichi Unit 3 Investigations using MELCOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robb, Kevin R.; Francis, Matthew W.; Ott, Larry J.

    During the emergency response period of the accidents that took place at Fukushima Daiichi in March of 2011, researchers at Oak Ridge National Laboratory (ORNL) conducted a number of studies using the MELCOR code to help understand what was occurring and what had occurred. During the post-accident period, the Department of Energy (DOE) and the US Nuclear Regulatory Commission (NRC) jointly sponsored a study of the Fukushima Daiichi accident with collaboration among Oak Ridge, Sandia, and Idaho national laboratories. The purpose of the study was to compile relevant data, reconstruct the accident progression using computer codes, assess the codes' predictive capabilities, and identify future data needs. The current paper summarizes some of the early MELCOR simulations and analyses conducted at ORNL of the Fukushima Daiichi Unit 3 accident. Extended analysis and discussion of the Unit 3 accident is also presented, taking into account new knowledge and modeling refinements made since the joint DOE/NRC study.

  11. Stimulus background influences phase invariant coding by correlated neural activity

    PubMed Central

    Metzen, Michael G; Chacron, Maurice J

    2017-01-01

    Previously we reported that correlations between the activities of peripheral afferents mediate a phase invariant representation of natural communication stimuli that is refined across successive processing stages thereby leading to perception and behavior in the weakly electric fish Apteronotus leptorhynchus (Metzen et al., 2016). Here, we explore how phase invariant coding and perception of natural communication stimuli are affected by changes in the sinusoidal background over which they occur. We found that increasing background frequency led to phase locking, which decreased both detectability and phase invariant coding. Correlated afferent activity was a much better predictor of behavior as assessed from both invariance and detectability than single neuron activity. Thus, our results provide not only further evidence that correlated activity likely determines perception of natural communication signals, but also a novel explanation as to why these preferentially occur on top of low frequency as well as low-intensity sinusoidal backgrounds. DOI: http://dx.doi.org/10.7554/eLife.24482.001 PMID:28315519

  12. Prediction of sound radiated from different practical jet engine inlets

    NASA Technical Reports Server (NTRS)

    Zinn, B. T.; Meyer, W. L.

    1980-01-01

    Existing computer codes for calculating the far field radiation patterns surrounding various practical jet engine inlet configurations under different excitation conditions were upgraded. The computer codes were refined and expanded so that they are now more efficient computationally by a factor of about three and they are now capable of producing accurate results up to nondimensional wave numbers of twenty. Computer programs were also developed to help generate accurate geometrical representations of the inlets to be investigated. This data is required as input for the computer programs which calculate the sound fields. This new geometry generating computer program considerably reduces the time required to generate the input data which was one of the most time consuming steps in the process. The results of sample runs using the NASA-Lewis QCSEE inlet are presented and comparison of run times and accuracy are made between the old and upgraded computer codes. The overall accuracy of the computations is determined by comparison of the results of the computations with simple source solutions.

  13. Hybrid parallelization of the XTOR-2F code for the simulation of two-fluid MHD instabilities in tokamaks

    NASA Astrophysics Data System (ADS)

    Marx, Alain; Lütjens, Hinrich

    2017-03-01

    A hybrid MPI/OpenMP parallel version of the XTOR-2F code [Lütjens and Luciani, J. Comput. Phys. 229 (2010) 8130] solving the two-fluid MHD equations in full tokamak geometry by means of an iterative Newton-Krylov matrix-free method has been developed. The present work shows that the code has been parallelized significantly despite the numerical profile of the problem solved by XTOR-2F, i.e. a discretization with pseudo-spectral representations in all angular directions, the stiffness of the two-fluid stability problem in tokamaks, and the use of a direct LU decomposition to invert the physical pre-conditioner at every Krylov iteration of the solver. The execution time of the parallelized version is an order of magnitude smaller than the sequential one for low-resolution cases, with an increasing speedup as the discretization mesh is refined. Moreover, it makes possible simulations at higher resolutions that were previously out of reach because of memory limitations.
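
    The matrix-free ingredient of a Newton-Krylov solver is the Jacobian-vector product approximated by finite differences of the nonlinear residual; a generic sketch follows (XTOR-2F's residual, preconditioning, and scalings are of course far more involved).

      import numpy as np

      def jacobian_vector_product(F, u, v, eps=1e-7):
          """Approximate J(u) @ v by a first-order finite difference of the
          residual F, so the Jacobian never needs to be formed."""
          nv = np.linalg.norm(v)
          if nv == 0.0:
              return np.zeros_like(v)
          h = eps / nv
          return (F(u + h * v) - F(u)) / h

      F = lambda u: u**3 - 2*u                 # toy residual, not two-fluid MHD
      u0 = np.ones(4)
      print(jacobian_vector_product(F, u0, np.ones(4)))   # ~ (3u^2 - 2) v = [1,1,1,1]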

  14. Euler technology assessment for preliminary aircraft design employing OVERFLOW code with multiblock structured-grid method

    NASA Technical Reports Server (NTRS)

    Treiber, David A.; Muilenburg, Dennis A.

    1995-01-01

    The viability of applying a state-of-the-art Euler code to calculate the aerodynamic forces and moments through maximum lift coefficient for a generic sharp-edge configuration is assessed. The OVERFLOW code, a method employing overset (Chimera) grids, was used to conduct mesh refinement studies, a wind-tunnel wall sensitivity study, and a 22-run computational matrix of flow conditions, including sideslip runs and geometry variations. The subject configuration was a generic wing-body-tail geometry with chined forebody, swept wing leading-edge, and deflected part-span leading-edge flap. The analysis showed that the Euler method is adequate for capturing some of the non-linear aerodynamic effects resulting from leading-edge and forebody vortices produced at high angle-of-attack through C_Lmax. Computed forces and moments, as well as surface pressures, match well enough that useful preliminary design information can be extracted. Vortex burst effects and vortex interactions with the configuration are also investigated.

  15. Cove benchmark calculations using SAGUARO and FEMTRAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eaton, R.R.; Martinez, M.J.

    1986-10-01

    Three small-scale, time-dependent benchmarking calculations have been made using the finite element codes SAGUARO, to determine hydraulic head and water velocity profiles, and FEMTRAN, to predict the solute transport. Sand and hard rock porous materials were used. Time scales for the problems, which ranged from tens of hours to thousands of years, have posed no particular difficulty for the two codes. Studies have been performed to determine the effects of computational mesh, boundary conditions, velocity formulation, and SAGUARO/FEMTRAN code-coupling on water and solute transport. Results showed that mesh refinement improved mass conservation. Varying the drain-tile size in COVE 1N had a weak effect on the rate at which the tile field drained. Excellent agreement with published COVE 1N data was obtained for the hydrological field and reasonable agreement for the solute-concentration predictions. The question remains whether these types of calculations can be carried out on repository-scale problems using material characteristic curves representing tuff with fractures.

  16. CosmosDG: An hp-adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    NASA Astrophysics Data System (ADS)

    Anninos, Peter; Bryant, Colton; Fragile, P. Chris; Holgado, A. Miguel; Lau, Cheuk; Nemergut, Daniel

    2017-08-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge-Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.
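
    As an indication of what slope-limiting regularization amounts to in its simplest finite-volume form, here is a classic minmod limiter; CosmosDG's DG limiter is more elaborate, so this only conveys the basic idea.

      import numpy as np

      def minmod(a, b, c):
          """Classic minmod of three arguments (zero on sign disagreement)."""
          s = (np.sign(a) + np.sign(b) + np.sign(c)) / 3.0
          return np.where(np.abs(s) == 1.0,
                          s * np.minimum(np.abs(a), np.minimum(np.abs(b), np.abs(c))),
                          0.0)

      def limit_slopes(u, dx):
          """Limit centered cell slopes against one-sided differences;
          np.roll makes the boundaries periodic for this sketch."""
          du_c = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
          du_m = (u - np.roll(u, 1)) / dx
          du_p = (np.roll(u, -1) - u) / dx
          return minmod(du_c, du_m, du_p)

      u = np.where(np.arange(32) < 16, 1.0, 0.0)   # a step profile
      slopes = limit_slopes(u, dx=1.0)             # zero slope at the jump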

  19. Antenna pattern control using impedance surfaces

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Liu, Kefeng

    1992-01-01

    During this research period, we have effectively transferred existing computer codes from the CRAY supercomputer to workstation-based systems. The workstation-based version of our code preserves the accuracy of the numerical computations while giving a much better turnaround time than the CRAY supercomputer. This task relieved us of the heavy dependence on the supercomputer account budget and made the codes developed in this research project more feasible for applications. The analysis of pyramidal horns with impedance surfaces was our major focus during this research period. Three different modeling algorithms for analyzing lossy impedance surfaces were investigated and compared with measured data. Through this investigation, we discovered that a hybrid Fourier transform technique, which uses the eigenmodes in the stepped waveguide section and the Fourier-transformed field distributions across the stepped discontinuities for lossy impedance coatings, gives better accuracy in analyzing lossy coatings. After further refinement of the present technique, we will perform an accurate radiation pattern synthesis in the coming reporting period.

  20. Massive black hole and gas dynamics in galaxy nuclei mergers - I. Numerical implementation

    NASA Astrophysics Data System (ADS)

    Lupi, Alessandro; Haardt, Francesco; Dotti, Massimo

    2015-01-01

    Numerical effects are known to plague adaptive mesh refinement (AMR) codes when treating massive particles, e.g. those representing massive black holes (MBHs). In an evolving background, they can experience strong, spurious perturbations and then follow unphysical orbits. We study by means of numerical simulations the dynamical evolution of a pair of MBHs in the rapidly and violently evolving gaseous and stellar background that follows a galaxy major merger. We confirm that spurious numerical effects alter the MBH orbits in AMR simulations, and show that the numerical issues are ultimately due to a drop in the spatial resolution during the simulation, drastically reducing the accuracy of the gravitational force computation. We therefore propose a new refinement criterion suited for massive particles, able to solve in a fast and precise way for their orbits in highly dynamical backgrounds. The new refinement criterion we designed enforces the region around each massive particle to remain at the maximum resolution allowed, independently of the local gas density. Such maximally resolved regions then follow the MBHs along their orbits and effectively avoid all spurious effects caused by resolution changes. Our suite of high-resolution AMR hydrodynamic simulations, including different prescriptions for the sub-grid gas physics, shows that the new refinement implementation has the advantage of not altering the physical evolution of the MBHs, accounting for all the non-trivial physical processes taking place in violent dynamical scenarios, such as the final stages of a galaxy major merger.
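
    As a rough sketch of how such a particle-anchored criterion can be expressed (hypothetical names and data layout; the paper's implementation lives inside the AMR code's refinement machinery), every cell within a chosen radius of a massive particle is forced to the maximum level, regardless of the usual density-based criterion:

        import numpy as np

        def target_levels(cell_centers, gas_density, mbh_positions,
                          rho_thresh, r_keep, max_level, density_level):
            """Density-based refinement level per cell, overridden to the
            maximum level inside radius r_keep of any massive particle."""
            levels = np.where(gas_density > rho_thresh, density_level, 0)
            for x_mbh in mbh_positions:
                near = np.linalg.norm(cell_centers - x_mbh, axis=1) < r_keep
                levels[near] = max_level  # keep region maximally resolved
            return levels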

  1. Error estimation and adaptive mesh refinement for parallel analysis of shell structures

    NASA Technical Reports Server (NTRS)

    Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

    1994-01-01

    The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation that is derived from the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinement on parallel computers, where access to neighboring elements residing on different processors may incur significant overhead. In addition, such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on a variational formulation of the element stiffness and are element-dependent; their derivations are retained for developmental purposes. The other two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness, and they are used to drive adaptive mesh refinement, which we demonstrate for two-dimensional plane-stress problems. The parallelization of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated on two-dimensional plane-stress and three-dimensional shell problems.
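
    As a loose illustration of an indicator in this spirit (a sketch under assumed definitions, not the authors' formulation), one can decompose the element stiffness into its intrinsic displacement modes and measure the strain energy carried by the highest modes, using only element-local quantities:

        import numpy as np

        def element_error_indicator(K_e, u_e, n_smooth):
            """Strain energy in the highest-frequency element modes,
            computed from element stiffness and nodal displacements only."""
            lam, V = np.linalg.eigh(K_e)   # intrinsic displacement modes
            q = V.T @ u_e                  # modal amplitudes
            hi = slice(n_smooth, None)     # modes beyond the smooth set
            return float(q[hi] @ (lam[hi] * q[hi]))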

  2. Overview of the NASA Glenn Flux Reconstruction Based High-Order Unstructured Grid Code

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; DeBonis, James R.; Huynh, H. T.

    2016-01-01

    A computational fluid dynamics code based on the flux reconstruction (FR) method is currently being developed at NASA Glenn Research Center to ultimately provide a large-eddy simulation capability that is both accurate and efficient for complex aeropropulsion flows. The FR approach offers a simple and efficient method that is easy to implement and accurate to an arbitrary order on common grid cell geometries. The governing compressible Navier-Stokes equations are discretized in time using various explicit Runge-Kutta schemes, with the default being the 3-stage/3rd-order strong stability preserving scheme. The code is written in modern Fortran (i.e., Fortran 2008) and parallelization is attained through MPI for execution on distributed-memory high-performance computing systems. An h-refinement study of the isentropic Euler vortex problem is able to empirically demonstrate the capability of the FR method to achieve super-accuracy for inviscid flows. Additionally, the code is applied to the Taylor-Green vortex problem, performing numerous implicit large-eddy simulations across a range of grid resolutions and solution orders. The solution found by a pseudo-spectral code is commonly used as a reference solution to this problem, and the FR code is able to reproduce this solution using approximately the same grid resolution. Finally, an examination of the code's performance demonstrates good parallel scaling, as well as an implementation of the FR method with a computational cost/degree-of-freedom/time-step that is essentially independent of the solution order of accuracy for structured geometries.
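
    An h-refinement study of this kind infers the observed order of accuracy from errors on successively refined grids; a minimal sketch of that bookkeeping (generic, not taken from the NASA Glenn code):

        import math

        def observed_order(e_coarse, e_fine, r):
            """Observed order of accuracy from errors on two grids whose
            spacings differ by the refinement ratio r (e.g., r = 2)."""
            return math.log(e_coarse / e_fine) / math.log(r)

        # Halving the spacing and seeing the error drop 16x implies p = 4:
        # observed_order(1.6e-3, 1.0e-4, 2.0) -> 4.0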

  3. Application of FUN3D and CFL3D to the Third Workshop on CFD Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Thomas, J. L.

    2008-01-01

    Two Reynolds-averaged Navier-Stokes computer codes - one unstructured and one structured - are applied to two workshop cases (for the 3rd Workshop on CFD Uncertainty Analysis, held at Instituto Superior Tecnico, Lisbon, in October 2008) for the purpose of uncertainty analysis. The Spalart-Allmaras turbulence model is employed. The first case uses the method of manufactured solution and is intended as a verification case. In other words, the CFD solution is expected to approach the exact solution as the grid is refined. The second case is a validation case (comparison against experiment), for which modeling errors inherent in the turbulence model and errors/uncertainty in the experiment may prevent close agreement. The results from the two computer codes are also compared. This exercise verifies that the codes are consistent both with the exact manufactured solution and with each other. In terms of order property, both codes behave as expected for the manufactured solution. For the backward facing step, CFD uncertainty on the finest grid is computed and is generally very low for both codes (whose results are nearly identical). Agreement with experiment is good at some locations for particular variables, but there are also many areas where the CFD and experimental uncertainties do not overlap.
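
    Grid-related uncertainty estimates of the kind reported here are commonly built from Richardson extrapolation via a grid convergence index (GCI); a generic sketch, which may differ in detail from the workshop's exact procedure:

        def gci_fine(f_coarse, f_fine, r, p, fs=1.25):
            """Grid convergence index on the fine grid, given solutions on
            two grids, refinement ratio r, observed order p, safety factor fs."""
            rel_err = abs((f_coarse - f_fine) / f_fine)
            return fs * rel_err / (r**p - 1.0)

        # gci_fine(0.1052, 0.1043, 2.0, 2.0) -> about 0.0036 (a 0.36% band)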

  4. ALEGRA -- A massively parallel h-adaptive code for solid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Summers, R.M.; Wong, M.K.; Boucheron, E.A.

    1997-12-31

    ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach, the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high-precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.

  5. Master standard data quantity food production code. Macro elements for synthesizing production labor time.

    PubMed

    Matthews, M E; Waldvogel, C F; Mahaffey, M J; Zemel, P C

    1978-06-01

    Preparation procedures of standardized quantity formulas were analyzed for similarities and differences in production activities, and three entrée classifications were developed, based on these activities. Two formulas from each classification were selected, preparation procedures were divided into elements of production, and the MSD Quantity Food Production Code was applied. Macro elements not included in the existing Code were simulated, coded, assigned associated Time Measurement Units, and added to the MSD Quantity Food Production Code. Repeated occurrence of similar elements within production methods indicated that macro elements could be synthesized for use within one or more entrée classifications. Basic elements were grouped, simulated, and macro elements were derived. Macro elements were applied in the simulated production of 100 portions of each entrée formula. Total production time for each formula and average production time for each entrée classification were calculated. Application of macro elements indicated that this method of predetermining production time was feasible and could be adapted by quantity foodservice managers as a decision technique used to evaluate menu mix, production personnel schedules, and allocation of equipment usage. These macro elements could serve as a basis for further development and refinement of other macro elements which could be applied to a variety of menu item formulas.

  6. Parallel Adaptive Mesh Refinement for High-Order Finite-Volume Schemes in Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Schwing, Alan Michael

    For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source of error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is made for applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable to unsteady simulations, and refinement and coarsening of the grid do not impact the conservation properties of the underlying numerics. The effects on high-order numerical fluxes of fourth and sixth order are explored. Provided the criteria for refinement are appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage the large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess the scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depends on a dual-memory approach that is described in detail. Validation of the solver developed here against a number of motivating problems shows favorable comparisons across a range of regimes. Unsteady and steady applications are considered in both subsonic and supersonic flows. Inviscid and viscous simulations achieve similar results at a much reduced cost when employing dynamic mesh adaptation. Several techniques for guiding adaptation are compared. Detailed analysis of statistics from the instrumented solver enables understanding of the costs associated with adaptation. Adaptive mesh refinement shows promise for the test cases presented here. It can be considerably faster than using conventional grids and provides accurate results. The procedures for adapting the grid are lightweight enough not to require significant computational time, and they yield significant reductions in grid size.
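
    Refinement in the presence of hanging nodes typically enforces a 2:1 level balance between face-adjacent cells; a schematic sketch of that constraint pass (hypothetical data layout, not this solver's code):

        def enforce_two_to_one(levels, neighbors):
            """Raise cell levels until no cell differs from a face
            neighbor by more than one refinement level (2:1 balance)."""
            changed = True
            while changed:
                changed = False
                for c, nbrs in neighbors.items():
                    for n in nbrs:
                        if levels[n] > levels[c] + 1:
                            levels[c] = levels[n] - 1  # lift the coarse cell
                            changed = True
            return levels

        # enforce_two_to_one({0: 0, 1: 2}, {0: [1], 1: [0]}) -> {0: 1, 1: 2}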

  7. Developing a method for specifying the components of behavior change interventions in practice: the example of smoking cessation.

    PubMed

    Lorencatto, Fabiana; West, Robert; Seymour, Natalie; Michie, Susan

    2013-06-01

    There is a difference between interventions as planned and as delivered in practice. Unless we know what was actually delivered, we cannot understand "what worked" in effective interventions. This study aimed to (a) assess whether an established taxonomy of 53 smoking cessation behavior change techniques (BCTs) may be applied or adapted as a method for reliably specifying the content of smoking cessation behavioral support consultations and (b) develop an effective method for training researchers and practitioners in the reliable application of the taxonomy. Fifteen transcripts of audio-recorded consultations delivered by England's Stop Smoking Services were coded into component BCTs using the taxonomy. Interrater reliability and potential adaptations to the taxonomy to improve coding were discussed following 3 coding waves. A coding training manual was developed through expert consensus and piloted on 10 trainees, assessing coding reliability and self-perceived competence before and after training. An average of 33 BCTs from the taxonomy were identified at least once across sessions and coding waves. Consultations contained on average 12 BCTs (range = 8-31). Average interrater reliability was high (88% agreement). The taxonomy was adapted to simplify coding by merging co-occurring BCTs and refining BCT definitions. Coding reliability and self-perceived competence significantly improved posttraining for all trainees. It is possible to apply a taxonomy to reliably identify and classify BCTs in smoking cessation behavioral support delivered in practice, and train inexperienced coders to do so reliably. This method can be used to investigate variability in provision of behavioral support across services, monitor fidelity of delivery, and identify training needs.

  8. Improvements to the construction of binary black hole initial data

    NASA Astrophysics Data System (ADS)

    Ossokine, Serguei; Foucart, Francois; Pfeiffer, Harald P.; Boyle, Michael; Szilágyi, Béla

    2015-12-01

    Construction of binary black hole initial data is a prerequisite for numerical evolutions of binary black holes. This paper reports improvements to the binary black hole initial data solver in the spectral Einstein code, to allow robust construction of initial data for mass ratios above 10:1 and for dimensionless black hole spins above 0.9, while improving efficiency for lower mass ratios and spins. We implement a more flexible domain decomposition, adaptive mesh refinement, and an updated method for choosing free parameters. We also introduce a new method to control and eliminate residual linear momentum in initial data for precessing systems, and demonstrate that it eliminates gravitational mode mixing during the evolution. Finally, the new code is applied to construct initial data for hyperbolic scattering and for binaries with very small separation.

  9. Numerical Predictions of Mode Reflections in an Open Circular Duct: Comparison with Theory

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Hixon, Ray

    2015-01-01

    The NASA Broadband Aeroacoustic Stator Simulation code was used to compute the acoustic field for higher-order modes in a circular duct geometry. To test the accuracy of the results computed by the code, the duct was terminated by an open end with either an infinite flange or no flange. Both open-end conditions have a theoretical solution that was used for comparison with the computed results. Excellent agreement for the reflection matrix values was achieved after suitable refinement of the grid at the open end. The study also revealed issues with the level of the mode amplitude introduced into the acoustic field from the source boundary, and with the amount of reflection that occurred at the source boundary when a general nonreflecting boundary condition was applied.

  10. Surface tension models for a multi-material ALE code with AMR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wangyi; Koniges, Alice; Gott, Kevin

    A number of surface tension models have been implemented in a 3D multi-physics multi-material code, ALE–AMR, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR). ALE–AMR is unique in its ability to model hot radiating plasmas, cold fragmenting solids, and most recently, the deformation of molten material. The surface tension models implemented include a diffuse interface approach with special numerical techniques to remove parasitic flow, and a height function approach in conjunction with a volume-fraction interface reconstruction package. These surface tension models are benchmarked with a variety of test problems. Based on these results, the height function approach using volume fractions was chosen to simulate droplet dynamics associated with extreme ultraviolet (EUV) lithography.
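
    For orientation, a height-function approach of this general type recovers interface curvature from column sums of volume fractions; a 2D sketch with hypothetical array names (the ALE–AMR implementation is more elaborate):

        import numpy as np

        def height_function_curvature(vof_stencil, dx, dy):
            """Curvature from a 3-column stencil of volume fractions with
            shape (ny, 3): kappa = h'' / (1 + h'^2)^(3/2)."""
            h = vof_stencil.sum(axis=0) * dy  # fluid height per column
            hp = (h[2] - h[0]) / (2.0 * dx)   # centered first derivative
            hpp = (h[2] - 2.0 * h[1] + h[0]) / dx**2
            return hpp / (1.0 + hp * hp) ** 1.5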

  11. What's New in GSAS-II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toby, Brian H.; Von Dreele, Robert B.

    The General Structure and Analysis Software II (GSAS-II) package is an all-new crystallographic analysis package written to replace and extend the capabilities of the universal and widely used GSAS and EXPGUI packages. GSAS-II was described in a 2013 article, but considerable work has been completed since then. This paper describes the advances, which include: rigid body fitting and structure solution modules; improved treatment for parametric refinements and equation of state fitting; and small-angle scattering data reduction and analysis. GSAS-II offers versatile and extensible modules for import and export of data and results. Capabilities are provided for users to select any version of the code. Code documentation has reached 150 pages and 17 web tutorials are offered. © 2014 International Centre for Diffraction Data.

  12. An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Erickson, Larry L.

    1994-01-01

    A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry-adaptive procedure is also incorporated.

  13. Surface tension models for a multi-material ALE code with AMR

    DOE PAGES

    Liu, Wangyi; Koniges, Alice; Gott, Kevin; ...

    2017-06-01

    A number of surface tension models have been implemented in a 3D multi-physics multi-material code, ALE–AMR, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR). ALE–AMR is unique in its ability to model hot radiating plasmas, cold fragmenting solids, and most recently, the deformation of molten material. The surface tension models implemented include a diffuse interface approach with special numerical techniques to remove parasitic flow, and a height function approach in conjunction with a volume-fraction interface reconstruction package. These surface tension models are benchmarked with a variety of test problems. Based on these results, the height function approach using volume fractions was chosen to simulate droplet dynamics associated with extreme ultraviolet (EUV) lithography.

  14. Global transcriptome analysis reveals extensive gene remodeling, alternative splicing and differential transcription profiles in non-seed vascular plant Selaginella moellendorffii.

    PubMed

    Zhu, Yan; Chen, Longxian; Zhang, Chengjun; Hao, Pei; Jing, Xinyun; Li, Xuan

    2017-01-25

    Selaginella moellendorffii, a lycophyte, is a model plant for studying the early evolution and development of vascular plants. As the first and only sequenced lycophyte to date, the genome of S. moellendorffii revealed many conserved genes and pathways, as well as specialized genes different from flowering plants. Despite the progress made, little is known about long noncoding RNAs (lncRNAs) and the alternative splicing (AS) of coding genes in S. moellendorffii. Its coding gene models have not been fully validated with transcriptome data. Furthermore, it remains important to understand whether regulatory mechanisms similar to those of flowering plants are used, and how they operate in a non-seed primitive vascular plant. RNA-sequencing (RNA-seq) was performed for three S. moellendorffii tissues, root, stem, and leaf, by constructing strand-specific RNA-seq libraries from RNA purified using the RiboMinus isolation protocol. A total of 176 million reads (44 Gbp) were obtained from the three tissue types and were mapped to the S. moellendorffii genome. By comparing with the 22,285 existing gene models of S. moellendorffii, we identified 7930 high-confidence novel coding genes (a 35.6% increase) and, for the first time, reported 4422 lncRNAs in a lycophyte. Further, we refined 2461 (11.0%) of the existing gene models and identified 11,030 AS events (for 5957 coding genes), revealed for the first time for lycophytes. Tissue-specific gene expression with functional implications was analyzed, and 1031, 554, and 269 coding genes, and 174, 39, and 17 lncRNAs, were identified in root, stem, and leaf tissues, respectively. The expression of critical genes for vascular development stages, i.e., formation of provascular cells, xylem specification and differentiation, and phloem specification and differentiation, was compared in S. moellendorffii tissues, indicating a less complex regulatory mechanism in lycophytes than in flowering plants. The results were further strengthened by the evolutionary trend of seven transcription factor families related to vascular development, observed among four representative species of seed and non-seed vascular plants, and nonvascular land and aquatic plants. The deep RNA-seq study of S. moellendorffii discovered extensive new gene content, including novel coding genes, lncRNAs, AS events, and refined gene models. Compared to flowering vascular plants, S. moellendorffii displayed less complexity in gene structure, alternative splicing, and the regulatory elements of vascular development. The study offers important insight into the evolution of vascular plants and into the regulatory mechanisms of vascular development in a non-seed plant.

  15. Poly(A) code analyses reveal key determinants for tissue-specific mRNA alternative polyadenylation

    PubMed Central

    Weng, Lingjie; Li, Yi; Xie, Xiaohui; Shi, Yongsheng

    2016-01-01

    mRNA alternative polyadenylation (APA) is a critical mechanism for post-transcriptional gene regulation and is often regulated in a tissue- and/or developmental stage-specific manner. An ultimate goal for the APA field has been to be able to computationally predict APA profiles under different physiological or pathological conditions. As a first step toward this goal, we have assembled a poly(A) code for predicting tissue-specific poly(A) sites (PASs). Based on a compendium of over 600 features that have known or potential roles in PAS selection, we have generated and refined a machine-learning algorithm using multiple high-throughput sequencing-based data sets of tissue-specific and constitutive PASs. This code can predict tissue-specific PASs with >85% accuracy. Importantly, by analyzing the prediction performance based on different RNA features, we found that PAS context, including the distance between alternative PASs and the relative position of a PAS within the gene, is a key feature for determining the susceptibility of a PAS to tissue-specific regulation. Our poly(A) code provides a useful tool for not only predicting tissue-specific APA regulation, but also for studying its underlying molecular mechanisms. PMID:27095026
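
    A hedged sketch of the kind of supervised model such a poly(A) code could rest on (illustrative only; the study's actual learning algorithm and feature set are described in the paper): train a classifier on per-site feature vectors labeled as tissue-specific versus constitutive PASs.

        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import cross_val_score

        # X: hypothetical (n_sites, n_features) matrix of PAS features,
        # e.g., inter-PAS distance and relative position within the gene;
        # y: 1 for tissue-specific PASs, 0 for constitutive ones.
        def train_pas_classifier(X, y):
            clf = GradientBoostingClassifier()
            acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
            return clf.fit(X, y), acc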

  16. A systematic review of validated methods for identifying acute respiratory failure using administrative and claims data.

    PubMed

    Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's (FDA) Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of acute respiratory failure (ARF). PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the ARF HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify ARF, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on ARF algorithms and validation estimates. Only two studies provided codes for ARF, each using related yet different ICD-9 codes (i.e., ICD-9 codes 518.8, "other diseases of lung," and 518.81, "acute respiratory failure"). Neither study provided validation estimates. Research needs to be conducted on designing validation studies to test ARF algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  17. Evaluation of fusion-evaporation cross-section calculations

    NASA Astrophysics Data System (ADS)

    Blank, B.; Canchel, G.; Seis, F.; Delahaye, P.

    2018-02-01

    Calculated fusion-evaporation cross sections from five different codes are compared to experimental data. The present comparison extends over a large range of nuclei and isotopic chains to investigate the evolution of experimental and calculated cross sections. All models more or less overestimate the experimental cross sections. We found reasonable agreement by using the geometric mean of the five model calculations and dividing this average by a factor of 11.2. More refined analyses are made, for example, for the 100Sn region.
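
    The proposed estimator is simply the geometric mean of the five code predictions scaled down by the empirical factor of 11.2; a one-function worked sketch:

        import numpy as np

        def estimated_cross_section(sigmas, scale=11.2):
            """Geometric mean of the model cross sections divided by the
            empirical factor reported in the paper."""
            sigmas = np.asarray(sigmas, dtype=float)
            return np.exp(np.log(sigmas).mean()) / scale

        # estimated_cross_section([120.0, 90.0, 200.0, 150.0, 60.0])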

  18. Fatty Acid Synthesis Gene Variants and Breast Cancer Risk: A Study within the European Prospective Investigation into Cancer and Nutrition

    DTIC Science & Technology

    2005-02-01

    refined carbohydrates, is associated with high incidence of breast cancer in women. Excess energy intake causes elevated blood levels of glucose and...confer increased breast cancer susceptibility. In a series of 46 breast cancer cases, we are systematically searching the coding and regulatory regions of...of excess energy in the form of triglycerides , produced either from the diet fatty acids or from those synthesized de novo. Excess energy intake and

  19. Percept User Manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carnes, Brian; Kennon, Stephen Ray

    2017-05-01

    This document is the main user guide for the Sierra/Percept capabilities including the mesh_adapt and mesh_transfer tools. Basic capabilities for uniform mesh refinement (UMR) and mesh transfers are discussed. Examples are used to provide illustration. Future versions of this manual will include more advanced features such as geometry and mesh smoothing. Additionally, all the options for the mesh_adapt code will be described in detail. Capabilities for local adaptivity in the context of offline adaptivity will also be included.

  20. Compiling standardized information from clinical practice: using content analysis and ICF Linking Rules in a goal-oriented youth rehabilitation program.

    PubMed

    Lustenberger, Nadia A; Prodinger, Birgit; Dorjbal, Delgerjargal; Rubinelli, Sara; Schmitt, Klaus; Scheel-Sailer, Anke

    2017-09-23

    To illustrate how routinely written narrative admission and discharge reports of a rehabilitation program for eight youths with chronic neurological health conditions can be transformed to the International Classification of Functioning, Disability and Health. First, a qualitative content analysis was conducted by building meaningful units from text segments of the reports assigned to the five elements of the Rehab-Cycle®: goal; assessment; assignment; intervention; evaluation. Second, the meaningful units were linked to the ICF using the refined ICF Linking Rules. With the first step of the transformation, the emphasis of the narrative reports changed to a process-oriented interdisciplinary layout, revealing three thematic blocks of goals: mobility, self-care, and mental and social functions. The 95 unique linked ICF codes could be grouped into clinically meaningful, goal-centered ICF codes. Between the two independent linkers, the agreement rate improved after complementing the rules with additional agreements. The ICF Linking Rules can be used to compile standardized health information from narrative reports if these are structured beforehand. The process requires time and expertise. To implement the ICF into common practice, the findings provide a starting point for reporting rehabilitation that builds upon existing practice and adheres to international standards. Implications for Rehabilitation: This study provides evidence that routinely collected health information from rehabilitation practice can be transformed to the International Classification of Functioning, Disability and Health by using the "ICF Linking Rules"; however, this requires time and expertise. The Rehab-Cycle®, including assessments, assignments, goal setting, interventions, and goal evaluation, serves as a feasible framework for structuring this rehabilitation program and ensures that the complexity of local practice is appropriately reflected. The refined "ICF Linking Rules" lead to a standardized transformation of narrative text and thus to higher quality and increased transparency. As a next step, the resulting format of goal codes supplemented by goal-clarifying codes could be validated to strengthen the implementation of the International Classification of Functioning, Disability and Health in rehabilitation routine while respecting the variety of clinical practice.

  1. Editorial: The publication of geoscientific model developments v1.1

    NASA Astrophysics Data System (ADS)

    Executive Editors, GMD

    2015-10-01

    Version 1.0 of the editorial of the EGU (European Geosciences Union) journal, Geoscientific Model Development (GMD), was published in 2013. In that editorial an assessment was made of the progress the journal had made since it started, and some revisions to the editorial policy were introduced. After 2 years of experience with this revised editorial policy there are a few required updates, refinements and clarifications, so here we present version 1.1 of the editorial. The most significant amendments relate to the peer-review criteria as presented in the Framework for GMD manuscript types, which is published as an appendix to this paper and also available on the GMD manuscript types webpage. We also slightly refine and update the Publication guide and introduce a self-contained code and data policy. The changes are summarised as follows:
    - All manuscript types are now required to include code or data availability paragraphs, and model code must always be made available (in the case of copyright or other legal issues, to the editor at a minimum).
    - The role of evaluation in GMD papers is clarified, and a separate evaluation paper type is introduced. Model descriptions must already be published or in peer review when separate evaluation papers are submitted.
    - Observationally derived data should normally be published in a data journal rather than in GMD. Syntheses of data which were specifically designed for tasks such as model boundary conditions or direct evaluation of model output may, however, be published in GMD.
    - GMD publishes a broad range of different kinds of models, and this fact is now more explicitly acknowledged.
    - The main changes to the Publication guide are the addition of guidelines for editors when assessing papers at the initial review stage. Before sending papers for peer review, editors are required to make sure that papers comply with the Framework for GMD paper types and to carefully consider the topic of plagiarism.
    - A new appendix, the GMD code and data policy, is included.
    Version 1.1 of the manuscript types and Publication guide are included in the appendices with changed sentences marked in bold font.

  2. Simulations of recoiling black holes: adaptive mesh refinement and radiative transfer

    NASA Astrophysics Data System (ADS)

    Meliani, Zakaria; Mizuno, Yosuke; Olivares, Hector; Porth, Oliver; Rezzolla, Luciano; Younsi, Ziri

    2017-02-01

    Context. In many astrophysical phenomena, and especially in those that involve the high-energy regimes that always accompany the astronomical phenomenology of black holes and neutron stars, physical conditions that are achieved are extreme in terms of speeds, temperatures, and gravitational fields. In such relativistic regimes, numerical calculations are the only tool to accurately model the dynamics of the flows and the transport of radiation in the accreting matter. Aims: We here continue our effort of modelling the behaviour of matter when it orbits or is accreted onto a generic black hole by developing a new numerical code that employs advanced techniques geared towards solving the equations of general-relativistic hydrodynamics. Methods: More specifically, the new code employs a number of high-resolution shock-capturing Riemann solvers and reconstruction algorithms, exploiting the enhanced accuracy and the reduced computational cost of adaptive mesh-refinement (AMR) techniques. In addition, the code makes use of sophisticated ray-tracing libraries that, coupled with general-relativistic radiation-transfer calculations, allow us to accurately compute the electromagnetic emissions from such accretion flows. Results: We validate the new code by presenting an extensive series of stationary accretion flows either in spherical or axial symmetry that are performed either in two or three spatial dimensions. In addition, we consider the highly nonlinear scenario of a recoiling black hole produced in the merger of a supermassive black-hole binary interacting with the surrounding circumbinary disc. In this way, we can present for the first time ray-traced images of the shocked fluid and the light curve resulting from consistent general-relativistic radiation-transport calculations from this process. Conclusions: The work presented here lays the ground for the development of a generic computational infrastructure employing AMR techniques to accurately and self-consistently calculate general-relativistic accretion flows onto compact objects. In addition to the accurate handling of the matter, we provide a self-consistent electromagnetic emission from these scenarios by solving the associated radiative-transfer problem. While magnetic fields are currently excluded from our analysis, the tools presented here can have a number of applications to study accretion flows onto black holes or neutron stars.

  3. Refined liquid smoke: a potential antilisterial additive to cold-smoked sockeye salmon (Oncorhynchus nerka).

    PubMed

    Montazeri, Naim; Himelbloom, Brian H; Oliveira, Alexandra C M; Leigh, Mary Beth; Crapo, Charles A

    2013-05-01

    Cold-smoked salmon (CSS) is a potentially hazardous ready-to-eat food product due to the high risk of contamination with Listeria monocytogenes and lack of a listericidal step. We investigated the antilisterial property of liquid smokes (LS) against Listeria innocua ATCC 33090 (surrogate to L. monocytogenes) as a potential supplement to vacuum-packaged CSS. A full-strength LS (Code 10-Poly), and three commercially refined fractions (AM-3, AM-10, and 1291) having less color and flavor (lower content of phenols and carbonyl-containing compounds) were tested. In vitro assays showed strong inhibition for all LS except for 1291. The CSS strips were surface coated with AM-3 and AM-10 at 1% LS (vol/wt) with an L-shaped glass rod and then inoculated with L. innocua at 3.5 log CFU/g, vacuum packaged, and stored at 4°C. The LS did not completely eliminate L. innocua but provided a 2-log reduction by day 14, with no growth up to 35 days of refrigerated storage. A simple difference sensory test by 180 untrained panelists showed the application of AM-3 did not significantly influence the overall sensorial quality of CSS. In essence, the application of the refined LS as an antilisterial additive to CSS is recommended.

  4. Statistical radii associated with amino acids to determine the contact map: fixing the structure of a type I cohesin domain in the Clostridium thermocellum cellulosome

    NASA Astrophysics Data System (ADS)

    Chwastyk, Mateusz; Poma Bernaola, Adolfo; Cieplak, Marek

    2015-07-01

    We propose to improve and simplify protein refinement procedures through consideration of which pairs of amino acid residues should form native contacts. We first consider 11,330 proteins from the CATH database to determine statistical distributions of contacts associated with a given type of amino acid. The distributions are taken over the distances between the α-C atoms that are in contact. Based on these data, we determine typical radii of effective spheres that can be placed on the α-C atoms in order to reconstruct the distribution of the contact lengths. This is done by checking for overlaps with enlarged van der Waals spheres associated with heavy atoms on other amino acids. The resulting contacts can be used to identify non-native contacts that may arise during the time evolution of structure-based models. Here, the radii are used to guide the reconstruction of nine missing side chains in a type I cohesin domain with the Protein Data Bank code 1AOH. We first identify the likely missing contacts and then sculpt the corresponding side chains with standard refinement tools to achieve consistency with the expected contact map. One ambiguity in the refinement is resolved by determining all-atom conformational energies.
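
    A schematic version of the overlap test described here (with the enlargement factor left as an assumed parameter; the paper derives the effective radii statistically): two residues are in contact when the effective sphere on one α-C overlaps an enlarged van der Waals sphere of a heavy atom on the other residue.

        import numpy as np

        def in_contact(ca_pos, r_eff, heavy_atoms, vdw_radii, enlarge=1.24):
            """True if the effective alpha-C sphere of one residue overlaps
            any enlarged van der Waals sphere of the other residue's heavy
            atoms; the factor 1.24 is an assumed enlargement parameter."""
            d = np.linalg.norm(heavy_atoms - ca_pos, axis=1)
            return bool(np.any(d < r_eff + enlarge * vdw_radii))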

  5. 3D Numerical Prediction of Gas-Solid Flow Behavior in CFB Risers for Geldart A and B Particles

    NASA Astrophysics Data System (ADS)

    Özel, A.; Fede, P.; Simonin, O.

    In this study, monodisperse flows of Geldart A- and B-type particles in square risers were simulated with an Eulerian n-fluid 3D unsteady code. Two transport equations, developed in the frame of the kinetic theory of granular media and supplemented by the interstitial fluid effect and the interaction with turbulence (Balzer et al., 1996), are solved to model the effect of velocity fluctuations and inter-particle collisions on the hydrodynamics of the dispersed phase. The studied flow geometries are three-dimensional vertical cold channels, excluding the cyclone, buffer, and return pipe of a typical circulating fluidized bed. For both types of particles, parametric studies were carried out to determine the influence of boundary conditions, physical parameters, and turbulence modeling. The grid dependency was analyzed with mesh refinement in the horizontal and axial directions. For B-type particles, the results are in good qualitative agreement with the experiments, and the numerical predictions are slightly improved by mesh refinement. On the contrary, the simulations with A-type particles show less satisfactory agreement with the available measurements and are highly sensitive to mesh refinement. Further studies are being carried out to improve the A-type particle predictions by modeling subgrid-scale effects in the framework of a large-eddy simulation approach.

  6. MODPATH-LGR; documentation of a computer program for particle tracking in shared-node locally refined grids by using MODFLOW-LGR

    USGS Publications Warehouse

    Dickinson, Jesse; Hanson, R.T.; Mehl, Steffen W.; Hill, Mary C.

    2011-01-01

    The computer program described in this report, MODPATH-LGR, is designed to allow simulation of particle tracking in locally refined grids. The locally refined grids are simulated by using MODFLOW-LGR, which is based on MODFLOW-2005, the three-dimensional groundwater-flow model published by the U.S. Geological Survey. The documentation includes brief descriptions of the methods used and detailed descriptions of the required input files and how the output files are typically used. The code for this model is available for downloading from the World Wide Web from a U.S. Geological Survey software repository. The repository is accessible from the U.S. Geological Survey Water Resources Information Web page at http://water.usgs.gov/software/ground_water.html. The performance of the MODPATH-LGR program has been tested in a variety of applications. Future applications, however, might reveal errors that were not detected in the test simulations. Users are requested to notify the U.S. Geological Survey of any errors found in this document or the computer program by using the email address available on the Web site. Updates might occasionally be made to this document and to the MODPATH-LGR program, and users should check the Web site periodically.

  7. RICO: A NEW APPROACH FOR FAST AND ACCURATE REPRESENTATION OF THE COSMOLOGICAL RECOMBINATION HISTORY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fendt, W. A.; Wandelt, B. D.; Chluba, J.

    2009-04-15

    We present RICO, a code designed to compute the ionization fraction of the universe during the epoch of hydrogen and helium recombination with an unprecedented combination of speed and accuracy. This is accomplished by training the machine learning code PICO on the calculations of a multilevel cosmological recombination code which self-consistently includes several physical processes that were neglected previously. After training, RICO is used to fit the free electron fraction as a function of the cosmological parameters. While, for example, at low redshifts (z ≲ 900), much of the net change in the ionization fraction can be captured by lowering the hydrogen fudge factor in RECFAST by about 3%, RICO provides a means of effectively using the accurate ionization history of the full recombination code in the standard cosmological parameter estimation framework without the need to add new or refined fudge factors or functions to a simple recombination model. Within the new approach presented here, it is easy to update RICO whenever a more accurate full recombination code becomes available. Once trained, RICO computes the cosmological ionization history with negligible fitting error in ~10 ms, a speedup of at least 10^6 over the full recombination code that was used here. Also, RICO is able to reproduce the ionization history of the full code to a level well below 0.1%, thereby ensuring that the theoretical power spectra of cosmic microwave background (CMB) fluctuations can be computed to sufficient accuracy and speed for the analysis of data from upcoming CMB experiments like Planck. Furthermore, it will enable cross-checking different recombination codes across cosmological parameter space, a comparison that will be very important in order to assure the accurate interpretation of future CMB data.

  8. A cell-vertex multigrid method for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Radespiel, R.

    1989-01-01

    A cell-vertex scheme for the Navier-Stokes equations, which is based on central difference approximations and Runge-Kutta time stepping, is described. Using local time stepping, implicit residual smoothing, a multigrid method, and carefully controlled artificial dissipative terms, very good convergence rates are obtained for a wide range of two- and three-dimensional flows over airfoils and wings. The accuracy of the code is examined by grid refinement studies and comparison with experimental data. For an accurate prediction of turbulent flows with strong separations, a modified version of the nonequilibrium turbulence model of Johnson and King is introduced, which is well suited for implementation into three-dimensional Navier-Stokes codes. It is shown that the solutions for three-dimensional flows with strong separations can be dramatically improved when a nonequilibrium model of turbulence is used.

  9. 3D Indoor Positioning of UAVs with Spread Spectrum Ultrasound and Time-of-Flight Cameras

    PubMed Central

    Aguilera, Teodoro

    2017-01-01

    This work proposes the use of a hybrid acoustic and optical indoor positioning system for the accurate 3D positioning of Unmanned Aerial Vehicles (UAVs). The acoustic module of this system is based on a Time-Code Division Multiple Access (T-CDMA) scheme, where the sequential emission of five spread spectrum ultrasonic codes is performed to compute the horizontal vehicle position following a 2D multilateration procedure. The optical module is based on a Time-Of-Flight (TOF) camera that provides an initial estimation for the vehicle height. A recursive algorithm programmed on an external computer is then proposed to refine the estimated position. Experimental results show that the proposed system can increase the accuracy of a solely acoustic system by 70–80% in terms of positioning mean square error. PMID:29301211
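
    For reference, 2D multilateration of this kind can be posed as a linear least-squares problem by differencing the range equations against one reference beacon; a generic sketch (not the authors' exact algorithm):

        import numpy as np

        def multilaterate_2d(anchors, ranges):
            """Least-squares 2D position from >= 3 beacon positions and
            measured ranges, linearized against the first beacon."""
            a0, d0 = anchors[0], ranges[0]
            A = 2.0 * (anchors[1:] - a0)
            b = (d0**2 - ranges[1:]**2
                 + (anchors[1:]**2).sum(axis=1) - (a0**2).sum())
            pos, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pos

        # anchors = np.array([[0., 0.], [4., 0.], [0., 4.]])
        # multilaterate_2d(anchors, np.sqrt(np.array([5., 13., 5.])))  # -> (1, 2)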

  10. Verification of Advective Bar Elements Implemented in the Aria Thermal Response Code.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Brantley

    2016-01-01

    A verification effort was undertaken to evaluate the implementation of the new advective bar capability in the Aria thermal response code. Several approaches to the verification process were taken: a mesh refinement study to demonstrate solution convergence in the fluid and the solid, visual examination of the mapping of the advective bar element nodes to the surrounding surfaces, and a comparison of solutions produced using the advective bars for simple geometries with solutions from commercial CFD software. The mesh refinement study showed solution convergence for simple pipe flow in both temperature and velocity. Guidelines were provided to achieve appropriate meshes between the advective bar elements and the surrounding volume. Simulations of pipe flow using advective bar elements in Aria have been compared to simulations using the commercial CFD software ANSYS Fluent and provided comparable solutions in temperature and velocity, supporting proper implementation of the new capability.

  11. Subsumption principles underlying medical concept systems and their formal reconstruction.

    PubMed Central

    Bernauer, J.

    1994-01-01

    Conventional medical concept systems represent generic concept relations by hierarchical coding principles. Often, these coding principles constrain the concept system and reduce the potential for automatic derivation of subsumption. Formal reconstruction of medical concept systems is an approach that is based on the conceptual representation of meanings and that allows for the application of formal criteria for subsumption. Those criteria must reflect intuitive principles of subordination that underlie conventional medical concept systems. In particular, these are: the subordinate concept results (1) from adding a specializing criterion to the superordinate concept, (2) from refining the primary category, or a criterion of the superordinate concept, by a concept that is less general, (3) from adding a partitive criterion to a criterion of the superordinate, (4) from refining a criterion by a concept that is less comprehensive, and finally (5) from coordinating the superordinate concept, or one of its criteria. This paper introduces a formalism called BERNWARD that aims at the formal reconstruction of medical concept systems according to these intuitive principles. The automatic derivation of hierarchical relations is primarily supported by explicit generic and explicit partitive hierarchies of concepts and, secondly, by two formal criteria that are based on the structure of concept descriptions and explicit hierarchical relations between their elements, namely formal subsumption and part-sensitive subsumption. Formal subsumption takes only generic relations into account; part-sensitive subsumption additionally regards partitive relations between criteria. This approach seems to be flexible enough to cope with unforeseeable effects of partitive criteria on subsumption. PMID:7949907

  12. Connectivity Restoration in Wireless Sensor Networks via Space Network Coding.

    PubMed

    Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing

    2017-04-20

    The problem of finding the number and optimal positions of relay nodes for restoring the network connectivity in partitioned Wireless Sensor Networks (WSNs) is Non-deterministic Polynomial-time hard (NP-hard) and thus heuristic methods are preferred to solve it. This paper proposes a novel polynomial time heuristic algorithm, namely, Relay Placement using Space Network Coding (RPSNC), to solve this problem, where Space Network Coding, also called Space Information Flow (SIF), is a new research paradigm that studies network coding in Euclidean space, in which extra relay nodes can be introduced to reduce the cost of communication. Unlike contemporary schemes that are often based on Minimum Spanning Tree (MST), Euclidean Steiner Minimal Tree (ESMT) or a combination of MST with ESMT, RPSNC is a new min-cost multicast space network coding approach that combines Delaunay triangulation and non-uniform partitioning techniques for generating a number of candidate relay nodes, and then linear programming is applied for choosing the optimal relay nodes and computing their connection links with terminals. Subsequently, an equilibrium method is used to refine the locations of the optimal relay nodes, by moving them to balanced positions. RPSNC can adapt to any density distribution of relay nodes and terminals, as well as any density distribution of terminals. The performance and complexity of RPSNC are analyzed and its performance is validated through simulation experiments.
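
    A toy sketch of the candidate-generation stage described here, using SciPy's Delaunay triangulation (the full RPSNC pipeline also applies non-uniform partitioning, linear programming, and an equilibrium refinement of relay positions):

        import numpy as np
        from scipy.spatial import Delaunay

        def candidate_relays(terminals):
            """Candidate relay positions taken as the centroids of the
            Delaunay triangles spanned by the terminal locations."""
            tri = Delaunay(terminals)
            return terminals[tri.simplices].mean(axis=1)

        # terminals = np.random.rand(12, 2)
        # candidates = candidate_relays(terminals)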

  13. Gender, Cultural Influences, and Coping with Musculoskeletal Pain at Work: The Experience of Malaysian Female Office Workers.

    PubMed

    Maakip, Ismail; Oakman, Jodi; Stuckey, Rwth

    2017-06-01

    Purpose Workers with musculoskeletal pain (MSP) often continue to work despite their condition. Understanding the factors that enable them to remain at work provides insights into the development of appropriate workplace accommodations. This qualitative study aims to explore the strategies utilised by female Malaysian office workers with MSP to maintain productive employment. Methods A qualitative approach using thematic analysis was used. Individual semi-structured interviews were conducted with 13 female Malaysian office workers with MSP. Initial codes were identified and refined through iterative discussion to further develop the emerging codes and modify the coding framework. A further stage of coding was undertaken to eliminate redundant codes and establish analytic connections between distinct themes. Results Two major themes were identified: managing the demands of work and maintaining employment with persistent musculoskeletal pain. Participants reported developing strategies to assist them to remain at work, but most focused on individually initiated adaptations or peer support, rather than systemic changes to work systems or practices. A combination of the patriarchal and hierarchical cultural occupational context emerged as a critical factor in the finding of individual or peer based adaptations rather than organizational accommodations. Conclusions It is recommended that supervisors be educated in the benefits of maintaining and retaining employees with MSP, and encouraged to challenge cultural norms and develop appropriate flexible workplace accommodations through consultation and negotiation with these workers.

  14. Methodological considerations for observational coding of eating and feeding behaviors in children and their families.

    PubMed

    Pesch, Megan H; Lumeng, Julie C

    2017-12-15

    Behavioral coding of videotaped eating and feeding interactions can provide researchers with rich observational data and unique insights into eating behaviors, food intake, and food selection, as well as the interpersonal and mealtime dynamics of children and their families. Unlike self-report measures of eating and feeding practices, the coding of videotaped eating and feeding behaviors allows for quantitative and qualitative examination of behaviors and practices that participants may not self-report. While this methodology is increasingly common, behavioral coding protocols and methodology are not widely shared in the literature. This has important implications for the validity and reliability of coding schemes across settings. Additional guidance on how to design, implement, code, and analyze videotaped eating and feeding behaviors could contribute to advancing the science of behavioral nutrition. The objectives of this narrative review are to review methodology for the design, operationalization, and coding of videotaped behavioral eating and feeding data in children and their families, and to highlight best practices. When capturing eating and feeding behaviors through analysis of videotapes, it is important for the study and coding to be hypothesis driven. Study design considerations include how to best capture the target behaviors through selection of a controlled experimental laboratory environment versus a home mealtime, the duration of video recording, the number of observations needed to achieve reliability across eating episodes, and technical issues in video recording and sound quality. Study design must also take into account plans for coding the target behaviors, which may include behavior frequency, duration, categorization, or qualitative descriptors. Coding scheme creation and refinement occur through an iterative process. Reliability between coders can be challenging to achieve but is paramount to the scientific rigor of the methodology. The analysis approach depends on how the data were coded and collapsed. Behavioral coding of videotaped eating and feeding behaviors can capture rich data "in vivo" that are otherwise unobtainable from self-report measures. While data collection and coding are time-intensive, the data yielded can be extremely valuable. Additional sharing of methodology and coding schemes around eating and feeding behaviors could advance the science and field.

  15. Simulations of Rayleigh Taylor Instabilities in the presence of a Strong Radiative shock

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Kuranz, Carolyn; Shvarts, Dov; Drake, R. P.

    2016-10-01

    Recent Supernova Rayleigh Taylor experiments on the National Ignition Facility (NIF) are relevant to the evolution of core-collapse supernovae in which red supergiant stars explode. Here we report simulations of these experiments using the CRASH code. The CRASH code, developed at the University of Michigan to design and analyze high-energy-density experiments, is an Eulerian code with block-adaptive mesh refinement, multigroup diffusive radiation transport, and electron heat conduction. We explore two cases, one in which the shock is strongly radiative, and another with negligible radiation. The experiments in all cases produced structures at embedded interfaces by the Rayleigh Taylor instability. In the weaker-shock case the shocked environment is cooler and the instability grows classically. The strongly radiative shock produces a warm environment near the instability, ablates the interface, and alters the growth. We compare the simulated results with the experimental data and attempt to explain the differences. This work is funded by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, Grant Number DE-NA0002956.
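
    For the classical (non-radiative) regime mentioned above, the linear growth rate of a Rayleigh Taylor mode with wavenumber k is sigma = sqrt(A g k), with A the Atwood number. A small worked example with illustrative numbers only (not the experimental values):

        import math

        # Classical linear Rayleigh-Taylor growth rate: sigma = sqrt(A * g * k).
        # Densities, acceleration, and wavelength below are illustrative only.
        rho_heavy, rho_light = 8.0, 1.0          # g/cm^3 (assumed)
        atwood = (rho_heavy - rho_light) / (rho_heavy + rho_light)
        g = 1.0e14                               # cm/s^2, assumed HED-scale acceleration
        wavelength = 50e-4                       # cm (50 microns, assumed seed wavelength)
        k = 2 * math.pi / wavelength
        sigma = math.sqrt(atwood * g * k)        # e-folding growth rate, 1/s
        print(f"Atwood = {atwood:.2f}, growth rate = {sigma:.3e} 1/s")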

  16. Adapting a Clinical Data Repository to ICD-10-CM through the use of a Terminology Repository

    PubMed Central

    Cimino, James J.; Remennick, Lyubov

    2014-01-01

    Clinical data repositories frequently contain patient diagnoses coded with the International Classification of Diseases, Ninth Revision (ICD-9-CM). These repositories now need to accommodate data coded with the Tenth Revision (ICD-10-CM). Database users wish to retrieve relevant data regardless of the system by which they are coded. We demonstrate how a terminology repository (the Research Entities Dictionary or RED) serves as an ontology relating terms of both ICD versions to each other to support seamless version-independent retrieval from the Biomedical Translational Research Information System (BTRIS) at the National Institutes of Health. We make use of the Centers for Medicare and Medicaid Services’ General Equivalence Mappings (GEMs) to reduce the modeling effort required to determine whether ICD-10-CM terms should be added to the RED as new concepts or as synonyms of existing concepts. A divide-and-conquer approach is used to develop integration heuristics that offer a satisfactory interim solution and facilitate additional refinement of the integration as time and resources allow. PMID:25954344
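
    One way to picture a GEMs-driven triage heuristic of this kind: ICD-10-CM codes with a single unambiguous ICD-9-CM equivalent become candidate synonyms of existing concepts, while one-to-many or unmapped codes are queued for modeling. A toy sketch under those assumptions (the mapping entries are illustrative, not quoted from the GEMs files, which carry additional approximate/combination/no-map flags):

        # Toy triage of ICD-10-CM codes using a GEMs-style mapping table.
        # Entries are illustrative placeholders.
        gems = {
            "E11.9": ["250.00"],            # one-to-one: candidate synonym
            "S72.001A": ["820.8", "820.9"], # one-to-many: needs modeling
            "Z3A.09": [],                   # no ICD-9-CM equivalent
        }

        def triage(icd10_code):
            targets = gems.get(icd10_code, [])
            if len(targets) == 1:
                return ("synonym-candidate", targets[0])
            if targets:
                return ("needs-modeling", targets)  # one-to-many: human review
            return ("new-concept", None)            # no equivalent concept

        for code in gems:
            print(code, "->", triage(code))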

  17. Refined lateral energy correction functions for the KASCADE-Grande experiment based on Geant4 simulations

    NASA Astrophysics Data System (ADS)

    Gherghel-Lascu, A.; Apel, W. D.; Arteaga-Velázquez, J. C.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Fuchs, B.; Fuhrmann, D.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huber, D.; Huege, T.; Kampert, K.-H.; Kang, D.; Klages, H. O.; Link, K.; Łuczak, P.; Mathes, H. J.; Mayer, H. J.; Milke, J.; Mitrica, B.; Morello, C.; Oehlschläger, J.; Ostapchenko, S.; Palmieri, N.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Zabierowski, J.

    2015-02-01

    In previous studies of KASCADE-Grande data, a Monte Carlo simulation code based on the GEANT3 program has been developed to describe the energy deposited by EAS particles in the detector stations. In an attempt to decrease the simulation time and ensure compatibility with the geometry description in standard KASCADE-Grande analysis software, several structural elements have been neglected in the implementation of the Grande station geometry. To improve the agreement between experimental and simulated data, a more accurate simulation of the response of the KASCADE-Grande detector is necessary. A new simulation code has been developed based on the GEANT4 program, including a realistic geometry of the detector station with structural elements that have not been considered in previous studies. The new code is used to study the influence of a realistic detector geometry on the energy deposited in the Grande detector stations by particles from EAS events simulated by CORSIKA. Lateral Energy Correction Functions are determined and compared with previous results based on GEANT3.

  18. The Role of Ontologies in Schema-based Program Synthesis

    NASA Technical Reports Server (NTRS)

    Bures, Tomas; Denney, Ewen; Fischer, Bernd; Nistor, Eugen C.

    2004-01-01

    Program synthesis is the process of automatically deriving executable code from (non-executable) high-level specifications. It is more flexible and powerful than conventional code generation techniques that simply translate algorithmic specifications into lower-level code or only create code skeletons from structural specifications (such as UML class diagrams). Key to building a successful synthesis system is specializing to an appropriate application domain. The AUTOBAYES and AUTOFILTER systems, under development at NASA Ames, operate in the two domains of data analysis and state estimation, respectively. The central concept of both systems is the schema, a representation of reusable computational knowledge. This can take various forms, including high-level algorithm templates, code optimizations, datatype refinements, or architectural information. A schema also contains applicability conditions that are used to determine when it can be applied safely. These conditions can refer to the initial specification, to intermediate results, or to elements of the partially-instantiated code. Schema-based synthesis uses AI technology to recursively apply schemas to gradually refine a specification into executable code. This process proceeds in two main phases. A front-end gradually transforms the problem specification into a program represented in an abstract intermediate code. A back-end then compiles this further down into a concrete target programming language of choice. A core engine applies schemas to the initial problem specification, then uses the output of those schemas as the input for other schemas, until the full implementation is generated. Since there might be different schemas that implement different solutions to the same problem, this process can generate an entire solution tree. AUTOBAYES and AUTOFILTER have reached the level of maturity where they enable users to solve interesting application problems, e.g., the analysis of Hubble Space Telescope images. They are large (in total around 100 kLoC Prolog), knowledge-intensive systems that employ complex symbolic reasoning to generate a wide range of non-trivial programs for complex application domains. Their schemas can have complex interactions, which make it hard to change them in isolation or even understand what an existing schema actually does. Adding more capabilities by increasing the number of schemas will only worsen this situation, ultimately leading to the entropy death of the synthesis system. The root cause of this problem is that the domain knowledge is scattered throughout the entire system and only represented implicitly in the schema implementations. In our current work, we are addressing this problem by making explicit the knowledge from different parts of the synthesis system. Here, we discuss how Gruber's definition of an ontology as an explicit specification of a conceptualization matches our efforts in identifying and explicating the domain-specific concepts. We outline the roles ontologies play in schema-based synthesis and argue that they address different audiences and serve different purposes. Their first role is descriptive: they serve as explicit documentation, and help to understand the internal structure of the system. Their second role is prescriptive: they provide the formal basis against which the other parts of the system (e.g., schemas) can be checked.
Their final role is referential: ontologies also provide semantically meaningful "hooks" which allow schemas and tools to access the internal state of the program derivation process (e.g., fragments of the generated code) in domain-specific rather than language-specific terms, and thus to modify it in a controlled fashion. For discussion purposes we use AUTOLINEAR, a small synthesis system we are currently experimenting with, which can generate code for solving a system of linear equations, Az = b.
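
    As a schematic illustration of the schema idea (a sketch of the general mechanism only, not the actual AUTOBAYES/AUTOFILTER machinery, which is written in Prolog), a schema can be modeled as an applicability test plus a refinement function; a full system applies such schemas recursively to build a solution tree:

        # Sketch of schema application: rewrite a specification into code
        # fragments. The spec format, schema, and generated call are hypothetical.

        def linear_solve_schema(spec):
            # Applicability condition: the spec asks to solve A z = b.
            if spec.get("task") == "solve_linear":
                # gaussian_elimination is a hypothetical generated-code target.
                return [f"z = gaussian_elimination({spec['A']}, {spec['b']})"]
            return None  # schema not applicable

        def refine(spec, schemas):
            for schema in schemas:
                code = schema(spec)
                if code is not None:
                    return code
            raise ValueError(f"no schema applies to {spec}")

        print(refine({"task": "solve_linear", "A": "A", "b": "b"},
                     [linear_solve_schema]))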

  19. Spherical combustion clouds in explosions

    NASA Astrophysics Data System (ADS)

    Kuhl, A. L.; Bell, J. B.; Beckner, V. E.; Balakrishnan, K.; Aspden, A. J.

    2013-05-01

    This study explores the properties of spherical combustion clouds in explosions. Two cases are investigated: (1) detonation of a TNT charge and combustion of its detonation products with air, and (2) shock dispersion of aluminum powder and its combustion with air. The evolution of the blast wave and ensuing combustion cloud dynamics are studied via numerical simulations with our adaptive mesh refinement combustion code. The code solves the multi-phase conservation laws for a dilute heterogeneous continuum as formulated by Nigmatulin. Single-phase combustion (e.g., TNT with air) is modeled in the fast-chemistry limit. Two-phase combustion (e.g., Al powder with air) uses an induction time model based on Arrhenius fits to Boiko's shock tube data, along with an ignition temperature criterion based on fits to Gurevich's data, and an ignition probability model that accounts for multi-particle effects on cloud ignition. Equations of state are based on polynomial fits to thermodynamic calculations with the Cheetah code, assuming frozen reactants and equilibrium products. Adaptive mesh refinement is used to resolve thin reaction zones and capture the energy-bearing scales of turbulence on the computational mesh (ILES approach). Taking advantage of the symmetry of the problem, azimuthal averaging was used to extract the mean and rms fluctuations from the numerical solution, including: thermodynamic profiles, kinematic profiles, and reaction-zone profiles across the combustion cloud. Fuel consumption was limited to ~60-70%, due to the limited amount of air a spherical combustion cloud can entrain before the turbulent velocity field decays away. Turbulent kinetic energy spectra of the solution were found to have both rotational and dilatational components, due to compressibility effects. The dilatational component was typically about 1% of the rotational component; both seemed to preserve their spectra as they decayed. Kinetic energy of the blast wave decayed due to the pressure field. Turbulent kinetic energy of the combustion cloud decayed due to the enstrophy ⟨ω²⟩ and the dilatation ⟨Δ²⟩.
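
    Induction-time ignition models of the kind described here are commonly implemented by integrating the reciprocal of an Arrhenius induction time along a particle's temperature history, declaring ignition when the integral reaches unity and the ignition-temperature criterion is met. A generic sketch (all constants are placeholders, not the fits to Boiko's or Gurevich's data):

        import math

        # Generic Arrhenius induction time: tau(T) = A * exp(Ea / (R * T)).
        # Ignition when the accumulated integral of dt / tau(T) reaches 1
        # and the local temperature exceeds a threshold.
        # All constants below are assumed placeholders.
        A_PRE = 1.0e-6       # s, pre-exponential factor (assumed)
        EA_OVER_R = 12000.0  # K, activation temperature Ea/R (assumed)
        T_IGN = 900.0        # K, ignition temperature criterion (assumed)

        def induction_time(T):
            return A_PRE * math.exp(EA_OVER_R / T)

        def ignites(temps, dt):
            """March along a particle's temperature history with time step dt."""
            progress = 0.0
            for T in temps:
                progress += dt / induction_time(T)
                if progress >= 1.0 and T >= T_IGN:
                    return True
            return False

        history = [600.0 + 10.0 * i for i in range(200)]  # synthetic heating ramp
        print(ignites(history, dt=1.0e-5))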

  20. Transforming user needs into functional requirements for an antibiotic clinical decision support system: explicating content analysis for system design.

    PubMed

    Bright, T J

    2013-01-01

    Many informatics studies use content analysis to generate functional requirements for system development. Explication of this translational process from qualitative data to functional requirements can strengthen the understanding and scientific rigor when applying content analysis in informatics studies. To describe a user-centered approach transforming emergent themes derived from focus group data into functional requirements for informatics solutions, and to illustrate these methods through the development of an antibiotic clinical decision support system (CDS). The approach consisted of five steps: 1) identify unmet therapeutic planning information needs via Focus Group Study-I, 2) develop a coding framework of therapeutic planning themes to refine the domain scope to antibiotic therapeutic planning, 3) identify functional requirements of an antibiotic CDS system via Focus Group Study-II, 4) discover informatics solutions and functional requirements from coded data, and 5) determine the types of information needed to support the antibiotic CDS system and link with the identified informatics solutions and functional requirements. The coding framework for Focus Group Study-I revealed unmet therapeutic planning needs. Twelve subthemes emerged and were clustered into four themes; analysis indicated a need for an antibiotic CDS intervention. Focus Group Study-II included five types of information needs. Comments from the Barrier/Challenge to information access and Function/Feature themes produced three informatics solutions and 13 functional requirements of an antibiotic CDS system. Comments from the Patient, Institution, and Domain themes generated required data elements for each informatics solution. This study presents one example explicating content analysis of focus group data and the analytic path from narrative data to functional requirements. This 5-step method was illustrated through the development of an antibiotic CDS system, resolving unmet antibiotic prescribing needs. As a reusable approach, these techniques can be refined and applied to resolve unmet information needs with informatics interventions in additional domains.

  1. Transforming User Needs into Functional Requirements for an Antibiotic Clinical Decision Support System

    PubMed Central

    Bright, T.J.

    2013-01-01

    Background: Many informatics studies use content analysis to generate functional requirements for system development. Explication of this translational process from qualitative data to functional requirements can strengthen the understanding and scientific rigor when applying content analysis in informatics studies. Objective: To describe a user-centered approach transforming emergent themes derived from focus group data into functional requirements for informatics solutions, and to illustrate these methods through the development of an antibiotic clinical decision support system (CDS). Methods: The approach consisted of five steps: 1) identify unmet therapeutic planning information needs via Focus Group Study-I, 2) develop a coding framework of therapeutic planning themes to refine the domain scope to antibiotic therapeutic planning, 3) identify functional requirements of an antibiotic CDS system via Focus Group Study-II, 4) discover informatics solutions and functional requirements from coded data, and 5) determine the types of information needed to support the antibiotic CDS system and link with the identified informatics solutions and functional requirements. Results: The coding framework for Focus Group Study-I revealed unmet therapeutic planning needs. Twelve subthemes emerged and were clustered into four themes; analysis indicated a need for an antibiotic CDS intervention. Focus Group Study-II included five types of information needs. Comments from the Barrier/Challenge to information access and Function/Feature themes produced three informatics solutions and 13 functional requirements of an antibiotic CDS system. Comments from the Patient, Institution, and Domain themes generated required data elements for each informatics solution. Conclusion: This study presents one example explicating content analysis of focus group data and the analytic path from narrative data to functional requirements. This 5-step method was illustrated through the development of an antibiotic CDS system, resolving unmet antibiotic prescribing needs. As a reusable approach, these techniques can be refined and applied to resolve unmet information needs with informatics interventions in additional domains. PMID:24454586

  2. Development and validation of an algorithm for identifying urinary retention in a cohort of patients with epilepsy in a large US administrative claims database.

    PubMed

    Quinlan, Scott C; Cheng, Wendy Y; Ishihara, Lianna; Irizarry, Michael C; Holick, Crystal N; Duh, Mei Sheng

    2016-04-01

    The aim of this study was to develop and validate an insurance claims-based algorithm for identifying urinary retention (UR) in epilepsy patients receiving antiepileptic drugs to facilitate safety monitoring. Data from the HealthCore Integrated Research Database(SM) in 2008-2011 (retrospective) and 2012-2013 (prospective) were used to identify epilepsy patients with UR. During the retrospective phase, three algorithms identified potential UR: (i) UR diagnosis code with a catheterization procedure code; (ii) UR diagnosis code alone; or (iii) diagnosis with UR-related symptoms. Medical records for 50 randomly selected patients satisfying ≥1 algorithm were reviewed by urologists to ascertain UR status. Positive predictive value (PPV) and 95% confidence intervals (CI) were calculated for the three component algorithms and the overall algorithm (defined as satisfying ≥1 component algorithms). Algorithms were refined using urologist review notes. In the prospective phase, the UR algorithm was refined using medical records for an additional 150 cases. In the retrospective phase, the PPV of the overall algorithm was 72.0% (95%CI: 57.5-83.8%). Algorithm 3 performed poorly and was dropped. Algorithm 1 was unchanged; urinary incontinence and cystitis were added as exclusionary diagnoses to Algorithm 2. The PPV for the modified overall algorithm was 89.2% (74.6-97.0%). In the prospective phase, the PPV for the modified overall algorithm was 76.0% (68.4-82.6%). Upon adding overactive bladder, nocturia and urinary frequency as exclusionary diagnoses, the PPV for the final overall algorithm was 81.9% (73.7-88.4%). The current UR algorithm yielded a PPV > 80% and could be used for more accurate identification of UR among epilepsy patients in a large claims database. Copyright © 2016 John Wiley & Sons, Ltd.
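
    The final overall algorithm can be pictured as a union of component definitions with exclusionary diagnoses applied. A schematic sketch (the code sets below are illustrative placeholders, not the study's actual ICD/CPT code lists):

        # Schematic claims-based case finding: flag a patient who satisfies
        # component algorithm 1 (UR diagnosis + catheterization procedure) or
        # algorithm 2 (UR diagnosis alone, minus exclusionary diagnoses).
        # All code sets are placeholders.
        UR_DX = {"UR_DX"}                 # urinary retention diagnosis codes
        CATH_PROC = {"CATH_PROC"}         # catheterization procedure codes
        EXCLUSIONS = {"INCONTINENCE", "CYSTITIS", "OAB", "NOCTURIA", "FREQUENCY"}

        def flag_ur(patient_claims):
            dxs = {c for c in patient_claims if not c.startswith("PROC:")}
            procs = {c[5:] for c in patient_claims if c.startswith("PROC:")}
            algo1 = bool(UR_DX & dxs) and bool(CATH_PROC & procs)
            algo2 = bool(UR_DX & dxs) and not (EXCLUSIONS & dxs)
            return algo1 or algo2

        print(flag_ur(["UR_DX", "PROC:CATH_PROC"]))  # True via algorithm 1
        print(flag_ur(["UR_DX", "CYSTITIS"]))        # False: excluded from algorithm 2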

  3. Asphyxia in the Newborn: Evaluating the Accuracy of ICD Coding, Clinical Diagnosis and Reimbursement: Observational Study at a Swiss Tertiary Care Center on Routinely Collected Health Data from 2012-2015

    PubMed Central

    Rimle, Carole; Zwahlen, Marcel; Triep, Karen; Raio, Luigi; Nelle, Mathias

    2017-01-01

    Background The ICD-10 categories of the diagnosis “perinatal asphyxia” are defined by clinical signs and a 1-minute Apgar score value. However, the modern conception is more complex and considers metabolic values related to the clinical state. A lack of consistency between the former clinical and the latter encoded diagnosis poses questions over the validity of the data. Our aim was to establish a refined classification which is able to distinctly separate cases according to clinical criteria and financial resource consumption. The hypothesis of the study is that outdated ICD-10 definitions result in differences between the encoded diagnosis asphyxia and the medical diagnosis referring to the clinical context. Methods Routinely collected health data (encoding and financial data) of the University Hospital of Bern were used. The study population was chosen by selected ICD codes, the encoded and the clinical diagnosis were analyzed and each case was reevaluated. The new method categorizes the diagnoses of perinatal asphyxia into the following groups: mild, moderate and severe asphyxia, metabolic acidosis and normal clinical findings. The differences of total costs per case were determined by using one-way analysis of variance. Results The study population included 622 cases (P20 “intrauterine hypoxia” 399, P21 “birth asphyxia” 233). By applying the new method, the diagnosis asphyxia could be ruled out with a high probability in 47% of cases and the variance of case related costs (one-way ANOVA: F (5, 616) = 55.84, p < 0.001, multiple R-squared = 0.312, p < 0.001) could be best explained. The classification of the severity of asphyxia could clearly be linked to the complexity of cases. Conclusion The refined coding method provides clearly defined diagnoses groups and has the strongest effect on the distribution of costs. It improves the diagnosis accuracy of perinatal asphyxia concerning clinical practice, research and reimbursement. PMID:28118380

  4. Refined beam measurements on the SNS H- injector

    NASA Astrophysics Data System (ADS)

    Han, B. X.; Welton, R. F.; Murray, S. N.; Pennisi, T. R.; Santana, M.; Stinson, C. M.; Stockli, M. P.

    2017-08-01

    The H- injector for the SNS RFQ accelerator consists of an RF-driven, Cs-enhanced H- ion source and a compact, two-lens electrostatic LEBT. The LEBT output and the RFQ input beam current are measured by deflecting the beam onto an annular plate at the RFQ entrance. Our method and procedure have recently been refined to improve the measurement reliability and accuracy. The new measurements suggest that earlier measurements tended to underestimate the currents by 0-2 mA, but essentially confirm H- beam currents of 50-60 mA being injected into the RFQ. Emittance measurements conducted on a test stand featuring essentially the same H- injector setup show that the normalized rms emittance with 0.5% threshold (99% inclusion of the total beam) is in a range of 0.25-0.4 mm·mrad for a 50-60 mA beam. The RFQ output current is monitored with a BCM toroid. Measurements as well as simulations with the PARMTEQ code indicate that the RFQ transmission has been underperforming since around 2012.

  5. Refined composite multivariate generalized multiscale fuzzy entropy: A tool for complexity analysis of multichannel signals

    NASA Astrophysics Data System (ADS)

    Azami, Hamed; Escudero, Javier

    2017-01-01

    Multiscale entropy (MSE) is an appealing tool to characterize the complexity of time series over multiple temporal scales. Recent developments in the field have tried to extend the MSE technique in different ways. Building on these trends, we propose the so-called refined composite multivariate multiscale fuzzy entropy (RCmvMFE), whose coarse-graining step uses the variance (RCmvMFEσ²) or the mean (RCmvMFEμ). We investigate the behavior of these multivariate methods on multichannel white Gaussian and 1/f noise signals, and two publicly available biomedical recordings. Our simulations demonstrate that RCmvMFEσ² and RCmvMFEμ lead to more stable results and are less sensitive to the signals' length in comparison with the other existing multivariate multiscale entropy-based methods. The classification results also show that using both the variance and mean in the coarse-graining step offers complexity profiles with complementary information for biomedical signal analysis. We also made freely available all the Matlab codes used in this paper.
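
    The coarse-graining step that distinguishes these variants replaces each non-overlapping window of length equal to the scale factor with either its mean (the conventional choice) or its variance (the generalized variant). A minimal single-channel sketch of that step (Python rather than the authors' Matlab, and illustrative data):

        import numpy as np

        def coarse_grain(x, scale, stat="mean"):
            """Coarse-grain a 1-D signal at a given scale factor.

            stat="mean" is the conventional multiscale coarse-graining;
            stat="var" is the variance-based (generalized) variant.
            """
            n = len(x) // scale
            windows = np.reshape(x[: n * scale], (n, scale))
            if stat == "mean":
                return windows.mean(axis=1)
            if stat == "var":
                return windows.var(axis=1)
            raise ValueError("stat must be 'mean' or 'var'")

        rng = np.random.default_rng(0)
        x = rng.standard_normal(1000)
        print(coarse_grain(x, scale=5, stat="mean").shape)  # (200,)
        print(coarse_grain(x, scale=5, stat="var").shape)   # (200,)

    The "refined composite" part of the method additionally averages the entropy estimates over all possible window offsets at each scale, which is what stabilizes the estimates for short signals.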

  6. Development of an adaptive hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1994-01-01

    In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.

  7. Thermal and Mechanical Buckling and Postbuckling Responses of Selected Curved Composite Panels

    NASA Technical Reports Server (NTRS)

    Breivik, Nicole L.; Hyer, Michael W.; Starnes, James H., Jr.

    1998-01-01

    The results of an experimental and numerical study of the buckling and postbuckling responses of selected unstiffened curved composite panels subjected to mechanical end shortening and a uniform temperature increase are presented. The uniform temperature increase induces thermal stresses in the panel when the axial displacement is constrained. An apparatus for testing curved panels at elevated temperature is described, and numerical results generated by using a geometrically nonlinear finite element analysis code are presented. Several analytical modeling refinements that provide a more accurate representation of the actual experimental conditions, and the relative contribution of each refinement, are discussed. Experimental results and numerical predictions are presented and compared for three loading conditions including mechanical end shortening alone, heating the panels to 250 F followed by mechanical end shortening, and heating the panels to 400 F. Changes in the coefficients of thermal expansion were observed as temperature was increased above 330 F. The effects of these changes on the experimental results are discussed for temperatures up to 400 F.

  8. [What is the value of pain therapy in the German refined diagnosis-related-groups system?].

    PubMed

    Meissner, W; Thoma, R; Bauer, M

    2006-03-01

    The German refined diagnosis-related-groups (G-DRG) system was introduced on 1st January 2003, initially on a voluntary basis, and on 1st January 2004 the use of G-DRG costing for inpatient hospital treatment became obligatory. The representation of acute and chronic pain therapy in the G-DRG system was initially rudimentary and not systematically planned, and a fair, resource-based allocation of proceeds was not possible. Through further development of the G-DRG system, the representation of pain therapy has been improved in some areas, but in others it remains unsatisfactory. This article offers a summary of the underlying systematics of the G-DRG system and of the treatment of acute and chronic pain therapy in the G-DRG system 2006. In addition to information on the currently available possibilities for coding pain therapy in conformity with the G-DRG system, the tasks which are still outstanding are outlined.

  9. A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding

    NASA Astrophysics Data System (ADS)

    Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae

    2017-12-01

    High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it effectively finds a good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement in hardware. This paper proposes a new integer motion estimation algorithm which is designed for hardware execution by modifying the conventional TZ search to allow parallel motion estimation of all prediction unit (PU) partitions. The algorithm consists of the three phases of zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). Then, all redundant search points are removed prior to the estimation of the motion costs, and the best search points are then selected for all PUs. Compared to the conventional TZ search algorithm, experimental results show that the proposed algorithm decreases the Bjøntegaard Delta bitrate (BD-BR) by only 0.84%, while it reduces the computational complexity by 54.54%.
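
    The key hardware-oriented idea, deduplicating search points shared by PU partitions so each motion cost is computed once, can be sketched as follows (the candidate-generation and cost functions are hypothetical stand-ins, not the HEVC reference implementation):

        # Sketch of one search phase: pool candidate points from all PUs in a CU,
        # remove duplicates, evaluate each cost once, then pick per-PU minima.
        # candidate_points() and motion_cost() are hypothetical stand-ins.

        def search_phase(pus, candidate_points, motion_cost):
            # 1) Gather the points each PU would visit in this phase.
            wanted = {pu: set(candidate_points(pu)) for pu in pus}
            # 2) Deduplicate across PUs so each point is evaluated once.
            unique_points = set().union(*wanted.values())
            costs = {p: motion_cost(p) for p in unique_points}
            # 3) Best point per PU from the shared cost table.
            return {pu: min(wanted[pu], key=lambda p: costs[p]) for pu in pus}

        best = search_phase(
            pus=["2NxN_top", "2NxN_bottom"],
            candidate_points=lambda pu: [(0, 0), (1, 0), (0, 1)],
            motion_cost=lambda p: abs(p[0]) + abs(p[1]),
        )
        print(best)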

  10. Simultaneous Semi-Distributed Model Calibration Guided by ...

    EPA Pesticide Factsheets

    Modelling approaches to transfer hydrologically-relevant information from locations with streamflow measurements to locations without such measurements continue to be an active field of research for hydrologists. The Pacific Northwest Hydrologic Landscapes (PNW HL) provide a solid conceptual classification framework based on our understanding of dominant processes. A Hydrologic Landscape code (a 5-letter descriptor based on physical and climatic properties) describes each assessment unit area, and these units average 60 km² in area. The core function of these HL codes is to relate and transfer hydrologically meaningful information between watersheds without the need for streamflow time series. We present a novel approach based on the HL framework to answer the question “How can we calibrate models across separate watersheds simultaneously, guided by our understanding of dominant processes?” We should be able to apply the same parameterizations to assessment units of common HL codes if 1) the Hydrologic Landscapes contain hydrologic information transferable between watersheds at a sub-watershed scale and 2) we use a conceptual hydrologic model and parameters that reflect the hydrologic behavior of a watershed. This work specifically tests the ability or inability to use HL codes to inform and share model parameters across watersheds in the Pacific Northwest. EPA’s Western Ecology Division has published and is refining a framework for defining la

  11. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courau, T.; Plagne, L.; Ponicot, A.

    2012-07-01

    When dealing with nuclear reactor calculation schemes, the need for three dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the k_eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)

  12. Side-information-dependent correlation channel estimation in hash-based distributed video coding.

    PubMed

    Deligiannis, Nikos; Barbarien, Joeri; Jacobs, Marc; Munteanu, Adrian; Skodras, Athanassios; Schelkens, Peter

    2012-04-01

    In the context of low-cost video encoding, distributed video coding (DVC) has recently emerged as a potential candidate for uplink-oriented applications. This paper builds on a concept of correlation channel (CC) modeling, which expresses the correlation noise as being statistically dependent on the side information (SI). Compared with classical side-information-independent (SII) noise modeling adopted in current DVC solutions, it is theoretically proven that side-information-dependent (SID) modeling improves the Wyner-Ziv coding performance. Anchored in this finding, this paper proposes a novel algorithm for online estimation of the SID CC parameters based on already decoded information. The proposed algorithm enables bit-plane-by-bit-plane successive refinement of the channel estimation leading to progressively improved accuracy. Additionally, the proposed algorithm is included in a novel DVC architecture that employs a competitive hash-based motion estimation technique to generate high-quality SI at the decoder. Experimental results corroborate our theoretical gains and validate the accuracy of the channel estimation algorithm. The performance assessment of the proposed architecture shows remarkable and consistent coding gains over a germane group of state-of-the-art distributed and standard video codecs, even under strenuous conditions, i.e., large groups of pictures and highly irregular motion content.

  13. Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency.

    PubMed

    Zhang, Ying-Ying; Yang, Cai; Zhang, Ping

    2017-05-01

    In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on Riemannian manifolds. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary on image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly from the background. Finally, three post-processing methods (a highlight-inhibition function, context-based saliency weighting, and the graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
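
    The first-stage computation, reconstruction error on a background dictionary as initial saliency, can be sketched with ordinary Euclidean sparse coding; the paper's Log-Euclidean kernel machinery over covariance descriptors is omitted here, and the feature data are synthetic:

        import numpy as np
        from sklearn.decomposition import SparseCoder

        # Stage-1 sketch: saliency of each region = reconstruction error on a
        # dictionary built from image-border regions. Plain features stand in
        # for the paper's Log-Euclidean-kernelized covariance descriptors.
        rng = np.random.default_rng(0)
        background_dict = rng.standard_normal((8, 16))   # 8 atoms, 16-dim features
        background_dict /= np.linalg.norm(background_dict, axis=1, keepdims=True)
        regions = rng.standard_normal((50, 16))          # 50 superpixel descriptors

        coder = SparseCoder(dictionary=background_dict,
                            transform_algorithm="omp",
                            transform_n_nonzero_coefs=3)
        codes = coder.transform(regions)                 # sparse coefficients
        reconstruction = codes @ background_dict
        saliency = np.linalg.norm(regions - reconstruction, axis=1)  # higher = more salient
        print(saliency[:5])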

  14. Fracture Analysis of Vessels. Oak Ridge FAVOR, v06.1, Computer Code: Theory and Implementation of Algorithms, Methods, and Correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, P. T.; Dickson, T. L.; Yin, S.

    The current regulations to ensure that nuclear reactor pressure vessels (RPVs) maintain their structural integrity when subjected to transients such as pressurized thermal shock (PTS) events were derived from computational models developed in the early-to-mid 1980s. Since that time, advancements and refinements in relevant technologies that impact RPV integrity assessment have led to an effort by the NRC to re-evaluate its PTS regulations. Updated computational methodologies have been developed through interactions between experts in the relevant disciplines of thermal hydraulics, probabilistic risk assessment, materials embrittlement, fracture mechanics, and inspection (flaw characterization). Contributors to the development of these methodologies include the NRC staff, their contractors, and representatives from the nuclear industry. These updated methodologies have been integrated into the Fracture Analysis of Vessels -- Oak Ridge (FAVOR, v06.1) computer code developed for the NRC by the Heavy Section Steel Technology (HSST) program at Oak Ridge National Laboratory (ORNL). The FAVOR, v04.1, code represents the baseline NRC-selected applications tool for re-assessing the current PTS regulations. This report is intended to document the technical bases for the assumptions, algorithms, methods, and correlations employed in the development of the FAVOR, v06.1, code.

  15. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    NASA Technical Reports Server (NTRS)

    Soni, B. K.

    1988-01-01

    Progress realized in the development of a master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by developing strategies to utilize Bézier curves/surfaces and splines along with a weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D-3D grids are provided to illustrate the success of these methods.
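
    Transfinite interpolation of the kind used in algebraic grid generation blends the four boundary curves of a region into an interior mesh. The standard unweighted two-dimensional form (a simplification of the weighted variant the abstract refers to) is sketched below on a simple analytically defined boundary:

        import numpy as np

        def tfi_grid(bottom, top, left, right, ni, nj):
            """Standard 2-D transfinite interpolation (Coons patch).

            bottom(xi)/top(xi) and left(eta)/right(eta) each return (x, y)
            arrays; returns mesh arrays X, Y of shape (ni, nj).
            """
            xi = np.linspace(0.0, 1.0, ni)[:, None]   # (ni, 1)
            eta = np.linspace(0.0, 1.0, nj)[None, :]  # (1, nj)
            bx, by = bottom(xi)
            tx, ty = top(xi)
            lx, ly = left(eta)
            rx, ry = right(eta)
            # Corner points for the bilinear correction term.
            (x00, y00), (x10, y10) = bottom(np.array([[0.0]])), bottom(np.array([[1.0]]))
            (x01, y01), (x11, y11) = top(np.array([[0.0]])), top(np.array([[1.0]]))

            def blend(b_, t_, l_, r_, p00, p10, p01, p11):
                return ((1 - eta) * b_ + eta * t_ + (1 - xi) * l_ + xi * r_
                        - ((1 - xi) * (1 - eta) * p00 + xi * (1 - eta) * p10
                           + (1 - xi) * eta * p01 + xi * eta * p11))

            X = blend(bx, tx, lx, rx, x00, x10, x01, x11)
            Y = blend(by, ty, ly, ry, y00, y10, y01, y11)
            return X, Y

        # Example: unit square with a sinusoidally bumped top boundary.
        X, Y = tfi_grid(
            bottom=lambda s: (s, 0.0 * s),
            top=lambda s: (s, 1.0 + 0.1 * np.sin(np.pi * s)),
            left=lambda t: (0.0 * t, t),
            right=lambda t: (1.0 + 0.0 * t, t),
            ni=21, nj=11,
        )
        print(X.shape, Y.shape)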

  16. Automated software development workstation

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Engineering software development was automated using an expert system (rule-based) approach. The use of this technology offers benefits not available from current software development and maintenance methodologies. A workstation was built with a library, or program database, with methods for browsing the stored designs; a system for graphical specification of designs, including a capability for hierarchical refinement and definition in a graphical design system; and an automated code generation capability in FORTRAN. The workstation was then used in a demonstration with examples from an attitude control subsystem design for the space station. Documentation and recommendations are presented.

  17. Solving the transport equation with quadratic finite elements: Theory and applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, J.M.

    1997-12-31

    At the 4th Joint Conference on Computational Mathematics, the author presented a paper introducing a new quadratic finite element scheme (QFEM) for solving the transport equation. In the ensuing year the author has obtained considerable experience in the application of this method, including solution of eigenvalue problems, transmission problems, and solution of the adjoint form of the equation as well as the usual forward solution. He will present detailed results, and will also discuss other refinements of his transport codes, particularly for 3-dimensional problems on rectilinear and non-rectilinear grids.

  18. Air Vehicle Integration and Technology Research (AVIATR). Task Order 0023: Predictive Capability for Hypersonic Structural Response and Life Prediction: Phase 2 - Detailed Design of Hypersonic Cruise Vehicle Hot-Structure

    DTIC Science & Technology

    2012-02-01

    Approved for public release; distribution unlimited. Acronyms used in the report include I-DEAS/TMG (thermal analysis software), IR (Initial Review), and ITAR (International Traffic in Arms Regulations). Thermal analysis was performed with the finite element code I-DEAS/TMG. A mesh refinement study was conducted on the first panel to determine the mesh density required for accurate results, and the heat transfer analysis conducted with I-DEAS/TMG exercises mapping of temperatures to the structural model.

  19. Simulation of Needle-Type Corona Electrodes by the Finite Element Method

    NASA Astrophysics Data System (ADS)

    Yang, Shiyou; José Márcio, Machado; Nancy Mieko, Abe; Angelo, Passaro

    2007-12-01

    This paper describes a software tool, called LEVSOFT, suitable for the electric field simulations of corona electrodes by the Finite Element Method (FEM). Special attention was paid to the user-friendly construction of geometries with corners and sharp points, and to the fast generation of highly refined triangular meshes and field maps. Self-adaptive meshing was also implemented. These customized features make the code attractive for the simulation of needle-type corona electrodes. Some case examples involving needle-type electrodes are presented.

  20. Evaluating training of screening, brief intervention, and referral to treatment (SBIRT) for substance use: Reliability of the MD3 SBIRT Coding Scale.

    PubMed

    DiClemente, Carlo C; Crouch, Taylor Berens; Norwood, Amber E Q; Delahanty, Janine; Welsh, Christopher

    2015-03-01

    Screening, brief intervention, and referral to treatment (SBIRT) has become an empirically supported and widely implemented approach in primary and specialty care for addressing substance misuse. Accordingly, training of providers in SBIRT has increased exponentially in recent years. However, the quality and fidelity of training programs and subsequent interventions are largely unknown because of the lack of SBIRT-specific evaluation tools. The purpose of this study was to create a coding scale to assess quality and fidelity of SBIRT interactions addressing alcohol, tobacco, illicit drugs, and prescription medication misuse. The scale was developed to evaluate performance in an SBIRT residency training program. Scale development was based on training protocol and competencies with consultation from Motivational Interviewing coding experts. Trained medical residents practiced SBIRT with standardized patients during 10- to 15-min videotaped interactions. This study included 25 tapes from the Family Medicine program coded by 3 unique coder pairs with varying levels of coding experience. Interrater reliability was assessed for overall scale components and individual items via intraclass correlation coefficients. Coder pair-specific reliability was also assessed. Interrater reliability was excellent overall for the scale components (>.85) and nearly all items. Reliability was higher for more experienced coders, though still adequate for the trained coder pair. Descriptive data demonstrated a broad range of adherence and skills. Subscale correlations supported concurrent and discriminant validity. Data provide evidence that the MD3 SBIRT Coding Scale is a psychometrically reliable coding system for evaluating SBIRT interactions and can be used to evaluate implementation skills for fidelity, training, assessment, and research. Recommendations for refinement and further testing of the measure are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  1. The search for person-related information in general practice: a qualitative study.

    PubMed

    Schrans, Diego; Avonts, Dirk; Christiaens, Thierry; Willems, Sara; de Smet, Kaat; van Boven, Kees; Boeckxstaens, Pauline; Kühlein, Thomas

    2016-02-01

    General practice is person-focused. Contextual information influences the clinical decision-making process in primary care. Currently, person-related information (PeRI) is neither recorded in a systematic way nor coded in the electronic medical record (EMR), and therefore not usable for scientific use. To search for classes of PeRI influencing the process of care. GPs, from nine countries worldwide, were asked to write down narrative case histories where personal factors played a role in decision-making. In an inductive process, the case histories were consecutively coded according to classes of PeRI. The classes found were deductively applied to the following cases and refined, until saturation was reached. Then, the classes were grouped into code-families and further clustered into domains. The inductive analysis of 32 case histories resulted in 33 defined PeRI codes, classifying all personal-related information in the cases. The 33 codes were grouped in the following seven mutually exclusive code-families: 'aspects between patient and formal care provider', 'social environment and family', 'functioning/behaviour', 'life history/non-medical experiences', 'personal medical information', 'socio-demographics' and 'work-/employment-related information'. The code-families were clustered into four domains: 'social environment and extended family', 'medicine', 'individual' and 'work and employment'. As PeRI is used in the process of decision-making, it should be part of the EMR. The PeRI classes we identified might form the basis of a new contextual classification mainly for research purposes. This might help to create evidence of the person-centredness of general practice. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. Electromagnetic plasma simulation in realistic geometries

    NASA Astrophysics Data System (ADS)

    Brandon, S.; Ambrosiano, J. J.; Nielsen, D.

    1991-08-01

    Particle-in-Cell (PIC) calculations have become an indispensable tool to model the nonlinear collective behavior of charged particle species in electromagnetic fields. Traditional finite difference codes, such as CONDOR (2-D) and ARGUS (3-D), are used extensively to design experiments and develop new concepts. A wide variety of physical processes can be modeled simply and efficiently by these codes. However, experiments have become more complex. Geometrical shapes and length scales are becoming increasingly difficult to model. Spatial resolution requirements for the electromagnetic calculation force large grids and small time steps. Many hours of CRAY YMP time may be required to complete a 2-D calculation, and many more for 3-D calculations. In principle, the number of mesh points and particles need only be increased until all relevant physical processes are resolved. In practice, the size of a calculation is limited by the computer budget. As a result, experimental design is being limited by the ability to calculate, not by the experimenters' ingenuity or understanding of the physical processes involved. Several approaches to meet these computational demands are being pursued. Traditional PIC codes continue to be the major design tools. These codes are being actively maintained, optimized, and extended to handle larger and more complex problems. Two new formulations are being explored to relax the geometrical constraints of the finite difference codes. A modified finite volume test code, TALUS, uses a data structure compatible with that of standard finite difference meshes. This allows a basic conformal boundary/variable grid capability to be retrofitted to CONDOR. We are also pursuing an unstructured grid finite element code, MadMax. The unstructured mesh approach provides maximum flexibility in the geometrical model while also allowing local mesh refinement.

  3. Beyond crosswalks: reliability of exposure assessment following automated coding of free-text job descriptions for occupational epidemiology.

    PubMed

    Burstyn, Igor; Slutsky, Anton; Lee, Derrick G; Singer, Alison B; An, Yuan; Michael, Yvonne L

    2014-05-01

    Epidemiologists typically collect narrative descriptions of occupational histories because these are less prone than self-reported exposures to recall bias of exposure to a specific hazard. However, the task of coding these narratives can be daunting and prohibitively time-consuming in some settings. The aim of this manuscript is to evaluate the performance of a computer algorithm to translate the narrative description of occupational codes into standard classification of jobs (2010 Standard Occupational Classification) in an epidemiological context. The fundamental question we address is whether exposure assignment resulting from manual (presumed gold standard) coding of the narratives is materially different from that arising from the application of automated coding. We pursued our work through three motivating examples: assessment of physical demands in Women's Health Initiative observational study, evaluation of predictors of exposure to coal tar pitch volatiles in the US Occupational Safety and Health Administration's (OSHA) Integrated Management Information System, and assessment of exposure to agents known to cause occupational asthma in a pregnancy cohort. In these diverse settings, we demonstrate that automated coding of occupations results in assignment of exposures that are in reasonable agreement with results that can be obtained through manual coding. The correlation between physical demand scores based on manual and automated job classification schemes was reasonable (r = 0.5). The agreement between predictive probability of exceeding the OSHA's permissible exposure level for polycyclic aromatic hydrocarbons, using coal tar pitch volatiles as a surrogate, based on manual and automated coding of jobs was modest (Kendall rank correlation = 0.29). In the case of binary assignment of exposure to asthmagens, we observed that fair to excellent agreement in classifications can be reached, depending on presence of ambiguity in assigned job classification (κ = 0.5-0.8). Thus, the success of automated coding appears to depend on the setting and type of exposure that is being assessed. Our overall recommendation is that automated translation of short narrative descriptions of jobs for exposure assessment is feasible in some settings and essential for large cohorts, especially if combined with manual coding to both assess reliability of coding and to further refine the coding algorithm.
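
    As a toy illustration of the matching step in such automated coding (reduced far below production quality; real systems of the kind evaluated here use trained classifiers rather than string similarity), free-text job titles can be matched against a small code list with standard-library fuzzy matching:

        import difflib

        # Toy crosswalk from job titles to SOC-style codes; entries illustrative.
        soc_lookup = {
            "registered nurse": "29-1141",
            "roofer": "47-2181",
            "software developer": "15-1252",
        }

        def auto_code(free_text, cutoff=0.6):
            """Match a narrative job description to the closest known title."""
            match = difflib.get_close_matches(free_text.lower().strip(),
                                              soc_lookup, n=1, cutoff=cutoff)
            return soc_lookup[match[0]] if match else None

        print(auto_code("Registered Nurse (ICU)"))  # close match: 29-1141
        print(auto_code("deep sea welder"))         # None: no close entry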

  4. Modeling Thermal Noise From Crystalline Coatings For Gravitational-Wave Detectors

    NASA Astrophysics Data System (ADS)

    Demos, Nicholas; Lovelace, Geoffrey; LSC Collaboration

    2017-01-01

    In 2015, Advanced LIGO made the first direct detection of gravitational waves. The sensitivity of current and future ground-based gravitational-wave detectors is limited by thermal noise in each detector's test mass substrate and coating. This noise can be modeled using the fluctuation-dissipation theorem, which relates thermal noise to an auxiliary elastic problem. I will present results from a new code that numerically models thermal noise for different crystalline mirror coatings. The thermal noise in crystalline mirror coatings could be significantly lower than in conventional amorphous coatings, but it is challenging to model analytically. The code uses a finite element method with adaptive mesh refinement to model the auxiliary elastic problem, which is then related to thermal noise. Specifically, I will show results for a crystal coating on an amorphous substrate of varying sizes and elastic properties. This and future work will help develop the next generation of ground-based gravitational-wave detectors.
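
    One standard form of this fluctuation-dissipation computation is Levin's direct approach: apply an oscillating pressure of amplitude F0 with the laser beam's intensity profile to the mirror face, compute the time-averaged dissipated power W_diss in the auxiliary elastic problem, and read off the displacement noise. A sketch of that final step, assuming W_diss has already been obtained from the elastic solver (the numbers are illustrative, not the code's results):

        import math

        def displacement_noise_psd(w_diss, f, temperature, f0=1.0):
            """Levin-style fluctuation-dissipation estimate of thermal noise.

            S_x(f) = 2 k_B T W_diss / (pi^2 f^2 F0^2), where W_diss is the
            time-averaged power dissipated under an oscillating pressure of
            amplitude F0 applied with the beam's intensity profile.
            """
            k_b = 1.380649e-23  # J/K, Boltzmann constant
            return 2.0 * k_b * temperature * w_diss / (math.pi ** 2 * f ** 2 * f0 ** 2)

        # Illustrative numbers only; W_diss would come from the finite element solve.
        psd = displacement_noise_psd(w_diss=1e-10, f=100.0, temperature=290.0)
        print(f"sqrt(PSD) ~ {math.sqrt(psd):.2e} m/sqrt(Hz)")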

  5. Contributions to HiLiftPW-3 Using Structured, Overset Grid Methods

    NASA Technical Reports Server (NTRS)

    Coder, James G.; Pulliam, Thomas H.; Jensen, James C.

    2018-01-01

    The High-Lift Common Research Model (HL-CRM) and the JAXA Standard Model (JSM) were analyzed computationally using both the OVERFLOW and LAVA codes for the third AIAA High-Lift Prediction Workshop. Geometry descriptions and the test cases simulated are described. With the HL-CRM, the effects of surface smoothness during grid projection and the effect of partially sealing a flap gap were studied. Grid refinement studies were performed at two angles of attack using both codes. For the JSM, simulations were performed with and without the nacelle/pylon. Without the nacelle/pylon, evidence of multiple solutions was observed when a quadratic constitutive relation was used in the turbulence modeling; however, using time-accurate simulation seemed to alleviate this issue. With the nacelle/pylon, no evidence of multiple solutions was observed. Laminar-turbulent transition modeling was applied to both JSM configurations and had an overall favorable impact on the lift predictions.

  6. GPCRdb: an information system for G protein-coupled receptors

    PubMed Central

    Isberg, Vignir; Mordalski, Stefan; Munk, Christian; Rataj, Krzysztof; Harpsøe, Kasper; Hauser, Alexander S.; Vroling, Bas; Bojarski, Andrzej J.; Vriend, Gert; Gloriam, David E.

    2016-01-01

    Recent developments in G protein-coupled receptor (GPCR) structural biology and pharmacology have greatly enhanced our knowledge of receptor structure-function relations, and have helped improve the scientific foundation for drug design studies. The GPCR database, GPCRdb, serves a dual role in disseminating and enabling new scientific developments by providing reference data, analysis tools and interactive diagrams. This paper highlights new features in the fifth major GPCRdb release: (i) GPCR crystal structure browsing, superposition and display of ligand interactions; (ii) direct deposition by users of point mutations and their effects on ligand binding; (iii) refined snake and helix box residue diagram looks; and (iv) phylogenetic trees with receptor classification colour schemes. Under the hood, the entire GPCRdb front- and back-ends have been re-coded within one infrastructure, ensuring a smooth browsing experience and development. GPCRdb is available at http://www.gpcrdb.org/ and its open-source code at https://bitbucket.org/gpcr/protwis. PMID:26582914

  7. Real-space processing of helical filaments in SPARX

    PubMed Central

    Behrmann, Elmar; Tao, Guozhi; Stokes, David L.; Egelman, Edward H.; Raunser, Stefan; Penczek, Pawel A.

    2012-01-01

    We present a major revision of the iterative helical real-space refinement (IHRSR) procedure and its implementation in the SPARX single particle image processing environment. We built on over a decade of experience with IHRSR helical structure determination and we took advantage of the flexible SPARX infrastructure to arrive at an implementation that offers ease of use, flexibility in designing helical structure determination strategy, and high computational efficiency. We introduced 3D projection matching code that is now able to work with non-cubic volumes, a geometry better suited for long helical filaments; we enhanced procedures for establishing helical symmetry parameters; and we parallelized the code using a distributed-memory paradigm. An additional feature is a graphical user interface that facilitates entering and editing the parameters controlling the structure determination strategy of the program. In addition, we present a novel approach to detect and evaluate structural heterogeneity due to conformer mixtures that takes advantage of helical structure redundancy. PMID:22248449
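
    Helical symmetry is characterized by two parameters, an axial rise and an azimuthal twist per subunit; applying the symmetry means replicating density or coordinates through that screw operation. A minimal numpy sketch of generating symmetry-related copies of a point (the rise/twist values are generic illustrations, not parameters from the paper):

        import numpy as np

        def helical_copies(point, rise, twist_deg, n_subunits):
            """Replicate a 3-D point through a helical (screw) symmetry.

            Each subunit is rotated by `twist_deg` about the z axis and
            translated by `rise` along z relative to the previous one.
            """
            copies = []
            for n in range(n_subunits):
                a = np.radians(n * twist_deg)
                rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                                [np.sin(a),  np.cos(a), 0.0],
                                [0.0,        0.0,       1.0]])
                copies.append(rot @ point + np.array([0.0, 0.0, n * rise]))
            return np.array(copies)

        # Illustrative parameters (rise in angstroms, twist in degrees/subunit).
        print(helical_copies(np.array([20.0, 0.0, 0.0]),
                             rise=4.75, twist_deg=66.7, n_subunits=4))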

  8. Computer simulations of phase field drops on super-hydrophobic surfaces

    NASA Astrophysics Data System (ADS)

    Fedeli, Livio

    2017-09-01

    We present a novel quasi-Newton continuation procedure that efficiently solves the system of nonlinear equations arising from the discretization of a phase field model for wetting phenomena. We perform a comparative numerical analysis that shows the improved speed of convergence gained with respect to other numerical schemes. Moreover, we discuss the conditions that, on a theoretical level, guarantee the convergence of this method. At each iterative step, a suitable continuation procedure develops and passes to the nonlinear solver an accurate initial guess. Discretization performs through cell-centered finite differences. The resulting system of equations is solved on a composite grid that uses dynamic mesh refinement and multi-grid techniques. The final code achieves three-dimensional, realistic computer experiments comparable to those produced in laboratory settings. This code offers not only new insights into the phenomenology of super-hydrophobicity, but also serves as a reliable predictive tool for the study of hydrophobic surfaces.

  9. Evaluating a Control System Architecture Based on a Formally Derived AOCS Model

    NASA Astrophysics Data System (ADS)

    Ilic, Dubravka; Latvala, Timo; Varpaaniemi, Kimmo; Vaisanen, Pauli; Troubitsyna, Elena; Laibinis, Linas

    2010-08-01

    Attitude & Orbit Control System (AOCS) refers to a wider class of control systems which are used to determine and control the attitude of the spacecraft while in orbit, based on the information obtained from various sensors. In this paper, we propose an approach to evaluate a typical (yet somewhat simplified) AOCS architecture using formal development based on the Event-B method. As a starting point, an Ada specification of the AOCS is translated into a formal specification and further refined to incorporate all the details of its original source code specification. This way we are able not only to evaluate the Ada specification by expressing and verifying specific system properties in our formal models, but also to determine how well the chosen modelling framework copes with the level of detail required for an actual implementation and code generation from the derived models.

  10. SSME Bearing and Seal Tester Data Compilation, Analysis and Reporting; and Refinement of the Cryogenic Bearing Analysis Mathematical Model

    NASA Technical Reports Server (NTRS)

    Moore, James; Marty, Dave; Cody, Joe

    2000-01-01

    SRS and NASA/MSFC have developed software with unique capabilities to couple bearing kinematic modeling with high-fidelity thermal modeling. The core thermomechanical modeling software was developed by SRS and others in the late 1980's and early 1990's under various contractual efforts. SRS originally developed software that enabled SHABERTH (Shaft Bearing Thermal Model) and SINDA (Systems Improved Numerical Differencing Analyzer) to exchange data autonomously, allowing bearing component temperature effects to propagate into the steady-state bearing mechanical model. A separate contract was issued in 1990 to create a personal computer version of the software. At that time SRS performed major improvements to the code: both SHABERTH and SINDA were independently ported to the PC and compiled, and SRS then integrated the two programs into a single program named SINSHA. This was a major code improvement.
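
    The coupling described, in which bearing friction heating feeds the thermal model and the resulting temperatures feed back into the bearing mechanical model, is a fixed-point co-simulation loop. A schematic Python sketch with invented one-line stand-ins for the two codes (not SHABERTH or SINDA themselves):

```python
# Toy co-simulation loop in the spirit of the SHABERTH/SINDA coupling:
# iterate mechanics -> heat generation -> temperatures -> mechanics
# until the temperature stops changing. Both models are invented.
def bearing_heat(temp):          # friction heating (W), rises with temperature
    return 50.0 + 0.1 * temp

def thermal_model(heat):         # steady-state temperature from the heat load
    return 20.0 + 0.8 * heat

temp = 20.0                      # initial guess, deg C
for i in range(100):
    new_temp = thermal_model(bearing_heat(temp))
    if abs(new_temp - temp) < 1e-6:
        break
    temp = new_temp
print(f"converged after {i} iterations: {temp:.2f} deg C")
```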

  12. Analysis of rotor vibratory loads using higher harmonic pitch control

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Bliss, Donald B.; Boschitsch, Alexander H.; Wachspress, Daniel A.

    1992-01-01

    Experimental studies of isolated rotors in forward flight have indicated that higher harmonic pitch control can reduce rotor noise. These tests also show that such pitch inputs can generate substantial vibratory loads. This work summarizes the modification of the RotorCRAFT (Computation of Rotor Aerodynamics in Forward flighT) analysis of isolated rotors to study the vibratory loading generated by high-frequency pitch inputs. The original RotorCRAFT code was developed for use in the computation of such loading and uses a highly refined rotor wake model to facilitate this task. The extended version of RotorCRAFT incorporates a variety of new features, including: arbitrary periodic root pitch control; computation of blade stresses and hub loads; improved modeling of near-wake unsteady effects; and a preliminary implementation of a coupled prediction of rotor airloads and noise. Correlation studies are carried out with existing blade stress and vibratory hub load data to assess the performance of the extended code.

  13. A comparison of the International Classification of Functioning, Disability, and Health to the disability tax credit.

    PubMed

    Conti-Becker, Angela; Doralp, Samantha; Fayed, Nora; Kean, Crystal; Lencucha, Raphael; Leyshon, Rhysa; Mersich, Jackie; Robbins, Shawn; Doyle, Phillip C

    2007-01-01

    The Disability Tax Credit (DTC) Certification is an assessment tool used to provide Canadians with disability tax relief. The International Classification of Functioning, Disability and Health (ICF) provides a universal framework for defining disability. The purpose of this study was to evaluate the DTC and familiarize occupational therapists with the process of mapping measures to the ICF classification system. Concepts within the DTC were identified and mapped to appropriate ICF codes (Cieza et al., 2005). The DTC was linked to 45 unique ICF codes (16 Body Functions, 19 Activities and Participation, and 8 Environmental Factors). The DTC encompasses various domains of the ICF; however, there is no consideration of Personal Factors, Body Structures, and key aspects of Activities and Participation. Refining the DTC to address these aspects will provide an opportunity for fair and just determinations for those who experience disability.
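
    The linking exercise amounts to maintaining a mapping from assessment concepts to ICF codes and tallying the codes by component. A hypothetical Python illustration; the concepts and codes shown are format examples only, not the study's actual mapping:

```python
# Hypothetical illustration of linking assessment items to ICF codes.
# ICF component prefixes: b = Body Functions, d = Activities and
# Participation, e = Environmental Factors, s = Body Structures.
dtc_to_icf = {
    "walking": ["d450"],            # Activities and Participation
    "mental functions": ["b117"],   # Body Functions
    "assistive devices": ["e120"],  # Environmental Factors
}

def count_by_component(mapping):
    """Tally linked codes by ICF component letter."""
    tally = {}
    for codes in mapping.values():
        for code in codes:
            tally[code[0]] = tally.get(code[0], 0) + 1
    return tally

print(count_by_component(dtc_to_icf))  # e.g. {'d': 1, 'b': 1, 'e': 1}
```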

  14. [Preliminarily application of content analysis to qualitative nursing data].

    PubMed

    Liang, Shu-Yuan; Chuang, Yeu-Hui; Wu, Shu-Fang

    2012-10-01

    Content analysis is a methodology for objectively and systematically studying the content of communication in various formats. Content analysis in nursing research and nursing education is called qualitative content analysis. Qualitative content analysis is frequently applied to nursing research, as it allows researchers to determine categories inductively and deductively. This article examines qualitative content analysis in nursing research from theoretical and practical perspectives. We first describe how content analysis concepts such as unit of analysis, meaning unit, code, category, and theme are used. Next, we describe the basic steps involved in using content analysis, including data preparation, data familiarization, analysis unit identification, creating tentative coding categories, category refinement, and establishing category integrity. Finally, this paper introduces the concept of content analysis rigor, including dependability, confirmability, credibility, and transferability. This article elucidates the content analysis method in order to help professionals conduct systematic research that generates data that are informative and useful in practical application.

  15. Development and feasibility testing of the Pediatric Emergency Discharge Interaction Coding Scheme.

    PubMed

    Curran, Janet A; Taylor, Alexandra; Chorney, Jill; Porter, Stephen; Murphy, Andrea; MacPhee, Shannon; Bishop, Andrea; Haworth, Rebecca

    2017-08-01

    Discharge communication is an important aspect of high-quality emergency care. This study addresses the gap in knowledge on how to describe discharge communication in a paediatric emergency department (ED). The objective of this feasibility study was to develop and test a coding scheme to characterize discharge communication between health-care providers (HCPs) and caregivers who visit the ED with their children. The Pediatric Emergency Discharge Interaction Coding Scheme (PEDICS) and coding manual were developed following a review of the literature and an iterative refinement process involving HCP observations, inter-rater assessments and team consensus. The coding scheme was pilot-tested through observations of HCPs across a range of shifts in one urban paediatric ED. Overall, 329 patient observations were carried out across 50 observational shifts. Inter-rater reliability was evaluated in 16% of the observations. The final version of the PEDICS contained 41 communication elements. Kappa scores were greater than .60 for the majority of communication elements. The most frequently observed communication elements were under the Introduction node and the least frequently observed were under the Social Concerns node. HCPs initiated the majority of the communication. The Pediatric Emergency Discharge Interaction Coding Scheme addresses an important gap in the discharge communication literature. The tool is useful for mapping patterns of discharge communication between HCPs and caregivers. Results from our pilot test identified deficits in specific areas of discharge communication that could impact adherence to discharge instructions. The PEDICS would benefit from further testing with a different sample of HCPs. © 2017 The Authors. Health Expectations Published by John Wiley & Sons Ltd.

  16. Clinical indicators for routine use in the evaluation of early psychosis intervention: development, training support and inter-rater reliability.

    PubMed

    Catts, Stanley V; Frost, Aaron D J; O'Toole, Brian I; Carr, Vaughan J; Lewin, Terry; Neil, Amanda L; Harris, Meredith G; Evans, Russell W; Crissman, Belinda R; Eadie, Kathy

    2011-01-01

    Clinical practice improvement carried out in a quality assurance framework relies on routinely collected data using clinical indicators. Herein we describe the development, minimum training requirements, and inter-rater agreement of indicators that were used in an Australian multi-site evaluation of the effectiveness of early psychosis (EP) teams. Surveys of clinician opinion and face-to-face consensus-building meetings were used to select and conceptually define indicators. Operationalization of definitions was achieved by iterative refinement until clinicians could be quickly trained to code indicators reliably. Calculation of percentage agreement with expert consensus coding was based on ratings of paper-based clinical vignettes embedded in a 2-h clinician training package. Consensually agreed upon conceptual definitions for seven clinical indicators judged most relevant to evaluating EP teams were operationalized for ease-of-training. Brief training enabled typical clinicians to code indicators with acceptable percentage agreement (60% to 86%). For indicators of suicide risk, psychosocial function, and family functioning this level of agreement was only possible with less precise 'broad range' expert consensus scores. Estimated kappa values indicated fair to good inter-rater reliability (kappa > 0.65). Inspection of contingency tables (coding category by health service) and modal scores across services suggested consistent, unbiased coding across services. Clinicians are able to agree upon what information is essential to routinely evaluate clinical practice. Simple indicators of this information can be designed and coding rules can be reliably applied to written vignettes after brief training. The real world feasibility of the indicators remains to be tested in field trials.
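
    For readers unfamiliar with the statistic, a minimal Python sketch of Cohen's kappa for two raters follows; it illustrates the agreement-beyond-chance idea only, not the study's exact estimation procedure.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters coding the same items."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    p_obs = np.mean(r1 == r2)                       # observed agreement
    # chance agreement from each rater's marginal category frequencies
    p_chance = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
    return (p_obs - p_chance) / (1.0 - p_chance)

# Two hypothetical clinicians coding 10 vignettes into 3 categories
a = [1, 2, 2, 3, 1, 1, 2, 3, 3, 1]
b = [1, 2, 2, 3, 1, 2, 2, 3, 1, 1]
print(round(cohens_kappa(a, b), 2))   # about 0.70, i.e. "good" agreement
```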

  17. Parallel grid library for rapid and flexible simulation development

    NASA Astrophysics Data System (ADS)

    Honkonen, I.; von Alfthan, S.; Sandroos, A.; Janhunen, P.; Palmroth, M.

    2013-04-01

    We present an easy-to-use and flexible grid library for developing highly scalable parallel simulations. The distributed cartesian cell-refinable grid (dccrg) supports adaptive mesh refinement and allows an arbitrary C++ class to be used as cell data. The amount of data in grid cells can vary both in space and time, allowing dccrg to be used in very different types of simulations, for example in fluid and particle codes. Dccrg transfers the data between neighboring cells on different processes transparently and asynchronously, allowing one to overlap computation and communication. This enables excellent scalability, at least up to 32k cores in magnetohydrodynamic tests, depending on the problem and hardware. In the version of dccrg presented here, part of the mesh metadata is replicated between MPI processes, reducing the scalability of adaptive mesh refinement (AMR) to between 200 and 600 processes. Dccrg is free software that anyone can use, study and modify and is available at https://gitorious.org/dccrg. Users are also kindly requested to cite this work when publishing results obtained with dccrg.

    Catalogue identifier: AEOM_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOM_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU Lesser General Public License version 3
    No. of lines in distributed program, including test data, etc.: 54975
    No. of bytes in distributed program, including test data, etc.: 974015
    Distribution format: tar.gz
    Programming language: C++
    Computer: PC, cluster, supercomputer
    Operating system: POSIX; the code has been parallelized using MPI and tested with 1-32768 processes
    RAM: 10 MB-10 GB per process
    Classification: 4.12, 4.14, 6.5, 19.3, 19.10, 20
    External routines: MPI-2 [1], boost [2], Zoltan [3], sfc++ [4]
    Nature of problem: Grid library supporting arbitrary data in grid cells, parallel adaptive mesh refinement, transparent remote neighbor data updates and load balancing.
    Solution method: The simulation grid is represented by an adjacency list (graph) with vertices stored in a hash table and edges in contiguous arrays. The Message Passing Interface standard is used for parallelization. Cell data is given as a template parameter when instantiating the grid.
    Restrictions: Logically cartesian grid.
    Running time: Depends on the hardware, problem and solution method. Small problems can be solved in under a minute and very large problems can take weeks. The examples and tests provided with the package take less than about one minute using default options. In the version of dccrg presented here the speed of adaptive mesh refinement is at most of the order of 10^6 total created cells per second.
    [1] http://www.mpi-forum.org/
    [2] http://www.boost.org/
    [3] K. Devine, E. Boman, R. Heaphy, B. Hendrickson, C. Vaughan, Zoltan data management services for parallel dynamic applications, Comput. Sci. Eng. 4 (2002) 90-97. http://dx.doi.org/10.1109/5992.988653
    [4] https://gitorious.org/sfc++

  18. Comparative analysis of three-dimensional structures of homodimers of uridine phosphorylase from Salmonella typhimurium in the unligated state and in a complex with potassium ion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lashkov, A. A.; Zhukhlistova, N. E.; Gabdulkhakov, A. G.

    2009-03-15

    The spatial organization of the homodimer of unligated uridine phosphorylase from Salmonella typhimurium (St UPh) was determined with high accuracy. The structure was refined at 1.80 Å resolution to R_work = 16.1% and R_free = 20.0%. The rms deviations for the bond lengths, bond angles, and chiral angles are 0.006 Å, 1.042°, and 0.071°, respectively. The coordinate error estimated by the Luzzati plot is 0.166 Å. The coordinate error based on the maximum likelihood is 0.199 Å. A comparative analysis of the spatial organization of the homodimer in two independently refined structures and the structure of the St UPh homodimer in the complex with a K⁺ ion was performed. The substrate-binding sites in the St UPh homodimers in the unligated state were found to act asynchronously. In the presence of a potassium ion, the three-dimensional structures of the subunits in the homodimer are virtually identical, which is apparently of importance for the synchronous action of both substrate-binding sites. The atomic coordinates of the refined structure of the homodimer and the structure factors have been deposited in the Protein Data Bank (PDB ID code 3DPS).

  19. International Spinal Cord Injury Data Sets for non-traumatic spinal cord injury.

    PubMed

    New, P W; Marshall, R

    2014-02-01

    Multifaceted: extensive discussions at workshop and conference presentations, survey of experts and feedback. We present the background, purpose and development of the International Spinal Cord Injury (SCI) Data Sets for Non-Traumatic SCI (NTSCI), including a hierarchical classification of aetiology. International. Consultation via e-mail, presentations and discussions at ISCoS conferences (2006-2009), and a workshop (1 September 2008). The consultation processes aimed to: (1) clarify aspects of the classification structure, (2) determine placement of certain aetiologies and identify important missing causes of NTSCI and (3) resolve coding issues and refine definitions. Every effort was made to consider feedback and suggestions from participants. The International Data Sets for NTSCI include a basic and an extended version. The extended data set includes a two-axis classification system for the causes of NTSCI. Axis 1 consists of a five-level, two-tier (congenital-genetic and acquired) hierarchy that allows for increasing detail to specify the aetiology. Axis 2 uses the International Statistical Classification of Diseases (ICD) and Related Health Problems for coding the initiating disease(s) that may have triggered the events that resulted in the axis 1 diagnosis, where appropriate. Additional items cover the timeframe of onset of NTSCI symptoms and the presence of iatrogenicity. Complete instructions for data collection, the data sheet, and training cases are available at the websites of ISCoS (http://www.iscos.org.uk) and ASIA (http://www.asia-spinalinjury.org). The data sets should facilitate comparative research involving NTSCI participants, especially epidemiological studies and prevention projects. Further work is anticipated to refine the data sets, particularly regarding iatrogenicity.

  20. Probing the Curious Case of a Galaxy Cluster Merger in Abell 115 with High-fidelity Chandra X-Ray Temperature and Radio Maps

    NASA Astrophysics Data System (ADS)

    Hallman, Eric J.; Alden, Brian; Rapetti, David; Datta, Abhirup; Burns, Jack O.

    2018-05-01

    We present results from an X-ray and radio study of the merging galaxy cluster Abell 115. We use the full set of five Chandra observations taken of A115 to date (360 ks total integration) to construct high-fidelity temperature and surface brightness maps. We also examine radio data from the Very Large Array at 1.5 GHz and the Giant Metrewave Radio Telescope at 0.6 GHz. We propose that the high X-ray spectral temperature between the subclusters results from the interaction of the bow shocks driven into the intracluster medium by the motion of the subclusters relative to one another. We have identified morphologically similar scenarios in Enzo numerical N-body/hydrodynamic simulations of galaxy clusters in a cosmological context. In addition, the giant radio relic feature in A115, with an arc-like structure and a relatively flat spectral index, is likely consistent with other shock-associated giant radio relics seen in other massive galaxy clusters. We suggest a dynamical scenario that is consistent with the structure of the X-ray gas, the hot region between the clusters, and the radio relic feature.

  1. Decision Making and the IACUC: Part 1—Protocol Information Discussed at Full-Committee Reviews

    PubMed Central

    Silverman, Jerald; Lidz, Charles W; Clayfield, Jonathan C; Murray, Alexandra; Simon, Lorna J; Rondeau, Richard G

    2015-01-01

    IACUC protocols can be reviewed by either the full committee or designated members. Both review methods use the principles of the 3 Rs (reduce, refine, replace) as the overarching paradigm, with federal regulations and policies providing more detailed guidance. The primary goal of this study was to determine the frequency of topics discussed by IACUC during full-committee reviews and whether the topics included those required for consideration by IACUC (for example, pain and distress, number of animals used, availability of alternatives, skill and experience of researchers). We recorded and transcribed 87 protocol discussions undergoing full-committee review at 10 academic institutions. Each transcript was coded to capture the key concepts of the discussion and analyzed for the frequency of the codes mentioned. Pain and distress was the code mentioned most often, followed by the specific procedures performed, the study design, and the completeness of the protocol form. Infrequently mentioned topics were alternatives to animal use or painful or distressful procedures, the importance of the research, and preliminary data. Not all of the topics required to be considered by the IACUC were openly discussed for all protocols, and many of the discussions were limited in their depth. PMID:26224439

  2. Chromosome mapping of the human arrestin (SAG), β-arrestin 2 (ARRB2), and β-adrenergic receptor kinase 2 (ADRBK2) genes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calabrese, G.; Sallese, M.; Stornaiuolo, A.

    1994-09-01

    Two types of proteins play a major role in determining homologous desensitization of G-coupled receptors: β-adrenergic receptor kinase (βARK), which phosphorylates the agonist-occupied receptor, and its functional cofactor, β-arrestin. Both βARK and β-arrestin are members of multigene families. The family of G-protein-coupled receptor kinases includes rhodopsin kinase, βARK1, βARK2, IT11-A (GRK4), GRK5, and GRK6. The arrestin/β-arrestin gene family includes arrestin (also known as S-antigen), β-arrestin 1, and β-arrestin 2. Here we report the chromosome mapping of the human genes for arrestin (SAG), β-arrestin 2 (ARRB2), and βARK2 (ADRBK2) by fluorescence in situ hybridization (FISH). FISH results confirmed the assignment of the gene coding for arrestin (SAG) to chromosome 2 and allowed us to refine its localization to band q37. The gene coding for β-arrestin 2 (ARRB2) was mapped to chromosome 17p13 and that coding for βARK2 (ADRBK2) to chromosome 22q11. 17 refs., 1 fig.

  3. Intellectual system of identification of Arabic graphics

    NASA Astrophysics Data System (ADS)

    Abdoullayeva, Gulchin G.; Aliyev, Telman A.; Gurbanova, Nazakat G.

    2001-08-01

    Studies in the domain of graphic images have enabled the creation of artificial intelligence tools for recognizing letters, letter combinations, and the like across various scripts and prints. This work proposes a system for recognizing and identifying symbols of Arabic script, which has its own specific features compared with Latin and Cyrillic scripts. The first stage of recognition and identification is coding, followed by entry of the information into a computer; efficient entry is one of the essential problems. A scanner is usually employed to enter a large volume of information per unit time. In addition to the scanner, the authors propose their own hardware for effective input and coding of the information. To refine symbols that the scanner fails to identify, mostly for small volumes of information, the coding devices are used directly during writing. The functional design of the software is based on a heuristic model of the creative activity of researchers and experts in describing and assessing the states of weakly formalizable systems, using methods of identification and selection of geometric features.

  4. Integration of design, structural, thermal and optical analysis: And user's guide for structural-to-optical translator (PATCOD)

    NASA Technical Reports Server (NTRS)

    Amundsen, R. M.; Feldhaus, W. S.; Little, A. D.; Mitchum, M. V.

    1995-01-01

    Electronic integration of design and analysis processes was achieved and refined at Langley Research Center (LaRC) during the development of an optical bench for a laser-based aerospace experiment. Mechanical design has been integrated with thermal, structural, and optical analyses. Electronic import of the model geometry eliminates the repetitive steps of geometry input to develop each analysis model, leading to faster and more accurate analyses. Guidelines for integrated model development are given. This integrated analysis process has been built around software that was already in use by designers and analysts at LaRC. The process as currently implemented uses Pro/Engineer for design, Pro/Manufacturing for fabrication, PATRAN for solid modeling, NASTRAN for structural analysis, SINDA-85 and P/Thermal for thermal analysis, and Code V for optical analysis. Currently, the only analysis model that must be built manually is the Code V model; all others can be imported from the Pro/E geometry. The translator from PATRAN results to Code V optical analysis (PATCOD) was developed and tested at LaRC. Directions for use of the translator and the other models are given.

  5. A Code of Ethics and Standards for Outer-Space Commerce

    NASA Astrophysics Data System (ADS)

    Livingston, David M.

    2002-01-01

    Now is the time to put forth an effective code of ethics for businesses in outer space. A successful code would be voluntary and would actually promote the growth of individual companies, not hinder their efforts to provide products and services. A properly designed code of ethics would ensure the development of space commerce unfettered by government-created barriers. Indeed, if the commercial space industry does not develop its own professional code of ethics, government- imposed regulations would probably be instituted. Should this occur, there is a risk that the development of off-Earth commerce would become more restricted. The code presented in this paper seeks to avoid the imposition of new barriers to space commerce as well as make new commercial space ventures easier to develop. The proposed code consists of a preamble, which underscores basic values, followed by a number of specific principles. For the most part, these principles set forth broad commitments to fairness and integrity with respect to employees, consumers, business transactions, political contributions, natural resources, off-Earth development, designated environmental protection zones, as well as relevant national and international laws. As acceptance of this code of ethics grows within the industry, general modifications will be necessary to accommodate the different types of businesses entering space commerce. This uniform applicability will help to assure that the code will not be perceived as foreign in nature, potentially restrictive, or threatening. Companies adopting this code of ethics will find less resistance to their space development plans, not only in the United States but also from nonspacefaring nations. Commercial space companies accepting and refining this code would demonstrate industry leadership and an understanding that will serve future generations living, working, and playing in space. Implementation of the code would also provide an off-Earth precedent for a modified free-market economy. With the code as a backdrop, a colonial or Wild West mentality would become less likely. Off-Earth resources would not be as susceptible to plunder and certain areas could be designated as environmental reserves for the benefit of all. Companies would find it advantageous to balance the goal of wealth maximization with ethical principles if such a strategy enhances the long-term prospects for success.

  6. Implementing Subduction Models in the New Mantle Convection Code Aspect

    NASA Astrophysics Data System (ADS)

    Arredondo, Katrina; Billen, Magali

    2014-05-01

    The geodynamic community has utilized various numerical modeling codes as scientific questions arise and computer processing power increases. Citcom, a widely used mantle convection code, has limitations and vulnerabilities, such as temperature overshoots of hundreds or thousands of degrees Kelvin (e.g., Kommu et al., 2013). Aspect, intended as a more powerful cousin, is in active development, with additions such as Adaptive Mesh Refinement (AMR) and improved solvers (Kronbichler et al., 2012). The validity and ease of use of Aspect are important to its survival and to its role as a possible upgrade to and replacement for Citcom. Development of publishable models illustrates the capacity of Aspect. We present work on the addition of non-linear solvers and stress-dependent rheology to Aspect. With a solid foundational knowledge of C++, these additions were easily incorporated into Aspect and tested against CitcomS. Time-dependent subduction models akin to those in Billen and Hirth (2007) are built and compared in CitcomS and Aspect. Comparison with CitcomS assists in Aspect development and showcases its flexibility, usability and capabilities. References: Billen, M. I., and G. Hirth, 2007. Rheologic controls on slab dynamics. Geochemistry, Geophysics, Geosystems. Kommu, R., E. Heien, L. H. Kellogg, W. Bangerth, T. Heister, E. Studley, 2013. The Overshoot Phenomenon in Geodynamics Codes. American Geophysical Union Fall Meeting. M. Kronbichler, T. Heister, W. Bangerth, 2012, High Accuracy Mantle Convection Simulation through Modern Numerical Methods, Geophys. J. Int.

  7. Reprint of "Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency".

    PubMed

    Zhang, Ying-Ying; Yang, Cai; Zhang, Ping

    2017-08-01

    In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on a Riemannian manifold. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary drawn from the image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed; the reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, the initial result is improved by calculating reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to highlight the salient objects uniformly against the background. Finally, three post-processing methods (a highlight-inhibition function, context-based saliency weighting, and graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms state-of-the-art methods in terms of precision, recall, and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
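
    The Log-Euclidean kernel at the heart of this scheme maps symmetric positive-definite covariance matrices into a flat space via the matrix logarithm, where a Gaussian kernel can be applied. A minimal Python sketch, illustrative only; the paper's exact kernel variant and parameters are not reproduced here:

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_kernel(C1, C2, sigma=1.0):
    """Gaussian kernel between SPD matrices in the log-Euclidean metric."""
    # the matrix logarithm maps SPD matrices to a vector space where
    # the ordinary (Frobenius) Euclidean distance is meaningful
    d = np.linalg.norm(logm(C1) - logm(C2), ord="fro")
    return np.exp(-d**2 / (2.0 * sigma**2))

# Two toy region covariance matrices (symmetric positive-definite)
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); C1 = A @ A.T + 5 * np.eye(5)
B = rng.standard_normal((5, 5)); C2 = B @ B.T + 5 * np.eye(5)
print(log_euclidean_kernel(C1, C2, sigma=2.0))
```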

  8. Energy Cost Impact of Non-Residential Energy Code Requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jian; Hart, Philip R.; Rosenberg, Michael I.

    2016-08-22

    The 2012 International Energy Conservation Code contains 396 separate requirements applicable to non-residential buildings; however, there is no systematic analysis of the energy cost impact of each requirement. Consequently, limited code department budgets for plan review, inspection, and training cannot be focused on the most impactful items. An inventory and ranking of code requirements based on their potential energy cost impact is under development. The initial phase focuses on office buildings with simple HVAC systems in climate zone 4C. Prototype building simulations were used to estimate the energy cost impact of varying levels of non-compliance. A preliminary estimate of the probability of occurrence of each level of non-compliance was combined with the estimated lost savings for each level to rank the requirements according to expected savings impact. The methodology for developing and refining further energy cost impacts, specific to building type, system type, and climate location, is demonstrated. As results are developed, an innovative alternative method for compliance verification can focus efforts so that only the most impactful requirements from an energy cost perspective are verified for every building, while a subset of the less impactful requirements is verified on a random basis across a building population. The results can be further applied in prioritizing training material development and specific areas of building official training.
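
    The ranking logic is a simple expected-value computation over non-compliance levels. A hedged Python sketch in which the requirement names, probabilities, and savings figures are invented for illustration:

```python
# Rank code requirements by expected lost savings:
# expected_impact = sum over non-compliance levels of P(level) * lost_savings(level)
requirements = {
    # name: [(probability_of_level, lost_savings_usd_per_year), ...]
    "lighting power density": [(0.2, 400.0), (0.05, 1200.0)],
    "economizer control":     [(0.1, 900.0)],
    "pipe insulation":        [(0.3, 50.0)],
}

def expected_impact(levels):
    return sum(p * loss for p, loss in levels)

ranked = sorted(requirements.items(),
                key=lambda kv: expected_impact(kv[1]), reverse=True)
for name, levels in ranked:
    print(f"{name}: ${expected_impact(levels):.0f}/yr expected")
```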

  9. A toolbox of lectins for translating the sugar code: the galectin network in phylogenesis and tumors.

    PubMed

    Kaltner, H; Gabius, H-J

    2012-04-01

    Lectin histochemistry has revealed cell-type-selective glycosylation, which is under dynamic and spatially controlled regulation. Since their chemical properties allow carbohydrates to reach unsurpassed structural diversity in oligomers, they are ideal for high-density information coding. Consequently, the concept of the sugar code assigns a functional dimension to the glycans of cellular glycoconjugates. Indeed, multifarious cell processes depend on specific recognition of glycans by their receptors (lectins), which translate the sugar-encoded information into effects. Duplication of ancestral genes and the subsequent divergence of sequences account for the evolutionary dynamics in lectin families. Differences in gene number can appear even among closely related species. The adhesion/growth-regulatory galectins are selected as an instructive example to trace the phylogenetic diversification in several animals, most of them popular models in developmental and tumor biology. Chicken galectins are identified as a set of low complexity, thus singled out for further detailed analysis. The various operative means for establishing protein diversity among the chicken galectins are delineated, and individual characteristics in expression profiles discerned. Applying this galectin-fingerprinting approach in histopathology has potential for refining differential diagnosis and for obtaining prognostic assessments. On the grounds of in vitro work with tumor cells, a strategically orchestrated co-regulation of galectin expression with the presentation of cognate glycans is detected. This coordination epitomizes the far-reaching physiological significance of sugar coding.

  10. Code Conversion Impact Factor and Cash Flow Impact of International Classification of Diseases, 10th Revision, on a Large Multihospital Radiology Practice.

    PubMed

    Jalilvand, Aryan; Fleming, Margaret; Moreno, Courtney; MacFarlane, Dan; Duszak, Richard

    2018-01-01

    The 2015 conversion of the International Classification of Diseases (ICD) system from the ninth revision (ICD-9) to the 10th revision (ICD-10) was widely projected to adversely impact physician practices. We aimed to assess code conversion impact factor (CCIF) projections and revenue delay impact to help radiology groups better prepare for eventual conversion to ICD, 11th revision (ICD-11). Studying 673,600 claims for 179 radiologists for the first year after ICD-10's implementation, we identified primary ICD-10 codes for the top 90th percentile of all examinations for the entire enterprise and each subspecialty division. Using established methodology, we calculated CCIFs (actual ICD-10 codes ÷ prior ICD-9 codes). To assess ICD-10's impact on cash flow, average monthly days in accounts receivable status was compared for the 12 months before and after conversion. Of all 69,823 ICD-10 codes, only 7,075 were used to report primary diagnoses across the entire practice, and just 562 were used to report 90% of all claims, compared with 348 under ICD-9. This translates to an overall CCIF of 1.6 for the department (far less than the literature-predicted 6). By subspecialty division, CCIFs ranged from 0.7 (breast) to 3.5 (musculoskeletal). Monthly average days in accounts receivable for the 12 months before and after ICD-10 conversion did not increase. The operational impact of the ICD-10 transition on radiology practices appears far less than anticipated with respect to both CCIF and delays in cash flow. Predictive models should be refined to help practices better prepare for ICD-11. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
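
    The code conversion impact factor is a simple ratio; a worked Python sketch using the department-wide figures quoted above (the variable names are ours):

```python
# CCIF = number of distinct ICD-10 codes used / number of ICD-9 codes
# previously needed to cover the same share (here, 90%) of claims.
icd10_codes_for_90pct = 562
icd9_codes_for_90pct = 348
ccif = icd10_codes_for_90pct / icd9_codes_for_90pct
print(f"department-wide CCIF = {ccif:.1f}")   # 1.6, versus a predicted 6
```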

  11. Laser Ray Tracing in a Parallel Arbitrary Lagrangian-Eulerian Adaptive Mesh Refinement Hydrocode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masters, N D; Kaiser, T B; Anderson, R W

    2009-09-28

    ALE-AMR is a new hydrocode that we are developing as a predictive modeling tool for debris and shrapnel formation in high-energy laser experiments. In this paper we present our approach to implementing laser ray tracing in ALE-AMR. We present the equations of laser ray tracing and our approach to efficient traversal of the adaptive mesh hierarchy, in which we propagate computational rays through a virtual composite mesh consisting of the finest-resolution representation of the modeled space. We also anticipate simulations that will be compared to experiments for code validation.

  12. Data structures supporting multi-region adaptive isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Perduta, Anna; Putanowicz, Roman

    2018-01-01

    Since the first paper published in 2005, Isogeometric Analysis (IGA) has gained strong interest and found applications in many engineering problems. Despite the advancement of the method, there are still far fewer software implementations compared with the Finite Element Method. This paper presents an approach to the development of data structures that can support multi-region IGA with local (patch-based) mesh refinement and possible application in IGA-FEM models. The purpose of this paper is to share original design concepts that the authors created while developing an IGA package, which other researchers may find beneficial for their own simulation codes.

  13. A 3D front tracking method on a CPU/GPU system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bo, Wurigen; Grove, John

    2011-01-21

    We describe the method to port a sequential 3D interface tracking code to a GPU with CUDA. The interface is represented as a triangular mesh. Interface geometry properties and point propagation are performed on a GPU. Interface mesh adaptation is performed on a CPU. The convergence of the method is assessed from the test problems with given velocity fields. Performance results show overall speedups from 11 to 14 for the test problems under mesh refinement. We also briefly describe our ongoing work to couple the interface tracking method with a hydro solver.

  14. Geometrically Nonlinear Shell Analysis of Wrinkled Thin-Film Membranes with Stress Concentrations

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Sleight, David W.

    2006-01-01

    Geometrically nonlinear shell finite element analysis has recently been applied to solar-sail membrane problems in order to model the out-of-plane deformations due to structural wrinkling. Whereas certain problems lend themselves to achieving converged nonlinear solutions that compare favorably with experimental observations, solutions to tensioned membranes exhibiting high stress concentrations have been difficult to obtain even with the best nonlinear finite element codes and advanced shell element technology. In this paper, two numerical studies are presented that pave the way to improving the modeling of this class of nonlinear problems. The studies address the issues of mesh refinement and stress-concentration alleviation, and the effects of these modeling strategies on the ability to attain converged nonlinear deformations due to wrinkling. The numerical studies demonstrate that excessive mesh refinement in the regions of stress concentration may be disadvantageous to achieving wrinkled equilibrium states, causing the nonlinear solution to lock in the membrane response mode, while totally discarding the very low-energy bending response that is necessary to cause wrinkling deformation patterns.

  15. Refinement of the NHS locus on chromosome Xp22.13 and analysis of five candidate genes.

    PubMed

    Toutain, Annick; Dessay, Benoît; Ronce, Nathalie; Ferrante, Maria-Immacolata; Tranchemontagne, Julie; Newbury-Ecob, Ruth; Wallgren-Pettersson, Carina; Burn, John; Kaplan, Josseline; Rossi, Annick; Russo, Silvia; Walpole, Ian; Hartsfield, James K; Oyen, Nina; Nemeth, Andrea; Bitoun, Pierre; Trump, Dorothy; Moraine, Claude; Franco, Brunella

    2002-09-01

    Nance-Horan syndrome (NHS) is an X-linked condition characterised by congenital cataracts, dental abnormalities, dysmorphic features, and mental retardation in some cases. Previous studies have mapped the disease gene to a 2 cM interval on Xp22.2 between DXS43 and DXS999. We report additional linkage data resulting from the analysis of eleven independent NHS families. A maximum lod score of 9.94 (theta=0.00) was obtained at the RS1 locus and a recombination with locus DXS1195 on the telomeric side was observed in two families, thus refining the location of the gene to an interval of around 1 Mb on Xp22.13. Direct sequencing or SSCP analysis of the coding exons of five genes (SCML1, SCML2, STK9, RS1 and PPEF1), considered as candidate genes on the basis of their location in the critical interval, failed to detect any mutation in 12 unrelated NHS patients, thus making it highly unlikely that these genes are implicated in NHS.

  16. Atomic structure of unligated laccase from Cerrena maxima at 1.76 Å with molecular oxygen and hydrogen peroxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhukova, Yu. N., E-mail: amm@ns.crys.ras.ru; Lyashenko, A. V.; Lashkov, A. A.

    2010-05-15

    The three-dimensional structure of unligated laccase from Cerrena maxima was established by X-ray diffraction at 1.76-Å resolution; R_work = 18.07%, R_free = 21.71%; the rms deviations of bond lengths, bond angles, and chiral angles are 0.008 Å, 1.19°, and 0.077°, respectively. The coordinate error for the refined structure estimated from the Luzzati plot is 0.195 Å. The maximum average error in the atomic coordinates is 0.047 Å. A total of 99.4% of the amino-acid residues of the polypeptide chain are in the most favorable, allowable, and accessible regions of the Ramachandran plot. The three-dimensional structures of the complexes of laccase from C. maxima with molecular oxygen and hydrogen peroxide were determined by molecular simulation. These data provide insight into the structural aspects of the mechanism of the enzymatic cycle. The structure factors and the refined atomic coordinates have been deposited in the Protein Data Bank (PDB ID code 3DIV).

  17. Modeling NIF experimental designs with adaptive mesh refinement and Lagrangian hydrodynamics

    NASA Astrophysics Data System (ADS)

    Koniges, A. E.; Anderson, R. W.; Wang, P.; Gunney, B. T. N.; Becker, R.; Eder, D. C.; MacGowan, B. J.; Schneider, M. B.

    2006-06-01

    Incorporation of adaptive mesh refinement (AMR) into Lagrangian hydrodynamics algorithms allows for the creation of a highly powerful simulation tool effective for complex target designs with three-dimensional structure. We are developing an advanced modeling tool that includes AMR and traditional arbitrary Lagrangian-Eulerian (ALE) techniques. Our goal is the accurate prediction of vaporization, disintegration and fragmentation in National Ignition Facility (NIF) experimental target elements. Although our focus is on minimizing the generation of shrapnel in target designs and protecting the optics, the general techniques are applicable to modern advanced targets that include three-dimensional effects such as those associated with capsule fill tubes. Several essential computations in ordinary radiation hydrodynamics need to be redesigned in order to allow for AMR to work well with ALE, including algorithms associated with radiation transport. Additionally, for our goal of predicting fragmentation, we include elastic/plastic flow into our computations. We discuss the integration of these effects into a new ALE-AMR simulation code. Applications of this newly developed modeling tool as well as traditional ALE simulations in two and three dimensions are applied to NIF early-light target designs.

  18. Numerical Study of Richtmyer-Meshkov Instability with Re-Shock

    NASA Astrophysics Data System (ADS)

    Wong, Man Long; Livescu, Daniel; Lele, Sanjiva

    2017-11-01

    The interaction of a Mach 1.45 shock wave with a perturbed planar interface between two gases with an Atwood number of 0.68 is studied through 2D and 3D shock-capturing adaptive mesh refinement (AMR) simulations with physical diffusive and viscous terms. The simulations have initial conditions similar to those in the experiment conducted by Poggi et al. [1998]. The development of the flow and the evolution of mixing due to the interactions with the first shock and the re-shock are studied, together with the sensitivity of various global parameters to the properties of the initial perturbation. Grid resolutions needed for fully resolved 2D and 3D simulations are also evaluated. Simulations are conducted with an in-house AMR solver, HAMeRS, built on the SAMRAI library. The code utilizes the high-order localized-dissipation weighted compact nonlinear scheme [Wong and Lele, 2017] for shock-capturing and different sensors, including the wavelet sensor [Wong and Lele, 2016], to identify regions for grid refinement. The first and third authors acknowledge the project sponsor, LANL.

  19. Parallel goal-oriented adaptive finite element modeling for 3D electromagnetic exploration

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Key, K.; Ovall, J.; Holst, M.

    2014-12-01

    We present a parallel goal-oriented adaptive finite element method for accurate and efficient electromagnetic (EM) modeling of complex 3D structures. An unstructured tetrahedral mesh allows this approach to accommodate arbitrarily complex 3D conductivity variations and a priori known boundaries. The total electric field is approximated by the lowest order linear curl-conforming shape functions and the discretized finite element equations are solved by a sparse LU factorization. Accuracy of the finite element solution is achieved through adaptive mesh refinement that is performed iteratively until the solution converges to the desired accuracy tolerance. Refinement is guided by a goal-oriented error estimator that uses a dual-weighted residual method to optimize the mesh for accurate EM responses at the locations of the EM receivers. As a result, the mesh refinement is highly efficient since it only targets the elements where the inaccuracy of the solution corrupts the response at the possibly distant locations of the EM receivers. We compare the accuracy and efficiency of two approaches for estimating the primary residual error required at the core of this method: one uses local element and inter-element residuals and the other relies on solving a global residual system using a hierarchical basis. For computational efficiency our method follows the Bank-Holst algorithm for parallelization, where solutions are computed in subdomains of the original model. To resolve the load-balancing problem, this approach applies a spectral bisection method to divide the entire model into subdomains that have approximately equal error and the same number of receivers. The finite element solutions are then computed in parallel with each subdomain carrying out goal-oriented adaptive mesh refinement independently. We validate the newly developed algorithm by comparison with controlled-source EM solutions for 1D layered models and with 2D results from our earlier 2D goal oriented adaptive refinement code named MARE2DEM. We demonstrate the performance and parallel scaling of this algorithm on a medium-scale computing cluster with a marine controlled-source EM example that includes a 3D array of receivers located over a 3D model that includes significant seafloor bathymetry variations and a heterogeneous subsurface.
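
    The adaptive cycle described here follows the standard solve-estimate-mark-refine pattern. The runnable Python toy below keeps that control flow but, for brevity, replaces the dual-weighted residual estimator and the electromagnetic solve with a local quadrature error indicator; it is a sketch of the loop, not the authors' method.

```python
import numpy as np

# Toy stand-in for adaptive refinement: refine a 1D grid so that a
# quadrature "goal" (the integral of f) is accurate, bisecting only the
# intervals whose local error indicator is largest.
f = lambda x: np.exp(-50 * (x - 0.3) ** 2)   # sharply peaked integrand

def local_error(a, b):
    # indicator: difference between one- and two-panel midpoint rules
    coarse = (b - a) * f((a + b) / 2)
    m = (a + b) / 2
    fine = (m - a) * f((a + m) / 2) + (b - m) * f((m + b) / 2)
    return abs(fine - coarse)

nodes = np.linspace(0.0, 1.0, 5)
for _ in range(25):
    errs = np.array([local_error(a, b) for a, b in zip(nodes, nodes[1:])])
    if errs.sum() < 1e-8:                     # "solution converged"
        break
    cutoff = np.quantile(errs, 0.7)           # mark the worst ~30%
    mids = [(a + b) / 2
            for a, b, e in zip(nodes, nodes[1:], errs) if e >= cutoff]
    nodes = np.sort(np.concatenate([nodes, mids]))   # refine by bisection
print(len(nodes), "nodes, concentrated near the peak at x = 0.3")
```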

  20. Three dimensional adaptive mesh refinement on a spherical shell for atmospheric models with lagrangian coordinates

    NASA Astrophysics Data System (ADS)

    Penner, Joyce E.; Andronova, Natalia; Oehmke, Robert C.; Brown, Jonathan; Stout, Quentin F.; Jablonowski, Christiane; van Leer, Bram; Powell, Kenneth G.; Herzog, Michael

    2007-07-01

    One of the most important advances needed in global climate models is the development of atmospheric General Circulation Models (GCMs) that can reliably treat convection. Such GCMs require high resolution in local convectively active regions, both in the horizontal and vertical directions. During previous research we have developed an Adaptive Mesh Refinement (AMR) dynamical core that can adapt its grid resolution horizontally. Our approach utilizes a finite volume numerical representation of the partial differential equations with floating Lagrangian vertical coordinates and requires resolving dynamical processes on small spatial scales. For the latter it uses a newly developed general-purpose library, which facilitates 3D block-structured AMR on spherical grids. The library manages neighbor information as the blocks adapt, and handles the parallel communication and load balancing, freeing the user to concentrate on the scientific modeling aspects of their code. In particular, this library defines and manages adaptive blocks on the sphere, provides user interfaces for interpolation routines and supports the communication and load-balancing aspects for parallel applications. We have successfully tested the library in a 2-D (longitude-latitude) implementation. During the past year, we have extended the library to treat adaptive mesh refinement in the vertical direction. Preliminary results are discussed. This research project is characterized by an interdisciplinary approach involving atmospheric science, computer science and mathematical/numerical aspects. The work is done in close collaboration between the Atmospheric Science, Computer Science and Aerospace Engineering Departments at the University of Michigan and NOAA GFDL.

  1. Measurement and prediction of the thermomechanical response of shape memory alloy hybrid composite beams

    NASA Astrophysics Data System (ADS)

    Davis, Brian; Turner, Travis L.; Seelecke, Stefan

    2005-05-01

    Previous work at NASA Langley Research Center (LaRC) involved fabrication and testing of composite beams with embedded, pre-strained shape memory alloy (SMA) ribbons within the beam structures. That study also provided comparison of experimental results with numerical predictions from a research code making use of a new thermoelastic model for shape memory alloy hybrid composite (SMAHC) structures. The previous work showed qualitative validation of the numerical model. However, deficiencies in the experimental-numerical correlation were noted and hypotheses for the discrepancies were given for further investigation. The goal of this work is to refine the experimental measurement and numerical modeling approaches in order to better understand the discrepancies, improve the correlation between prediction and measurement, and provide rigorous quantitative validation of the numerical analysis/design tool. The experimental investigation is refined by a more thorough test procedure and incorporation of higher fidelity measurements such as infrared thermography and projection moire interferometry. The numerical results are produced by a recently commercialized version of the constitutive model as implemented in ABAQUS and are refined by incorporation of additional measured parameters such as geometric imperfection. Thermal buckling, post-buckling, and random responses to thermal and inertial (base acceleration) loads are studied. The results demonstrate the effectiveness of SMAHC structures in controlling static and dynamic responses by adaptive stiffening. Excellent agreement is achieved between the predicted and measured results of the static and dynamic thermomechanical response, thereby providing quantitative validation of the numerical tool.

  3. Mining Peripheral Arterial Disease Cases from Narrative Clinical Notes Using Natural Language Processing

    PubMed Central

    Afzal, Naveed; Sohn, Sunghwan; Abram, Sara; Scott, Christopher G.; Chaudhry, Rajeev; Liu, Hongfang; Kullo, Iftikhar J.; Arruda-Olson, Adelaide M.

    2016-01-01

    Objective Lower extremity peripheral arterial disease (PAD) is highly prevalent and affects millions of individuals worldwide. We developed a natural language processing (NLP) system for automated ascertainment of PAD cases from clinical narrative notes and compared the performance of the NLP algorithm to billing code algorithms, using ankle-brachial index (ABI) test results as the gold standard. Methods We compared the performance of the NLP algorithm to 1) results of gold standard ABI; 2) previously validated algorithms based on relevant ICD-9 diagnostic codes (simple model) and 3) a combination of ICD-9 codes with procedural codes (full model). A dataset of 1,569 PAD patients and controls was randomly divided into training (n= 935) and testing (n= 634) subsets. Results We iteratively refined the NLP algorithm in the training set including narrative note sections, note types and service types, to maximize its accuracy. In the testing dataset, when compared with both simple and full models, the NLP algorithm had better accuracy (NLP: 91.8%, full model: 81.8%, simple model: 83%, P<.001), PPV (NLP: 92.9%, full model: 74.3%, simple model: 79.9%, P<.001), and specificity (NLP: 92.5%, full model: 64.2%, simple model: 75.9%, P<.001). Conclusions A knowledge-driven NLP algorithm for automatic ascertainment of PAD cases from clinical notes had greater accuracy than billing code algorithms. Our findings highlight the potential of NLP tools for rapid and efficient ascertainment of PAD cases from electronic health records to facilitate clinical investigation and eventually improve care by clinical decision support. PMID:28189359
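
    For reference, the reported accuracy, PPV, and specificity all derive from a standard confusion matrix. A minimal Python sketch with invented counts (not the study's tallies):

```python
# Standard diagnostic metrics from a confusion matrix.
# The counts below are invented for illustration, not the study's data.
tp, fp, tn, fn = 290, 23, 292, 29   # hypothetical test-set tallies

accuracy    = (tp + tn) / (tp + fp + tn + fn)
ppv         = tp / (tp + fp)            # positive predictive value
specificity = tn / (tn + fp)
sensitivity = tp / (tp + fn)

print(f"accuracy={accuracy:.1%} ppv={ppv:.1%} "
      f"specificity={specificity:.1%} sensitivity={sensitivity:.1%}")
```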

  4. A Meshless Method for Magnetohydrodynamics and Applications to Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    McNally, Colin P.

    2012-08-01

    This thesis presents an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. The code has been parallelized by adapting the framework provided by Gadget-2. A set of standard test problems, including one part in a million amplitude linear MHD waves, magnetized shock tubes, and Kelvin-Helmholtz instabilities are presented. Finally we demonstrate good agreement with analytic predictions of linear growth rates for magnetorotational instability in a cylindrical geometry. We provide a rigorous methodology for verifying a numerical method on two dimensional Kelvin-Helmholtz instability. The test problem was run in the Pencil Code, Athena, Enzo, NDSPHMHD, and Phurbas. A strict comparison, judgment, or ranking, between codes is beyond the scope of this work, although this work provides the mathematical framewor! k needed for such a study. Nonetheless, how the test is posed circumvents the issues raised by tests starting from a sharp contact discontinuity yet it still shows the poor performance of Smoothed Particle Hydrodynamics. We then comment on the connection between this behavior and the underlying lack of zeroth-order consistency in Smoothed Particle Hydrodynamics interpolation. In astrophysical magnetohydrodynamics (MHD) and electrodynamics simulations, numerically enforcing the divergence free constraint on the magnetic field has been difficult. We observe that for point-based discretization, as used in finite-difference type and pseudo-spectral methods, the divergence free constraint can be satisfied entirely by a choice of interpolation used to define the derivatives of the magnetic field. As an example we demonstrate a new class of finite-difference type derivative operators on a regular grid which has the divergence free property. This principle clarifies the nature of magnetic monopole errors. The principles and techniques demonstrated in this chapter are particularly useful for the magnetic field, but can be applied to any vector field. Finally, we examine global zoom-in simulations of turbulent magnetorotationally unstable flow. We extract and analyze the high-current regions produced in the turbulent flow. Basic parameters of these regions are abstracted, and we build one dimensional models including non-ideal MHD, and radiative transfer. 
For sufficiently high temperatures, an instability resulting from the temperature dependence of the Ohmic resistivity is found. This instability concentrates current sheets, resulting in the possibility of rapid heating from temperatures on the order of 600 Kelvin to 2000 Kelvin in magnetorotationally turbulent regions of protoplanetary disks. This is a possible local mechanism for the melting of chondrules and the formation of other high-temperature materials in protoplanetary disks.
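
    The moving-least-squares step is the core of the method: fit a local polynomial to neighboring particle values, then read the field and its derivatives off the fitted coefficients. A minimal 1D sketch of that idea, assuming a Gaussian weight and a centered polynomial basis (illustrative choices, not Phurbas internals):

```python
# Minimal 1-D moving-least-squares (MLS) fit: weighted polynomial regression
# about the evaluation point x0. Weight function and degree are illustrative.
import numpy as np

def mls_fit(x0, x_nbr, f_nbr, degree=3, h=1.0):
    """Return the field value and first derivative at x0."""
    dx = x_nbr - x0
    A = np.vander(dx, degree + 1, increasing=True)   # basis 1, dx, dx^2, ...
    w = np.exp(-(dx / h) ** 2)                       # Gaussian neighbor weights
    AW = A * w[:, None]
    coeff = np.linalg.solve(AW.T @ A, AW.T @ f_nbr)  # weighted normal equations
    return coeff[0], coeff[1]  # value and slope: the basis is centered on x0

x = np.linspace(-1.0, 1.0, 20)
val, slope = mls_fit(0.0, x, np.sin(x), degree=3, h=0.5)
print(val, slope)  # close to sin(0) = 0 and cos(0) = 1
```

    Because the basis is centered on the evaluation point, the fitted coefficients directly supply the field value and spatial derivatives the scheme needs at each particle.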

  5. General relativistic hydrodynamics with Adaptive-Mesh Refinement (AMR) and modeling of accretion disks

    NASA Astrophysics Data System (ADS)

    Donmez, Orhan

    We present a general procedure to solve the General Relativistic Hydrodynamical (GRH) equations with Adaptive-Mesh Refinement (AMR) and to model an accretion disk around a black hole. To do this, the GRH equations are written in a conservative form to exploit their hyperbolic character. The general relativistic hydrodynamic equations are solved numerically with High Resolution Shock Capturing (HRSC) schemes, specifically designed to solve non-linear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. We use Marquina fluxes with MUSCL left and right states to solve the GRH equations. First, we carry out different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations to verify the second-order convergence of the code in 1D, 2D, and 3D. Second, we solve the GRH equations and use the general relativistic test problems to compare the numerical solutions with analytic ones. For this purpose, we couple the flux part of the GRH equations with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time. The test problems examined include shock tubes, geodesic flows, and circular motion of a particle around a black hole. Finally, we apply this code to accretion disk problems around a black hole, using the Schwarzschild metric as the background of the computational domain. We find spiral shocks on the accretion disk, which are observationally expected. We also examine the star-disk interaction near a massive black hole. We find that when stars are ground down or a hole is punched in the accretion disk, shock waves are created which destroy the accretion disk.
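
    The flux-source coupling via Strang splitting has a very compact structure; a schematic sketch, with flux_step and source_step standing in for the HRSC update and the source-term integrator (placeholder names, not the author's routines):

```python
# Strang splitting for du/dt = F(u) + S(u): a half step of the source,
# a full step of the hyperbolic flux, then another half source step.
# The symmetric arrangement retains second-order accuracy in time.
def strang_step(u, dt, flux_step, source_step):
    u = source_step(u, 0.5 * dt)  # half step with the source terms
    u = flux_step(u, dt)          # full step with the flux part
    u = source_step(u, 0.5 * dt)  # second half step with the sources
    return u
```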

  6. SAGE Validations of Volcanic Jet Simulations

    NASA Astrophysics Data System (ADS)

    Peterson, A. H.; Wohletz, K. H.; Ogden, D. E.; Gisler, G.; Glatzmaier, G.

    2006-12-01

    The SAGE (SAIC Adaptive Grid Eulerian) code employs adaptive mesh refinement in solving the Eulerian equations of complex fluid flow, which is desirable for the simulation of volcanic eruptions. Preliminary eruption simulations demonstrate its ability to resolve multi-material flows over large domains where the dynamics are concentrated in small regions. In order to further validate the application of this code to numerical simulation of explosive eruption phenomena, we focus on one of the fundamental physical processes important to the problem, namely the dynamics of an underexpanded jet. Observations of volcanic eruption plumes and laboratory experiments on analog systems document the eruption of overpressured fluid in a supersonic jet that is governed by vent diameter and level of overpressure. The jet is dominated by inertia (very high Reynolds number) and feeds a thermally convective plume controlled by turbulent admixture of the atmosphere. The height above the vent at which the jet loses its inertia is important to know for convective plume predictions that are used to calculate the atmospheric dispersal of volcanic products. We simulate a set of well-documented laboratory experiments that provide detail on underexpanded jet structure via gas density contours, showing the shape and size of the Mach stem. SAGE results are within several percent of the experiments for the position and density of the incident (intercepting) and reflected shocks, slip lines, shear layers, and Mach disk. The simulations also resolve vorticity at the jet margins near the Mach disk, showing turbulent velocity fields down to a scale of 30 micrometers. Benchmarking these results against those of CFDLib (Los Alamos National Laboratory), which solves the full Navier-Stokes equations (including the viscous stress tensor), shows close agreement, indicating that the adaptive mesh refinement used in SAGE may offset the need for explicit calculation of viscous dissipation.
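
    As a rough consistency check on such simulations, the Mach disk standoff distance of an underexpanded jet is often estimated with the empirical Ashkenas-Sherman correlation; a back-of-the-envelope sketch with illustrative numbers (not the conditions of the cited experiments):

```python
# Empirical Ashkenas-Sherman correlation for the Mach disk standoff distance
# of an underexpanded jet: x_m / D ~ 0.67 * sqrt(p0 / pa). Inputs below are
# illustrative placeholders, not the cited laboratory conditions.
import math

def mach_disk_standoff(D, p0, pa):
    """Standoff distance for vent diameter D and pressure ratio p0/pa."""
    return 0.67 * D * math.sqrt(p0 / pa)

print(mach_disk_standoff(D=50.0, p0=10e6, pa=1e5))  # ~335 m for a 50 m vent
```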

  7. ICD-9-CM and ICD-10-CM mapping of the AAST Emergency General Surgery disease severity grading systems: Conceptual approach, limitations, and recommendations for the future.

    PubMed

    Utter, Garth H; Miller, Preston R; Mowery, Nathan T; Tominaga, Gail T; Gunter, Oliver; Osler, Turner M; Ciesla, David J; Agarwal, Suresh K; Inaba, Kenji; Aboutanos, Michel B; Brown, Carlos V R; Ross, Steven E; Crandall, Marie L; Shafi, Shahid

    2015-05-01

    The American Association for the Surgery of Trauma (AAST) recently established a grading system for uniform reporting of the anatomic severity of several emergency general surgery (EGS) diseases. There are five grades of severity for each disease, ranging from I (lowest severity) to V (highest severity). However, the grading process requires manual chart review. We sought to evaluate whether International Classification of Diseases, 9th and 10th Revisions, Clinical Modification (ICD-9-CM, ICD-10-CM) codes might allow estimation of AAST grades for EGS diseases. The Patient Assessment and Outcomes Committee of the AAST reviewed all available ICD-9-CM and ICD-10-CM diagnosis codes relevant to 16 EGS diseases with available AAST grades. We then matched grades for each EGS disease with one or more ICD codes. We used the Official Coding Guidelines for ICD-9-CM and ICD-10-CM and the American Hospital Association's "Coding Clinic for ICD-9-CM" for coding guidance. The ICD codes did not allow for matching all five AAST grades of severity for each of the 16 diseases. With ICD-9-CM, six diseases mapped into four categories of severity (instead of five), another six diseases into three categories of severity, and four diseases into only two categories of severity. With ICD-10-CM, five diseases mapped into four categories of severity, seven diseases into three categories, and four diseases into two categories. Two diseases mapped into discontinuous categories of grades (two in ICD-9-CM and one in ICD-10-CM). Although resolution is limited, ICD-9-CM and ICD-10-CM diagnosis codes might have some utility in roughly approximating the AAST grades in the absence of more precise information. These ICD mappings should be validated and refined before widespread use to characterize EGS disease severity. In the long term, it may be desirable to develop alternatives to ICD-9-CM and ICD-10-CM codes for routine collection of disease severity characteristics.
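
    The mapping itself is essentially a lookup table in which several AAST grades collapse onto one ICD code, which is exactly where resolution is lost. A hypothetical sketch of the concept (the groupings below are invented for illustration, not the committee's published mapping):

```python
# Hypothetical grade-to-code table: when two AAST grades share one ICD-9-CM
# code, the coded data cannot distinguish them. Groupings are invented.
AAST_TO_ICD9 = {
    "appendicitis": {
        ("I", "II"): "540.9",   # acute appendicitis, no peritonitis
        ("III",):    "540.1",   # with peritoneal abscess
        ("IV", "V"): "540.0",   # with generalized peritonitis
    },
}

def icd_for_grade(disease, grade):
    for grades, code in AAST_TO_ICD9[disease].items():
        if grade in grades:
            return code
    return None

print(icd_for_grade("appendicitis", "I") == icd_for_grade("appendicitis", "II"))
# True: grades I and II are indistinguishable once coded
```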

  8. Efficient Skeletonization of Volumetric Objects.

    PubMed

    Zhou, Yong; Toga, Arthur W

    1999-07-01

    Skeletonization promises to become a powerful tool for compact shape description, path planning, and other applications. However, current techniques can seldom efficiently process real, complicated 3D data sets, such as MRI and CT data of human organs. In this paper, we present an efficient voxel-coding-based algorithm for skeletonization of 3D voxelized objects. The skeletons are interpreted as connected centerlines, consisting of sequences of medial points of consecutive clusters. These centerlines are initially extracted as paths of voxels, followed by medial-point replacement, refinement, smoothing, and connection operations. Voxel-coding techniques are proposed for each of these operations in a uniform and systematic fashion. In addition to preserving basic connectivity and centeredness, the algorithm is characterized by straightforward computation, insensitivity to object boundary complexity, explicit extraction of ready-to-parameterize and branch-controlled skeletons, and efficient object hole detection. These issues are rarely discussed in traditional methods. A range of 3D medical MRI and CT data sets were used for testing the algorithm, demonstrating its utility.
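
    The essence of voxel coding is a breadth-first labeling of the object followed by steepest-descent backtracking; a minimal sketch under those assumptions (the cluster-based medial-point refinement and branch control of the full algorithm are omitted):

```python
# Voxel coding in miniature: a BFS from a seed assigns each object voxel a
# code (its BFS distance); following steepest descent of the code from the
# farthest voxel back to the seed traces one centerline path.
from collections import deque
import numpy as np

OFFSETS = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]

def neighbors(p, shape):
    for d in OFFSETS:
        n = tuple(p[i] + d[i] for i in range(3))
        if all(0 <= n[i] < shape[i] for i in range(3)):
            yield n

def voxel_codes(vol, seed):
    codes = -np.ones(vol.shape, dtype=int)   # -1 marks background/unvisited
    codes[seed] = 0
    q = deque([seed])
    while q:
        p = q.popleft()
        for n in neighbors(p, vol.shape):
            if vol[n] and codes[n] < 0:
                codes[n] = codes[p] + 1
                q.append(n)
    return codes

def centerline(codes):
    p = tuple(np.unravel_index(np.argmax(codes), codes.shape))
    path = [p]
    while codes[p] > 0:   # steepest descent of the code back to the seed
        p = min((n for n in neighbors(p, codes.shape) if codes[n] >= 0),
                key=lambda n: codes[n])
        path.append(p)
    return path

vol = np.zeros((3, 3, 7), dtype=bool)
vol[1, 1, :] = True                              # a straight tube of voxels
print(centerline(voxel_codes(vol, (1, 1, 0))))   # medial path along the tube
```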

  9. Orion Service Module Reaction Control System Plume Impingement Analysis Using PLIMP/RAMP2

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen J.; Gati, Frank; Yuko, James R.; Motil, Brian J.; Lumpkin, Forrest E.

    2009-01-01

    The Orion Crew Exploration Vehicle Service Module Reaction Control System engine plume impingement was computed using the plume impingement program (PLIMP). PLIMP uses the plume solution from RAMP2, which is a refined version of the reacting and multiphase program (RAMP) code. The heating rate and pressure (force and moment) on surfaces or components of the Service Module were computed. The RAMP2 solution of the flow field inside the engine and the plume was compared with that computed using GASP, a computational fluid dynamics code, showing reasonable agreement. The computed heating rate and pressure using PLIMP were compared with the Reaction Control System plume model (RPM) solution and the plume impingement dynamics (PIDYN) solution. RPM uses the GASP-based plume solution, whereas PIDYN uses the SCARF plume solution. The three sets of heating rate and pressure solutions agree well. Further thermal analysis on the avionics ring of the Service Module showed that thermal protection is necessary because of significant heating from the plume.

  10. Modeling Thermal Noise from Crystalline Coatings for Gravitational-Wave Detectors

    NASA Astrophysics Data System (ADS)

    Demos, Nicholas; Lovelace, Geoffrey; LSC Collaboration

    2016-03-01

    Current and future ground-based gravitational-wave detectors are limited in sensitivity, in part, by Brownian and thermoelastic noise in each detector's mirror substrate and coating. Crystalline mirror coatings could potentially reduce thermal noise, but thermal noise is challenging to model analytically in the case of crystalline materials. Thermal noise can be modeled using the fluctuation-dissipation theorem, which relates thermal noise to an auxiliary elastic problem. In this poster, I will present results from a new code that models thermal noise by numerically solving the auxiliary elastic problem for various types of crystalline mirror coatings. The code uses a finite element method with adaptive mesh refinement to model the auxiliary elastic problem, whose solution is then related to thermal noise. I will present preliminary results for a crystalline coating on a fused silica substrate of varying sizes and elastic properties. This and future work will help develop the next generation of ground-based gravitational-wave detectors.
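
    The auxiliary-elastic-problem route is presumably the standard Levin formulation of the fluctuation-dissipation theorem: apply an oscillating pressure with the laser beam's intensity profile and total amplitude F_0 at frequency f, compute the cycle-averaged dissipated power W_diss, and read off the displacement noise spectral density:

```latex
% Levin's form of the fluctuation-dissipation theorem (assumed formulation):
S_x(f) \;=\; \frac{2 k_B T}{\pi^2 f^2}\,\frac{W_{\mathrm{diss}}}{F_0^2}
```

    Under that reading, the finite-element AMR solve only needs to evaluate W_diss for each coating and substrate configuration.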

  11. Optical image cryptosystem using chaotic phase-amplitude masks encoding and least-data-driven decryption by compressive sensing

    NASA Astrophysics Data System (ADS)

    Lang, Jun; Zhang, Jing

    2015-03-01

    In our proposed optical image cryptosystem, two pairs of phase-amplitude masks are generated from the chaotic web map for image encryption in the 4f double random phase-amplitude encoding (DRPAE) system. Instead of transmitting the real keys and the enormous mask codes, only a few observed measurements intermittently chosen from the masks are delivered. Based on the compressive sensing paradigm, we suitably refine the series expansions of the web-map equations to better reconstruct the underlying system. The parameters of the chaotic equations can be successfully calculated from the observed measurements and then used to regenerate the correct random phase-amplitude masks for decrypting the encoded information. Numerical simulations have been performed to verify the proposed optical image cryptosystem. This cryptosystem can provide a new key management and distribution method. Its advantages are a substantially reduced volume of transmitted key data and improved security of information transmission, since the real keys are never sent.

  12. A COMPARISON OF EXPERIMENTS AND THREE-DIMENSIONAL ANALYSIS TECHNIQUES. PART II. UNPOISONED UNIFORM SLAB CORE WITH A PARTIALLY INSERTED HAFNIUM ROD AND A PARTIALLY INSERTED WATER GAP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roseberry, R.J.

    The experimental measurements and nuclear analysis of a uniformly loaded, unpoisoned slab core with a partially inserted hafnium rod and/or a partially inserted water gap are described. Comparisons of experimental data with calculated results of the UFO core and flux synthesis techniques are given. It is concluded that one of the flux synthesis techniques and the UFO code are able to predict flux distributions to within approximately -5% of experiment for most cases, with a maximum error of approximately -10% for a channel at the core-reflector boundary. The second synthesis technique failed to give comparable agreement with experiment even when various refinements were used, e.g. increasing the number of mesh points, performing the flux synthesis technique of iteration, and spectrum-weighting the appropriate calculated fluxes through the use of the SWAKRAUM code. These results are comparable to those reported in Part I of this study. (auth)

  13. Modeling Solar Wind Flow with the Multi-Scale Fluid-Kinetic Simulation Suite

    DOE PAGES

    Pogorelov, N.V.; Borovikov, S. N.; Bedford, M. C.; ...

    2013-04-01

    Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) is a package of numerical codes capable of performing adaptive mesh refinement simulations of complex plasma flows in the presence of discontinuities and charge exchange between ions and neutral atoms. The flow of the ionized component is described with the ideal MHD equations, while the transport of atoms is governed either by the Boltzmann equation or by multiple Euler gas dynamics equations. We have enhanced the code with additional physical treatments for the transport of turbulence and the acceleration of pickup ions in interplanetary space and at the termination shock. In this article, we present the results of our numerical simulation of the solar wind (SW) interaction with the local interstellar medium (LISM) in different time-dependent and stationary formulations. Numerical results are compared with the Ulysses, Voyager, and OMNI observations. Finally, the SW boundary conditions are derived from in-situ spacecraft measurements and remote observations.

  14. Constraining the atmosphere of exoplanet WASP-34b

    NASA Astrophysics Data System (ADS)

    Challener, Ryan; Harrington, Joseph; Cubillos, Patricio; Garland, Justin; Foster, Andrew S. D.; Blecic, Jasmina; Foster, Austin James; Smalley, Barry

    2016-01-01

    WASP-34b is a short-period exoplanet with a mass of 0.59 +/- 0.01 Jupiter masses orbiting a G5 star with a period of 4.3177 days and an eccentricity of 0.038 +/- 0.012 (Smalley, 2010). We observed WASP-34b using the 3.6 and 4.5 micron channels of the Infrared Array Camera aboard the Spitzer Space Telescope in 2010 (Program 60003). We applied our Photometry for Orbits, Eclipses, and Transits (POET) code to present eclipse-depth measurements, estimates of infrared brightness temperatures, and a refined orbit. With our Bayesian Atmospheric Radiative Transfer (BART) code, we characterized the atmosphere's temperature and pressure profile, and molecular abundances. Spitzer is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. J. Blecic holds a NASA Earth and Space Science Fellowship.
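
    Converting an eclipse depth into a brightness temperature amounts to inverting a Planck-function ratio, assuming blackbody emission from planet and star; a sketch with illustrative placeholder parameters (not the published WASP-34 values):

```python
# Band-center brightness temperature from an eclipse depth, assuming
# blackbody emission: depth = (Rp/Rs)^2 * B(Tp)/B(Ts). Inputs below are
# illustrative placeholders, not the measured WASP-34 system parameters.
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def brightness_temp(depth, wav, T_star, rp_rs):
    """depth: eclipse depth Fp/Fs; wav: wavelength [m]; rp_rs: Rp/Rs."""
    x = h * c / (wav * k)
    r = depth / rp_rs**2          # planet/star surface-brightness ratio
    return x / math.log1p((math.exp(x / T_star) - 1.0) / r)

print(brightness_temp(depth=0.001, wav=4.5e-6, T_star=5700.0, rp_rs=0.11))
# ~1400 K, a plausible dayside temperature for a short-period gas giant
```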

  15. Secondary eclipse observations and the atmosphere of exoplanet WASP-34b

    NASA Astrophysics Data System (ADS)

    Challener, Ryan C.; Harrington, Joseph; Cubillos, Patricio; Garland, Justin; Foster, Andrew S. D.; Blecic, Jasmina; Foster, AJ; Smalley, Barry

    2015-11-01

    WASP-34b is a short-period exoplanet with a mass of 0.59 ± 0.01 Jupiter masses orbiting a G5 star with a period of 4.3177 days and an eccentricity of 0.038 ± 0.012 (Smalley, 2010). We observed WASP-34b using the 3.6 and 4.5 μm channels of the Infrared Array Camera aboard the Spitzer Space Telescope in 2010 (Program 60003). We applied our Photometry for Orbits, Eclipses, and Transits (POET) code to present eclipse-depth measurements, estimates of infrared brightness temperatures, and a refined orbit. With our Bayesian Atmospheric Radiative Transfer (BART) code, we characterized the atmosphere's temperature and pressure profile, and molecular abundances. Spitzer is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. J. Blecic holds a NASA Earth and Space Science Fellowship.

  16. Patterns of protective factors in an intervention for the prevention of suicide and alcohol abuse with Yup'ik Alaska Native youth.

    PubMed

    Henry, David; Allen, James; Fok, Carlotta Ching Ting; Rasmus, Stacy; Charles, Bill

    2012-09-01

    Community-based participatory research (CBPR) with American Indian and Alaska Native communities creates distinct interventions, complicating cross-setting comparisons. The objective of this study is to develop a method for quantifying intervention exposure in CBPR interventions that differ in their forms across communities, permitting multi-site evaluation. Attendance data from 195 youth from three Yup'ik communities were coded for the specific protective factor exposure of each youth, based on information from the intervention manual. The coded attendance data were then submitted to latent class analysis to obtain participation patterns. Five patterns of exposure to protective factors were obtained: Internal, External, Limits, Community/family, and Low Protection. Patterns differed significantly by community and youth age. Standardizing interventions by the functions an intervention serves (protective factors promoted) instead of their forms or components (specific activities) can assist in refining CBPR interventions and evaluating effects in culturally distinct settings.

  17. Simulations of a Molecular Cloud experiment using CRASH

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Keiter, Paul; Vandervort, Robert; Drake, R. Paul; Shvarts, Dov

    2017-10-01

    Recent laboratory experiments explore molecular cloud radiation hydrodynamics. The experiment irradiates a gold foil with a laser, producing x-rays that drive the implosion or explosion of a foam ball. The CRASH code, an Eulerian code developed at the University of Michigan to design and analyze high-energy-density experiments, with block-adaptive mesh refinement, multigroup diffusive radiation transport, and electron heat conduction, is used to perform a parameter search identifying the optically thick, optically thin, and transition regimes suitable for these experiments. Specific design issues addressed by the simulations are the x-ray drive temperature, the foam density, and the distance from the x-ray source to the ball, as well as complicating issues such as the positioning of the stalk holding the foam ball. We present the results of this study and show ways the simulations helped improve the quality of the experiment. This work is funded by the LLNL under subcontract B614207 and NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, Grant Number DE-NA0002956.

  18. High-fidelity simulations of blast loadings in urban environments using an overset meshing strategy

    NASA Astrophysics Data System (ADS)

    Wang, X.; Remotigue, M.; Arnoldus, Q.; Janus, M.; Luke, E.; Thompson, D.; Weed, R.; Bessette, G.

    2017-05-01

    Detailed blast propagation and evolution through multiple structures representing an urban environment were simulated using the code Loci/BLAST, which employs an overset meshing strategy. The use of overset meshes simplifies mesh generation by allowing meshes for individual component geometries to be generated independently. Blast propagation and evolution through the structures, wave reflection and interaction between structures, and blast loadings on the structures were simulated and analyzed. Predicted results showed good agreement with experimental data generated by the US Army Engineer Research and Development Center. Loci/BLAST results were also found to compare favorably to simulations obtained using the Second-Order Hydrodynamic Automatic Mesh Refinement Code (SHAMRC). The results obtained demonstrated that blast reflections in an urban setting significantly increase the blast loads on adjacent buildings. Correlations of computational results with experimental data yielded valuable insights into the physics of blast propagation, reflection, and interaction in an urban setting and verified the use of Loci/BLAST as a viable tool for urban blast analysis.

  19. Refining an intervention for students with emotional disturbance using qualitative parent and teacher data

    PubMed Central

    Nese, Rhonda N.T.; Palinkas, Lawrence A.; Ruppert, Traci

    2017-01-01

    Intensive supports are needed for students with emotional disturbance during high-risk transitions. Such interventions are most likely to be successful if they address stakeholder perspectives during the development process. This paper discusses qualitative findings from an iterative intervention development project designed to incorporate parent and teacher feedback early in the development process with applications relevant to the adoption of new programs. Using maximum variation purposive sampling, we solicited feedback from five foster/kinship parents, four biological parents and seven teachers to evaluate the feasibility and utility of the Students With Involved Families and Teachers (SWIFT) intervention in home and school settings. SWIFT provides youth and parent skills coaching in the home and school informed by weekly student behavioral progress monitoring. Participants completed semi-structured interviews that were transcribed and coded via an independent co-coding strategy. The findings provide support for school-based interventions involving family participation and lessons to ensure intervention success. PMID:28966422

  20. FY08 LDRD Final Report A New Method for Wave Propagation in Elastic Media LDRD Project Tracking Code: 05-ERD-079

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petersson, A

    The LDRD project 'A New Method for Wave Propagation in Elastic Media' developed several improvements to the traditional finite difference technique for seismic wave propagation, including a summation-by-parts discretization which is provably stable for arbitrary heterogeneous materials, an accurate treatment of non-planar topography, local mesh refinement, and stable outflow boundary conditions. This project also implemented these techniques in a parallel open source computer code called WPP, and participated in several seismic modeling efforts to simulate ground motion due to earthquakes in Northern California. This research has been documented in six individual publications which are summarized in this report. Of these publications, four are published refereed journal articles, one is an accepted refereed journal article which has not yet been published, and one is a non-refereed software manual. The report concludes with a discussion of future research directions and an exit plan.
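
    The provable stability rests on the summation-by-parts structure D = H^{-1} Q with Q + Q^T reducing to a boundary matrix; a minimal second-order sketch of such an operator (a generic textbook construction, not WPP's actual stencils):

```python
# Second-order summation-by-parts (SBP) first-derivative operator
# D = H^{-1} Q on a uniform grid: Q + Q^T equals the boundary matrix
# diag(-1, 0, ..., 0, 1), the identity that discrete energy estimates
# (and hence provable stability) rest on.
import numpy as np

def sbp_first_derivative(n, h):
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2.0               # boundary-weighted norm
    Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))
    Q[0, 0], Q[-1, -1] = -0.5, 0.5              # one-sided boundary closures
    return np.linalg.inv(H) @ Q, H, Q

D, H, Q = sbp_first_derivative(8, 0.1)
B = Q + Q.T
x = np.linspace(0.0, 0.7, 8)
print(np.allclose(B, np.diag([-1.0] + [0.0] * 6 + [1.0])))  # True
print(D @ x)  # derivative of a linear function: all ones
```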

  1. Precise and Efficient Static Array Bound Checking for Large Embedded C Programs

    NASA Technical Reports Server (NTRS)

    Venet, Arnaud

    2004-01-01

    In this paper we describe the design and implementation of a static array-bound checker for a family of embedded programs: the flight control software of recent Mars missions. These codes are large (up to 250 KLOC), pointer intensive, heavily multithreaded and written in an object-oriented style, which makes their analysis very challenging. We designed a tool called C Global Surveyor (CGS) that can analyze the largest code in a couple of hours with a precision of 80%. The scalability and precision of the analyzer are achieved by using an incremental framework in which a pointer analysis and a numerical analysis of array indices mutually refine each other. CGS has been designed so that it can distribute the analysis over several processors in a cluster of machines. To the best of our knowledge this is the first distributed implementation of static analysis algorithms. Throughout the paper we will discuss the scalability setbacks that we encountered during the construction of the tool and their impact on the initial design decisions.
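
    At its core, static array-bound checking of this kind evaluates index expressions over an interval abstract domain; a toy sketch of the principle (CGS's real abstract domains and the mutually refining pointer/numerical analyses are far richer):

```python
# Toy interval domain: each variable is tracked as a [lo, hi] range, and an
# access a[i] is provably safe when the interval of i lies within
# [0, len(a) - 1]. Only the principle is shown here.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def within(self, lo, hi):
        return lo <= self.lo and self.hi <= hi

i = Interval(0, 9)       # loop counter known to range over 0..9
offset = Interval(0, 1)  # flag that may add one
idx = i + offset         # abstract evaluation: [0, 10]
print(idx.within(0, 9))  # False -> potential out-of-bounds access flagged
```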

  2. A Model-Based Approach for Bridging Virtual and Physical Sensor Nodes in a Hybrid Simulation Framework

    PubMed Central

    Mozumdar, Mohammad; Song, Zhen Yu; Lavagno, Luciano; Sangiovanni-Vincentelli, Alberto L.

    2014-01-01

    The Model Based Design (MBD) approach is a popular trend to speed up application development of embedded systems, which uses high-level abstractions to capture functional requirements in an executable manner, and which automates implementation code generation. Wireless Sensor Networks (WSNs) are an emerging and very promising application area for embedded systems. However, there is a lack of tools in this area which would allow an application developer to model a WSN application by using high-level abstractions, simulate it mapped to a multi-node scenario for functional analysis, and finally use the refined model to automatically generate code for different WSN platforms. Motivated by this idea, in this paper we present a hybrid simulation framework that not only follows the MBD approach for WSN application development, but also interconnects a simulated sub-network with a physical sub-network and then allows one to co-simulate them, which is also known as Hardware-In-the-Loop (HIL) simulation. PMID:24960083

  3. Lower Limits on Aperture Size for an ExoEarth Detecting Coronagraphic Mission

    NASA Technical Reports Server (NTRS)

    Stark, Christopher C.; Roberge, Aki; Mandell, Avi; Clampin, Mark; Domagal-Goldman, Shawn D.; McElwain, Michael W.; Stapelfeldt, Karl R.

    2015-01-01

    The yield of Earth-like planets will likely be a primary science metric for future space-based missions that will drive telescope aperture size. Maximizing the exoEarth candidate yield is therefore critical to minimizing the required aperture. Here we describe a method for exoEarth candidate yield maximization that simultaneously optimizes, for the first time, the targets chosen for observation, the number of visits to each target, the delay time between visits, and the exposure time of every observation. This code calculates both the detection time and multiwavelength spectral characterization time required for planets. We also refine the astrophysical assumptions used as inputs to these calculations, relying on published estimates of planetary occurrence rates as well as theoretical and observational constraints on terrestrial planet sizes and classical habitable zones. Given these astrophysical assumptions, optimistic telescope and instrument assumptions, and our new completeness code that produces the highest yields to date, we suggest lower limits on the aperture size required to detect and characterize a statistically motivated sample of exoEarths.
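
    The optimization described is, at heart, an allocation of a fixed time budget across targets with saturating completeness curves; a generic greedy sketch with invented stand-in curves (not the authors' yield model or their simultaneous optimizer):

```python
# Greedy time allocation: repeatedly give the next slice of exposure time to
# whichever target gains the most completeness per hour. The saturating
# curves below are invented stand-ins for a real yield model.
import math

budget, step = 200.0, 1.0     # total budget and allocation step [hours]
c_max = [0.9, 0.7, 0.5]       # asymptotic completeness per target
tau = [40.0, 20.0, 10.0]      # e-folding exposure times [hours]
t = [0.0, 0.0, 0.0]           # accumulated exposure time per target

def completeness(j, tj):
    return c_max[j] * (1.0 - math.exp(-tj / tau[j]))

while budget > 0:
    gains = [completeness(j, t[j] + step) - completeness(j, t[j])
             for j in range(3)]
    j = max(range(3), key=lambda j: gains[j])   # best marginal gain
    t[j] += step
    budget -= step

print(t, sum(completeness(j, t[j]) for j in range(3)))
```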

  4. The HIV Prison Paradox: Agency and HIV-Positive Women's Experiences in Jail and Prison in Alabama.

    PubMed

    Sprague, Courtenay; Scanlon, Michael L; Radhakrishnan, Bharathi; Pantalone, David W

    2017-08-01

    Incarcerated women face significant barriers to achieve continuous HIV care. We employed a descriptive, exploratory design using qualitative methods and the theoretical construct of agency to investigate participants' self-reported experiences accessing HIV services in jail, in prison, and post-release in two Alabama cities. During January 2014, we conducted in-depth interviews with 25 formerly incarcerated HIV-positive women. Two researchers completed independent coding, producing preliminary codes from transcripts using content analysis. Themes were developed iteratively, verified, and refined. They encompassed (a) special rules for HIV-positive women: isolation, segregation, insults, food rationing, and forced disclosure; (b) absence of counseling following initial HIV diagnosis; and (c) HIV treatment impediments: delays, interruption, and denial. Participants deployed agentic strategies of accommodation, resistance, and care-seeking to navigate the social world of prison and HIV services. Findings illuminate the "HIV prison paradox": the chief opportunities that remain unexploited to engage and re-engage justice-involved women in the HIV care continuum.

  5. Biological and bionic hands: natural neural coding and artificial perception.

    PubMed

    Bensmaia, Sliman J

    2015-09-19

    The first decade and a half of the twenty-first century brought about two major innovations in neuroprosthetics: the development of anthropomorphic robotic limbs that replicate much of the function of a native human arm and the refinement of algorithms that decode intended movements from brain activity. However, skilled manipulation of objects requires somatosensory feedback, for which vision is a poor substitute. For upper-limb neuroprostheses to be clinically viable, they must therefore provide for the restoration of touch and proprioception. In this review, I discuss efforts to elicit meaningful tactile sensations through stimulation of neurons in somatosensory cortex. I focus on biomimetic approaches to sensory restoration, which leverage our current understanding about how information about grasped objects is encoded in the brain of intact individuals. I argue that not only can sensory neuroscience inform the development of sensory neuroprostheses, but also that the converse is true: stimulating the brain offers an exceptional opportunity to causally interrogate neural circuits and test hypotheses about natural neural coding.

  6. Orbit Refinement of Asteroids and Comets Using a Robotic Telescope Network

    NASA Astrophysics Data System (ADS)

    Lantz Caughey, Austin; Brown, Johnny; Puckett, Andrew W.; Hoette, Vivian L.; Johnson, Michael; McCarty, Cameron B.; Whitmore, Kevin; UNC-Chapel Hill SKYNET Team

    2016-01-01

    We report on a multi-semester project to refine the orbits of asteroids and comets in our Solar System. One of the newest fields of research for undergraduate Astrophysics students at Columbus State University is that of asteroid astrometry. By measuring the positions of an asteroid in a set of images, we can reduce the overall uncertainty in the accepted orbital parameters of that object. These measurements, using our WestRock Observatory (WRO) and several other telescopes around the world, are being published through the Minor Planet Center (MPC) and benefit the global community. Three different methods are used to obtain these observations. First, we use our own 24-inch telescope at WRO, located at CSU's Coca-Cola Space Science Center in downtown Columbus, Georgia. Second, we have access to data from the 20-inch telescope at Stone Edge Observatory in El Verano, California. Finally, we may request images remotely using Skynet, an online worldwide network of robotic telescopes. Our primary and long-time collaborator on Skynet has been the "41-inch" reflecting telescope at Yerkes Observatory in Williams Bay, Wisconsin. Thus far, we have used these various telescopes to refine the orbits of more than 15 asteroids and comets. We have also confirmed the resulting reduction in orbit-model uncertainties using Monte Carlo simulations and orbit visualizations, using Find_Orb and OrbitMaster software, respectively. Before any observatory site can be used for official orbit refinement projects, it must first become a trusted source of astrometry data for the MPC. We have therefore obtained Observatory Codes not only for our own WestRock Observatory (W22), but also for three Skynet telescopes that we may use in the future: Dark Sky Observatory in Boone, North Carolina (W38); Hume Observatory in Santa Rosa, California (U54); and Athabasca University Geophysical Observatory in Athabasca, Alberta, Canada (U96).

  7. Exploring newly qualified doctors' workplace stressors: an interview study from Australia

    PubMed Central

    Tallentire, Victoria R; Smith, Samantha E; Facey, Adam D; Rotstein, Laila

    2017-01-01

    Purpose Postgraduate year 1 (PGY1) doctors suffer from high levels of psychological distress, yet the contributory factors are poorly understood. This study used an existing model of workplace stress to explore the elements most pertinent to PGY1 doctors. In turn, the data were used to amend and refine the conceptual model to better reflect the unique experiences of PGY1 doctors. Method Focus groups were undertaken with PGY1 doctors working at four different health services in Victoria, Australia. Transcripts were coded using Michie's model of workplace stress as the initial coding template. Remaining text was coded inductively and the supplementary codes were used to modify and amplify Michie's framework. Results There were 37 participants in total. Key themes included stressors intrinsic to the job, such as work overload and long hours, as well as those related to the context of work such as lack of role clarity and relationships with colleagues. The main modification to Michie's framework was the addition of the theme of uncertainty. This concept related to most of the pre-existing themes in complex ways, culminating in an overall sense of anxiety. Conclusions Michie's model of workplace stress can be effectively used to explore the stressors experienced by PGY1 doctors. Pervasive uncertainty may help to explain the high levels of psychological morbidity in this group. While some uncertainty will always remain, the medical education community must seek ways to improve role clarity and promote mutual respect. PMID:28801411

  8. Refined mapping of autoimmune disease associated genetic variants with gene expression suggests an important role for non-coding RNAs.

    PubMed

    Ricaño-Ponce, Isis; Zhernakova, Daria V; Deelen, Patrick; Luo, Oscar; Li, Xingwang; Isaacs, Aaron; Karjalainen, Juha; Di Tommaso, Jennifer; Borek, Zuzanna Agnieszka; Zorro, Maria M; Gutierrez-Achury, Javier; Uitterlinden, Andre G; Hofman, Albert; van Meurs, Joyce; Netea, Mihai G; Jonkers, Iris H; Withoff, Sebo; van Duijn, Cornelia M; Li, Yang; Ruan, Yijun; Franke, Lude; Wijmenga, Cisca; Kumar, Vinod

    2016-04-01

    Genome-wide association and fine-mapping studies in 14 autoimmune diseases (AID) have implicated more than 250 loci in one or more of these diseases. As more than 90% of AID-associated SNPs are intergenic or intronic, pinpointing the causal genes is challenging. We performed a systematic analysis to link 460 SNPs that are associated with 14 AID to causal genes using transcriptomic data from 629 blood samples. We were able to link 71 (39%) of the AID-SNPs to two or more nearby genes, providing evidence that for part of the AID loci multiple causal genes exist. While 54 of the AID loci are shared by one or more AID, 17% of them do not share candidate causal genes. In addition to finding novel genes such as ULK3, we also implicate novel disease mechanisms and pathways, such as autophagy, in celiac disease pathogenesis. Furthermore, 42 of the AID SNPs specifically affected the expression of 53 non-coding RNA genes. To further understand how the non-coding genome contributes to AID, the SNPs were linked to functional regulatory elements, suggesting a model in which AID genes are regulated by a network of chromatin looping/non-coding RNA interactions. The looping model also explains why a causal candidate gene is not necessarily the gene closest to the AID SNP, which was the case for nearly 50% of the loci. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Structure of the solar photosphere studied from the radiation hydrodynamics code ANTARES.

    PubMed

    Leitner, P; Lemmerer, B; Hanslmeier, A; Zaqarashvili, T; Veronig, A; Grimm-Strele, H; Muthsam, H J

    2017-01-01

    The ANTARES radiation hydrodynamics code is capable of simulating the solar granulation in detail unequaled by direct observation. We introduce a state-of-the-art numerical tool to the solar physics community and demonstrate its applicability to model the solar granulation. The code is based on the weighted essentially non-oscillatory finite volume method and by its implementation of local mesh refinement is also capable of simulating turbulent fluids. While the ANTARES code already provides promising insights into small-scale dynamical processes occurring in the quiet-Sun photosphere, it will soon be capable of modeling the latter in the scope of radiation magnetohydrodynamics. In this first preliminary study we focus on the vertical photospheric stratification by examining a 3-D model photosphere with an evolution time much larger than the dynamical timescales of the solar granulation and of particular large horizontal extent corresponding to 25''×25'' on the solar surface to smooth out horizontal spatial inhomogeneities separately for up- and downflows. The highly resolved Cartesian grid thereby covers ∼4 Mm of the upper convection zone and the adjacent photosphere. Correlation analysis, both local and two-point, provides a suitable means to probe the photospheric structure and thereby to identify several layers of characteristic dynamics: The thermal convection zone is found to reach some ten kilometers above the solar surface, while convectively overshooting gas penetrates even higher into the low photosphere. An ≈145 km wide transition layer separates the convective from the oscillatory layers in the higher photosphere.

  10. Structure of the solar photosphere studied from the radiation hydrodynamics code ANTARES

    NASA Astrophysics Data System (ADS)

    Leitner, P.; Lemmerer, B.; Hanslmeier, A.; Zaqarashvili, T.; Veronig, A.; Grimm-Strele, H.; Muthsam, H. J.

    2017-09-01

    The ANTARES radiation hydrodynamics code is capable of simulating the solar granulation in detail unequaled by direct observation. We introduce a state-of-the-art numerical tool to the solar physics community and demonstrate its applicability to model the solar granulation. The code is based on the weighted essentially non-oscillatory finite volume method and by its implementation of local mesh refinement is also capable of simulating turbulent fluids. While the ANTARES code already provides promising insights into small-scale dynamical processes occurring in the quiet-Sun photosphere, it will soon be capable of modeling the latter in the scope of radiation magnetohydrodynamics. In this first preliminary study we focus on the vertical photospheric stratification by examining a 3-D model photosphere with an evolution time much larger than the dynamical timescales of the solar granulation and of particular large horizontal extent corresponding to 25''×25'' on the solar surface to smooth out horizontal spatial inhomogeneities separately for up- and downflows. The highly resolved Cartesian grid thereby covers ˜4 Mm of the upper convection zone and the adjacent photosphere. Correlation analysis, both local and two-point, provides a suitable means to probe the photospheric structure and thereby to identify several layers of characteristic dynamics: The thermal convection zone is found to reach some ten kilometers above the solar surface, while convectively overshooting gas penetrates even higher into the low photosphere. An ≈145 km wide transition layer separates the convective from the oscillatory layers in the higher photosphere.
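
    The weighted essentially non-oscillatory finite volume method named in both ANTARES records has a compact, standard kernel; a sketch of the classic fifth-order Jiang-Shu reconstruction at a cell interface (a textbook construction, not ANTARES source code):

```python
# Fifth-order WENO (Jiang-Shu) reconstruction at the right interface i+1/2
# from the five cell averages v = [v_{i-2}, ..., v_{i+2}]: three third-order
# candidate stencils are blended with weights biased toward smooth data.
import numpy as np

def weno5(v, eps=1e-6):
    # three third-order candidate reconstructions
    p0 = (2*v[0] - 7*v[1] + 11*v[2]) / 6.0
    p1 = ( -v[1] + 5*v[2] +  2*v[3]) / 6.0
    p2 = (2*v[2] + 5*v[3] -    v[4]) / 6.0
    # Jiang-Shu smoothness indicators
    b0 = 13/12*(v[0]-2*v[1]+v[2])**2 + 0.25*(v[0]-4*v[1]+3*v[2])**2
    b1 = 13/12*(v[1]-2*v[2]+v[3])**2 + 0.25*(v[1]-v[3])**2
    b2 = 13/12*(v[2]-2*v[3]+v[4])**2 + 0.25*(3*v[2]-4*v[3]+v[4])**2
    # nonlinear weights from the ideal weights (0.1, 0.6, 0.3)
    a = np.array([0.1/(eps+b0)**2, 0.6/(eps+b1)**2, 0.3/(eps+b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2

print(weno5(np.ones(5)))                   # smooth data: exactly 1.0
print(weno5(np.array([0, 0, 0, 1, 1.0])))  # discontinuity: no overshoot
```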

  11. Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST/1991

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.

    1991-01-01

    A revision of MASTERFIT-1987, which it supersedes, is presented. Changes during 1988 to 1991 included the introduction of the octupole component of solid Earth tides, the NUVEL tectonic motion model, partial derivatives for the precession constant and source position rates, the option to correct for source structure, a refined model for antenna offsets, modeling of the unique antenna at Richmond, FL, the improved nutation series due to Zhu, Groten, and Reigber, and the reintroduction of the old (Woolard) nutation series for simulation purposes. Text describing the relativistic transformations and gravitational contributions to the delay model was also revised in order to reflect the computer code more faithfully.

  12. Function Model for Community Health Service Information

    NASA Astrophysics Data System (ADS)

    Yang, Peng; Pan, Feng; Liu, Danhong; Xu, Yongyong

    In order to construct a function model of community health service (CHS) information for the development of a CHS information management system, Integration Definition for Function Modeling (IDEF0), an IEEE standard extended from the Structured Analysis and Design Technique (SADT) and now a widely used function modeling method, was used to classify its information from top to bottom. The contents of every level of the model were described and coded. A function model for CHS information, which includes 4 super-classes, 15 classes, and 28 sub-classes of business functions, 43 business processes, and 168 business activities, was then established. This model can facilitate information management system development and workflow refinement.

  13. Effects of geometry on blast-induced loadings

    NASA Astrophysics Data System (ADS)

    Moore, Christopher Dyer

    Simulations of blasts in an urban environment were performed using Loci/BLAST, a full-featured fluid dynamics simulation code, and analyzed. A two-structure urban environment blast case was used to perform a mesh refinement study. Results show that mesh spacing on and around the structure must be 12.5 cm or less to resolve fluid dynamic features sufficiently to yield accurate results. The effects of confinement were illustrated by analyzing a blast initiated from the same location with and without the presence of a neighboring structure. Analysis of extreme pressures and impulses on structures showed that confinement can increase blast loading by more than 200 percent.

  14. Turbulence Modeling: Progress and Future Outlook

    NASA Technical Reports Server (NTRS)

    Marvin, Joseph G.; Huang, George P.

    1996-01-01

    Progress in the development of the hierarchy of turbulence models for Reynolds-averaged Navier-Stokes codes used in aerodynamic applications is reviewed. Steady progress is demonstrated, but transfer of the modeling technology has not kept pace with the development and demands of the computational fluid dynamics (CFD) tools. An examination of the process of model development leads to recommendations for a mid-course correction involving close coordination between modelers, CFD developers, and application engineers. In instances where the old process is changed and cooperation enhanced, timely transfer is realized. A turbulence modeling information database is proposed to refine the process and open it to greater participation among modeling and CFD practitioners.

  15. Generic control software connecting astronomical instruments to the reflective memory data recording system of VLTI - bossvlti

    NASA Astrophysics Data System (ADS)

    Pozna, E.; Ramirez, A.; Mérand, A.; Mueller, A.; Abuter, R.; Frahm, R.; Morel, S.; Schmid, C.; Duc, T. Phan; Delplancke-Ströbele, F.

    2014-07-01

    The quality of data obtained by VLTI instruments may be refined by analyzing the continuous data supplied by the Reflective Memory Network (RMN). Based on five years' experience providing VLTI instruments (PACMAN, AMBER, MIDI) with RMN data, the procedure has been generalized to make synchronization with observations trouble-free. The present software interface saves not only months of effort for each instrument but also provides the benefits of software frameworks. Recent applications (GRAVITY, MATISSE) supply feedback for the software to evolve. The paper highlights the way common features have been identified so that reusable code can be offered in due course.

  16. Modeling unsaturated zone flow and runoff processes by integrating MODFLOW-LGR and VSF, and creating the new CFL package

    USGS Publications Warehouse

    Borsia, I.; Rossetto, R.; Schifani, C.; Hill, Mary C.

    2013-01-01

    In this paper two modifications to the MODFLOW code are presented. One concerns an extension of Local Grid Refinement (LGR) to Variable Saturated Flow process (VSF) capability. This modification allows the user to solve the 3D Richards’ equation only in selected parts of the model domain. The second modification introduces a new package, named CFL (Cascading Flow), which improves the computation of overland flow when ground surface saturation is simulated using either VSF or the Unsaturated Zone Flow (UZF) package. The modeling concepts are presented and demonstrated. Programmer documentation is included in appendices.
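
    For reference, the equation that the VSF process solves in the refined subdomains is the mixed form of the 3D Richards' equation (standard notation; W is a source/sink term):

```latex
% Mixed form of the 3-D Richards' equation (theta: water content, h: pressure
% head, K(h): unsaturated hydraulic conductivity, z: elevation, W: source/sink):
\frac{\partial \theta(h)}{\partial t}
  = \nabla \cdot \bigl[ K(h)\, \nabla (h + z) \bigr] + W
```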

  17. Toward Real-Time Infoveillance of Twitter Health Messages.

    PubMed

    Colditz, Jason B; Chu, Kar-Hai; Emery, Sherry L; Larkin, Chandler R; James, A Everette; Welling, Joel; Primack, Brian A

    2018-06-21

    There is growing interest in conducting public health research using data from social media. In particular, Twitter "infoveillance" has demonstrated utility across health contexts. However, rigorous and reproducible methodologies for using Twitter data in public health are not yet well articulated, particularly those related to content analysis, which is a highly popular approach. In 2014, we gathered an interdisciplinary team of health science researchers, computer scientists, and methodologists to begin implementing an open-source framework for real-time infoveillance of Twitter health messages (RITHM). Through this process, we documented common challenges and novel solutions to inform future work in real-time Twitter data collection and subsequent human coding. The RITHM framework allows researchers and practitioners to use well-planned and reproducible processes in retrieving, storing, filtering, subsampling, and formatting data for health topics of interest. Further considerations for human coding of Twitter data include coder selection and training, data representation, codebook development and refinement, and monitoring coding accuracy and productivity. We illustrate methodological considerations through practical examples from formative work related to hookah tobacco smoking, and we reference essential methods literature related to understanding and using Twitter data. (Am J Public Health. Published online ahead of print June 21, 2018: e1-e6. doi:10.2105/AJPH.2018.304497).

  18. Evaluation of the heat transfer module (FAHT) of Failure Analysis Nonlinear Thermal And Structural Integrated Code (FANTASTIC)

    NASA Technical Reports Server (NTRS)

    Keyhani, Majid

    1989-01-01

    The heat transfer module of the FANTASTIC code (FAHT) is studied and evaluated to the extent possible during the ten-week duration of this project. A brief background of the previous studies is given and the governing equations as modeled in FAHT are discussed. FAHT's capabilities and limitations based on these equations and its coding methodology are explained in detail. It is established that, with an improper choice of element size and time step, FAHT's temperature field prediction at some nodes will fall below the initial condition. The source of this unrealistic temperature prediction is identified and a procedure is proposed for avoiding this phenomenon. It is further shown that the proposed procedure will converge to an accurate prediction upon mesh refinement. Unfortunately, due to lack of time, FAHT's ability to accurately account for pyrolysis and surface ablation has not been verified. Therefore, at the present time it can be stated with confidence that FAHT can accurately predict the temperature field for a transient, multi-dimensional, orthotropic material with directional dependence, variable properties, and nonlinear boundary conditions. Such a prediction will provide an upper limit for the temperature field in an ablating, decomposing nozzle liner. The pore pressure field, however, will not be known.
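
    The below-initial-condition symptom is a violation of the discrete maximum principle, which is easy to reproduce in a toy setting; an illustration with an explicit 1D conduction scheme (not FAHT's actual discretization), where monotonicity requires r = alpha*dt/dx^2 <= 1/2:

```python
# Discrete maximum principle in miniature: the explicit 1-D heat update is
# monotone only for r = alpha*dt/dx^2 <= 1/2. For larger r, nodes undershoot
# below the initial condition, the same symptom the evaluation describes.
# This toy scheme stands in for the idea, not for FAHT itself.
import numpy as np

def step(T, r):
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return Tn

T = np.zeros(11)
T[5] = 1.0                        # unit pulse; initial minimum is 0
for r in (0.4, 0.9):
    print(r, step(T, r).min())    # r=0.4 stays >= 0; r=0.9 dips below 0
```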

  19. Therapists' Perspective on Virtual Reality Training in Patients after Stroke: A Qualitative Study Reporting Focus Group Results from Three Hospitals.

    PubMed

    Schmid, Ludwig; Glässel, Andrea; Schuster-Amft, Corina

    2016-01-01

    Background. During the past decade, virtual reality (VR) has become a new component in the treatment of patients after stroke. The aims of the study were therefore (a) to get an insight into the experiences and expectations of physiotherapists and occupational therapists in using a VR training system and (b) to investigate relevant facilitators, barriers, and risks for implementing VR training in clinical practice. Methods. Three focus groups were conducted with occupational therapists and physiotherapists specialised in the rehabilitation of patients after stroke. All data were audio-recorded and transcribed verbatim. The study was analysed based on a phenomenological approach using qualitative content analysis. Results. After code refinements, a total number of 1289 codes emerged out of 1626 statements. Intercoder reliability increased from 53% to 91% by the last focus group. The final coding scheme included categories on a four-level hierarchy: first-level categories are (a) therapists and VR, (b) the VR device, (c) patients and VR, and (d) future prospects and potential of VR developments. Conclusions. Results indicate that interprofessional collaboration is needed to develop future VR technology and to devise VR implementation strategies in clinical practice. In principle, VR technology devices were seen as supportive for a general health service model.

  20. Therapists' Perspective on Virtual Reality Training in Patients after Stroke: A Qualitative Study Reporting Focus Group Results from Three Hospitals

    PubMed Central

    Schmid, Ludwig; Glässel, Andrea

    2016-01-01

    Background. During the past decade, virtual reality (VR) has become a new component in the treatment of patients after stroke. The aims of the study were therefore (a) to get an insight into the experiences and expectations of physiotherapists and occupational therapists in using a VR training system and (b) to investigate relevant facilitators, barriers, and risks for implementing VR training in clinical practice. Methods. Three focus groups were conducted with occupational therapists and physiotherapists specialised in the rehabilitation of patients after stroke. All data were audio-recorded and transcribed verbatim. The study was analysed based on a phenomenological approach using qualitative content analysis. Results. After code refinements, a total number of 1289 codes emerged out of 1626 statements. Intercoder reliability increased from 53% to 91% by the last focus group. The final coding scheme included categories on a four-level hierarchy: first-level categories are (a) therapists and VR, (b) the VR device, (c) patients and VR, and (d) future prospects and potential of VR developments. Conclusions. Results indicate that interprofessional collaboration is needed to develop future VR technology and to devise VR implementation strategies in clinical practice. In principle, VR technology devices were seen as supportive for a general health service model. PMID:28058130

  1. Discovery of rare protein-coding genes in model methylotroph Methylobacterium extorquens AM1.

    PubMed

    Kumar, Dhirendra; Mondal, Anupam Kumar; Yadav, Amit Kumar; Dash, Debasis

    2014-12-01

    Proteogenomics involves the use of MS to refine the annotation of protein-coding genes and to discover genes in a genome. We carried out a comprehensive proteogenomic analysis of Methylobacterium extorquens AM1 (ME-AM1) from publicly available proteomics data with a motive to improve annotation for methylotrophs, organisms capable of surviving on reduced carbon compounds such as methanol. Besides identifying 2482 (50%) proteins, 29 new genes were discovered and 66 annotated gene models were revised in the ME-AM1 genome. One such novel gene, identified with 75 peptides, lacks a homolog in other methylobacteria but has glycosyl transferase and lipopolysaccharide biosynthesis protein domains, indicating a potential role in outer membrane synthesis. Many novel genes are present only in ME-AM1 among methylobacteria. Distant homologs of these genes in unrelated taxonomic classes and the low GC content of a few genes suggest lateral gene transfer as a potential mode of their origin. Annotations of methylotrophy-related genes were also improved by the discovery of a short gene in the methylotrophy gene island and by redefining a gene important for pyrroloquinoline quinone synthesis, essential for methylotrophy. The combined use of proteogenomics and rigorous bioinformatics analysis greatly enhanced the annotation of protein-coding genes in the model methylotroph ME-AM1 genome. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Efficacy of physical activity interventions in post-natal populations: systematic review, meta-analysis and content coding of behaviour change techniques.

    PubMed

    Gilinsky, Alyssa Sara; Dale, Hannah; Robinson, Clare; Hughes, Adrienne R; McInnes, Rhona; Lavallee, David

    2015-01-01

    This systematic review and meta-analysis reports the efficacy of post-natal physical activity change interventions, with content coding of behaviour change techniques (BCTs). Electronic databases (MEDLINE, CINAHL and PsychINFO) were searched for interventions published from January 1980 to July 2013. Inclusion criteria were: (i) interventions including ≥1 BCT designed to change physical activity behaviour, (ii) studies reporting ≥1 physical activity outcome, (iii) interventions commencing later than four weeks after childbirth and (iv) studies including participants who had given birth within the last year. Controlled trials were included in the meta-analysis. Interventions were coded using the 40-item Coventry, Aberdeen & London - Refined (CALO-RE) taxonomy of BCTs, and study quality assessment was conducted using Cochrane criteria. Twenty studies were included in the review (meta-analysis: n = 14). Seven were interventions conducted with healthy inactive post-natal women. Nine were post-natal weight management studies. Two studies included women with post-natal depression. Two studies focused on improving general well-being. Interventions in healthy populations, but not weight-management interventions, successfully changed physical activity. Interventions increased the frequency but not the volume of physical activity or walking behaviour. Efficacious interventions always included the BCTs 'goal setting (behaviour)' and 'prompt self-monitoring of behaviour'.

  3. A simple model for molecular hydrogen chemistry coupled to radiation hydrodynamics

    NASA Astrophysics Data System (ADS)

    Nickerson, Sarah; Teyssier, Romain; Rosdahl, Joakim

    2018-06-01

    We introduce non-equilibrium molecular hydrogen chemistry into the radiation-hydrodynamics code RAMSES-RT. This is an adaptive mesh refinement grid code with radiation hydrodynamics that couples the thermal chemistry of hydrogen and helium to moment-based radiative transfer with the Eddington tensor closure model. The H2 physics that we include are formation on dust grains, gas phase formation, formation by three-body collisions, collisional destruction, photodissociation, photoionisation, cosmic ray ionisation and self-shielding. In particular, we implement the first model for H2 self-shielding that is tied locally to moment-based radiative transfer by enhancing photo-destruction. This self-shielding from Lyman-Werner line overlap is critical to H2 formation and gas cooling. We can now track the non-equilibrium evolution of molecular, atomic, and ionised hydrogen species with their corresponding dissociating and ionising photon groups. Over a series of tests we show that our model works well compared to specialised photodissociation region codes. We successfully reproduce the transition depth between molecular and atomic hydrogen, molecular cooling of the gas, and a realistic Strömgren sphere embedded in a molecular medium. In this paper we focus on test cases to demonstrate the validity of our model on small scales. Our ultimate goal is to implement this in large-scale galactic simulations.
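
    Stripped to one formation and one destruction channel, the non-equilibrium H2 fraction per cell obeys a simple rate equation; a toy sketch with standard order-of-magnitude rate constants (self-shielding, three-body formation, and collisional destruction from the paper's full network are omitted):

```python
# Toy non-equilibrium H2 balance: formation on dust grains versus
# photodissociation, for the molecular fraction x = n_H2 / n_H.
# Rate constants are standard order-of-magnitude values only.
R_GR = 3e-17    # grain-surface formation rate [cm^3 s^-1]
ZETA = 3.3e-11  # unshielded Lyman-Werner photodissociation rate, G0=1 [s^-1]

def dxdt(x, n_H, G0=1.0):
    n_HI_frac = 1.0 - 2.0 * x          # atomic fraction of H nuclei
    return R_GR * n_H * n_HI_frac - ZETA * G0 * x

# forward-Euler march toward the formation/destruction equilibrium
x, dt = 0.0, 1e9                       # seconds
for _ in range(100_000):
    x += dxdt(x, n_H=100.0) * dt
print(x)  # ~ R_GR*n_H / (ZETA + 2*R_GR*n_H) at equilibrium
```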

  4. Multipacting studies in elliptic SRF cavities

    NASA Astrophysics Data System (ADS)

    Prakash, Ram; Jana, Arup Ratan; Kumar, Vinit

    2017-09-01

    Multipacting is a resonant process in which the number of unwanted electrons resulting from a parasitic discharge grows rapidly at specific locations in a radio-frequency cavity. This degrades the cavity performance indicators (e.g. the quality factor Q and the maximum achievable accelerating gradient Eacc), and in the case of a superconducting radio-frequency (SRF) cavity it can quench superconductivity. Numerical simulations are essential to pre-empt the possibility of multipacting in SRF cavities, so that the design can be suitably refined to avoid this performance-limiting phenomenon. Readily available computer codes (e.g. FishPact, MultiPac, CST-PIC, etc.) are widely used to simulate multipacting in such cases. Most of the contemporary two-dimensional (2D) codes, such as FishPact and MultiPac, are unable to detect multipacting in elliptic cavities because they use a simplistic secondary emission model in which all secondary electrons are assumed to be emitted with the same energy. Some three-dimensional (3D) codes such as CST-PIC, which use a more realistic secondary emission model (the Furman model) that draws the emission energy of secondary electrons from a probability distribution, are able to correctly predict the occurrence of multipacting. These 3D codes, however, require large data handling and are slower than the 2D codes. In this paper, we report a detailed analysis of the multipacting phenomenon in elliptic SRF cavities and the development of a 2D code that simulates it numerically by employing the Furman model for the secondary emission process. Since our code is 2D, it is faster than the 3D codes, yet it is as accurate as contemporary 3D codes because it uses the Furman model for secondary emission. We have also explored a further simplification of the Furman model, which enables us to quickly estimate the growth rate of multipacting without performing any multi-particle simulation. This methodology has been employed, along with the computer code, for a detailed analysis of multipacting in the βg = 0.61 and βg = 0.9, 650 MHz elliptic SRF cavities that we have recently designed for the medium- and high-energy sections of the proposed Indian Spallation Neutron Source (ISNS) project.
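
    To illustrate why the secondary emission model matters, here is a minimal sketch of a Furman-Pivi-style true-secondary yield curve; the parameter values are illustrative (roughly copper-like), not those of the code described in the paper:

    ```python
    import numpy as np

    def secondary_yield(E, delta_max=2.0, E_max=300.0, s=1.54):
        """Furman-Pivi-style true-secondary emission yield at normal incidence.

        delta(E) = delta_max * s*x / (s - 1 + x**s), with x = E/E_max.
        Parameter values here are illustrative, not fitted to any cavity.
        """
        x = E / E_max
        return delta_max * s * x / (s - 1.0 + x**s)

    # Multipacting can only be sustained where the yield exceeds unity,
    # i.e. for impact energies between the two crossover points delta(E) = 1.
    for E in np.linspace(10.0, 2000.0, 5):
        print(f"E = {E:7.1f} eV  delta = {secondary_yield(E):.2f}")
    ```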

  5. Assessing Occupational Exposure to Chemicals in an International Epidemiological Study of Brain Tumours

    PubMed Central

    van Tongeren, Martie

    2013-01-01

    The INTEROCC project is a multi-centre case–control study investigating the risk of developing brain cancer due to occupational chemical and electromagnetic field exposures. To estimate chemical exposures, the Finnish Job Exposure Matrix (FINJEM) was modified to improve its performance in the INTEROCC study and to address some of its limitations, resulting in the development of the INTEROCC JEM. An international team of occupational hygienists developed a crosswalk between the Finnish occupational codes used in FINJEM and the International Standard Classification of Occupations 1968 (ISCO68). For ISCO68 codes linked to multiple Finnish codes, weighted means of the exposure estimates were calculated. Similarly, multiple ISCO68 codes linked to a single Finnish code with evidence of heterogeneous exposure were refined. One of the key time periods in FINJEM (1960–1984) was split into two periods (1960–1974 and 1975–1984). Benzene exposure estimates in early periods were modified upwards. The internal consistency of hydrocarbon exposures and exposures to engine exhaust fumes was improved. Finally, exposure to polycyclic aromatic hydrocarbon and benzo(a)pyrene was modified to include the contribution from second-hand smoke. The crosswalk ensured that the FINJEM exposure estimates could be applied to the INTEROCC study subjects. The modifications generally resulted in an increased prevalence of exposure to chemical agents. This increased prevalence of exposure was not restricted to the lowest categories of cumulative exposure, but was seen across all levels for some agents. Although this work has produced a JEM with important improvements compared to FINJEM, further improvements are possible with the expansion of agents and additional external data. PMID:23467593
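
    Schematically, the many-to-one crosswalk step reduces to a weighted mean of FINJEM exposure estimates; a minimal sketch, with hypothetical weights and values:

    ```python
    # Minimal sketch of the crosswalk step: an ISCO68 code linked to several
    # Finnish codes receives the weighted mean of their exposure estimates.
    # The weights (e.g. workforce sizes) and values below are hypothetical.
    def weighted_mean_exposure(links):
        """links: list of (exposure_estimate, weight) pairs for one ISCO68 code."""
        total_w = sum(w for _, w in links)
        return sum(e * w for e, w in links) / total_w

    finnish_links = [(0.8, 1200), (0.2, 300), (0.5, 500)]  # hypothetical
    print(weighted_mean_exposure(finnish_links))  # estimate assigned to the ISCO68 code
    ```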

  6. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    PubMed Central

    Clark, Kevin B.

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social contexts. PMID:23966987
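
    For readers unfamiliar with the coding scheme invoked here, a three-bit repetition code corrects any single bit-flip by majority vote; a minimal sketch:

    ```python
    def encode(bit):
        """Three-bit repetition code: repeat the message bit three times."""
        return (bit, bit, bit)

    def decode(word):
        """Majority vote corrects any single bit-flip in the codeword."""
        return 1 if sum(word) >= 2 else 0

    msg = 1
    word = list(encode(msg))
    word[0] ^= 1                 # a single bit-flip error ("error syndrome")
    assert decode(word) == msg   # the error is corrected by majority vote
    ```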

  7. Quantifying the physical demands of collision sports: does microsensor technology measure what it claims to measure?

    PubMed

    Gabbett, Tim J

    2013-08-01

    The physical demands of rugby league, rugby union, and American football are significantly increased by the large number of collisions players are required to perform during match play. Because of the labor-intensive nature of coding collisions from video recordings, manufacturers of wearable microsensor (e.g., global positioning system [GPS]) units have refined the technology to automatically detect collisions, and several sport scientists have attempted to use these microsensors to quantify the physical demands of collision sports. However, a question remains over the validity of these microtechnology units for quantifying the contact demands of collision sports. Indeed, recent evidence has shown significant differences between the number of "impacts" recorded by microtechnology units (GPSports) and the actual number of collisions coded from video. However, a separate study investigated the validity of a different microtechnology unit (minimaxX; Catapult Sports) that included GPS and triaxial accelerometers, as well as a gyroscope and magnetometer, to quantify collisions. Collisions detected by the minimaxX unit were compared with video-based coding of the actual events. No significant differences were detected between the number of mild, moderate, and heavy collisions detected via the minimaxX units and those coded from video recordings of the actual event. Furthermore, a strong correlation (r = 0.96, p < 0.01) was observed between collisions recorded via the minimaxX units and those coded from video recordings of the event. These findings demonstrate that, of the commercially available wearable microtechnology units, only one (minimaxX) has so far been shown to offer a valid method of quantifying the contact loads that typically occur in collision sports. Until such validation research is completed, sport scientists should be circumspect about the ability of other units to perform similar functions.

  8. Mining peripheral arterial disease cases from narrative clinical notes using natural language processing.

    PubMed

    Afzal, Naveed; Sohn, Sunghwan; Abram, Sara; Scott, Christopher G; Chaudhry, Rajeev; Liu, Hongfang; Kullo, Iftikhar J; Arruda-Olson, Adelaide M

    2017-06-01

    Lower extremity peripheral arterial disease (PAD) is highly prevalent and affects millions of individuals worldwide. We developed a natural language processing (NLP) system for automated ascertainment of PAD cases from clinical narrative notes and compared the performance of the NLP algorithm with billing code algorithms, using ankle-brachial index test results as the gold standard. We compared the performance of the NLP algorithm to (1) results of gold standard ankle-brachial index; (2) previously validated algorithms based on relevant International Classification of Diseases, Ninth Revision diagnostic codes (simple model); and (3) a combination of International Classification of Diseases, Ninth Revision codes with procedural codes (full model). A dataset of 1569 patients with PAD and controls was randomly divided into training (n = 935) and testing (n = 634) subsets. We iteratively refined the NLP algorithm in the training set including narrative note sections, note types, and service types, to maximize its accuracy. In the testing dataset, when compared with both simple and full models, the NLP algorithm had better accuracy (NLP, 91.8%; full model, 81.8%; simple model, 83%; P < .001), positive predictive value (NLP, 92.9%; full model, 74.3%; simple model, 79.9%; P < .001), and specificity (NLP, 92.5%; full model, 64.2%; simple model, 75.9%; P < .001). A knowledge-driven NLP algorithm for automatic ascertainment of PAD cases from clinical notes had greater accuracy than billing code algorithms. Our findings highlight the potential of NLP tools for rapid and efficient ascertainment of PAD cases from electronic health records to facilitate clinical investigation and eventually improve care by clinical decision support. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
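
    The reported comparisons rest on standard confusion-matrix quantities; a minimal sketch of how accuracy, positive predictive value, and specificity are computed (the counts below are hypothetical, not the study's data):

    ```python
    def classification_metrics(tp, fp, tn, fn):
        """Standard metrics used to compare the NLP and billing-code algorithms."""
        accuracy    = (tp + tn) / (tp + fp + tn + fn)
        ppv         = tp / (tp + fp)          # positive predictive value
        specificity = tn / (tn + fp)
        return accuracy, ppv, specificity

    # Hypothetical counts for a 634-patient testing set (not the study's data)
    print(classification_metrics(tp=290, fp=22, tn=292, fn=30))
    ```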

  9. Collisionless stellar hydrodynamics as an efficient alternative to N-body methods

    NASA Astrophysics Data System (ADS)

    Mitchell, Nigel L.; Vorobyov, Eduard I.; Hensler, Gerhard

    2013-01-01

    The dominant constituents of the Universe's matter are believed to be collisionless in nature, and thus their modelling in any self-consistent simulation is extremely important. For simulations that deal only with dark matter or stellar systems, the conventional N-body technique is fast, memory efficient and relatively simple to implement. However, when extending simulations to include the effects of gas physics, mesh codes are at a distinct disadvantage compared to Smoothed Particle Hydrodynamics (SPH) codes. Whereas implementing the N-body approach into SPH codes is fairly trivial, the particle-mesh technique used in mesh codes to couple collisionless stars and dark matter to the gas on the mesh has a series of significant scientific and technical limitations. These include spurious entropy generation resulting from discreteness effects, poor load balancing and increased communication overhead, which spoil the excellent scaling in massively parallel grid codes. In this paper we propose the use of the collisionless Boltzmann moment equations as a means to model the collisionless material as a fluid on the mesh, implementing it into the massively parallel FLASH Adaptive Mesh Refinement (AMR) code. This approach, which we term 'collisionless stellar hydrodynamics', enables us to do away with the particle-mesh approach, and since the parallelization scheme is identical to that used for the hydrodynamics, it preserves the excellent scaling of the FLASH code already demonstrated on peta-flop machines. We find that the classic hydrodynamic equations and the Boltzmann moment equations can be reconciled under specific conditions, allowing us to generate analytic solutions for collisionless systems using conventional test problems. We confirm the validity of our approach using a suite of demanding test problems, including the use of a modified Sod shock test. By deriving the relevant eigenvalues and eigenvectors of the Boltzmann moment equations, we are able to use high-order accurate characteristic tracing methods with Riemann solvers to generate numerical solutions which show excellent agreement with our analytic solutions. We conclude by demonstrating the ability of our code to model complex phenomena by simulating the evolution of a two-armed spiral galaxy whose properties agree with those predicted by swing amplification theory.
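
    Schematically, the first two velocity moments of the collisionless Boltzmann equation give fluid-like conservation laws (written here in a common textbook form; the paper's full moment system and closure may differ):

    ```latex
    \begin{align}
      \frac{\partial \rho}{\partial t}
        + \frac{\partial}{\partial x_j}\left(\rho v_j\right) &= 0, \\
      \frac{\partial}{\partial t}\left(\rho v_i\right)
        + \frac{\partial}{\partial x_j}\left(\rho v_i v_j + \Pi_{ij}\right)
        &= -\,\rho \frac{\partial \Phi}{\partial x_i},
    \end{align}
    ```

    where ρ is the stellar density, v the mean velocity, Π_ij the velocity-dispersion tensor that plays the role of an anisotropic pressure, and Φ the gravitational potential.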

  10. Study of the adaptive refinement on an open source 2D shallow-water flow solver using quadtree grid for flash flood simulations.

    NASA Astrophysics Data System (ADS)

    Kirstetter, G.; Popinet, S.; Fullana, J. M.; Lagrée, P. Y.; Josserand, C.

    2015-12-01

    The full resolution of the shallow-water equations for modeling flash floods can have a high computational cost, so the majority of flood-simulation software used for flood forecasting relies on simplifications of this model: 1D approximations, diffusive or kinematic wave approximations, or exotic models using non-physical free parameters. These approximations save a great deal of computational time but sacrifice the precision of the simulations in an unquantified way. To drastically reduce the cost of such 2D simulations while quantifying the loss of precision, we propose a 2D shallow-water flow solver built with the open source code Basilisk1, which uses adaptive refinement on a quadtree grid. This solver uses a well-balanced central-upwind scheme that is second order in time and space, and treats the friction and rain terms implicitly in a finite-volume approach. We demonstrate the validity of our simulation on the flood of Tewkesbury (UK) that occurred in July 2007, as shown in Fig. 1. For this case, a systematic study of the impact of the chosen criterion for adaptive refinement is performed, and the criterion with the best computational time / precision ratio is proposed. Finally, we present the power law giving the computational time with respect to the maximum resolution, and we show that this law for our 2D simulation is close to that of a 1D simulation, thanks to the fractal dimension of the topography. [1] http://basilisk.fr/
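
    To make the role of the refinement criterion concrete, here is a schematic, gradient-based cell-tagging sketch (Basilisk's actual criterion is wavelet-based; this is not its API):

    ```python
    import numpy as np

    def tag_cells_for_refinement(h, threshold):
        """Tag cells whose local water-depth jump exceeds a threshold.

        h         : 2D array of water depth on the current uniform level
        threshold : refinement criterion (e.g. metres); smaller values refine
                    more cells, trading computational time for precision.
        Returns a boolean mask of cells to subdivide into four children.
        """
        jump = np.zeros_like(h)
        jump[:-1, :] = np.maximum(jump[:-1, :], np.abs(np.diff(h, axis=0)))
        jump[:, :-1] = np.maximum(jump[:, :-1], np.abs(np.diff(h, axis=1)))
        return jump > threshold

    h = np.random.rand(8, 8)                    # toy depth field
    print(tag_cells_for_refinement(h, 0.5).sum(), "cells tagged")
    ```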

  11. Crash testing difference-smoothing algorithm on a large sample of simulated light curves from TDC1

    NASA Astrophysics Data System (ADS)

    Rathna Kumar, S.

    2017-09-01

    In this work, we propose refinements to the difference-smoothing algorithm for measuring the time delay between the light curves of the images of a gravitationally lensed quasar. The refinements mainly consist of a more pragmatic approach to choosing the smoothing time-scale free parameter, generation of more realistic synthetic light curves for estimating the time delay uncertainty, and use of a plot of normalized χ2 computed over a wide range of trial time delay values to assess the reliability of a measured time delay and to identify instances of catastrophic failure. We rigorously tested the difference-smoothing algorithm on a large sample of more than a thousand pairs of simulated light curves having known true time delays between them, drawn from the two most difficult 'rungs' - rung3 and rung4 - of the first edition of the Strong Lens Time Delay Challenge (TDC1), and found an inherent tendency of the algorithm to measure the magnitude of the time delay to be higher than the true value. However, we find that this systematic bias is eliminated by applying a correction to each measured time delay according to the magnitude and sign of the systematic error inferred by applying the time delay estimator to synthetic light curves simulating the measured time delay. Following these refinements, the TDC performance metrics for the difference-smoothing algorithm are found to be competitive with those of the best-performing submissions of TDC1 for both of the tested 'rungs'. The MATLAB codes used in this work and the detailed results are made publicly available.
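
    The reliability diagnostic amounts to scanning a goodness-of-fit statistic over trial delays. A schematic sketch follows (a plain interpolate-and-difference scan in Python; not the authors' MATLAB difference-smoothing implementation):

    ```python
    import numpy as np

    def delay_chi2_curve(t_a, f_a, t_b, f_b, trial_delays, sigma=1.0):
        """Normalized chi^2 of the difference between two light curves as a
        function of trial time delay. A single sharp minimum suggests a
        reliable delay; several comparable minima flag possible failures."""
        chi2 = []
        for tau in trial_delays:
            # evaluate curve A at curve B's epochs shifted by the trial delay
            f_a_shift = np.interp(t_b + tau, t_a, f_a)
            r = (f_a_shift - f_b) / sigma
            chi2.append(np.mean(r**2))
        return np.array(chi2)

    # Toy example: two noisy copies of one curve separated by a 20-day shift
    t = np.linspace(0.0, 400.0, 200)
    signal = np.sin(t / 30.0)
    rng = np.random.default_rng(0)
    f_a = signal + 0.05 * rng.standard_normal(t.size)
    f_b = np.interp(t + 20.0, t, signal) + 0.05 * rng.standard_normal(t.size)
    delays = np.arange(-50.0, 51.0, 1.0)
    curve = delay_chi2_curve(t, f_a, t, f_b, delays)
    print("estimated delay:", delays[np.argmin(curve)])
    ```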

  12. Refined Composite Multiscale Dispersion Entropy and its Application to Biomedical Signals.

    PubMed

    Azami, Hamed; Rostaghi, Mostafa; Abasolo, Daniel; Escudero, Javier

    2017-12-01

    We propose a novel complexity measure to overcome the deficiencies of the widespread and powerful multiscale entropy (MSE), including that MSE values may be undefined for short signals and that MSE is too slow for real-time applications. We introduce multiscale dispersion entropy (MDE) as a very fast and powerful method to quantify the complexity of signals. MDE is based on our recently developed dispersion entropy (DisEn), which has a computation cost of O(N), compared with O(N²) for the sample entropy used in MSE. We also propose the refined composite MDE (RCMDE) to improve the stability of MDE. We evaluate MDE, RCMDE, and refined composite MSE (RCMSE) on synthetic signals and three biomedical datasets. The MDE, RCMDE, and RCMSE methods show similar results, although MDE and RCMDE are faster, lead to more stable results, and discriminate different types of physiological signals better than MSE and RCMSE. For noisy short and long time series, MDE and RCMDE are noticeably more stable than MSE and RCMSE, respectively. For short signals, MDE and RCMDE, unlike MSE and RCMSE, do not lead to undefined values. The proposed MDE and RCMDE are significantly faster than MSE and RCMSE, especially for long signals, and lead to larger differences between physiological conditions known to alter the complexity of the physiological recordings. MDE and RCMDE are expected to be useful for the analysis of physiological signals thanks to their ability to distinguish different types of dynamics. The MATLAB codes used in this paper are freely available at http://dx.doi.org/10.7488/ds/1982.
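
    To illustrate the multiscale and refined-composite steps (the dispersion-entropy core itself is omitted), a minimal sketch:

    ```python
    import numpy as np

    def coarse_grain(x, scale, offset=0):
        """Average non-overlapping windows of length `scale`, starting at
        `offset`. MDE uses only the offset-0 coarse-grained series; the
        refined composite variant (RCMDE) averages dispersion-pattern
        statistics over all `scale` possible offsets before computing
        the entropy, which stabilizes the estimate for short signals."""
        x = np.asarray(x, dtype=float)[offset:]
        n = len(x) // scale
        return x[:n * scale].reshape(n, scale).mean(axis=1)

    x = np.random.randn(1000)
    for tau in (1, 2, 5):
        series = [coarse_grain(x, tau, k) for k in range(tau)]
        print(tau, [len(s) for s in series])
    ```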

  13. Cosmological Simulations with Molecular Astrochemistry: Water in the Early Universe

    NASA Astrophysics Data System (ADS)

    Wiggins, Brandon K.; Smidt, Joseph

    2018-01-01

    Water is required for the rise of life as we know it throughout the universe, but its origin and the circumstances of its first appearance remain a mystery. The abundance of deuterated water in solar system bodies cannot be explained if all the water in the solar system were created in the protoplanetary disk (Cleeves et al. 2014), suggesting that as much as half of Earth's water predates the Sun. Water has been observed as early as one sixth of the current universe's age, in MG J0414+0534 (Impellizzeri et al. 2008). It was recently shown that water could, in principle, appear in hot halos barely enriched with heavy elements such as oxygen and carbon (Bialy et al. 2015). So far, no self-consistent cosmological calculation coupled to a large chemical reaction network has been carried out to study the first sites of water formation in the universe. We present initial results from the first such series of cosmological calculations, using a 26-species low-metallicity molecular chemical reaction network with Enzo (Bryan et al. 2014), to understand the role of hydrodynamics and radiative feedback in molecule formation in the early universe and to shed light on the cosmological history of this life-giving substance.

  14. Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model

    NASA Astrophysics Data System (ADS)

    O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.

    2015-12-01

    Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.

  15. Liquefied Bleed for Stability and Efficiency of High Speed Inlets

    NASA Technical Reports Server (NTRS)

    Saunders, J. David; Davis, David; Barsi, Stephen J.; Deans, Matthew C.; Weir, Lois J.; Sanders, Bobby W.

    2014-01-01

    A mission analysis code was developed to perform a trade study on the effectiveness of liquefying bleed air for the inlet of the first stage of a TSTO vehicle. By liquefying bleed, the vehicle takeoff gross weight (TOGW) could be reduced by 7 to 23%. Numerous simplifying assumptions were made and lessons were learned. Increased accuracy in future analyses can be achieved by: including a higher-fidelity model to capture the effect of rescaling (variable vehicle TOGW); refining the specific thrust and specific impulse models (T/ṁa and Isp) to preserve the fuel-to-air ratio; implementing LH2 in the T/ṁa and Isp models; correlating the baseline design with other mission analyses and correcting vehicle design elements; implementing angle-of-attack effects on inlet characteristics; refining aerodynamic performance (to improve the L/D ratio at higher Mach numbers); examining the benefit of partial cooling or densification of the bleed air stream; incorporating higher-fidelity weight estimates for the liquefied bleed system (heat exchanger and liquid storage versus bleed duct weights) when these are more fully developed; adding trim drag or 6-degree-of-freedom trajectory analysis for higher fidelity; and investigating vehicle optimization for each of the bleed configurations.

  16. Verification and Validation of a Coordinate Transformation Method in Axisymmetric Transient Magnetics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ashcraft, C. Chace; Niederhaus, John Henry; Robinson, Allen C.

    We present a verification and validation analysis of a coordinate-transformation-based numerical solution method for the two-dimensional axisymmetric magnetic diffusion equation, implemented in the finite-element simulation code ALEGRA. The transformation, suggested by Melissen and Simkin, yields an equation set perfectly suited for linear finite elements and for problems with large jumps in material conductivity near the axis. The verification analysis examines transient magnetic diffusion in a rod or wire in a very low conductivity background by first deriving an approximate analytic solution using perturbation theory. This approach for generating a reference solution is shown to be not fully satisfactory. A specialized approach for manufacturing an exact solution is then used to demonstrate second-order convergence under spatial and temporal refinement. For this new implementation, a significant improvement relative to previously available formulations is observed. Benefits in accuracy for computed current density and Joule heating are also demonstrated. The validation analysis examines the circuit-driven explosion of a copper wire using resistive magnetohydrodynamics modeling, in comparison to experimental tests. The new implementation matches the accuracy of the existing formulation, with both formulations capturing the experimental burst time and action to within approximately 2%.
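
    The second-order convergence claim can be checked with the standard observed-order formula p = log(e_coarse/e_fine)/log(r); a minimal sketch with hypothetical error values:

    ```python
    import math

    def observed_order(e_coarse, e_fine, refinement_ratio=2.0):
        """Observed convergence order p from errors on two grids related by a
        uniform refinement ratio r: p = log(e_coarse / e_fine) / log(r)."""
        return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

    # Hypothetical errors from a manufactured-solution study; halving the mesh
    # spacing should cut the error by ~4x for a second-order method.
    print(observed_order(1.0e-3, 2.6e-4))   # ~1.94, consistent with order 2
    ```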

  17. High accuracy mantle convection simulation through modern numerical methods - II: realistic models and problems

    NASA Astrophysics Data System (ADS)

    Heister, Timo; Dannberg, Juliane; Gassmöller, Rene; Bangerth, Wolfgang

    2017-08-01

    Computations have helped elucidate the dynamics of Earth's mantle for several decades already. The numerical methods that underlie these simulations have greatly evolved within this time span, and today include dynamically changing and adaptively refined meshes, sophisticated and efficient solvers, and parallelization to large clusters of computers. At the same time, many of the methods - discussed in detail in a previous paper in this series - were developed and tested primarily using model problems that lack many of the complexities that are common to the realistic models our community wants to solve today. With several years of experience solving complex and realistic models, we here revisit some of the algorithm designs of the earlier paper and discuss the incorporation of more complex physics. In particular, we re-consider time stepping and mesh refinement algorithms, evaluate approaches to incorporate compressibility, and discuss dealing with strongly varying material coefficients, latent heat, and how to track chemical compositions and heterogeneities. Taken together and implemented in a high-performance, massively parallel code, the techniques discussed in this paper then allow for high resolution, 3-D, compressible, global mantle convection simulations with phase transitions, strongly temperature dependent viscosity and realistic material properties based on mineral physics data.

  18. Statistical characterization of planar two-dimensional Rayleigh-Taylor mixing layers

    NASA Astrophysics Data System (ADS)

    Sendersky, Dmitry

    2000-10-01

    The statistical evolution of a planar, randomly perturbed fluid interface subject to Rayleigh-Taylor instability is explored through numerical simulation in two space dimensions. The data set, generated by the front-tracking code FronTier, is highly resolved and covers a large ensemble of initial perturbations, allowing a more refined analysis of closure issues pertinent to the stochastic modeling of chaotic fluid mixing. We closely approach a two-fold convergence of the mean two-phase flow: convergence of the numerical solution under computational mesh refinement, and statistical convergence under increasing ensemble size. Quantities that appear in the two-phase averaged Euler equations are computed directly and analyzed for numerical and statistical convergence. Bulk averages show a high degree of convergence, while interfacial averages are convergent only in the outer portions of the mixing zone, where there is a coherent array of bubble and spike tips. Comparison with the familiar bubble/spike penetration law h = αAgt² is complicated by the lack of scale invariance, the inability to carry the simulations to late time, the increasing Mach numbers of the bubble/spike tips, and sensitivity to the method of data analysis. Finally, we use the simulation data to analyze some constitutive properties of the mixing process.

  19. A parallel adaptive mesh refinement algorithm

    NASA Technical Reports Server (NTRS)

    Quirk, James J.; Hanebutte, Ulf R.

    1993-01-01

    Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.

  20. New algorithms for field-theoretic block copolymer simulations: Progress on using adaptive-mesh refinement and sparse matrix solvers in SCFT calculations

    NASA Astrophysics Data System (ADS)

    Sides, Scott; Jamroz, Ben; Crockett, Robert; Pletzer, Alexander

    2012-02-01

    Self-consistent field theory (SCFT) for dense polymer melts has been highly successful in describing complex morphologies in block copolymers. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. The modified diffusion equations that arise as a consequence of the coarse-graining procedure in the SCF theory can be efficiently solved with a pseudo-spectral (PS) method that uses fast Fourier transforms on uniform Cartesian grids. However, PS methods can be difficult to apply in many block copolymer SCFT simulations (e.g. confinement, interface adsorption) in which small spatial regions might require finer resolution than the rest of the simulation grid. Progress on using new solver algorithms to address these problems will be presented. The Tech-X Chompst project aims at marrying the best of adaptive mesh refinement with linear matrix solver algorithms. The Tech-X code PolySwift++ is an SCFT simulation platform that leverages ongoing development in coupling Chombo, a package for solving PDEs via block-structured AMR calculations and embedded boundaries, with PETSc, a toolkit that includes a large assortment of sparse linear solvers.
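
    For reference, the pseudo-spectral update being generalized advances the modified diffusion equation ∂q/∂s = ∇²q − wq (prefactors absorbed into units) by operator splitting between real and Fourier space; a minimal periodic 1D sketch with an illustrative field w:

    ```python
    import numpy as np

    def pseudo_spectral_step(q, w, ds, L):
        """One operator-splitting step for dq/ds = laplacian(q) - w*q on a
        periodic 1D grid: half-step in w, full diffusion step in Fourier
        space, half-step in w. Second-order accurate in ds."""
        n = q.size
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
        q = np.exp(-0.5 * ds * w) * q                  # half-step in potential
        q = np.fft.ifft(np.exp(-ds * k**2) * np.fft.fft(q)).real
        return np.exp(-0.5 * ds * w) * q               # half-step in potential

    # Illustrative: propagate q from s = 0 (q = 1) along the chain contour
    n, L = 128, 10.0
    x = np.linspace(0.0, L, n, endpoint=False)
    w = np.cos(2.0 * np.pi * x / L)    # hypothetical SCFT field, not from the talk
    q = np.ones(n)
    for _ in range(100):
        q = pseudo_spectral_step(q, w, ds=0.01, L=L)
    print(q.min(), q.max())
    ```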

  1. Assessment of an Euler-Interacting Boundary Layer Method Using High Reynolds Number Transonic Flight Data

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Maddalon, Dal V.

    1998-01-01

    Flight-measured high Reynolds number turbulent-flow pressure distributions on a transport wing in transonic flow are compared to unstructured-grid calculations to assess the predictive ability of a three-dimensional Euler code (USM3D) coupled to an interacting boundary layer module. The two experimental pressure distributions selected for comparative analysis with the calculations are complex and turbulent but typical of an advanced technology laminar flow wing. An advancing front method (VGRID) was used to generate several tetrahedral grids for each test case. Initial calculations left considerable room for improvement in accuracy. Studies were then made of experimental errors, transition location, viscous effects, nacelle flow modeling, number and placement of spanwise boundary layer stations, and grid resolution. The most significant improvements in the accuracy of the calculations were gained by improvement of the nacelle flow model and by refinement of the computational grid. Final calculations yield results in close agreement with the experiment. Indications are that further grid refinement would produce additional improvement but would require more computer memory than is available. The appendix data compare the experimental attachment line location with calculations for different grid sizes. Good agreement is obtained between the experimental and calculated attachment line locations.

  2. Stabilization of numerical interchange in spectral-element magnetohydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sovinec, C. R.

    2016-05-10

    In this study, auxiliary numerical projections of the divergence of flow velocity and vorticity parallel to the magnetic field are developed and tested for the purpose of suppressing unphysical interchange instability in magnetohydrodynamic simulations. The numerical instability arises with equal-order C0 finite- and spectral-element expansions of the flow velocity, magnetic field, and pressure, and is sensitive to behavior at the limit of resolution. The auxiliary projections are motivated by physical field-line bending, and coercive responses to the projections are added to the flow-velocity equation. Their incomplete expansions are limited to the highest-order orthogonal polynomial in at least one coordinate of the spectral elements. Cylindrical eigenmode computations show that the projections induce convergence from the stable side with first-order ideal-MHD equations during h-refinement and p-refinement. Hyperbolic and parabolic projections and responses are compared, together with different methods for avoiding magnetic divergence error. Lastly, the projections are also shown to be effective in linear and nonlinear time-dependent computations with the NIMROD code [C. R. Sovinec, et al., J. Comput. Phys. 195 (2004) 355-386], provided that the projections introduce numerical dissipation.

  4. Progress in Computational Simulation of Earthquakes

    NASA Technical Reports Server (NTRS)

    Donnellan, Andrea; Parker, Jay; Lyzenga, Gregory; Judd, Michele; Li, P. Peggy; Norton, Charles; Tisdale, Edwin; Granat, Robert

    2006-01-01

    GeoFEST(P) is a computer program written for use in the QuakeSim project, which is devoted to developing and improving means of computational simulation of earthquakes. GeoFEST(P) models interacting earthquake fault systems from the fault-nucleation to the tectonic scale. The development of GeoFEST(P) has involved the coupling of two programs: GeoFEST and the Pyramid Adaptive Mesh Refinement Library. GeoFEST is a message-passing-interface-parallel code that utilizes a finite-element technique to simulate the evolution of stress, fault slip, and plastic/elastic deformation in realistic materials like those of faulted regions of the Earth's crust. The products of such simulations are synthetic observable time-dependent surface deformations on time scales from days to decades. The Pyramid Adaptive Mesh Refinement Library is a software library that facilitates the generation of computational meshes for solving physical problems. In an application of GeoFEST(P), a computational grid can be dynamically adapted as stress grows on a fault. Simulations that once required workstations with a few tens of thousands of stress and displacement finite elements can now be expanded to multiple millions of elements with greater than 98-percent scaled efficiency on many hundreds of parallel processors.

  5. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction

    PubMed Central

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one basis of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research. PMID:29599739

  6. Managing perceived conflicts of interest while ensuring the continued innovation of medical technology.

    PubMed

    Van Haute, Andrew

    2011-09-01

    If it were not for the ongoing collaboration between vascular surgeons and the medical technology industry, many of these advanced treatments used every day in vascular interventional surgery would not exist. The flip side of this coin is that these vital relationships create multiple roles for surgeons and must be appropriately managed. The dynamic process of innovation, along with factors such as product delivery technique refinement, education, testing and clinical trials, and product support, all make it necessary for ongoing and close collaboration between surgeons and the device industry. This unique relationship sometimes leads to the perception of conflicts of interest for physicians, in part because the competing pressures from the multiple, overlapping roles as clinician/caregiver/investigator/innovator/customer are significant. To address this issue, the Advanced Medical Technology Association (AdvaMed), the nation's largest medical technology association representing medical device and diagnostics companies, developed a Code of Ethics to guide medical technology companies in their interactions with health care professionals. First introduced in 1993, the AdvaMed Code strongly encourages both industry and physicians to commit to openness and high ethical standards in the conduct of their business interactions. The AdvaMed Code addresses many of the types of interactions that can occur between companies and health care professionals, including training, consulting agreements, the provision of demonstration and evaluation units, and charitable donations. By following the Code, companies send a strong message that treatment decisions must always be based on the best interest of the patient. Copyright © 2011. Published by Mosby, Inc.

  8. Computational logic: its origins and applications.

    PubMed

    Paulson, Lawrence C

    2018-02-01

    Computational logic is the use of computers to establish facts in a logical formalism. Originating in nineteenth century attempts to understand the nature of mathematical reasoning, the subject now comprises a wide variety of formalisms, techniques and technologies. One strand of work follows the 'logic for computable functions (LCF) approach' pioneered by Robin Milner, where proofs can be constructed interactively or with the help of users' code (which does not compromise correctness). A refinement of LCF, called Isabelle, retains these advantages while providing flexibility in the choice of logical formalism and much stronger automation. The main application of these techniques has been to prove the correctness of hardware and software systems, but increasingly researchers have been applying them to mathematics itself.

  9. Numerical Analysis of a Rotating Detonation Engine in the Relative Reference Frame

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.

    2014-01-01

    A two-dimensional, computational fluid dynamic (CFD) simulation of a semi-idealized rotating detonation engine (RDE) is described. The simulation operates in the detonation frame of reference and utilizes a relatively coarse grid such that only the essential primary flow field structure is captured. This construction yields rapidly converging, steady solutions. Results from the simulation are compared to those from a more complex and refined code, and found to be in reasonable agreement. The performance impacts of several RDE design parameters are then examined. Finally, for a particular RDE configuration, it is found that direct performance comparison can be made with a straight-tube pulse detonation engine (PDE). Results show that they are essentially equivalent.

  10. Online interactive analysis of protein structure ensembles with Bio3D-web.

    PubMed

    Skjærven, Lars; Jariwala, Shashank; Yao, Xin-Qiu; Grant, Barry J

    2016-11-15

    Bio3D-web is an online application for analyzing the sequence, structure and conformational heterogeneity of protein families. Major functionality is provided for identifying protein structure sets for analysis, their alignment and refined structure superposition, sequence and structure conservation analysis, mapping and clustering of conformations, and the quantitative comparison of their predicted structural dynamics. Bio3D-web is based on the Bio3D and Shiny R packages. All major browsers are supported and full source code is available under a GPL2 license from http://thegrantlab.org/bio3d-web. CONTACT: bjgrant@umich.edu or lars.skjarven@uib.no. © The Author 2016. Published by Oxford University Press.

  11. P80 SRM low torque flex-seal development - thermal and chemical modeling of molding process

    NASA Astrophysics Data System (ADS)

    Descamps, C.; Gautronneau, E.; Rousseau, G.; Daurat, M.

    2009-09-01

    The development of the flex-seal component of the P80 nozzle gave the opportunity to set up new design and manufacturing process methods. Due to the short development lead time required by VEGA program, the usual manufacturing iterative tests work flow, which is usually time consuming, had to be enhanced in order to use a more predictive approach. A newly refined rubber vulcanization description was built up and identified on laboratory samples. This chemical model was implemented in a thermal analysis code. The complete model successfully supports the manufacturing processes. These activities were conducted with the support of ESA/CNES Research & Technologies and DGA (General Delegation for Armament).

  12. Investigation of Cloud Properties and Atmospheric Profiles with MODIS

    NASA Technical Reports Server (NTRS)

    Menzel, Paul; Ackerman, Steve; Moeller, Chris; Gumley, Liam; Strabala, Kathy; Frey, Richard; Prins, Elaine; Laporte, Dan; Wolf, Walter

    1997-01-01

    A major milestone was accomplished with the delivery of all five University of Wisconsin MODIS Level 2 science production software packages to the Science Data Support Team (SDST) for integration. These deliveries were the culmination of months of design and testing, with most of the work focused on tasks peripheral to the actual science contained in the code. LTW hosted a MODIS infrared calibration workshop in September. Considerable progress has been made by MCST, with help from LTW, in refining the calibration algorithm and in identifying and characterizing outstanding problems. Work continues on characterizing the effects of non-blackbody earth surfaces on atmospheric profile retrievals and on modeling radiative transfer through cirrus clouds.

  13. The updated algorithm of the Energy Consumption Program (ECP): A computer model simulating heating and cooling energy loads in buildings

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.; Strain, D. M.; Chai, V. W.; Higgins, S.

    1979-01-01

    The Energy Consumption Program (ECP) was developed to simulate building heating and cooling loads and to compute thermal and electric energy consumption and cost. This article reports on new algorithms and modifications made in an effort to widen the areas of application. The program structure was rewritten accordingly to refine and advance the building model and to further reduce processing time and cost. The program is noted for its very low cost and ease of use compared to other available codes. The accuracy of the computations is not sacrificed, however, since the results are expected to lie within ±10% of actual energy meter readings.

  14. Accurate solutions for transonic viscous flow over finite wings

    NASA Technical Reports Server (NTRS)

    Vatsa, V. N.

    1986-01-01

    An explicit multistage Runge-Kutta type time-stepping scheme is used for solving the three-dimensional, compressible, thin-layer Navier-Stokes equations. A finite-volume formulation is employed to facilitate treatment of complex grid topologies encountered in three-dimensional calculations. Convergence to steady state is expedited through the usage of acceleration techniques. Further numerical efficiency is achieved through vectorization of the computer code. The accuracy of the overall scheme is evaluated by comparing the computed solutions with the experimental data for a finite wing under different test conditions in the transonic regime. A grid refinement study is conducted to estimate the grid requirements for adequate resolution of salient features of such flows.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viktor K. Decyk

    The UCLA work on this grant was to design and help implement an object-oriented version of the GTC code, which is written in Fortran90. The GTC code is the main global gyrokinetic code used in this project, and over the years multiple, incompatible versions have evolved. The reason for this effort is to allow multiple authors to work together on GTC and to simplify future enhancements to GTC. The effort was designed to proceed incrementally. Initially, an upper layer of classes (derived types and methods) was implemented which called the original GTC code 'under the hood.' The derived types pointed to data in the original GTC code, and the methods called the original GTC subroutines. The original GTC code was modified only very slightly. This allowed one to define (and refine) a set of classes which described the important features of the GTC code in a new, more abstract way, with a minimum of implementation. Furthermore, classes could be added one at a time, and at the end of each day, the code continued to work correctly. This work was done in close collaboration with Y. Nishimura from UC Irvine and Stefan Ethier from PPPL. Ten classes were ultimately defined and implemented: gyrokinetic and drift kinetic particles, scalar and vector fields, a mesh, jacobian, FLR, equilibrium, interpolation, and particle species descriptors. In the second stage of this development, some of the scaffolding was removed. The constructors in the class objects now allocated the data, and the array data in the original GTC code was removed. This isolated the components and allowed multiple instantiations of the objects to be created, in particular, multiple ion species. Again, the work was done incrementally, one class at a time, so that the code was always working properly. This work was done in close collaboration with Y. Nishimura and W. Zhang from UC Irvine and Stefan Ethier from PPPL. The third stage of this work was to integrate the capabilities of the various versions of the GTC code into one flexible and extensible version. To do this, we developed a methodology to implement Design Patterns in Fortran90. Design Patterns are abstract solutions to generic programming problems, which allow one to handle increased complexity. This work was done in collaboration with Henry Gardner, a computer scientist (and former plasma physicist) from the Australian National University. As an example, the Strategy Pattern is being used in GTC to support multiple solvers, as sketched below. This new code is currently being used in the study of energetic particles. A document describing the evolution of the GTC code to this new object-oriented version is available to users of GTC.
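
    To illustrate the Strategy Pattern mentioned above (shown as a sketch in Python rather than Fortran90; the class and method names are hypothetical, not GTC's actual API):

    ```python
    # Strategy pattern sketch: the simulation delegates to an interchangeable
    # solver object, so new field solvers can be added without touching the
    # driver. Names are hypothetical, not GTC's actual API.
    from abc import ABC, abstractmethod

    class FieldSolver(ABC):
        @abstractmethod
        def solve(self, charge_density):
            ...

    class SpectralSolver(FieldSolver):
        def solve(self, charge_density):
            return [0.5 * r for r in charge_density]    # placeholder physics

    class MultigridSolver(FieldSolver):
        def solve(self, charge_density):
            return [0.25 * r for r in charge_density]   # placeholder physics

    class Simulation:
        def __init__(self, solver: FieldSolver):
            self.solver = solver                        # strategy is injected

        def step(self, charge_density):
            return self.solver.solve(charge_density)

    sim = Simulation(SpectralSolver())                  # swap strategies freely
    print(sim.step([1.0, 2.0, 3.0]))
    ```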

  16. Orion Service Module Reaction Control System Plume Impingement Analysis Using PLIMP/RAMP2

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen; Lumpkin, Forrest E., III; Gati, Frank; Yuko, James R.; Motil, Brian J.

    2009-01-01

    The Orion Crew Exploration Vehicle Service Module Reaction Control System engine plume impingement was computed using the plume impingement program (PLIMP). PLIMP uses the plume solution from RAMP2, which is the refined version of the reacting and multiphase program (RAMP) code. The heating rate and pressure (force and moment) on surfaces or components of the Service Module were computed. The RAMP2 solution of the flow field inside the engine and the plume was compared with those computed using GASP, a computational fluid dynamics code, showing reasonable agreement. The computed heating rate and pressure using PLIMP were compared with the Reaction Control System plume model (RPM) solution and the plume impingement dynamics (PIDYN) solution. RPM uses the GASP-based plume solution, whereas PIDYN uses the SCARF plume solution. Three sets of the heating rate and pressure solutions agree well. Further thermal analysis on the avionic ring of the Service Module was performed using MSC Patran/Pthermal. The obtained temperature results showed that thermal protection is necessary because of significant heating from the plume.

  17. Principles of protein folding--a perspective from simple exact models.

    PubMed Central

    Dill, K. A.; Bromberg, S.; Yue, K.; Fiebig, K. M.; Yee, D. P.; Thomas, P. D.; Chan, H. S.

    1995-01-01

    General principles of protein structure, stability, and folding kinetics have recently been explored in computer simulations of simple exact lattice models. These models represent protein chains at a rudimentary level, but they involve few parameters, approximations, or implicit biases, and they allow complete explorations of conformational and sequence spaces. Such simulations have resulted in testable predictions that are sometimes unanticipated: The folding code is mainly binary and delocalized throughout the amino acid sequence. The secondary and tertiary structures of a protein are specified mainly by the sequence of polar and nonpolar monomers. More specific interactions may refine the structure, rather than dominate the folding code. Simple exact models can account for the properties that characterize protein folding: two-state cooperativity, secondary and tertiary structures, and multistage folding kinetics--fast hydrophobic collapse followed by slower annealing. These studies suggest the possibility of creating "foldable" chain molecules other than proteins. The encoding of a unique compact chain conformation may not require amino acids; it may require only the ability to synthesize specific monomer sequences in which at least one monomer type is solvent-averse. PMID:7613459
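
    A minimal sketch of the kind of 'simple exact model' described: exhaustive enumeration of a short HP (hydrophobic/polar) chain on a 2D square lattice, scoring -1 per non-bonded H-H contact; the sequence is hypothetical:

    ```python
    MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def conformations(n):
        """Enumerate all self-avoiding walks of n monomers on a square lattice."""
        def extend(walk):
            if len(walk) == n:
                yield tuple(walk)
                return
            x, y = walk[-1]
            for dx, dy in MOVES:
                nxt = (x + dx, y + dy)
                if nxt not in walk:
                    walk.append(nxt)
                    yield from extend(walk)
                    walk.pop()
        yield from extend([(0, 0), (1, 0)])   # fix the first bond to remove symmetry

    def energy(conf, seq):
        """-1 per lattice contact between two H monomers not adjacent in sequence."""
        pos = {p: i for i, p in enumerate(conf)}
        e = 0
        for i, (x, y) in enumerate(conf):
            for dx, dy in MOVES:
                j = pos.get((x + dx, y + dy))
                if j is not None and j > i + 1 and seq[i] == seq[j] == "H":
                    e -= 1
        return e

    seq = "HPHPPHHPH"                          # hypothetical binary folding code
    best = min(conformations(len(seq)), key=lambda c: energy(c, seq))
    print(energy(best, seq), best)
    ```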

  18. Mitochondrial DNA haplogroup phylogeny of the dog: Proposal for a cladistic nomenclature.

    PubMed

    Fregel, Rosa; Suárez, Nicolás M; Betancor, Eva; González, Ana M; Cabrera, Vicente M; Pestano, José

    2015-05-01

    Canis lupus familiaris mitochondrial DNA analysis has increased in recent years, not only for the purpose of deciphering dog domestication but also for forensic genetic studies or breed characterization. The resultant accumulation of data has increased the need for a normalized and phylogenetic-based nomenclature like those provided for human maternal lineages. Although a standardized classification has been proposed, haplotype names within clades have been assigned gradually without considering the evolutionary history of dog mtDNA. Moreover, this classification is based only on the D-loop region, proven to be insufficient for phylogenetic purposes due to its high number of recurrent mutations and the lack of relevant information present in the coding region. In this study, we design 1) a refined mtDNA cladistic nomenclature from a phylogenetic tree based on complete sequences, classifying dog maternal lineages into haplogroups defined by specific diagnostic mutations, and 2) a coding region SNP analysis that allows a more accurate classification into haplogroups when combined with D-loop sequencing, thus improving the phylogenetic information obtained in dog mitochondrial DNA studies. Copyright © 2015 Elsevier B.V. All rights reserved.
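
    Conceptually, haplogroup assignment from coding-region SNPs is a lookup against sets of diagnostic mutations; a minimal sketch with hypothetical positions and haplogroup names:

    ```python
    # Hypothetical diagnostic-mutation table; real haplogroup definitions and
    # positions come from the phylogenetic tree of complete mtDNA sequences.
    DIAGNOSTIC = {
        "A": {(2701, "G"), (5367, "T")},
        "B": {(8368, "C")},
        "C": {(1243, "A"), (9911, "G")},
    }

    def assign_haplogroup(observed_snps):
        """Return the haplogroup whose diagnostic mutations are all observed."""
        matches = [hg for hg, snps in DIAGNOSTIC.items()
                   if snps <= observed_snps]
        return matches[0] if len(matches) == 1 else None

    sample = {(2701, "G"), (5367, "T"), (4040, "C")}   # hypothetical sample
    print(assign_haplogroup(sample))                   # -> "A"
    ```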

  19. Grid Convergence for Turbulent Flows (Invited)

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Rumsey, Christopher L.; Schwoppe, Axel

    2015-01-01

    A detailed grid convergence study has been conducted to establish accurate reference solutions corresponding to the one-equation linear eddy-viscosity Spalart-Allmaras turbulence model for two-dimensional turbulent flows around the NACA 0012 airfoil and a flat plate. The study involved three widely used codes, CFL3D (NASA), FUN3D (NASA), and TAU (DLR), and families of uniformly refined structured grids that differ in their grid density patterns. Solutions computed by different codes on different grid families appear to converge to the same continuous limit, but exhibit different convergence characteristics. The grid resolution in the vicinity of geometric singularities, such as a sharp trailing edge, is found to be the major factor affecting the accuracy and convergence of discrete solutions, more prominent than differences in discretization schemes and/or grid elements. The results reported for these relatively simple turbulent flows demonstrate that CFL3D, FUN3D, and TAU solutions are very accurate on the finest grids used in the study, but even those grids are not sufficient to conclusively establish an asymptotic convergence order.
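
    The "asymptotic convergence order" discussed above is typically estimated with Richardson-style arithmetic from solutions on three systematically refined grids. A minimal sketch with made-up values (not data from the study):

      import math

      def observed_order(f1, f2, f3, r=2.0):
          """Observed order p from solutions on fine (f1), medium (f2) and coarse (f3)
          grids related by a uniform refinement ratio r."""
          return math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)

      def richardson_extrapolate(f1, f2, p, r=2.0):
          """Estimate of the continuum (h -> 0) value from the two finest grids."""
          return f1 + (f1 - f2) / (r**p - 1.0)

      f3, f2, f1 = 0.02862, 0.02871, 0.02874  # e.g. a drag coefficient, coarse -> fine
      p = observed_order(f1, f2, f3)
      print(f"observed order ~ {p:.2f}; extrapolated value ~ {richardson_extrapolate(f1, f2, p):.5f}")

    When the grids are not yet in the asymptotic range, the observed order p drifts away from the scheme's formal order, which is exactly the difficulty the study reports on its finest grids.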

  20. APPRIS 2017: principal isoforms for multiple gene sets

    PubMed Central

    Rodriguez-Rivas, Juan; Di Domenico, Tomás; Vázquez, Jesús; Valencia, Alfonso

    2018-01-01

    Abstract The APPRIS database (http://appris-tools.org) uses protein structural and functional features and information from cross-species conservation to annotate splice isoforms in protein-coding genes. APPRIS selects a single protein isoform, the ‘principal’ isoform, as the reference for each gene based on these annotations. A single main splice isoform reflects the biological reality for most protein-coding genes, and APPRIS principal isoforms are the best predictors of these main protein isoforms. Here, we present the updates to the database, new developments that include the addition of three new species (chimpanzee, Drosophila melanogaster and Caenorhabditis elegans), the expansion of APPRIS to cover the RefSeq gene set and the UniProtKB proteome for six species, and refinements in the core methods that make up the annotation pipeline. In addition, APPRIS now provides a measure of reliability for individual principal isoforms and updates with each release of the GENCODE/Ensembl and RefSeq reference sets. The individual GENCODE/Ensembl, RefSeq and UniProtKB reference gene sets for six organisms have been merged to produce common sets of splice variants. PMID:29069475

  1. Comparative simulations of microjetting using atomistic and continuous approaches in presence of viscosity and surface tension

    NASA Astrophysics Data System (ADS)

    Durand, Olivier; Soulard, Laurent; Jaouen, Stephane; Heuze, Olivier; Colombet, Laurent; Cieren, Emmanuel

    2017-06-01

    We compare, at similar scales, the processes of microjetting and ejecta production from shocked roughened metal surfaces using atomistic and continuous approaches. The atomistic approach is based on very large scale molecular dynamics (MD) simulations. The continuous approach is based on Eulerian hydrodynamics simulations with adaptive mesh refinement; the simulations take into account the effects of viscosity and surface tension, and they use an equation of state calculated from the MD simulations. The microjetting is generated by shock-loading a three-dimensional tin crystal above its melting point, with an initial sinusoidal free-surface perturbation, the crystal being set in contact with a vacuum. Several samples with homothetic wavelengths and amplitudes of defect are simulated in order to investigate the influence of the viscosity and surface tension of the metal. The simulations show that the hydrodynamic code reproduces, with very good agreement, the distributions of ejected mass and velocity along the jet calculated from the MD simulations. Both codes also exhibit a similar phenomenology of fragmentation of the ejected metallic liquid sheets.

  2. Development of a Patient-Reported Outcome Instrument to Evaluate Symptoms of Advanced NSCLC: Qualitative Research and Content Validity of the Non-Small Cell Lung Cancer Symptom Assessment Questionnaire (NSCLC-SAQ)

    PubMed Central

    Atkinson, Thomas M.; DeBusk, Kendra P.A.; Liepa, Astra M.; Scanlon, Michael; Coons, Stephen Joel

    2016-01-01

    PURPOSE To describe the process and results of the preliminary qualitative development of a new symptom-based PRO measure intended to assess treatment benefit in advanced non-small cell lung cancer (NSCLC) clinical trials. METHODS Individual qualitative interviews were conducted with adult NSCLC (Stage I–IV) patients in the US. Experienced interviewers conducted concept elicitation (CE) and cognitive interviews using semi-structured interview guides. The CE interview guide was used to elicit spontaneous reports of symptom experiences along with probing to further explore and confirm concepts. Interview transcripts were coded and analyzed by professional qualitative coders using Atlas.ti software, and were summarized by like-content using an iterative coding framework. Data from the CE interviews were considered alongside existing literature and clinical expert opinion during an item-generation process, leading to development of a preliminary version of the NSCLC Symptom Assessment Questionnaire (NSCLC-SAQ). Three waves of cognitive interviews were conducted to evaluate concept relevance, item interpretability, and structure of the draft items to facilitate further instrument refinement. FINDINGS Fifty-one patients (mean age 64.9 [SD=11.2]; 51.0% female) participated in the CE interviews. A total of 1,897 expressions of NSCLC-related symptoms were identified and coded in interview transcripts, representing approximately 42 distinct symptom concepts. A 9-item initial draft instrument was developed for testing in three waves of cognitive interviews with additional NSCLC patients (n=20), during which both paper and electronic versions of the instrument were evaluated and refined. Participant responses and feedback during cognitive interviews led to the removal of 2 items and substantial modifications to others. IMPLICATIONS The NSCLC-SAQ is a 7-item PRO measure intended for use in advanced NSCLC clinical trials to support medical product labelling. The NSCLC-SAQ uses a 7-day recall period and verbal rating scales. It was developed in accordance with the FDA’s PRO Guidance and scientific best practices, and the resulting qualitative interview data provide evidence of content validity. The NSCLC-SAQ has been prepared in both paper and electronic administration formats and a tablet computer-based version is currently undergoing quantitative testing to confirm its measurement properties and support FDA qualification. PMID:27041408

  3. Influence of patients' socioeconomic status on clinical management decisions: a qualitative study.

    PubMed

    Bernheim, Susannah M; Ross, Joseph S; Krumholz, Harlan M; Bradley, Elizabeth H

    2008-01-01

    Little is known about how patients' socioeconomic status (SES) influences physicians' clinical management decisions, although this information may have important implications for understanding inequities in health care quality. We investigated physician perspectives on how patients' SES influences care. The study consisted of in-depth semistructured interviews with primary care physicians in Connecticut. Investigators coded interviews line by line and refined the coding structure and interview guide based on successive interviews. Recurrent themes emerged through iterative analysis of codes and tagged quotations. We interviewed 18 physicians from varied practice settings, 6 female, 9 from minority racial backgrounds, and 3 of Hispanic ethnicity. Four themes emerged from our interviews: (1) physicians held conflicting views about the effect of patient SES on clinical management, (2) physicians believed that changes in clinical management based on the patient's SES were made in the patient's interest, (3) physicians varied in the degree to which they thought changes in clinical management influenced patient outcomes, and (4) physicians faced personal and financial strains when caring for patients of low SES. Physicians indicated that patient SES did affect their clinical management decisions. As a result, physicians commonly undertook changes to their management plan in an effort to enhance patient outcomes, but they experienced numerous strains when trying to balance what they believed was feasible for the patient with what they perceived as established standards of care.

  4. A systematic review of validated methods for identifying pulmonary fibrosis and interstitial lung disease using administrative and claims data.

    PubMed

    Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's Mini-Sentinel pilot program initially aimed to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of pulmonary fibrosis and interstitial lung disease. PubMed and Iowa Drug Information Service Web searches were conducted to identify citations applicable to the pulmonary fibrosis/interstitial lung disease HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify pulmonary fibrosis and interstitial lung disease, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on pulmonary fibrosis and interstitial lung disease algorithms and validation estimates. Only five studies provided codes; none provided validation estimates. Because interstitial lung disease includes a broad spectrum of diseases, including pulmonary fibrosis, the scope of these studies varied, as did the corresponding diagnostic codes used. Research needs to be conducted on designing validation studies to test pulmonary fibrosis and interstitial lung disease algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  5. A systematic review of validated methods for identifying anaphylaxis, including anaphylactic shock and angioneurotic edema, using administrative and claims data.

    PubMed

    Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest from administrative and claims data. This article summarizes the process and findings of the algorithm review of anaphylaxis. PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the anaphylaxis health outcome of interest. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify anaphylaxis and including validation estimates of the coding algorithms. Our search revealed limited literature focusing on anaphylaxis that provided administrative and claims data-based algorithms and validation estimates. Only four studies identified via literature searches provided validated algorithms; however, two additional studies were identified by Mini-Sentinel collaborators and were incorporated. The International Classification of Diseases, Ninth Revision, codes varied, as did the positive predictive value, depending on the cohort characteristics and the specific codes used to identify anaphylaxis. Research needs to be conducted on designing validation studies to test anaphylaxis algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  6. Validation and Refinement of a Pain Information Model from EHR Flowsheet Data.

    PubMed

    Westra, Bonnie L; Johnson, Steven G; Ali, Samira; Bavuso, Karen M; Cruz, Christopher A; Collins, Sarah; Furukawa, Meg; Hook, Mary L; LaFlamme, Anne; Lytle, Kay; Pruinelli, Lisiane; Rajchel, Tari; Settergren, Theresa Tess; Westman, Kathryn F; Whittenburg, Luann

    2018-01-01

    Secondary use of electronic health record (EHR) data can reduce the costs of research and quality reporting. However, EHR data must be consistent within and across organizations. Flowsheet data provide a rich source of interprofessional data and represent a high volume of documentation; however, their content is not standardized. Health care organizations design and implement customized content for different care areas, creating duplicative data that are not comparable. In a prior study, 10 information models (IMs) were derived from an EHR that included 2.4 million patients. There was a need to evaluate the generalizability of the models across organizations. The pain IM was selected for evaluation and refinement because pain is a commonly occurring problem associated with high costs for pain management. The purpose of our study was to validate and further refine a pain IM from EHR flowsheet data that standardizes pain concepts, definitions, and associated value sets for assessments, goals, interventions, and outcomes. A retrospective observational study was conducted using an iterative consensus-based approach to map, analyze, and evaluate data from 10 organizations. The aggregated metadata from the EHRs of 8 large health care organizations and the design build in 2 additional organizations represented flowsheet data from 6.6 million patients, 27 million encounters, and 683 million observations. The final pain IM has 30 concepts, 4 panels (classes), and 396 value set items. Results are built on Logical Observation Identifiers Names and Codes (LOINC) pain assessment terms and identify the need for additional terms to support interoperability. The resulting pain IM is a consensus model based on actual EHR documentation in the participating health systems. The IM captures the most important concepts related to pain. Schattauer GmbH Stuttgart.

  7. Thermo-mechanically coupled subduction with a free surface using ASPECT

    NASA Astrophysics Data System (ADS)

    Fraters, Menno; Glerum, Anne; Thieulot, Cedric; Spakman, Wim

    2014-05-01

    ASPECT (Kronbichler et al., 2012), short for Advanced Solver for Problems in Earth's ConvecTion, is a new finite element code, originally designed for thermally driven (mantle) convection, that is built on state-of-the-art numerical methods (adaptive mesh refinement, linear and nonlinear solvers, stabilization of transport-dominated processes, and high scalability on multiple processors). Here we present an application of ASPECT to the modeling of fully thermo-mechanically coupled subduction. Our subduction model contains three different compositions: a crustal composition on top of both the subducting slab and the overriding plate, a mantle composition, and a sticky-air composition, which allows a free surface to be simulated for modeling topography build-up. We implemented a visco-plastic rheology using frictional plasticity and a composite viscosity defined by diffusion and dislocation creep. The lithospheric mantle has the same composition as the mantle but a higher viscosity because of its lower temperature. The temperature field is implemented in ASPECT as follows: a linear temperature gradient for the lithosphere and an adiabatic geotherm for the sublithospheric mantle. The initial slab temperature is defined using the analytical solution of McKenzie (1970). The plates can be pushed from the sides of the model, and it is possible to define an additional independent mantle in/outflow through the boundaries. We will show a preliminary set of models highlighting the code's capabilities, such as adaptive mesh refinement, topography development, and the influence of mantle flow on the subduction evolution. Kronbichler, M., Heister, T., and Bangerth, W. (2012), High accuracy mantle convection simulation through modern numerical methods, Geophysical Journal International, 191, 12-29, doi:10.1111/j.1365-246X.2012.05609. McKenzie, D.P. (1970), Temperature and potential temperature beneath island arcs, Tectonophysics, 10, 357-366, doi:10.1016/0040-1951(70)90115-0.
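
    The composite diffusion/dislocation-creep viscosity mentioned above is commonly implemented as a harmonic average of the two creep laws, so the weaker mechanism controls the effective viscosity. Below is a minimal generic sketch of that idea; the functional form is the one standard in mantle-convection modeling, but the parameter values are illustrative and none of this reflects ASPECT's actual input format.

      import numpy as np

      R = 8.314  # gas constant, J/(mol K)

      def creep_viscosity(A, n, E, strain_rate, T):
          """Viscosity of a single creep mechanism:
          eta = 0.5 * A**(-1/n) * strain_rate**((1-n)/n) * exp(E / (n*R*T))."""
          return 0.5 * A ** (-1.0 / n) * strain_rate ** ((1.0 - n) / n) * np.exp(E / (n * R * T))

      def composite_viscosity(strain_rate, T):
          # Illustrative prefactors and activation energies, not ASPECT input values.
          eta_diff = creep_viscosity(A=1e-9, n=1.0, E=3.0e5, strain_rate=strain_rate, T=T)
          eta_disl = creep_viscosity(A=1e-16, n=3.5, E=5.4e5, strain_rate=strain_rate, T=T)
          return 1.0 / (1.0 / eta_diff + 1.0 / eta_disl)  # harmonic mean: weakest mechanism wins

      print(f"effective viscosity at 1600 K: {composite_viscosity(1e-15, 1600.0):.2e} Pa s")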

  8. Gamma-Ray Burst Dynamics and Afterglow Radiation from Adaptive Mesh Refinement, Special Relativistic Hydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    De Colle, Fabio; Granot, Jonathan; López-Cámara, Diego; Ramirez-Ruiz, Enrico

    2012-02-01

    We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^(-k), bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.
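
    The "simple analytical model based on energy conservation" invoked above can be sketched in a few lines; this is our reading of the standard argument, not the paper's exact derivation. For an ambient density profile ρ = A r^(-k), equating the blast-wave energy to the kinetic energy of the swept-up mass gives, in LaTeX notation,

      E \sim \Gamma^2 M(R)\, c^2, \qquad M(R) = \int_0^R 4\pi r^2 \, A r^{-k} \, dr = \frac{4\pi A}{3-k}\, R^{3-k},

    so the Lorentz factor of the shocked shell scales as

      \Gamma(R) \propto \left[ \frac{(3-k)\, E}{4\pi A c^2 R^{3-k}} \right]^{1/2} \propto R^{-(3-k)/2},

    and the flow becomes Newtonian (Γ ~ 1) around the generalized Sedov length l \sim [(3-k) E / (4\pi A c^2)]^{1/(3-k)}, which is the reference scale against which the delayed deceleration reported above is measured.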

  9. 3D CSEM inversion based on goal-oriented adaptive finite element method

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Key, K.

    2016-12-01

    We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method, which efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt a dual-grid approach in which the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent of the fine meshing generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multiple-scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the mapping of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and the potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled-source EM data generated for a complex 3D offshore model with significant seafloor topography.
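
    The core of a regularized Gauss-Newton (Occam-style) update like the one described above can be written in a few lines. The sketch below is generic and uses a linear toy problem; the function names are ours, not MARE3DEM's interface, and a full Occam scheme would also sweep the trade-off parameter lam rather than fixing it.

      import numpy as np

      def gauss_newton_step(m, d_obs, forward, J, L, lam):
          """One regularized Gauss-Newton update for min ||d - F(m)||^2 + lam*||L m||^2."""
          r = d_obs - forward(m)              # data residual
          A = J.T @ J + lam * (L.T @ L)       # normal-equations matrix
          g = J.T @ r - lam * (L.T @ L) @ m   # right-hand side of the linearized system
          return m + np.linalg.solve(A, g)

      # Toy usage: recover a smooth 1-D "conductivity" profile from linear data.
      n = 50
      L = np.eye(n) - np.eye(n, k=1)          # first-difference roughness operator
      J = np.random.default_rng(0).normal(size=(80, n))
      m_true = np.sin(np.linspace(0, np.pi, n))
      d_obs = J @ m_true
      m = np.zeros(n)
      for _ in range(5):
          m = gauss_newton_step(m, d_obs, lambda x: J @ x, J, L, lam=1.0)
      print("rms data misfit:", np.sqrt(np.mean((J @ m - d_obs) ** 2)))

    In the real non-linear problem the Jacobian J is recomputed at each iterate from the adjoint-reciprocity sensitivities, and the dense algebra above is what the paper distributes with ScaLAPACK.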

  10. Developing a patient-centered outcome measure for complementary and alternative medicine therapies II: Refining content validity through cognitive interviews

    PubMed Central

    2011-01-01

    Background Available measures of patient-reported outcomes for complementary and alternative medicine (CAM) inadequately capture the range of patient-reported treatment effects. The Self-Assessment of Change questionnaire was developed to measure multi-dimensional shifts in well-being for CAM users. With content derived from patient narratives, items were subsequently focused through interviews with a new cohort of participants. Here we present the development of the final version, in which the content and format are refined through cognitive interviews. Methods We conducted cognitive interviews across five iterations of questionnaire refinement with a culturally diverse sample of 28 CAM users. In each iteration, participant critiques were used to revise the questionnaire, which was then re-tested in subsequent rounds of cognitive interviews. Following all five iterations, transcripts of cognitive interviews were systematically coded and analyzed to examine participants' understanding of the format and content of the final questionnaire. Based on these data, we established summary descriptions and selected exemplar quotations for each word pair on the final questionnaire. Results The final version of the Self-Assessment of Change questionnaire (SAC) includes 16 word pairs, nine of which remained unchanged from the original draft. Participants consistently said that these stable word pairs represented opposite ends of the same domain of experience, and the meanings of these terms were stable across the participant pool. Five pairs underwent revision and two word pairs were added. Four word pairs were eliminated for redundancy or because participants did not agree on the meaning of the terms. Cognitive interviews indicate that participants understood the format of the questionnaire and considered each word pair to represent opposite poles of a shared domain of experience. Conclusions We have placed lay language and direct experience at the center of questionnaire revision and refinement. In so doing, we provide an innovative model for the development of truly patient-centered outcome measures. Although this instrument was designed and tested in a CAM-specific population, it may be useful in assessing multi-dimensional shifts in well-being across a broader patient population. PMID:22206409

  11. Simulating Space Capsule Water Landing with Explicit Finite Element Method

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Lyle, Karen H.

    2007-01-01

    A study of using an explicit nonlinear dynamic finite element code for simulating the water landing of a space capsule was performed. The finite element model contains Lagrangian shell elements for the space capsule and Eulerian solid elements for the water and air. An Arbitrary Lagrangian Eulerian (ALE) solver and a penalty coupling method were used for predicting the fluid and structure interaction forces. The space capsule was first assumed to be rigid, so the numerical results could be correlated with closed form solutions. The water and air meshes were continuously refined until the solution was converged. The converged maximum deceleration predicted is bounded by the classical von Karman and Wagner solutions and is considered to be an adequate solution. The refined water and air meshes were then used in the models for simulating the water landing of a capsule model that has a flexible bottom. For small pitch angle cases, the maximum deceleration from the flexible capsule model was found to be significantly greater than the maximum deceleration obtained from the corresponding rigid model. For large pitch angle cases, the difference between the maximum deceleration of the flexible model and that of its corresponding rigid model is smaller. Test data of Apollo space capsules with a flexible heat shield qualitatively support the findings presented in this paper.

  12. Development of evaluation models of manpower needs for dismantling the dry conversion process-related equipment in uranium refining and conversion plant (URCP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sari Izumo; Hideo Usui; Mitsuo Tachibana

    Evaluation models for determining the manpower needs for dismantling various types of equipment in a uranium refining and conversion plant (URCP) have been developed. The models are widely applicable to other uranium handling facilities. Additionally, a simplified model was developed for easily and accurately calculating the manpower needs for dismantling dry conversion process-related equipment (DP equipment). It is important to evaluate project management data such as manpower needs beforehand to prepare an optimized decommissioning plan and implement effective dismantling activity. The Japan Atomic Energy Agency (JAEA) has developed the project management data evaluation system for dismantling activities (PRODIA code), which can generate project management data using evaluation models. For preparing an optimized decommissioning plan, these evaluation models should be established based on the type of nuclear facility and actual dismantling data. In URCP, the dry conversion process of reprocessed uranium and others was operated until 1999, and the equipment related to the main process was dismantled from 2008 to 2011. Actual data such as manpower for dismantling were collected during the dismantling activities, and evaluation models were developed using the collected actual data on the basis of an equipment classification that considers the characteristics of uranium handling facilities.

  13. Pre-test of questions on health-related resource use and expenditure, using behaviour coding and cognitive interviewing techniques.

    PubMed

    Chernyak, Nadja; Ernsting, Corinna; Icks, Andrea

    2012-09-06

    Validated instruments collecting data on health-related resource use are lacking, but required, for example, to investigate predictors of healthcare use or for health economic evaluation. The objective of the study was to develop, test and refine a questionnaire collecting data on health-related resource use and expenditure in patients with diabetes. The questionnaire was tested in 43 patients with diabetes mellitus types 1 and 2 in Germany. Response behaviour suggestive of problems with questions (item non-response, request for clarification, comments, inadequate answer, "don't know") was systematically registered. Cognitive interviews focusing on information retrieval and comprehension problems were carried out. Many participants had difficulties answering questions pertaining to frequency of visits to the general practitioner (26%), time spent receiving healthcare services (39%), regular medication currently taken (35%) and out of pocket expenditure on medication (42%). These difficulties seem to result mainly from poor memory. A number of comprehension problems were established and relevant questions were revised accordingly. The questionnaire on health-related resource use and expenditure for use in diabetes research in Germany was developed and refined after careful testing. Ideally, the questionnaire should be externally validated for different modes of administration and recall periods within a variety of populations.

  14. Use of Dynamic Models and Operational Architecture to Solve Complex Navy Challenges

    NASA Technical Reports Server (NTRS)

    Grande, Darby; Black, J. Todd; Freeman, Jared; Sorber, Tim; Serfaty, Daniel

    2010-01-01

    The United States Navy established 8 Maritime Operations Centers (MOC) to enhance the command and control of forces at the operational level of warfare. Each MOC is a headquarters manned by qualified joint operational-level staffs and enabled by globally interoperable C4I systems. To assess and refine MOC staffing, equipment, and schedules, a dynamic software model was developed. The model leverages pre-existing operational process architecture, joint military task lists that define activities and their precedence relations, as well as Navy documents that specify manning and roles per activity. The software model serves as a "computational wind tunnel" in which to test a MOC on a mission and to refine its structure, staffing, processes, and schedules. More generally, the model supports resource allocation decisions concerning Doctrine, Organization, Training, Material, Leadership, Personnel and Facilities (DOTMLPF) at MOCs around the world. A rapid prototyping effort efficiently produced this software in less than five months, using an integrated process team consisting of MOC military and civilian staff, modeling experts, and software developers. The work reported here was conducted for Commander, United States Fleet Forces Command in Norfolk, Virginia, code N5-OLW (Operational Level of War), which facilitates the identification, consolidation, and prioritization of MOC capabilities requirements, and the implementation and delivery of MOC solutions.

  15. Refinement of Strut-and-Tie Model for Reinforced Concrete Deep Beams

    PubMed Central

    Panjehpour, Mohammad; Chai, Hwa Kian; Voo, Yen Lei

    2015-01-01

    Deep beams are commonly used in tall buildings, offshore structures, and foundations. According to many codes and standards, the strut-and-tie model (STM) is recommended as a rational approach for deep beam analysis. This research focuses on the STM recommended by ACI 318-11 and AASHTO LRFD and uses experimental results to modify the strut effectiveness factor in the STM for reinforced concrete (RC) deep beams. This study aims to refine the STM through the strut effectiveness factor and increase result accuracy. Six RC deep beams with different shear span to effective-depth ratios (a/d) of 0.75, 1.00, 1.25, 1.50, 1.75, and 2.00 were experimentally tested under a four-point bending set-up. The ultimate shear strength of the deep beams obtained from non-linear finite element modeling and the STM recommended by ACI 318-11 as well as AASHTO LRFD (2012) was compared with the experimental results. An empirical equation was proposed to modify the principal tensile strain value in the bottle-shaped strut of deep beams. The equation for the strut effectiveness factor from AASHTO LRFD was then modified through the aforementioned empirical equation. An investigation of the failure mode and crack propagation in RC deep beams subjected to load was also conducted. PMID:26110268
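
    For context, the AASHTO LRFD strut effectiveness factor that the study modifies limits the compressive stress in a bottle-shaped strut as a function of the principal tensile strain. The sketch below states the code equation in its commonly cited form (f'c in MPa); treat it as a reference sketch and verify against the current edition and the paper's modified equation before any design use.

      import math

      def strut_effective_stress(fc, eps_s, alpha_s_deg):
          """Limiting strut stress: f_cu = f'c / (0.8 + 170*eps1) <= 0.85*f'c,
          with eps1 = eps_s + (eps_s + 0.002) * cot^2(alpha_s), where eps_s is
          the tie tensile strain and alpha_s the strut-to-tie angle."""
          cot2 = 1.0 / math.tan(math.radians(alpha_s_deg)) ** 2
          eps1 = eps_s + (eps_s + 0.002) * cot2
          return min(fc / (0.8 + 170.0 * eps1), 0.85 * fc)

      # Illustrative numbers: 40 MPa concrete, tie strain 0.001, strut at 35 degrees.
      print(f"f_cu = {strut_effective_stress(40.0, 0.001, 35.0):.1f} MPa")

    The flatter the strut (smaller alpha_s), the larger the principal tensile strain eps1 and the lower the usable strut stress, which is why shear span to depth ratio drives the experimental program above.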

  16. Development and validation of a registry-based definition of eosinophilic esophagitis in Denmark

    PubMed Central

    Dellon, Evan S; Erichsen, Rune; Pedersen, Lars; Shaheen, Nicholas J; Baron, John A; Sørensen, Henrik T; Vyberg, Mogens

    2013-01-01

    AIM: To develop and validate a case definition of eosinophilic esophagitis (EoE) in the linked Danish health registries. METHODS: For case definition development, we queried the Danish medical registries from 2006-2007 to identify candidate cases of EoE in Northern Denmark. All International Classification of Diseases-10 (ICD-10) and prescription codes were obtained, and archived pathology slides were obtained and re-reviewed to determine case status. We used an iterative process to select inclusion/exclusion codes, refine the case definition, and optimize sensitivity and specificity. We then re-queried the registries from 2008-2009 to yield a validation set. The case definition algorithm was applied, and sensitivity and specificity were calculated. RESULTS: Of the 51 and 49 candidate cases identified in both the development and validation sets, 21 and 24 had EoE, respectively. Characteristics of EoE cases in the development set [mean age 35 years; 76% male; 86% dysphagia; 103 eosinophils per high-power field (eos/hpf)] were similar to those in the validation set (mean age 42 years; 83% male; 67% dysphagia; 77 eos/hpf). Re-review of archived slides confirmed that the pathology coding for esophageal eosinophilia was correct in greater than 90% of cases. Two registry-based case algorithms based on pathology, ICD-10, and pharmacy codes were successfully generated in the development set, one that was sensitive (90%) and one that was specific (97%). When these algorithms were applied to the validation set, they remained sensitive (88%) and specific (96%). CONCLUSION: Two registry-based definitions, one highly sensitive and one highly specific, were developed and validated for the linked Danish national health databases, making future population-based studies feasible. PMID:23382628
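
    The sensitivity and specificity figures quoted above follow from simple confusion-matrix arithmetic once each candidate case has been adjudicated against the pathology re-review. A minimal sketch with made-up counts (not the study's data):

      def validation_metrics(tp, fp, fn, tn):
          """Standard case-definition validation metrics."""
          return {
              "sensitivity": tp / (tp + fn),  # share of true cases the algorithm captures
              "specificity": tn / (tn + fp),  # share of non-cases correctly excluded
              "ppv": tp / (tp + fp),          # chance a flagged record is a true case
          }

      print(validation_metrics(tp=21, fp=2, fn=3, tn=23))

    Keeping one sensitive algorithm and one specific algorithm, as the study does, lets future users trade completeness of case capture against contamination by false positives.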

  17. GRASP/Ada 95: Reverse Engineering Tools for Ada

    NASA Technical Reports Server (NTRS)

    Cross, James H., II

    1996-01-01

    The GRASP/Ada project (Graphical Representations of Algorithms, Structures, and Processes for Ada) has successfully created and prototyped an algorithmic-level graphical representation for Ada software, the Control Structure Diagram (CSD), and a new visualization for a fine-grained complexity metric called the Complexity Profile Graph (CPG). By synchronizing the CSD and the CPG, the CSD view of control structure, nesting, and source code is directly linked to the corresponding visualization of statement-level complexity in the CPG. GRASP has been integrated with GNAT, the GNU Ada 95 Translator, to provide a comprehensive graphical user interface and development environment for Ada 95. The user may view, edit, print, and compile source code as a CSD with no discernible addition to storage or computational overhead. The primary impetus for creation of the CSD was to improve the comprehension efficiency of Ada software and, as a result, improve reliability and reduce costs. The emphasis has been on the automatic generation of the CSD from Ada 95 source code to support reverse engineering and maintenance. The CSD has the potential to replace traditional prettyprinted Ada source code. The current update has focused on the design and implementation of a new Motif-compliant user interface, and a new CSD generator consisting of a tagger and renderer. The Complexity Profile Graph (CPG) is based on a set of functions that describes the context, content, and the scaling for complexity on a statement-by-statement basis. When combined graphically, the result is a composite profile of complexity for the program unit. Ongoing research includes the development and refinement of the associated functions, and the development of the CPG generator prototype. The current Version 5.0 prototype provides the capability for the user to generate CSDs and CPGs from Ada 95 source code in a reverse engineering as well as forward engineering mode with a level of flexibility suitable for practical application. This report provides an overview of the GRASP/Ada project with an emphasis on the current update.

  18. Computational logic: its origins and applications

    PubMed Central

    2018-01-01

    Computational logic is the use of computers to establish facts in a logical formalism. Originating in nineteenth century attempts to understand the nature of mathematical reasoning, the subject now comprises a wide variety of formalisms, techniques and technologies. One strand of work follows the ‘logic for computable functions (LCF) approach’ pioneered by Robin Milner, where proofs can be constructed interactively or with the help of users’ code (which does not compromise correctness). A refinement of LCF, called Isabelle, retains these advantages while providing flexibility in the choice of logical formalism and much stronger automation. The main application of these techniques has been to prove the correctness of hardware and software systems, but increasingly researchers have been applying them to mathematics itself. PMID:29507522

  19. Numerical simulation of flow in a high head Francis turbine with prediction of efficiency, rotor stator interaction and vortex structures in the draft tube

    NASA Astrophysics Data System (ADS)

    Jošt, D.; Škerlavaj, A.; Morgut, M.; Mežnar, P.; Nobile, E.

    2015-01-01

    The paper presents numerical simulations of flow in a model of a high-head Francis turbine and a comparison of the results with measurements. Numerical simulations were performed with two CFD (Computational Fluid Dynamics) codes, Ansys CFX and OpenFOAM. Steady-state simulations were performed with the k-epsilon and SST models, while for transient simulations the SAS SST ZLES model was used. With proper grid refinement in the distributor and runner, and by taking into account losses in the labyrinth seals, very accurate predictions of the torque on the shaft, head, and efficiency were obtained. The calculated axial and circumferential velocity components on two planes in the draft tube matched the experimental results well.

  20. MODTRAN cloud and multiple scattering upgrades with application to AVIRIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berk, A.; Bernstein, L.S.; Acharya, P.K.

    1998-09-01

    Recent upgrades to the MODTRAN atmospheric radiation code improve the accuracy of its radiance predictions, especially in the presence of clouds and thick aerosols, and for multiple scattering in regions of strong molecular line absorption. The current public-released version of MODTRAN (MODTRAN3.7) features a generalized specification of cloud properties, while the current research version of MODTRAN (MODTRAN4) implements a correlated-k (CK) approach for more accurate calculation of multiple scattered radiance. Comparisons to cloud measurements demonstrate the viability of the CK approach. The impact of these upgrades on predictions for AVIRIS viewing scenarios is discussed for both clear and clouded skies; the CK approach provides refined predictions for AVIRIS nadir and near-nadir viewing.
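
    The idea behind the correlated-k method is that a band-averaged transmittance depends only on the distribution of absorption coefficients within the band, not on their spectral ordering, so the sorted spectrum k(g) can be integrated with a handful of quadrature points instead of line by line. The sketch below illustrates the concept on a synthetic spectrum with a crude five-point quadrature; it is a conceptual toy, not MODTRAN's implementation.

      import numpy as np

      rng = np.random.default_rng(1)
      k_spectrum = np.exp(rng.normal(0.0, 2.0, 100_000))  # absorption coefficients across a band
      u = 0.5                                             # absorber amount along the path

      t_ref = np.mean(np.exp(-k_spectrum * u))            # "line-by-line" reference average

      k_sorted = np.sort(k_spectrum)                      # k(g): cumulative k-distribution
      g_nodes = np.array([0.05, 0.25, 0.50, 0.75, 0.95])  # g-space quadrature points
      weights = np.array([0.10, 0.30, 0.25, 0.25, 0.10])  # crude weights (sum to 1)
      k_nodes = k_sorted[(g_nodes * (k_sorted.size - 1)).astype(int)]
      t_ck = np.sum(weights * np.exp(-k_nodes * u))

      print(f"line-by-line: {t_ref:.3f}   correlated-k (5 points): {t_ck:.3f}")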

  1. In-Pile Sub-Miniature Fission Chambers Testing in BR2

    NASA Astrophysics Data System (ADS)

    Vermeeren, L.; Wéber, M.; Blandin, Ch.; Breaud, S.

    2003-06-01

    Three innovative sub-miniature fission chambers (SMFC), designed and manufactured at the Nuclear Measurement Systems Laboratory (LSMN) of CEA/Cadarache, were extensively tested in the BR2 research reactor at SCK•CEN, Mol. We present the experimental results for the (thermal) neutron sensitivity, the gamma-induced signal, the signal due to activation, the current picked up by the signal cable, the global current/voltage characteristics and the long-term behaviour up to a thermal neutron fluence of 2.7·10²¹ n/cm². We also compare the data with results from calculations with our FCD computer code. The onset of the saturation domain is well predicted by FCD; the neutron sensitivities can be accounted for perfectly after a refinement of the FCD model.

  2. What we were asked to do

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Recommendations are made after 32 interviews, lesson identification, lesson analysis, and mission characteristics identification. The major recommendations are as follows: (1) to develop end-to-end planning and scheduling operations concepts by mission class and to ensure their consideration in system life cycle documentation; (2) to create an organizational infrastructure at the Code 500 level, supported by a Directorate level steering committee with project representation, responsible for systems engineering of end-to-end planning and scheduling systems; (3) to develop and refine mission capabilities to assess impacts of early mission design decisions on planning and scheduling; and (4) to emphasize operational flexibility in the development of the Advanced Space Network, other institutional resources, external (e.g., project) capabilities and resources, operational software and support tools.

  3. DNA octaplex formation with an I-motif of water-mediated A-quartets: reinterpretation of the crystal structure of d(GCGAAAGC).

    PubMed

    Sato, Yoshiteru; Mitomi, Kenta; Sunami, Tomoko; Kondo, Jiro; Takénaka, Akio

    2006-12-01

    The crystal structure of the tetragonal form of d(gcGAAAgc) has been revised and reasonably refined, including the disordered residues. The two DNA strands form a base-intercalated duplex, and four such duplexes are assembled according to the crystallographic 222 symmetry to form an octaplex. In the central region, the eight strands are associated through an I-motif of double A-quartets. Furthermore, eight hydrated magnesium cations link the four duplexes to support the octaplex formation. Based on these structural features, we propose that folding of d(GAAA)n repeats, found in the non-coding regions of genomes, into an octaplex can induce slippage during replication and thereby facilitate length polymorphism.

  4. Composite theory applied to elastomers

    NASA Technical Reports Server (NTRS)

    Clark, S. K.

    1986-01-01

    Reinforced elastomers form the basis for most structural or load-carrying applications of rubber products. Computer-based structural analysis in the form of finite element codes has been highly successful in refining structural design in both isotropic materials and rigid composites. This has led the rubber industry to attempt to make use of such techniques in the design of structural cord-rubber composites. While such efforts appear promising, they have not been easy to carry out, for several reasons. Among these is a distinct lack of a clearly defined set of material property descriptors suitable for computer analysis. There are substantial differences between conventional steel, aluminum, or even rigid composites such as graphite-epoxy, and textile-cord-reinforced rubber. These differences, which are both conceptual and practical, are discussed.

  5. VizieR Online Data Catalog: FARGO_THORIN 1.0 hydrodynamic code (Chrenko+, 2017)

    NASA Astrophysics Data System (ADS)

    Chrenko, O.; Broz, M.; Lambrechts, M.

    2017-07-01

    This archive contains the source files, documentation and example simulation setups of the FARGO_THORIN 1.0 hydrodynamic code. The program was introduced, described and used for simulations in the paper. It is built on top of the FARGO code (Masset, 2000A&AS..141..165M; Baruteau & Masset, 2008ApJ...672.1054B) and it is also interfaced with the REBOUND integrator package (Rein & Liu, 2012A&A...537A.128R). THORIN stands for Two-fluid HydrOdynamics, the Rebound integrator Interface and Non-isothermal gas physics. The program is designed for self-consistent investigations of protoplanetary systems consisting of a gas disk, a disk of small solid particles (pebbles) and embedded protoplanets. Code features: I) Non-isothermal gas disk with an implicit numerical solution of the energy equation; the implemented energy source terms are compressional heating, viscous heating, stellar irradiation, vertical escape of radiation, radiative diffusion in the midplane and radiative feedback to accretion heating of protoplanets. II) Planets evolved in 3D, with close encounters allowed; the orbits are integrated using the IAS15 integrator (Rein & Spiegel, 2015MNRAS.446.1424R), and the code detects collisions among planets and resolves them as mergers. III) Refined treatment of the planet-disk gravitational interaction; the code uses a vertical averaging of the gravitational potential, as outlined in Muller & Kley (2012A&A...539A..18M). IV) Pebble disk represented by an Eulerian, pressureless and inviscid fluid; the pebble dynamics is affected by the Epstein gas drag and optionally by diffusive effects, and we also implemented the drag back-reaction term in the Navier-Stokes equation for the gas. Archive summary:
      /in_relax      Contains the setup of the first example simulation
      /in_wplanet    Contains the setup of the second example simulation
      /srcmain       Contains the source files of FARGO_THORIN
      /src_reb       Contains the source files of the REBOUND integrator package to be linked with THORIN
      GUNGPL3        GNU General Public License, version 3
      LICENSE        License agreement
      README         Simple user's guide
      UserGuide.pdf  Extended user's guide
      refman.pdf     Programmer's guide
    (1 data file).

  6. Dynamic fisheye grids for binary black hole simulations

    NASA Astrophysics Data System (ADS)

    Zilhão, Miguel; Noble, Scott C.

    2014-03-01

    We present a new warped gridding scheme adapted to simulating gas dynamics in binary black hole spacetimes. The grid concentrates grid points in the vicinity of each black hole to resolve the smaller scale structures there, and rarefies grid points away from each black hole to keep the overall problem size at a practical level. In this respect, our system can be thought of as a ‘double’ version of the fisheye coordinate system used before in numerical relativity codes for evolving binary black holes. The gridding scheme is constructed as a mapping between a uniform coordinate system, in which the equations of motion are solved, and the distorted system representing the spatial locations of our grid points. Since we are motivated to eventually use this system for circumbinary disc calculations, we demonstrate how the distorted system can be constructed to asymptote to the typical spherical polar coordinate system, amenable to efficiently simulating orbiting gas flows about central objects with little numerical diffusion. We discuss its implementation in the Harm3d code, tailored to evolve the magnetohydrodynamics equations in curved spacetimes. We evaluate the performance of the system's implementation in Harm3d with a series of tests, such as the advected magnetic field loop test, magnetized Bondi accretion, and evolutions of hydrodynamic discs about a single black hole and about a binary black hole. As with Harm3d, this gridding scheme can be implemented in other unigrid codes as a (possibly) simpler alternative to adaptive mesh refinement.
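
    A one-dimensional caricature of the mapping described above helps make the construction concrete: solve on a uniform computational coordinate xi, and define the physical coordinate x(xi) so that the slope dx/dxi dips near two chosen centers, packing grid points there. The stretching function below is purely illustrative, not the paper's actual fisheye map.

      import numpy as np

      def double_fisheye(xi, centers=(-1.0, 1.0), width=0.3, squeeze=8.0):
          """Monotonic map xi -> x whose slope dx/dxi dips near each center,
          concentrating grid points around the two 'black holes'."""
          dxdxi = np.ones_like(xi)
          for c in centers:
              dxdxi /= 1.0 + squeeze * np.exp(-(((xi - c) / width) ** 2))
          x = np.cumsum(dxdxi)                       # integrate the slope on the uniform grid
          return (x - x.min()) / (x.max() - x.min()) * 10.0 - 5.0  # rescale to [-5, 5]

      xi = np.linspace(-5.0, 5.0, 201)               # uniform computational grid
      x = double_fisheye(xi)
      spacing = np.diff(x)
      print(f"min spacing (near the holes): {spacing.min():.4f}, "
            f"max spacing (far field): {spacing.max():.4f}")

    Because the slope is everywhere positive, the map stays monotonic and invertible, which is what lets the equations of motion be solved entirely in the uniform xi coordinate.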

  7. Non-coding RNAs and plant male sterility: current knowledge and future prospects.

    PubMed

    Mishra, Ankita; Bohra, Abhishek

    2018-02-01

    Recent findings assign functional roles to non-coding (nc) RNA molecules in the regulatory networks that confer male sterility to plants. Male sterility in plants offers a great opportunity for improving crop performance through application of hybrid technology. In this respect, cytoplasmic male sterility (CMS) and sterility induced by photoperiod (PGMS) or temperature (TGMS) have greatly facilitated the development of high-yielding hybrids in crops. The participation of non-coding (nc) RNA molecules in plant reproductive development is becoming increasingly evident. Recent breakthroughs in rice definitively associate ncRNAs with PGMS and TGMS. In the case of CMS, the exact mechanism through which the mitochondrial ORFs exert influence on the development of the male gametophyte remains obscure in several crops. High-throughput sequencing has enabled genome-wide discovery and validation of these regulatory molecules and their target genes, describing their potential roles in relation to CMS. The discovery of an ncRNA localized in plant mtDNA, with its possible implication in CMS induction, is intriguing in this respect. Still, conclusive evidence linking ncRNAs with CMS phenotypes is currently unavailable, demanding complementary genetic approaches such as transgenics to substantiate the preliminary findings. Here, we review the recent literature on the contribution of ncRNAs in conferring male sterility to plants, with an emphasis on microRNAs. Also, we present a perspective on improved understanding of ncRNA-mediated regulatory pathways that control male sterility in plants. A refined understanding of plant male sterility would strengthen the crop hybrid industry's ability to deliver hybrids with improved performance.

  8. White Dwarf Mergers On Adaptive Meshes. I. Methodology And Code Verification

    DOE PAGES

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; ...

    2016-03-02

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first study in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations and the Poisson equation for self-gravity, and couples the gravitational and rotational forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress, and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  9. The descriptive epidemiology of sports/leisure-related heat illness hospitalisations in New South Wales, Australia.

    PubMed

    Finch, Caroline F; Boufous, Soufiane

    2008-01-01

    Sport-related heat illness has not been commonly studied from an epidemiological perspective. This study presents the descriptive epidemiology of sports/leisure-related heat illness hospitalisations in New South Wales, Australia. All in-patient separations from all acute hospitals in NSW during 2001-2004, with an International Classification of Diseases external cause of injury code indicating "exposure to excessive natural heat (X30)" or any ICD-10 diagnosis code in the range "effects of heat and light (T67.0-T67.9)", were analysed. The sport/leisure relatedness of cases was defined by ICD-10-AM activity codes indicating involvement in sport/leisure activities. Cases of exposure to heat while engaged in sport/leisure were described by gender, year, age, principal diagnosis, type of activity/sport and length of stay. There were 109 hospital separations for exposure to heat while engaging in sport/leisure activity, with the majority occurring during the hottest months. The number of male cases increased significantly over the 4-year period, and those aged 45 years and over accounted for the largest number of cases. Heat exhaustion was the leading cause of hospital separation (40% of cases). Marathon running, cricket and golf were the activities most commonly associated with heat-related hospitalisation. Ongoing development and refinement of expert position statements regarding heat illness need to draw on both epidemiological and physiological evidence to ensure their relevance to all levels of risk in real-world sport training and competition contexts.
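
    The case-definition logic described above reduces to a code-range filter over hospital separation records. A minimal sketch of that filter follows; the column names, the toy records and the single "U50" activity prefix are illustrative placeholders rather than the actual NSW dataset schema or the full ICD-10-AM activity-code range.

      import pandas as pd

      records = pd.DataFrame({
          "ext_cause": ["X30", "V01", None, "X30"],
          "diagnosis": ["T67.0", "T67.5", "J45.0", "S52.5"],
          "activity":  ["U50.1", "U50.2", "U50.3", "U71.0"],
      })

      # Heat exposure: external cause X30, or any diagnosis in T67.0-T67.9.
      heat = (records["ext_cause"].eq("X30")
              | records["diagnosis"].str.startswith("T67", na=False))
      # Sport/leisure involvement, proxied here by one illustrative activity-code prefix.
      sport = records["activity"].str.startswith("U50", na=False)

      print(records[heat & sport])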

  10. A Binary-Encounter-Bethe Approach to Simulate DNA Damage by the Direct Effect

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2013-01-01

    DNA damage is of crucial importance in understanding the effects of ionizing radiation. The main mechanisms of DNA damage are the direct effect of radiation (e.g. direct ionization) and the indirect effect (e.g. damage by ·OH radicals created by the radiolysis of water). Despite years of research in this area, many questions on the formation of DNA damage remain. To refine existing DNA damage models, an approach based on the Binary-Encounter-Bethe (BEB) model was developed[1]. This model calculates differential cross sections for ionization of the molecular orbitals of the DNA bases, sugars and phosphates using the electron binding energy, the mean kinetic energy and the occupancy number of each orbital. This cross section has an analytic form that is quite convenient to use and allows the sampling of the energy loss occurring during an ionization event. To simulate the radiation track structure, the code RITRACKS, developed at the NASA Johnson Space Center, is used[2]. This code calculates all the energy deposition events and the formation of the radiolytic species by the ion and the secondary electrons as well. We have also developed a technique to use the integrated BEB cross sections for the bases, sugars and phosphates in the radiation transport code RITRACKS. These techniques should allow the simulation of DNA damage by ionizing radiation, and an understanding of the formation of double-strand breaks caused by clustered damage in different conditions.
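
    The integrated (total) BEB cross section has a compact closed form (Kim & Rudd 1994), presumably the "analytic form" referred to above. A minimal sketch follows; the orbital parameters in the example are illustrative, not values from the paper's DNA data set.

      import math

      A0 = 5.29177e-11  # Bohr radius (m)
      RY = 13.6057      # Rydberg energy (eV)

      def sigma_beb(T, B, U, N):
          """BEB total ionization cross section (m^2) of one molecular orbital
          for an incident electron of kinetic energy T (eV), given the orbital
          binding energy B (eV), mean kinetic energy U (eV) and occupancy N."""
          if T <= B:
              return 0.0
          t, u = T / B, U / B
          S = 4.0 * math.pi * A0**2 * N * (RY / B) ** 2
          return (S / (t + u + 1.0)) * (
              0.5 * math.log(t) * (1.0 - 1.0 / t**2)
              + 1.0 - 1.0 / t
              - math.log(t) / (t + 1.0)
          )

      # Illustrative outer-valence orbital: B = 12.6 eV, U = 20 eV, N = 2.
      for T in (20.0, 100.0, 500.0):
          print(f"T = {T:6.1f} eV   sigma = {sigma_beb(T, 12.6, 20.0, 2):.3e} m^2")

    Summing such per-orbital cross sections over the orbitals of each base, sugar and phosphate is what allows a track-structure code to sample which molecular site an ionization event hits.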

  11. Ion channeling study of defects in compound crystals using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Turos, A.; Jozwik, P.; Nowicki, L.; Sathish, N.

    2014-08-01

    Ion channeling is a well-established technique for determining the structural properties of crystalline materials. Defect depth profiles have usually been determined based on the two-beam model developed by Bøgh (1968) [1]. As long as the main research interest was focused on single-element crystals, it was considered sufficiently accurate. A new challenge emerged with the growing technological importance of compound single crystals and epitaxial heterostructures. The overlap of partial spectra due to different sublattices and the formation of complicated defect structures make the two-beam method hardly applicable. The solution is provided by Monte Carlo computer simulations. Our paper reviews the principal aspects of this approach and the recent developments in the McChasy simulation code. The latter made it possible to distinguish between randomly displaced atoms (RDA) and extended defects (dislocations, loops, etc.). Hence, complex defect structures can be characterized by the relative content of these two components. The next refinement of the code consists of a detailed parameterization of dislocations and dislocation loops. Defect profiles for a variety of compound crystals (GaN, ZnO, SrTiO3) have been measured and evaluated using the McChasy code. Damage accumulation curves for RDA and extended defects revealed a non-monotonic defect buildup with some characteristic steps. The transition to each stage is governed by a different driving force. As shown by complementary high-resolution XRD measurements, lattice strain plays a crucial role here and can be correlated with the concentration of extended defects.

  12. A systematic review of validated methods for identifying erythema multiforme major/minor/not otherwise specified, Stevens-Johnson Syndrome, or toxic epidermal necrolysis using administrative and claims data.

    PubMed

    Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's (FDA) Mini-Sentinel pilot program aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of erythema multiforme and related conditions. PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the erythema multiforme HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles that used administrative and claims data to identify erythema multiforme, Stevens-Johnson syndrome, or toxic epidermal necrolysis and that included validation estimates of the coding algorithms. Our search revealed limited literature focusing on erythema multiforme and related conditions that provided administrative and claims data-based algorithms and validation estimates. Only four studies provided validated algorithms and all studies used the same International Classification of Diseases code, 695.1. Approximately half of cases subjected to expert review were consistent with erythema multiforme and related conditions. Updated research needs to be conducted on designing validation studies that test algorithms for erythema multiforme and related conditions and that take into account recent changes in the diagnostic coding of these diseases. Copyright © 2012 John Wiley & Sons, Ltd.
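
    As a minimal illustration of the kind of coding algorithm the review evaluates, the Python sketch below flags claims records carrying the single validated code (ICD-9-CM 695.1); the table layout and field names are hypothetical.

        import pandas as pd

        # Hypothetical claims extract: one row per recorded diagnosis.
        claims = pd.DataFrame({
            "patient_id": [101, 102, 103],
            "dx_system":  ["ICD-9-CM", "ICD-9-CM", "ICD-9-CM"],
            "dx_code":    ["695.1", "250.00", "695.1"],
        })

        # All four validated algorithms relied on the single code 695.1.
        cases = claims[(claims["dx_system"] == "ICD-9-CM")
                       & (claims["dx_code"] == "695.1")]
        print(sorted(cases["patient_id"].unique()))   # -> [101, 103]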

  13. Regional Epidemiologic Assessment of Prevalent Periodontitis Using an Electronic Health Record System

    PubMed Central

    Acharya, Amit; VanWormer, Jeffrey J.; Waring, Stephen C.; Miller, Aaron W.; Fuehrer, Jay T.; Nycz, Gregory R.

    2013-01-01

    An oral health surveillance platform that queries a clinical/administrative data warehouse was applied to estimate regional prevalence of periodontitis. Cross-sectional analysis of electronic health record data collected between January 1, 2006, and December 31, 2010, was undertaken in a population sample residing in Ladysmith, Wisconsin. Eligibility criteria included: 1) residence in defined zip codes, 2) age 25–64 years, and 3) ≥1 Marshfield dental clinic comprehensive examination. Prevalence was established using 2 independent methods: 1) via an algorithm that considered clinical attachment loss and probe depth and 2) via standardized Current Dental Terminology (CDT) codes related to periodontal treatment. Prevalence estimates were age-standardized to 2000 US Census estimates. Inclusion criteria were met by 2,056 persons. On the basis of the American Academy of Periodontology/Centers for Disease Control and Prevention method, the age-standardized prevalence of moderate or severe periodontitis (combined) was 407 per 1,000 males and 308 per 1,000 females (348/1,000 males and 269/1,000 females using the CDT code method). Increased prevalence and severity of periodontitis was noted with increasing age. Local prevalence of periodontitis was consistent with national estimates. The need to address potential sample selection bias in future electronic health record–based periodontitis research was identified by this approach. Methods outlined herein may be applied to refine oral health surveillance systems, inform dental epidemiologic methods, and evaluate interventional outcomes. PMID:23462966
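
    A sketch of a case-finding rule in the spirit of the AAP/CDC clinical method is given below in Python; the interproximal CAL/PD thresholds follow the published surveillance case definition (Page and Eke, 2007) as recalled here and should be verified against the original, and the data layout is hypothetical.

        def classify_periodontitis(sites):
            # `sites` holds interproximal measurements: dicts with keys
            # 'tooth', 'cal_mm' (clinical attachment loss), 'pd_mm' (probe depth).
            cal6_teeth = {s["tooth"] for s in sites if s["cal_mm"] >= 6}
            cal4_teeth = {s["tooth"] for s in sites if s["cal_mm"] >= 4}
            pd5_sites = [s for s in sites if s["pd_mm"] >= 5]
            pd5_teeth = {s["tooth"] for s in pd5_sites}
            # Severe: >=2 teeth with CAL >= 6 mm and >=1 site with PD >= 5 mm.
            if len(cal6_teeth) >= 2 and len(pd5_sites) >= 1:
                return "severe"
            # Moderate: >=2 teeth with CAL >= 4 mm, or >=2 teeth with PD >= 5 mm.
            if len(cal4_teeth) >= 2 or len(pd5_teeth) >= 2:
                return "moderate"
            return "none/mild"

        print(classify_periodontitis([
            {"tooth": 3, "cal_mm": 6, "pd_mm": 5},
            {"tooth": 14, "cal_mm": 7, "pd_mm": 4},
        ]))  # -> severe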

  14. Open-path FTIR data reduction algorithm with atmospheric absorption corrections: the NONLIN code

    NASA Astrophysics Data System (ADS)

    Phillips, William; Russwurm, George M.

    1999-02-01

    This paper describes the progress made to date in developing, testing, and refining a data reduction computer code, NONLIN, that alleviates many of the difficulties experienced in the analysis of open-path FTIR data. Among the problems that currently affect open-path FTIR data quality are the inability to obtain a true I0 (background) spectrum, spectral interferences from atmospheric gases such as water vapor and carbon dioxide, and matching the spectral resolution and shift of the reference spectra to a particular field instrument. The algorithm is based on a nonlinear fitting scheme and is therefore not constrained by many of the assumptions required for the application of linear methods such as classical least squares (CLS). As a result, a more realistic mathematical model of the spectral absorption measurement process can be employed in the curve fitting. Applications of the algorithm have proven successful in circumventing open-path data reduction problems. However, recent studies by one of the authors of the temperature and pressure effects on atmospheric absorption indicate that there exist temperature and water partial-pressure effects that should be incorporated into the NONLIN algorithm for accurate quantification of gas concentrations. This paper investigates the sources of these phenomena. As a result of this study, a partial-pressure correction has been employed in the NONLIN computer code. Two typical field spectra are examined to determine what effect the partial-pressure correction has on gas quantification.
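
    The abstract does not spell out NONLIN's internal model, so the Python sketch below only shows the generic shape of such a nonlinear fit: a Beer-Lambert transmittance model with an unknown concentration and a linear baseline standing in for the imperfect background. The synthetic band, path length, and parameter values are all hypothetical.

        import numpy as np
        from scipy.optimize import least_squares

        nu = np.linspace(1000.0, 1100.0, 200)                 # wavenumbers (cm^-1)
        sigma = np.exp(-0.5 * ((nu - 1050.0) / 5.0) ** 2)     # synthetic reference band
        L = 100.0                                             # path length (m)

        def model(p):
            # Transmittance = baseline * Beer-Lambert attenuation.
            c, b0, b1 = p
            return (b0 + b1 * (nu - nu[0])) * np.exp(-c * L * sigma)

        rng = np.random.default_rng(0)
        observed = model([0.02, 1.0, -1e-4]) + 1e-3 * rng.standard_normal(nu.size)

        fit = least_squares(lambda p: model(p) - observed, x0=[0.01, 1.0, 0.0])
        print("fitted concentration:", fit.x[0])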

  15. 40 CFR 80.1344 - What provisions are available to a non-small refiner that acquires one or more of a small refiner...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-small refiner that acquires one or more of a small refiner's refineries? 80.1344 Section 80.1344... available to a non-small refiner that acquires one or more of a small refiner's refineries? (a) In the case of a refiner that is not an approved small refiner under § 80.1340 and that acquires a refinery from...

  16. 40 CFR 80.555 - What provisions are available to a large refiner that acquires a small refiner or one or more of...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... large refiner that acquires a small refiner or one or more of its refineries? 80.555 Section 80.555... that acquires a small refiner or one or more of its refineries? (a) In the case of a refiner without approved small refiner status who acquires a refinery from a refiner with approved status as a motor...

  17. Numerical study of three-dimensional separation and flow control at a wing/body junction

    NASA Technical Reports Server (NTRS)

    Ash, Robert L.; Lakshmanan, Balakrishnan

    1989-01-01

    The problem of three-dimensional separation and flow control at a wing/body junction has been investigated numerically using a three-dimensional Navier-Stokes code. The code employs an algebraic grid generation technique for the unmodified junction and an elliptic grid generation technique for the filleted fin junction. The results for laminar flow past a blunt fin/flat plate junction demonstrate that, after grid refinement, the computations agree with experiment and reveal a strong dependence of the number of vortices at the junction on Mach number and Reynolds number. The numerical results for pressure distribution, particle paths, and limiting streamlines for turbulent flow past a swept fin show a decrease in the peak pressure and in the extent of the separated flow region compared to the laminar case. The results for a filleted juncture indicate that the streamline patterns lose much of their vortical character with proper filleting. Fillets with a radius of three and one-half times the fin leading-edge diameter, or two times the incoming boundary-layer thickness, significantly weaken the usual necklace interaction vortex for the Mach number and Reynolds number considered in the present study.

  18. CFD Validation Studies for Hypersonic Flow Prediction

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2001-01-01

    A series of experiments to measure pressure and heating for code validation involving hypersonic, laminar, separated flows was conducted at the Calspan-University at Buffalo Research Center (CUBRC) in the Large Energy National Shock (LENS) tunnel. The experimental data serve as a focus for a code validation session but are not available to the authors until the conclusion of this session. The first set of experiments considered here involves Mach 9.5 and Mach 11.3 N2 flow over a hollow cylinder-flare with a 30 degree flare angle at several Reynolds numbers sustaining laminar, separated flow. Truncated and extended flare configurations are considered. The second set of experiments, at similar conditions, involves flow over a sharp double cone with a fore-cone angle of 25 degrees and an aft-cone angle of 55 degrees. Both sets of experiments involve 30 degree compressions. The location of the separation point in the numerical simulation is extremely sensitive to the level of grid refinement. The numerical simulations also show a significant influence of Reynolds number on the extent of separation. Flow unsteadiness was easily introduced into the double cone simulations using aggressive relaxation parameters that normally promote convergence.

  20. Mixing of the Interstellar and Solar Plasmas at the Heliospheric Interface

    DOE PAGES

    Pogorelov, N. V.; Borovikov, S. N.

    2015-10-12

    From the ideal MHD perspective, the heliopause is a tangential discontinuity that separates the solar wind plasma from the local interstellar medium plasma. There are physical processes, however, that make the heliopause permeable. They can be subdivided into kinetic and MHD categories. Kinetic processes occur on small length and time scales and cannot be resolved with MHD equations. On the other hand, MHD instabilities of the heliopause have much larger scales and can be easily observed by spacecraft. The heliopause may also be subject to magnetic reconnection. In this paper, we discuss mechanisms of plasma mixing at the heliopause in the context of Voyager 1 observations. Numerical results are obtained with the Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS), a package of numerical codes capable of performing adaptive mesh refinement simulations of complex plasma flows in the presence of discontinuities and charge exchange between ions and neutral atoms. The flow of the ionized component is described with the ideal MHD equations, while the transport of atoms is governed either by the Boltzmann equation or by multiple Euler gas dynamics equations. The code can also treat nonthermal ions and the turbulence produced by them.

  1. Validation and Verification of LADEE Models and Software

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen

    2013-01-01

    The Lunar Atmosphere Dust Environment Explorer (LADEE) mission will orbit the moon in order to measure the density, composition, and time variability of the lunar dust environment. The ground-side and onboard flight software for the mission is being developed using a model-based software methodology. In this technique, models of the spacecraft and flight software are developed in a graphical dynamics modeling package. Flight software requirements are prototyped and refined using the simulated models. After a model is shown to work as desired in this simulation framework, C code is automatically generated from it. The generated software is then tested in real-time Processor-in-the-Loop and Hardware-in-the-Loop test beds. Travelling Road Show test beds were used for early integration tests with payloads and other subsystems. Traditional techniques for verifying computational science models are used to characterize the spacecraft simulation. A lightweight set of formal methods analyses, static analysis, formal inspection, and code coverage analyses is used to further reduce defects in the onboard flight software artifacts. These techniques are applied early and often in the development process, iteratively increasing the capabilities of the software and the fidelity of the vehicle models and test beds.

  2. Numerical Analysis of Ginzburg-Landau Models for Superconductivity.

    NASA Astrophysics Data System (ADS)

    Coskun, Erhan

    Thin-film conventional as well as high-T_c superconductors of various geometric shapes, placed under both uniform and variable-strength magnetic fields, are studied using the universally accepted macroscopic Ginzburg-Landau model. A series of new theoretical results concerning the properties of the solution is presented using the semi-discrete time-dependent Ginzburg-Landau equations, a staggered-grid setup, and natural boundary conditions. Efficient serial algorithms, including a novel adaptive algorithm, are developed and successfully implemented for solving the governing highly nonlinear parabolic system of equations. The refinement technique used in the adaptive algorithm is based on a modified forward Euler method, which we also developed to ease the stability restriction on the time-step size. Stability and convergence properties of the forward and modified forward Euler schemes are studied. Numerical simulations of various recent physical experiments of technological importance, such as vortex motion and pinning, are performed. The numerical code for solving the time-dependent Ginzburg-Landau equations is parallelized using BlockComm/Chameleon and PCN. The parallel code was run on the distributed-memory multiprocessors Intel iPSC/860 and IBM SP1 and on a cluster of Sun SPARC workstations, all located at the Mathematics and Computer Science Division, Argonne National Laboratory.
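
    The modified forward Euler scheme itself is not given in the abstract. As background, the Python sketch below advances a dimensionless, field-free 1D time-dependent Ginzburg-Landau equation with plain forward Euler, whose stability restriction (roughly dt <= h^2/2 for the diffusive term) is exactly what such modifications aim to relax.

        import numpy as np

        # Dimensionless field-free 1D TDGL: d(psi)/dt = psi_xx + (1 - |psi|^2) psi,
        # discretized with centered differences and periodic boundaries.
        n, h = 200, 0.1
        x = np.arange(n) * h
        psi = 0.1 * np.exp(2j * np.pi * x / (n * h))   # small seed mode
        dt = 0.4 * h * h                               # within the dt <= h^2/2 bound

        for _ in range(2000):
            lap = (np.roll(psi, 1) - 2.0 * psi + np.roll(psi, -1)) / (h * h)
            psi = psi + dt * (lap + (1.0 - np.abs(psi) ** 2) * psi)

        print("max |psi|:", np.abs(psi).max())   # relaxes toward the condensate value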

  3. Further Improvement of the RITS Code for Pulsed Neutron Bragg-edge Transmission Imaging

    NASA Astrophysics Data System (ADS)

    Sato, H.; Watanabe, K.; Kiyokawa, K.; Kiyanagi, R.; Hara, K. Y.; Kamiyama, T.; Furusaka, M.; Shinohara, T.; Kiyanagi, Y.

    The RITS code is a unique and powerful tool for fitting a whole Bragg-edge transmission spectrum. However, it has had two major problems, and we have proposed methods to overcome them. The first issue is the difference between the crystallite size values obtained from diffraction and from Bragg-edge analyses. We found that the reason was a differing definition of the crystal structure factor, which affects the crystallite size because the size is deduced from the primary extinction effect, which in turn depends on the crystal structure factor. After the algorithm change, the crystallite sizes obtained by RITS moved much closer to those obtained by Rietveld analyses of diffraction data, from 155% to 110% of the Rietveld values. The second issue is the correction of background neutrons scattered from the specimen. Through neutron transport simulation studies, we found that the background consists of forward Bragg scattering, double backward Bragg scattering, and thermal diffuse scattering. RITS with the background correction function developed through these simulation studies could reconstruct various simulated and experimental transmission spectra well, but the refined crystalline microstructural parameters were often distorted. Finally, it was recommended to reduce the background by improving the experimental conditions.
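
    The geometry behind the Bragg edges that RITS fits is simple to illustrate: an edge appears at lambda = 2*d_hkl, where the lattice spacings follow from the crystal structure. The Python sketch below lists the first few edge positions for bcc alpha-iron as an assumed example material.

        import numpy as np

        a = 2.866  # lattice constant of bcc alpha-Fe (angstrom), assumed example

        # A Bragg edge sits at lambda = 2 * d_hkl; for a cubic lattice
        # d_hkl = a / sqrt(h^2 + k^2 + l^2) (bcc reflections need h+k+l even).
        for hkl in [(1, 1, 0), (2, 0, 0), (2, 1, 1), (2, 2, 0)]:
            d = a / np.sqrt(sum(i * i for i in hkl))
            print(hkl, "-> edge at", round(2.0 * d, 3), "angstrom")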

  4. Reconstruction of Axial Energy Deposition in Magnetic Liner Inertial Fusion Based on PECOS Shadowgraph Unfolds Using the AMR Code FLASH

    NASA Astrophysics Data System (ADS)

    Adams, Marissa; Jennings, Christopher; Slutz, Stephen; Peterson, Kyle; Gourdain, Pierre; U. Rochester-Sandia Collaboration

    2017-10-01

    Magnetic Liner Inertial Fusion (MagLIF) experiments incorporate a laser to preheat a deuterium-filled capsule before compression via a magnetically imploding liner. In this work, we focus on the blast wave formed in the fuel during the laser preheat stage of MagLIF, in which approximately 1 kJ of energy is deposited axially into the capsule over 3 ns before implosion. To model blast waves directly relevant to experiments such as MagLIF, we inferred the deposited energy from shadowgraphy of laser-only experiments performed at the PECOS target chamber using the Z-Beamlet laser. These energy profiles were used to initialize two-dimensional simulations with the adaptive mesh refinement code FLASH. Gradients or asymmetries in the energy deposition may seed instabilities that alter the fuel's distribution or promote mix as the blast wave interacts with the liner wall. The AMR capabilities of FLASH allow us to study the development and dynamics of these instabilities within the fuel and their effect on the liner before implosion. Sandia National Laboratories is managed by National Technology and Engineering Solutions of Sandia, LLC, a subsidiary of Honeywell International, Inc., for the U.S. DOE's NNSA under contract DE-NA0003525.
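
    As an illustration of the initialization step described above, the Python sketch below deposits approximately 1 kJ (the figure quoted in the abstract) onto an axisymmetric (r, z) grid using a Gaussian profile; the profile shape, scale lengths, and grid are hypothetical stand-ins for the shadowgraphy-derived profiles.

        import numpy as np

        # Hypothetical axisymmetric (r, z) grid for an axial energy deposition.
        nr, nz = 64, 256
        r = np.linspace(0.0, 0.2, nr)       # cm
        z = np.linspace(0.0, 1.0, nz)       # cm
        R, Z = np.meshgrid(r, z, indexing="ij")

        profile = np.exp(-(R / 0.05) ** 2) * np.exp(-(Z / 0.5) ** 2)
        # Volume element of an axisymmetric cell: dV = 2*pi*r*dr*dz
        dV = 2.0 * np.pi * R * (r[1] - r[0]) * (z[1] - z[0])
        E_total = 1.0e3                      # J (~1 kJ, per the abstract)
        energy_density = E_total * profile / np.sum(profile * dV)   # J/cm^3

        print("check total energy (J):", np.sum(energy_density * dV))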

  5. 40 CFR 80.553 - Under what conditions may the small refiner gasoline sulfur standards be extended for a small...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... refiner gasoline sulfur standards be extended for a small refiner of motor vehicle diesel fuel? 80.553... small refiner gasoline sulfur standards be extended for a small refiner of motor vehicle diesel fuel? (a) A refiner that has been approved by EPA for small refiner gasoline sulfur standards under § 80.240...

  6. 40 CFR 80.553 - Under what conditions may the small refiner gasoline sulfur standards be extended for a small...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... refiner gasoline sulfur standards be extended for a small refiner of motor vehicle diesel fuel? 80.553... small refiner gasoline sulfur standards be extended for a small refiner of motor vehicle diesel fuel? (a) A refiner that has been approved by EPA for small refiner gasoline sulfur standards under § 80.240...

  7. Comprehensive Molecular Characterization of Muscle-Invasive Bladder Cancer.

    PubMed

    Robertson, A Gordon; Kim, Jaegil; Al-Ahmadie, Hikmat; Bellmunt, Joaquim; Guo, Guangwu; Cherniack, Andrew D; Hinoue, Toshinori; Laird, Peter W; Hoadley, Katherine A; Akbani, Rehan; Castro, Mauro A A; Gibb, Ewan A; Kanchi, Rupa S; Gordenin, Dmitry A; Shukla, Sachet A; Sanchez-Vega, Francisco; Hansel, Donna E; Czerniak, Bogdan A; Reuter, Victor E; Su, Xiaoping; de Sa Carvalho, Benilton; Chagas, Vinicius S; Mungall, Karen L; Sadeghi, Sara; Pedamallu, Chandra Sekhar; Lu, Yiling; Klimczak, Leszek J; Zhang, Jiexin; Choo, Caleb; Ojesina, Akinyemi I; Bullman, Susan; Leraas, Kristen M; Lichtenberg, Tara M; Wu, Catherine J; Schultz, Nicholaus; Getz, Gad; Meyerson, Matthew; Mills, Gordon B; McConkey, David J; Weinstein, John N; Kwiatkowski, David J; Lerner, Seth P

    2017-10-19

    We report a comprehensive analysis of 412 muscle-invasive bladder cancers characterized by multiple TCGA analytical platforms. Fifty-eight genes were significantly mutated, and the overall mutational load was associated with APOBEC-signature mutagenesis. Clustering by mutation signature identified a high-mutation subset with 75% 5-year survival. mRNA expression clustering refined prior clustering analyses and identified a poor-survival "neuronal" subtype in which the majority of tumors lacked small cell or neuroendocrine histology. Clustering by mRNA, long non-coding RNA (lncRNA), and miRNA expression converged to identify subsets with differential epithelial-mesenchymal transition status, carcinoma in situ scores, histologic features, and survival. Our analyses identified 5 expression subtypes that may stratify response to different treatments. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Methodology of modeling and measuring computer architectures for plasma simulations

    NASA Technical Reports Server (NTRS)

    Wang, L. P. T.

    1977-01-01

    A brief introduction is given to plasma simulation on computers and to the difficulties encountered on currently available machines. Using SARA, an analysis and measurement methodology, the control flow and data flow of a particle simulation model, REM2-1/2D, are exemplified. After recursive refinements, the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization can be configured to achieve the computation bound of an application problem. A sequential-type simulation model, an array/pipeline-type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems of an implicitly parallel nature.

  9. An upwind multigrid method for solving viscous flows on unstructured triangular meshes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl Lawrence

    1993-01-01

    A multigrid algorithm is combined with an upwind scheme for solving the two-dimensional Reynolds-averaged Navier-Stokes equations on triangular meshes, resulting in an efficient, accurate code for solving complex flows around multiple bodies. The relaxation scheme uses a backward-Euler time difference and relaxes the resulting linear system using a red-black procedure. Roe's flux-splitting scheme is used to discretize the convective and pressure terms, while a central difference is used for the diffusive terms. The multigrid scheme is demonstrated for several flows around single- and multi-element airfoils, including inviscid, laminar, and turbulent flows. The results show an appreciable speed-up of the scheme for inviscid and laminar flows, and dramatic increases in efficiency for turbulent cases, especially those on increasingly refined grids.
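
    The red-black relaxation mentioned above is easiest to see on a model problem. The Python sketch below applies red-black Gauss-Seidel smoothing to a 2D Poisson equation on a structured grid; it illustrates the checkerboard update ordering only, not the authors' unstructured Navier-Stokes solver.

        import numpy as np

        def red_black_gauss_seidel(u, f, h, sweeps=1):
            # Smooths -laplacian(u) = f on a square grid with spacing h,
            # Dirichlet boundary values held fixed. Each sweep updates the
            # 'red' checkerboard points, then the 'black' ones.
            for _ in range(sweeps):
                for parity in (0, 1):
                    for i in range(1, u.shape[0] - 1):
                        j0 = 1 + (i + parity) % 2
                        u[i, j0:-1:2] = 0.25 * (
                            u[i - 1, j0:-1:2] + u[i + 1, j0:-1:2]
                            + u[i, j0 - 1:-2:2] + u[i, j0 + 1::2]
                            + h * h * f[i, j0:-1:2]
                        )
            return u

        n = 65
        h = 1.0 / (n - 1)
        u = red_black_gauss_seidel(np.zeros((n, n)), np.ones((n, n)), h, sweeps=50)
        print("center value after smoothing:", u[n // 2, n // 2])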

  10. Cytogenetic mapping of a novel locus for type II Waardenburg syndrome.

    PubMed

    Selicorni, Angelo; Guerneri, Silvana; Ratti, Antonia; Pizzuti, Antonio

    2002-01-01

    An Italian family in which Waardenburg syndrome type II (WS2) segregates together with a der(8) chromosome from a (4p;8p) balanced translocation was studied. Cytogenetic analysis by painting and subtelomeric probe hybridization positioned the chromosome 8 breakpoint at p22-pter. Fluorescence in situ hybridization analysis with yeast artificial chromosomes from a contig spanning the 8p21-pter region refined the breakpoint to an interval of less than 170 kb between markers WI-3823 and D8S1819. The only cloned gene for WS2 is that for microphthalmia-associated transcription factor (MITF) on chromosome 3p. In this family, MITF mutations were excluded by sequencing the whole coding region. The 8p23 region may represent a third locus for WS2 (WS2C).

  11. Model documentation report: Transportation sector model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-03-01

    This report documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model. This document serves three purposes. First, it is a reference document providing a detailed description of TRAN for model analysts, users, and the public. Second, this report meets the legal requirements of the Energy Information Administration (EIA) to provide adequate documentation in support of its statistical and forecast reports (Public Law 93-275, 57(b)(1)). Third, it permits continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements.

  12. Substructure program for analysis of helicopter vibrations

    NASA Technical Reports Server (NTRS)

    Sopher, R.

    1981-01-01

    A substructure vibration analysis developed as a design tool for predicting helicopter vibrations is described. The substructure assembly method and the composition of the transformation matrix are analyzed. The procedure for obtaining solutions to the equations of motion is illustrated for the steady-state forced response solution mode, and rotor hub load excitation and impedance are analyzed. The calculation of the mass, damping, and stiffness matrices, as well as the forcing function vectors of physical components resident in the base program code, is discussed in detail. Refinement of the model is achieved by exercising modules which interface with the external program to represent rotor-induced variable inflow and fuselage-induced variable inflow at the rotor. The calculation of various flow fields is discussed, and base program applications are detailed.

  13. Accuracy Quantification of the Loci-CHEM Code for Chamber Wall Heat Transfer in a GO2/GH2 Single Element Model Problem

    NASA Technical Reports Server (NTRS)

    West, Jeff; Westra, Doug; Lin, Jeff; Tucker, Kevin

    2006-01-01

    All Loci-CHEM solutions demonstrated steady-state and mesh convergence. Preconditioning had no effect on solution accuracy and typically yielded a 3-5x solution speed-up. The SST turbulence model showed superior performance, relative to the data in the head-end region, for the heat-flux rise rate and peak heat flux; it was slightly worse than the other models in the downstream region, where all models over-predicted the data by 30-100%. Systematic mesh refinement in the unstructured volume and structured boundary-layer regions produced only minor solution differences, and mesh convergence was achieved. Overall, Loci-CHEM satisfactorily predicts the heat-flux rise rate and peak heat flux but significantly over-predicts the downstream heat flux.

  14. Preconditioner and convergence study for the Quantum Computer Aided Design (QCAD) nonlinear poisson problem posed on the Ottawa Flat 270 design geometry.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalashnikova, Irina

    2012-05-01

    A numerical study aimed at evaluating different preconditioners within the Trilinos Ifpack and ML packages for the Quantum Computer Aided Design (QCAD) nonlinear Poisson problem, implemented within the Albany code base and posed on the Ottawa Flat 270 design geometry, is performed. This study led to some new development of Albany that allows the user to select an ML preconditioner with Zoltan repartitioning based on nodal coordinates, which is summarized. Convergence of the numerical solutions computed within the QCAD computational suite under successive mesh refinement is examined in two metrics: the mean value of the solution (an L^1 norm) and the field integral of the solution (an L^2 norm).
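
    The two convergence metrics named above are straightforward to compute. The Python sketch below evaluates them for a hypothetical stand-in solution on successively refined 1D meshes; real QCAD solutions would replace the stand-in field.

        import numpy as np

        def convergence_metrics(levels):
            # Report the two metrics used above -- mean value and field
            # integral of the solution -- on successively refined meshes.
            for n in levels:
                x = np.linspace(0.0, 1.0, n)
                u = np.sin(np.pi * x) ** 2          # hypothetical solution field
                mean_value = u.mean()
                field_integral = np.sum(0.5 * (u[:-1] + u[1:]) * np.diff(x))
                print(f"n = {n:4d}  mean = {mean_value:.6f}  integral = {field_integral:.6f}")

        convergence_metrics([17, 33, 65, 129])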

  15. Testing model for prediction system of 1-AU arrival times of CME-associated interplanetary shocks

    NASA Astrophysics Data System (ADS)

    Ogawa, Tomoya; den, Mitsue; Tanaka, Takashi; Sugihara, Kohta; Takei, Toshifumi; Amo, Hiroyoshi; Watari, Shinichi

    We test a model to predict the arrival times of interplanetary shock waves associated with coronal mass ejections (CMEs) using a three-dimensional adaptive mesh refinement (AMR) code. The model is used in the prediction system we are developing, which has a Web-based user interface and is aimed at users who are not familiar with the operation of computers and numerical simulations or who are not researchers. We apply the model to interplanetary CME events. We first choose coronal parameters so that the properties of the background solar wind observed by the ACE spacecraft are reproduced. Then we input CME parameters observed by SOHO/LASCO. Finally, we compare the predicted arrival times with the observed ones. We describe the results of the test and discuss the model's tendencies.
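
    The final comparison step reduces to differencing the predicted and observed shock arrival times per event. A minimal Python sketch, with purely hypothetical event times, is given below.

        from datetime import datetime

        # Hypothetical predicted/observed 1-AU shock arrival times.
        events = [
            ("event A", datetime(2004, 7, 26, 22, 0), datetime(2004, 7, 26, 17, 30)),
            ("event B", datetime(2005, 5, 15, 2, 0), datetime(2005, 5, 15, 6, 0)),
        ]
        for name, predicted, observed in events:
            err_h = (predicted - observed).total_seconds() / 3600.0
            print(f"{name}: prediction error = {err_h:+.1f} h")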

  16. Formal Assurance Arguments: A Solution In Search of a Problem?

    NASA Technical Reports Server (NTRS)

    Graydon, Patrick J.

    2015-01-01

    An assurance case comprises evidence and argument showing how that evidence supports assurance claims (e.g., about safety or security). It is unsurprising that some computer scientists have proposed formalizing assurance arguments: most associate formality with rigor. But while engineers can sometimes prove that source code refines a formal specification, it is not clear that formalization will improve assurance arguments or that this benefit is worth its cost. For example, formalization might reduce the benefits of argumentation by limiting the audience to people who can read formal logic. In this paper, we present (1) a systematic survey of the literature surrounding formal assurance arguments, (2) an analysis of errors that formalism can help to eliminate, (3) a discussion of existing evidence, and (4) suggestions for experimental work to definitively answer the question.

  17. Global health education programming as a model for inter-institutional collaboration in interprofessional health education.

    PubMed

    Peluso, Michael J; Hafler, Janet P; Sipsma, Heather; Cherlin, Emily

    2014-07-01

    While global health (GH) opportunities have expanded at schools of medicine, nursing, and public health, few examples of interprofessional approaches to GH education have been described. The elective GH program at our university serves as an important opportunity for high-quality interprofessional education. We undertook a qualitative study to examine the experience of student, faculty and administrative leaders of the program. We used content analysis to code responses and analyze data. Among the leadership, key themes fell within the categories of interprofessional education, student-faculty collaboration, professional development, and practical considerations for the development of such programs. The principles described could be considered by institutions seeking to develop meaningful partnerships in an effort to develop or refine interprofessional global health education programs.

  18. Further investigations of the aeroelastic behavior of the AFW wind-tunnel model using transonic small disturbance theory

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Bennett, Robert M.

    1992-01-01

    The Computational Aeroelasticity Program-Transonic Small Disturbance (CAP-TSD) code, developed at LaRC, is applied to the active flexible wing wind-tunnel model for prediction of transonic aeroelastic behavior. A semi-span computational model is used for evaluation of symmetric motions, and a full-span model is used for evaluation of antisymmetric motions. Static aeroelastic solutions using CAP-TSD are computed. Dynamic results are presented as flutter boundaries in terms of Mach number and dynamic pressure. Flutter boundaries that take into account modal refinements, vorticity and entropy corrections, antisymmetric motion, and sensitivity to the modeling of the wing-tip ballast stores are also presented, along with experimental flutter results.

  19. Lessons Learned from Inlet Integration Analysis of NASA's Low Boom Flight Demonstrator

    NASA Technical Reports Server (NTRS)

    Friedlander, David; Heath, Christopher; Castner, Ray

    2017-01-01

    In 2016, NASA's Aeronautics Research Mission Directorate announced the New Aviation Horizons initiative, with a goal of designing and building several X-planes, including a Low Boom Flight Demonstrator (LBFD). That same year, NASA awarded a contract to Lockheed Martin (LM) to advance the LBFD concept through preliminary design. Several configurations of the LBFD aircraft were analyzed by both LM engineers and NASA researchers. This presentation focuses on some of the CFD simulations that were run by NASA Glenn researchers. NASA's FUN3D V13.1 code was used for all adjoint-based grid refinement studies, and the Spalart-Allmaras turbulence model was used during adaptation. It was found that adjoint-based grid adaptation did not accurately capture inlet performance for the high-speed, top-aft-mounted propulsion configuration.

  20. Crystal Structure of Cocosin, A Potential Food Allergen from Coconut (Cocos nucifera).

    PubMed

    Jin, Tengchuan; Wang, Cheng; Zhang, Caiying; Wang, Yang; Chen, Yu-Wei; Guo, Feng; Howard, Andrew; Cao, Min-Jie; Fu, Tong-Jen; McHugh, Tara H; Zhang, Yuzhu

    2017-08-30

    Coconut (Cocos nucifera) is an important palm tree, and coconut fruit is widely consumed. The most abundant storage protein in coconut fruit is cocosin (a likely food allergen), which belongs to the 11S globulin family. Cocosin was first crystallized nearly a century ago, but its structure has remained unknown. By optimizing crystallization conditions and cryoprotectant solutions, we were able to obtain cocosin crystals that diffracted to 1.85 Å. The cocosin gene was cloned from genomic DNA isolated from dry coconut tissue. The protein sequence deduced from the predicted cocosin coding sequence was used to guide model building and structure refinement. The structure of cocosin was determined for the first time, revealing the typical 11S globulin feature of a double-layered, doughnut-shaped hexamer.
