Sample records for complex physics codes

  1. Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations

    NASA Technical Reports Server (NTRS)

    Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.

    2015-01-01

    Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is summarized. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations illustrate some of the novel features of the code.
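
    The abstract above mentions finite-rate gas-phase kinetics and implicit time integration only in general terms. As a purely illustrative sketch (not US3D's implementation, and with placeholder rate constants rather than validated data), the Python snippet below evaluates a modified-Arrhenius rate coefficient and integrates a toy dissociation/recombination reaction with a stiff ODE solver:

      # Minimal sketch, not US3D: modified-Arrhenius rate and a toy finite-rate
      # dissociation reaction N2 + M <-> 2N + M; all constants are placeholders.
      import numpy as np
      from scipy.integrate import solve_ivp

      def k_arrhenius(T, A, n, Ta):
          """Modified Arrhenius form k(T) = A * T**n * exp(-Ta / T)."""
          return A * T**n * np.exp(-Ta / T)

      def rhs(t, y, T):
          n_N2, n_N = y                                        # number densities
          kf = k_arrhenius(T, A=1.0e-8, n=-1.6, Ta=113200.0)   # placeholder forward rate
          kb = 1.0e-39                                         # placeholder recombination rate
          M = n_N2 + n_N                                       # third-body density
          w = kf * n_N2 * M - kb * n_N**2 * M                  # net reaction progress rate
          return [-w, 2.0 * w]

      T = 8000.0                                               # illustrative post-shock temperature [K]
      sol = solve_ivp(rhs, (0.0, 1e-4), [1e24, 0.0], args=(T,), method="LSODA")
      print("final N2, N densities:", sol.y[:, -1])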

  2. The FLUKA Code: An Overview

    NASA Technical Reports Server (NTRS)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.; et al.

    2006-01-01

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields including, but not limited to, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code's physics models, with particular emphasis on the hadronic and nuclear sector.
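
    As a toy illustration of the Monte Carlo particle-transport idea sketched above (not FLUKA's physics models; the cross sections and probabilities below are invented), the following snippet tracks photons through a 1D slab with exponentially sampled free paths and isotropic scattering:

      # Minimal sketch, not FLUKA: pencil beam of photons crossing a homogeneous slab.
      import numpy as np

      rng = np.random.default_rng(0)
      mu_t, p_abs, thickness = 0.5, 0.3, 5.0     # attenuation [1/cm], absorption prob., slab [cm]
      n_photons, transmitted = 100_000, 0

      for _ in range(n_photons):
          x, mu = 0.0, 1.0                       # position and direction cosine
          while True:
              x += mu * rng.exponential(1.0 / mu_t)   # distance to next collision
              if x >= thickness:                 # photon leaves the far side
                  transmitted += 1
                  break
              if x < 0.0:                        # photon backscatters out of the slab
                  break
              if rng.random() < p_abs:           # absorbed at the collision site
                  break
              mu = rng.uniform(-1.0, 1.0)        # isotropic scattering: new direction cosine

      print("transmission fraction:", transmitted / n_photons)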

  3. The adaption and use of research codes for performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebetrau, A.M.

    1987-05-01

    Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.

  4. Parametric bicubic spline and CAD tools for complex targets shape modelling in physical optics radar cross section prediction

    NASA Astrophysics Data System (ADS)

    Delogu, A.; Furini, F.

    1991-09-01

    Increasing interest in radar cross section (RCS) reduction is placing new demands on the theoretical, computational, and graphical techniques for calculating the scattering properties of complex targets. In particular, computer codes capable of predicting the RCS of an entire aircraft at high frequency and of achieving RCS control with modest structural changes are becoming of paramount importance in stealth design. A computer code evaluating the RCS of arbitrarily shaped metallic objects that are computer aided design (CAD) generated, and its validation with measurements carried out using ALENIA RCS test facilities, are presented. The code, based on the physical optics method, is characterized by an efficient integration algorithm with error control, in order to keep the computing time within acceptable limits, and by an accurate parametric representation of the target surface in terms of bicubic splines.
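
    To illustrate the bicubic-spline surface representation mentioned above (a generic sketch, not the ALENIA code; the patch geometry is invented), the snippet below fits each Cartesian coordinate of a patch as a bicubic spline in the parameters (u, v) and recovers the surface normal that a physical-optics integrand would need:

      # Minimal sketch: parametric bicubic-spline patch and its unit normal r_u x r_v.
      import numpy as np
      from scipy.interpolate import RectBivariateSpline

      u = np.linspace(0.0, 1.0, 9)
      v = np.linspace(0.0, 1.0, 9)
      U, V = np.meshgrid(u, v, indexing="ij")
      X, Y, Z = U, V, 0.1 * np.sin(np.pi * U) * np.sin(np.pi * V)   # shallow curved panel

      sx = RectBivariateSpline(u, v, X, kx=3, ky=3)   # bicubic in each coordinate
      sy = RectBivariateSpline(u, v, Y, kx=3, ky=3)
      sz = RectBivariateSpline(u, v, Z, kx=3, ky=3)

      def normal(ui, vi):
          """Unit surface normal at parameter (ui, vi)."""
          ru = np.array([s(ui, vi, dx=1)[0, 0] for s in (sx, sy, sz)])
          rv = np.array([s(ui, vi, dy=1)[0, 0] for s in (sx, sy, sz)])
          n = np.cross(ru, rv)
          return n / np.linalg.norm(n)

      print("normal at patch centre:", normal(0.5, 0.5))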

  5. Numerical Studies of Impurities in Fusion Plasmas

    DOE R&D Accomplishments Database

    Hulse, R. A.

    1982-09-01

    The coupled partial differential equations used to describe the behavior of impurity ions in magnetically confined controlled fusion plasmas require numerical solution for cases of practical interest. Computer codes developed for impurity modeling at the Princeton Plasma Physics Laboratory are used as examples of the types of codes employed for this purpose. These codes solve for the impurity ionization state densities and associated radiation rates using atomic physics appropriate for these low-density, high-temperature plasmas. The simpler codes solve local equations in zero spatial dimensions while more complex cases require codes which explicitly include transport of the impurity ions simultaneously with the atomic processes of ionization and recombination. Typical applications are discussed and computational results are presented for selected cases of interest.
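
    As a rough sketch of the "simpler codes that solve local equations in zero spatial dimensions" described above (not the PPPL codes themselves; the rate coefficients are invented constants rather than real atomic data), the snippet below integrates a zero-dimensional ionization/recombination balance for a three-charge-state impurity:

      # Minimal sketch: dn_q/dt from ionization (S) and recombination (R) between charge states.
      import numpy as np
      from scipy.integrate import solve_ivp

      ne = 1.0e19                          # electron density [1/m^3]
      S = np.array([2.0e-14, 5.0e-15])     # ionization rate coefficients q -> q+1 [m^3/s]
      R = np.array([1.0e-15, 4.0e-15])     # recombination rate coefficients q+1 -> q [m^3/s]

      def rhs(t, n):
          dn = np.zeros_like(n)
          for q in range(len(S)):          # flux between neighbouring charge states
              flux = ne * (S[q] * n[q] - R[q] * n[q + 1])
              dn[q] -= flux
              dn[q + 1] += flux
          return dn

      sol = solve_ivp(rhs, (0.0, 1e-3), [1.0, 0.0, 0.0], method="LSODA")
      print("fractional abundances:", sol.y[:, -1] / sol.y[:, -1].sum())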

  6. Global Coordinates and Exact Aberration Calculations Applied to Physical Optics Modeling of Complex Optical Systems

    NASA Astrophysics Data System (ADS)

    Lawrence, G.; Barnard, C.; Viswanathan, V.

    1986-11-01

    Historically, wave optics computer codes have been paraxial in nature. Folded systems could be modeled by "unfolding" the optical system. Calculation of optical aberrations is, in general, left for the analyst to do with off-line codes. While such paraxial codes were adequate for the simpler systems being studied 10 years ago, current problems such as phased arrays, ring resonators, coupled resonators, and grazing incidence optics require a major advance in analytical capability. This paper describes extension of the physical optics codes GLAD and GLAD V to include a global coordinate system and exact ray aberration calculations. The global coordinate system allows components to be positioned and rotated arbitrarily. Exact aberrations are calculated for components in aligned or misaligned configurations by using ray tracing to compute optical path differences and diffraction propagation. Optical path lengths between components and beam rotations in complex mirror systems are calculated accurately so that coherent interactions in phased arrays and coupled devices may be treated correctly.

  7. OpenFOAM: Open source CFD in research and industry

    NASA Astrophysics Data System (ADS)

    Jasak, Hrvoje

    2009-12-01

    The current focus of development in industrial Computational Fluid Dynamics (CFD) is the integration of CFD into computer-aided product development, geometrical optimisation, robust design and similar tasks. CFD research, on the other hand, aims to extend the boundaries of practical engineering use into "non-traditional" areas. The requirements of computational flexibility and code integration are contradictory: a change of coding paradigm, based on object orientation, library components and equation mimicking, is proposed as a way forward. This paper describes OpenFOAM, a C++ object-oriented library for Computational Continuum Mechanics (CCM) developed by the author. Efficient and flexible implementation of complex physical models is achieved by mimicking the form of the partial differential equation in software, with code functionality provided in library form. The open-source deployment and development model allows the user to achieve the desired versatility in physical modeling without sacrificing complex geometry support and execution efficiency.

  8. Establishing confidence in complex physics codes: Art or science?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trucano, T.

    1997-12-31

    The ALEGRA shock wave physics code, currently under development at Sandia National Laboratories and partially supported by the US Advanced Strategic Computing Initiative (ASCI), is generic to a certain class of physics codes: large, multi-application, intended to support a broad user community on the latest generation of massively parallel supercomputers, and in a continual state of formal development. To say that the author has "confidence" in the results of ALEGRA is to say something different than that he believes that ALEGRA is "predictive." It is the purpose of this talk to illustrate the distinction between these two concepts. The author elects to perform this task in a somewhat historical manner. He will summarize certain older approaches to code validation. He views these methods as aiming to establish the predictive behavior of the code. These methods are distinguished by their emphasis on local information. He will conclude that these approaches are more art than science.

  9. High-Assurance Spiral

    DTIC Science & Technology

    2017-11-01

    Approved for public release; distribution unlimited (PA# 88ABW-2017-5388, cleared 30 OCT 2017). Abstract (fragment): Cyber-physical systems ... physical processes that interact in intricate manners. This makes verification of the software complex and unwieldy. In this report, an approach towards ... resulting implementations. Subject terms: cyber-physical systems, formal guarantees, code generation.

  10. Electromagnetic plasma simulation in realistic geometries

    NASA Astrophysics Data System (ADS)

    Brandon, S.; Ambrosiano, J. J.; Nielsen, D.

    1991-08-01

    Particle-in-Cell (PIC) calculations have become an indispensable tool to model the nonlinear collective behavior of charged particle species in electromagnetic fields. Traditional finite difference codes, such as CONDOR (2-D) and ARGUS (3-D), are used extensively to design experiments and develop new concepts. A wide variety of physical processes can be modeled simply and efficiently by these codes. However, experiments have become more complex. Geometrical shapes and length scales are becoming increasingly more difficult to model. Spatial resolution requirements for the electromagnetic calculation force large grids and small time steps. Many hours of CRAY YMP time may be required to complete a 2-D calculation -- many more for 3-D calculations. In principle, the number of mesh points and particles need only be increased until all relevant physical processes are resolved. In practice, the size of a calculation is limited by the computer budget. As a result, experimental design is being limited by the ability to calculate, not by the experimenter's ingenuity or understanding of the physical processes involved. Several approaches to meet these computational demands are being pursued. Traditional PIC codes continue to be the major design tools. These codes are being actively maintained, optimized, and extended to handle larger and more complex problems. Two new formulations are being explored to relax the geometrical constraints of the finite difference codes. A modified finite volume test code, TALUS, uses a data structure compatible with that of standard finite difference meshes. This allows a basic conformal boundary/variable grid capability to be retrofitted to CONDOR. We are also pursuing an unstructured grid finite element code, MadMax. The unstructured mesh approach provides maximum flexibility in the geometrical model while also allowing local mesh refinement.
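
    For readers unfamiliar with the Particle-in-Cell cycle discussed above, the sketch below shows one leapfrog step of a minimal 1D periodic electrostatic PIC loop (deposit, field solve, gather, push) in normalised units. It is a generic illustration, not CONDOR, ARGUS, TALUS or MadMax, and the parameters are arbitrary:

      # Minimal sketch of one electrostatic PIC step: CIC deposit, FFT field solve, push.
      import numpy as np

      rng = np.random.default_rng(1)
      L, ng, npart, dt = 2.0 * np.pi, 64, 10_000, 0.1
      dx = L / ng
      x = rng.uniform(0.0, L, npart)            # electron positions
      v = rng.normal(0.0, 1.0, npart)           # electron velocities
      q_over_m, weight = -1.0, L / npart        # uniform neutralising ion background assumed

      def deposit(x):
          """Cloud-in-cell electron density, returned as net charge density rho = 1 - n_e."""
          g = x / dx
          i = np.floor(g).astype(int) % ng
          f = g - np.floor(g)
          n_e = (np.bincount(i, weights=1.0 - f, minlength=ng)
                 + np.bincount((i + 1) % ng, weights=f, minlength=ng)) * weight / dx
          return 1.0 - n_e

      def field(rho):
          """Solve Gauss's law dE/dx = rho with FFTs (k = 0 mode dropped)."""
          k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
          rho_k = np.fft.fft(rho)
          E_k = np.zeros_like(rho_k)
          E_k[1:] = rho_k[1:] / (1j * k[1:])
          return np.real(np.fft.ifft(E_k))

      rho = deposit(x)
      E = field(rho)
      g = x / dx                                 # gather E to particles (linear interpolation)
      i = np.floor(g).astype(int) % ng
      f = g - np.floor(g)
      E_p = (1.0 - f) * E[i] + f * E[(i + 1) % ng]
      v += q_over_m * E_p * dt                   # accelerate
      x = (x + v * dt) % L                       # move, with periodic wrap
      print("net charge density range:", rho.min(), rho.max())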

  11. Blast Fragmentation Modeling and Analysis

    DTIC Science & Technology

    2010-10-31

    Abstract (fragment): ... weapons device containing a multiphase blast explosive (MBX). The ARL Survivability Lethality and Analysis Directorate (SLAD) is ... velocity. In order to simulate the highly complex phenomenon, the exploding cylinder is modeled with the hydrodynamics code ALE3D, an arbitrary Lagrangian-Eulerian multiphysics code developed at Lawrence Livermore National Laboratory. ALE3D includes physical properties, constitutive models for ...

  12. Impact of the Primary Care Exception on Family Medicine Resident Coding.

    PubMed

    Cawse-Lucas, Jeanne; Evans, David V; Ruiz, David R; Allcut, Elizabeth A; Andrilla, C Holly A; Thompson, Matthew; Norris, Thomas E

    2016-03-01

    The Medicare Primary Care Exception (PCE) allows residents to see and bill for less-complex patients independently in the primary care setting, requiring attending physicians only to see patients for higher-level visits and complete physical exams in order to bill for them as such. Primary care residencies apply the PCE in various ways. We investigated the impact of the PCE on resident coding practices. Family medicine residency directors in a five-state region completed a survey regarding interpretation and application of the PCE, including the number of established patient evaluation and management codes entered by residents and attending faculty at their institution. The percentage of high-level codes was compared between residencies using chi-square tests. We analyzed coding data for 125,016 visits from 337 residents and 172 faculty physicians in 15 of 18 eligible family medicine residencies. Among programs applying the PCE criteria to all patients, residents billed 86.7% low-mid complexity and 13.3% high-complexity visits. In programs that only applied the PCE to Medicare patients, residents billed 74.9% low-mid complexity visits and 25.2% high-complexity visits. Attending physicians coded more high-complexity visits at both types of programs. The estimated revenue loss over the 1,650 RRC-required outpatient visits was $2,558.66 per resident and $57,569.85 per year for the average residency in our sample. Residents at family medicine programs that apply the PCE to all patients bill significantly fewer high-complexity visits. This finding leads to compliance and regulatory concerns and suggests significant revenue loss. Further study is required to determine whether this discrepancy also reflects inaccuracy in coding.

  13. Extension of the XGC code for global gyrokinetic simulations in stellarator geometry

    NASA Astrophysics Data System (ADS)

    Cole, Michael; Moritaka, Toseo; White, Roscoe; Hager, Robert; Ku, Seung-Hoe; Chang, Choong-Seock

    2017-10-01

    In this work, the total-f, gyrokinetic particle-in-cell code XGC is extended to treat stellarator geometries. Improvements to meshing tools and the code itself have enabled the first physics studies, including single particle tracing and flux surface mapping in the magnetic geometry of the heliotron LHD and quasi-isodynamic stellarator Wendelstein 7-X. These have provided the first successful test cases for our approach. XGC is uniquely placed to model the complex edge physics of stellarators. A roadmap to such a global confinement modeling capability will be presented. Single particle studies will include the physics of energetic particles' global stochastic motions and their effect on confinement. Good confinement of energetic particles is vital for a successful stellarator reactor design. These results can be compared in the core region with those of other codes, such as ORBIT3d. In subsequent work, neoclassical transport and turbulence can then be considered and compared to results from codes such as EUTERPE and GENE. After sufficient verification in the core region, XGC will move into the stellarator edge region including the material wall and neutral particle recycling.

  14. Addition of equilibrium air to an upwind Navier-Stokes code and other first steps toward a more generalized flow solver

    NASA Technical Reports Server (NTRS)

    Rosen, Bruce S.

    1991-01-01

    An upwind three-dimensional volume Navier-Stokes code is modified to facilitate modeling of complex geometries and flow fields represented by proposed National Aerospace Plane concepts. Code enhancements include an equilibrium air model, a generalized equilibrium gas model and several schemes to simplify treatment of complex geometric configurations. The code is also restructured for inclusion of an arbitrary number of independent and dependent variables. This latter capability is intended for eventual use to incorporate nonequilibrium/chemistry gas models, more sophisticated turbulence and transition models, or other physical phenomena which will require inclusion of additional variables and/or governing equations. Comparisons of computed results with experimental data and results obtained using other methods are presented for code validation purposes. Good correlation is obtained for all of the test cases considered, indicating the success of the current effort.

  15. Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT

    NASA Technical Reports Server (NTRS)

    Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.

    2015-01-01

    This report provides a code-to-code comparison between PATO, a recently developed high-fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, from both arc-jet testing and flight experiments. When using the exact same physical models, material properties and boundary conditions, the two codes give results that agree to within 2%. The minor discrepancy is attributed to the inclusion of the gas-phase heat capacity (cp) in the energy equation in PATO but not in FIAT.

  16. Cyclotron resonant scattering feature simulations. II. Description of the CRSF simulation process

    NASA Astrophysics Data System (ADS)

    Schwarm, F.-W.; Ballhausen, R.; Falkner, S.; Schönherr, G.; Pottschmidt, K.; Wolff, M. T.; Becker, P. A.; Fürst, F.; Marcu-Cheatham, D. M.; Hemphill, P. B.; Sokolova-Lapa, E.; Dauser, T.; Klochkov, D.; Ferrigno, C.; Wilms, J.

    2017-05-01

    Context. Cyclotron resonant scattering features (CRSFs) are formed by scattering of X-ray photons off quantized plasma electrons in the strong magnetic field (of the order of 10^12 G) close to the surface of an accreting X-ray pulsar. Due to the complex scattering cross-sections, the line profiles of CRSFs cannot be described by an analytic expression. Numerical methods, such as Monte Carlo (MC) simulations of the scattering processes, are required in order to predict precise line shapes for a given physical setup, which can be compared to observations to gain information about the underlying physics in these systems. Aims: A versatile simulation code is needed for the generation of synthetic cyclotron lines. It should make the simulation of sophisticated geometries possible for the first time, so that such geometries can be investigated. Methods: The simulation utilizes the mean free path tables described in the first paper of this series for the fast interpolation of propagation lengths. The code is parallelized to make the very time-consuming simulations possible on convenient time scales. Furthermore, it can generate responses to monoenergetic photon injections, producing Green's functions, which can be used later to generate spectra for arbitrary continua. Results: We develop a new simulation code to generate synthetic cyclotron lines for complex scenarios, allowing for unprecedented physical interpretation of the observed data. An associated XSPEC model implementation is used to fit synthetic line profiles to NuSTAR data of Cep X-4. The code has been developed with the main goal of overcoming previous geometrical constraints in MC simulations of CRSFs. By applying this code also to simpler, classic geometries used in previous works, we furthermore address issues of code verification and cross-comparison of various models. The XSPEC model and the Green's function tables are available online (see link in footnote, page 1).
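
    The Green's-function idea described in the Methods section can be illustrated with a small sketch: once the response to monoenergetic injections is tabulated, a spectrum for any continuum is a weighted sum over injection energies. The table below is a toy stand-in with a single absorption-like dip, not a real cyclotron-line response, and the code is not the published CRSF simulation:

      # Minimal sketch: fold a continuum through a tabulated Green's-function response.
      import numpy as np

      E = np.linspace(10.0, 60.0, 101)                     # photon energy grid [keV]
      G = np.eye(len(E))                                   # toy response: mostly unscattered
      G *= 1.0 - 0.6 * np.exp(-0.5 * ((E - 35.0) / 3.0) ** 2)[:, None]   # dip near 35 keV

      def emergent_spectrum(continuum):
          """Sum_i N(E_in_i) * G(E_out | E_in_i) over the injection grid."""
          return continuum @ G

      powerlaw = E ** -1.5                                 # illustrative injected continuum
      spec = emergent_spectrum(powerlaw)
      i35 = np.argmin(np.abs(E - 35.0))
      print("synthetic line depth at 35 keV:", 1.0 - spec[i35] / powerlaw[i35])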

  17. Development of an object-oriented finite element program: application to metal-forming and impact simulations

    NASA Astrophysics Data System (ADS)

    Pantale, O.; Caperaa, S.; Rakotomalala, R.

    2004-07-01

    During the last 50 years, the development of better numerical methods and more powerful computers has been a major enterprise for the scientific community. Over the same period, the finite element method has become a widely used tool for researchers and engineers. Recent advances in computational software have made it possible to solve more physical and complex problems such as coupled problems, nonlinearities, and high-strain and high-strain-rate problems. In this field, an accurate analysis of the large-deformation inelastic problems occurring in metal-forming or impact simulations is extremely important as a consequence of the large amount of plastic flow. In this presentation, the object-oriented implementation, using the C++ language, of an explicit finite element code called DynELA is presented. Object-oriented programming (OOP) leads to better-structured codes for the finite element method and facilitates the development, maintainability and expandability of such codes. The most significant advantage of OOP is in the modeling of complex physical systems such as deformation processing, where the overall complex problem is partitioned into individual sub-problems based on physical, mathematical or geometric reasoning. We first focus on the advantages of OOP for the development of scientific programs. Specific aspects of OOP, such as the inheritance mechanism, the operator overloading procedure and the use of template classes, are detailed. Then we present the approach used for the development of our finite element code through the presentation of the kinematics, conservative and constitutive laws and their respective implementation in C++. Finally, the efficiency and accuracy of our finite element program are investigated using a number of benchmark tests relative to metal forming and impact simulations.

  18. Multidimensional Multiphysics Simulation of TRISO Particle Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. D. Hales; R. L. Williamson; S. R. Novascone

    2013-11-01

    Multidimensional multiphysics analysis of TRISO-coated particle fuel using the BISON finite-element based nuclear fuels code is described. The governing equations and material models applicable to particle fuel and implemented in BISON are outlined. Code verification based on a recent IAEA benchmarking exercise is described, and excellent comparisons are reported. Multiple TRISO-coated particles of increasing geometric complexity are considered. It is shown that the code's ability to perform large-scale parallel computations permits application to complex 3D phenomena, while very efficient solutions for either 1D spherically symmetric or 2D axisymmetric geometries are straightforward. Additionally, the flexibility to easily include new physical and material models and the uncomplicated ability to couple to lower-length-scale simulations make BISON a powerful tool for the simulation of coated-particle fuel. Future code development activities and potential applications are identified.

  19. Simulating Coupling Complexity in Space Plasmas: First Results from a new code

    NASA Astrophysics Data System (ADS)

    Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.

    2005-12-01

    The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial necessary development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing 3 simulation technologies: 1) Computational fluid dynamics (hydrodynamics or magneto-hydrodynamics-MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas, and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present, this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will advance our understanding of the physics of neutral and charged gases enormously. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) To develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle. 2) To develop a coronal mass ejection and interplanetary shock propagation model for the inner and outer heliosphere, including, at a test-particle level, wave-particle interactions and particle acceleration at traveling shock waves and compression regions. 3) To develop an advanced Geospace General Circulation Model (GGCM) capable of realistically modeling space weather events, in particular the interaction with CMEs and geomagnetic storms. Furthermore, by implementing scalable run-time supports and sophisticated off- and on-line prediction algorithms, we anticipate important advances in the development of automatic and intelligent system software to optimize a wide variety of 'embedded' computations on parallel computers. Finally, public domain MHD and hydrodynamic codes have had a transforming effect on space and astrophysics. We expect that our new generation, open source, public domain multi-scale code will have a similar transformational effect in a variety of disciplines, opening up new classes of problems to physicists and engineers alike.

  20. nIFTY galaxy cluster simulations - III. The similarity and diversity of galaxies and subhaloes

    NASA Astrophysics Data System (ADS)

    Elahi, Pascal J.; Knebe, Alexander; Pearce, Frazer R.; Power, Chris; Yepes, Gustavo; Cui, Weiguang; Cunnama, Daniel; Kay, Scott T.; Sembolini, Federico; Beck, Alexander M.; Davé, Romeel; February, Sean; Huang, Shuiyao; Katz, Neal; McCarthy, Ian G.; Murante, Giuseppe; Perret, Valentin; Puchwein, Ewald; Saro, Alexandro; Teyssier, Romain

    2016-05-01

    We examine subhaloes and galaxies residing in a simulated Λ cold dark matter galaxy cluster (M^crit_200 = 1.1 × 10^15 h^-1 M_⊙) produced by hydrodynamical codes ranging from classic smoothed particle hydrodynamics (SPH) and newer SPH codes to adaptive and moving-mesh codes. These codes use subgrid models to capture galaxy formation physics. We compare how well these codes reproduce the same subhaloes/galaxies in gravity-only, non-radiative hydrodynamics and full feedback physics runs by looking at the overall subhalo/galaxy distribution and on an individual-object basis. We find that the subhalo population is reproduced to within ≲10 per cent for both dark-matter-only and non-radiative runs, with individual objects showing code-to-code scatter of ≲0.1 dex, although the gas in non-radiative simulations shows significant scatter. Including feedback physics significantly increases the diversity. Subhalo mass and V_max distributions vary by ≈20 per cent. The galaxy populations also show striking code-to-code variations. Although the Tully-Fisher relation is similar in almost all codes, the number of galaxies with 10^9 h^-1 M_⊙ ≲ M* ≲ 10^12 h^-1 M_⊙ can differ by a factor of 4. Individual galaxies show code-to-code scatter of ~0.5 dex in stellar mass. Moreover, systematic differences exist, with some codes producing galaxies 70 per cent smaller than others. The diversity partially arises from the inclusion/absence of active galactic nucleus feedback. Our results, combined with our companion papers, demonstrate that subgrid physics is not just subject to fine-tuning, but that the complexity of building galaxies in all environments remains a challenge. We argue that even basic galaxy properties, such as stellar mass to halo mass, should be treated with error bars of ~0.2-0.4 dex.

  1. ITS version 5.0 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user-friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  2. GBS: Global 3D simulation of tokamak edge region

    NASA Astrophysics Data System (ADS)

    Zhu, Ben; Fisher, Dustin; Rogers, Barrett; Ricci, Paolo

    2012-10-01

    A 3D two-fluid global code, the Global Braginskii Solver (GBS), is being developed to explore the physics of turbulent transport, confinement, self-consistent profile formation, pedestal scaling and related phenomena in the edge region of tokamaks. Aimed at solving the drift-reduced Braginskii equations [1] in complex magnetic geometry, GBS is used for turbulence simulation in the SOL region. In the recent upgrade, the simulation domain is expanded into the closed-flux region with twist-shift boundary conditions. Hence, the new GBS code is able to explore global transport physics in an annular full-torus domain from the top of the pedestal into the far SOL. We are in the process of identifying and analyzing the linear and nonlinear instabilities in the system using the new GBS code. Preliminary results will be presented and compared with other codes where possible. [1] A. Zeiler, J. F. Drake and B. Rogers, Phys. Plasmas 4, 2134 (1997)

  3. Data Assimilation - Advances and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented, and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
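
    As a concrete illustration of one of the update techniques named above, the sketch below performs a stochastic ensemble Kalman filter analysis step for a single scalar observation. The state vector, observation operator and error levels are illustrative placeholders, not taken from the presentation:

      # Minimal sketch of an EnKF update with perturbed observations.
      import numpy as np

      rng = np.random.default_rng(2)
      n_ens = 50
      X = rng.normal(loc=[1.0, 0.0, 2.0], scale=0.5, size=(n_ens, 3))   # prior ensemble

      H = np.array([1.0, 0.0, 0.0])            # observe the first state component
      y_obs, sigma_o = 1.4, 0.1                # observation and its error std. dev.

      Hx = X @ H                               # simulated observations
      y_pert = y_obs + rng.normal(0.0, sigma_o, n_ens)   # perturbed observations

      x_mean, Hx_mean = X.mean(axis=0), Hx.mean()
      P_xh = (X - x_mean).T @ (Hx - Hx_mean) / (n_ens - 1)   # cov(state, simulated obs)
      P_hh = np.var(Hx, ddof=1) + sigma_o**2                 # obs-space variance + obs error
      K = P_xh / P_hh                                        # Kalman gain

      X_post = X + np.outer(y_pert - Hx, K)                  # analysis ensemble
      print("prior mean:", x_mean, "posterior mean:", X_post.mean(axis=0))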

  4. Using Modern C++ Idiom for the Discretisation of Sets of Coupled Transport Equations in Numerical Plasma Physics

    NASA Astrophysics Data System (ADS)

    van Dijk, Jan; Hartgers, Bart; van der Mullen, Joost

    2006-10-01

    Self-consistent modelling of plasma sources requires a simultaneous treatment of multiple physical phenomena. As a result, plasma codes have a high degree of complexity, and with the growing interest in time-dependent modelling of non-equilibrium plasma in three dimensions, codes tend to become increasingly hard to explain and maintain. As a result of these trends there has been an increased interest in the software-engineering and implementation aspects of plasma modelling in our group at Eindhoven University of Technology. In this contribution we will present modern object-oriented techniques in C++ to solve an old problem: the discretisation of coupled linear(ized) equations involving multiple field variables on ortho-curvilinear meshes. The 'LinSys' code has been tailored to the coupled transport equations that occur in plasma physics. The implementation has been made both efficient and user-friendly by using modern idioms like expression templates and template meta-programming. Live demonstrations will be given. The code is available to interested parties; please visit www.dischargemodelling.org.

  5. An integrated radiation physics computer code system.

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Harris, D. W.

    1972-01-01

    An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.
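
    Spectrum unfolding of the kind mentioned above can be posed as inverting a detector response matrix. The sketch below is a generic illustration with a toy Gaussian-broadening response, not the system's actual response functions; it recovers a non-negative incident spectrum from a simulated pulse-height measurement:

      # Minimal sketch: unfold m = R @ s by non-negative least squares.
      import numpy as np
      from scipy.optimize import nnls

      n = 60
      E = np.linspace(0.1, 3.0, n)                        # energy bins [MeV]
      sigma = 0.05 + 0.03 * E                             # toy resolution per incident energy
      R = np.exp(-0.5 * ((E[:, None] - E[None, :]) / sigma[None, :]) ** 2)
      R /= R.sum(axis=0, keepdims=True)                   # each column: response to one energy

      s_true = np.exp(-E) + 0.8 * np.exp(-0.5 * ((E - 1.5) / 0.05) ** 2)   # continuum + line
      m = R @ s_true + np.random.default_rng(3).normal(0.0, 1e-3, n)       # noisy measurement

      s_unfolded, _ = nnls(R, m)
      mask = E > 1.0
      print("recovered line energy:", E[mask][np.argmax(s_unfolded[mask])], "MeV")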

  6. CPIC: a curvilinear Particle-In-Cell code for plasma-material interaction studies

    NASA Astrophysics Data System (ADS)

    Delzanno, G.; Camporeale, E.; Moulton, J. D.; Borovsky, J. E.; MacDonald, E.; Thomsen, M. F.

    2012-12-01

    We present a recently developed Particle-In-Cell (PIC) code in curvilinear geometry called CPIC (Curvilinear PIC) [1], in which the standard PIC algorithm is coupled with a grid generation/adaptation strategy. Through the grid generator, which maps the physical domain to a logical domain where the grid is uniform and Cartesian, the code can simulate domains of arbitrary complexity, including the interaction of complex objects with a plasma. At present the code is electrostatic. Poisson's equation (in logical space) can be solved either with an iterative method based on the Conjugate Gradient (CG) or the Generalized Minimal Residual (GMRES) method, coupled with a multigrid solver used as a preconditioner, or directly with multigrid. The multigrid strategy is critical for the solver to perform optimally or nearly optimally as the dimension of the problem increases. CPIC also features a hybrid particle mover, where the computational particles are characterized by position in logical space and velocity in physical space. The advantage of a hybrid mover, as opposed to more conventional movers that move particles directly in physical space, is that the interpolation of the particles in logical space is straightforward and computationally inexpensive, since one does not have to track the position of the particle. We will present our latest progress on the development of the code and document the code performance on standard plasma-physics tests. Then we will present the (preliminary) application of the code to a basic dynamic-charging problem, namely the charging and shielding of a spherical spacecraft in a magnetized plasma for various levels of magnetization, including the pulsed emission of an electron beam from the spacecraft. The dynamical evolution of the sheath and the time-dependent current collection will be described. This study is in support of the ConnEx mission concept to use an electron beam from a magnetospheric spacecraft to trace magnetic field lines from the magnetosphere to the ionosphere [2]. [1] G.L. Delzanno, E. Camporeale, "CPIC: a new Particle-in-Cell code for plasma-material interaction studies", in preparation (2012). [2] J.E. Borovsky, D.J. McComas, M.F. Thomsen, J.L. Burch, J. Cravens, C.J. Pollock, T.E. Moore, and S.B. Mende, "Magnetosphere-Ionosphere Observatory (MIO): A multisatellite mission designed to solve the problem of what generates auroral arcs," Eos Trans. Amer. Geophys. Union 79 (45), F744 (2000).
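
    The hybrid-mover idea (position in logical space, velocity in physical space) can be sketched in one dimension with an analytic map standing in for the grid generator; this is a loose illustration of the concept, not CPIC's actual mover or mapping:

      # Minimal sketch: push particles in logical space using the map's Jacobian.
      import numpy as np

      def x_of_xi(xi):
          """Physical coordinate for logical coordinate xi in [0, 1]; clusters cells near x = 0."""
          return xi**2

      def jacobian(xi):
          """dx/dxi of the map above."""
          return 2.0 * xi

      xi = np.array([0.2, 0.5, 0.9])       # particle positions in logical space
      v = np.array([0.1, -0.05, 0.2])      # particle velocities in physical space
      dt = 0.01

      # dxi/dt = v / (dx/dxi): interpolation to the uniform logical grid stays trivial,
      # because the particle's logical position is already known.
      xi = xi + dt * v / jacobian(xi)
      print("physical positions after the push:", x_of_xi(xi))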

  7. A generic framework for individual-based modelling and physical-biological interaction

    PubMed Central

    2018-01-01

    The increased availability of high-resolution ocean data globally has enabled more detailed analyses of physical-biological interactions and their consequences for the ecosystem. We present IBMlib, which is a versatile, portable and computationally effective framework for conducting Lagrangian simulations in the marine environment. The purpose of the framework is to handle complex individual-level biological models of organisms, combined with a realistic 3D oceanographic model of the physics and biogeochemistry describing the environment of the organisms, without assumptions about spatial or temporal scales. The open-source framework features a minimal robust interface to facilitate the coupling between individual-level biological models and oceanographic models, and we provide application examples including forward/backward simulations, habitat connectivity calculations, assessment of ocean conditions, comparison of physical circulation models, model ensemble runs and, recently, posterior Eulerian simulations using the IBMlib framework. We present the code design ideas behind the longevity of the code, our implementation experiences, as well as code performance benchmarking. The framework may contribute substantially to progress in representing, understanding, predicting and eventually managing marine ecosystems. PMID:29351280
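
    A minimal sketch of the Lagrangian advection at the core of such a framework (generic, not IBMlib's interface; the circulation field is analytic rather than interpolated from an ocean model) is shown below, using a midpoint integrator:

      # Minimal sketch: RK2 (midpoint) advection of particles in a steady 2D current field.
      import numpy as np

      def velocity(p):
          """Solid-body-rotation-like current; p has shape (n_particles, 2)."""
          x, y = p[:, 0], p[:, 1]
          return np.column_stack([-y, x])

      p = np.array([[1.0, 0.0], [0.5, 0.5]])   # initial particle positions
      dt, n_steps = 0.01, 628                  # roughly one rotation period

      for _ in range(n_steps):
          k1 = velocity(p)
          k2 = velocity(p + 0.5 * dt * k1)     # velocity at the midpoint
          p = p + dt * k2

      print("positions after ~2*pi time units:", p)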

  8. Physics behind the mechanical nucleosome positioning code

    NASA Astrophysics Data System (ADS)

    Zuiddam, Martijn; Everaers, Ralf; Schiessel, Helmut

    2017-11-01

    The positions along DNA molecules of nucleosomes, the most abundant DNA-protein complexes in cells, are influenced by the sequence-dependent DNA mechanics and geometry. This leads to the "nucleosome positioning code", a preference of nucleosomes for certain sequence motifs. Here we introduce a simplified model of the nucleosome where a coarse-grained DNA molecule is frozen into an idealized superhelical shape. We calculate the exact sequence preferences of our nucleosome model and find it to reproduce qualitatively all the main features known to influence nucleosome positions. Moreover, using well-controlled approximations to this model allows us to come to a detailed understanding of the physics behind the sequence preferences of nucleosomes.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patnaik, P. C.

    The SIGMET mesoscale meteorology simulation code represents an extension, in terms of physical modelling detail and numerical approach, of the work of Anthes (1972) and Anthes and Warner (1974). The code utilizes a finite difference technique to solve the so-called primitive equations which describe transient flow in the atmosphere. The SIGMET model contains all of the physics required to simulate the time-dependent meteorology of a region, with description of both the planetary boundary layer and upper-level flow as they are affected by synoptic forcing and complex terrain. The mathematical formulation of the SIGMET model and the various physical effects incorporated into it are summarized.

  10. Design Considerations of a Virtual Laboratory for Advanced X-ray Sources

    NASA Astrophysics Data System (ADS)

    Luginsland, J. W.; Frese, M. H.; Frese, S. D.; Watrous, J. J.; Heileman, G. L.

    2004-11-01

    The field of scientific computation has greatly advanced in the last few years, resulting in the ability to perform complex computer simulations that can predict the performance of real-world experiments in a number of fields of study. Among the forces driving this new computational capability is the advent of parallel algorithms, allowing calculations in three-dimensional space on realistic time scales. Electromagnetic radiation sources driven by high-voltage, high-current electron beams offer an area to further push the state of the art in high-fidelity, first-principles simulation tools. The physics of these x-ray sources combines kinetic plasma physics (electron beams) with dense fluid-like plasma physics (anode plasmas) and x-ray generation (bremsstrahlung). There are a number of mature techniques and software packages for dealing with the individual aspects of these sources, such as Particle-In-Cell (PIC), Magneto-Hydrodynamics (MHD), and radiation transport codes. The current effort is focused on developing an object-oriented software environment using the Rational Unified Process and the Unified Modeling Language (UML) to provide a framework where multiple 3D parallel physics packages, such as a PIC code (ICEPIC), an MHD code (MACH), and an x-ray transport code (ITS), can co-exist in a system-of-systems approach to modeling advanced x-ray sources. Initial software design and assessments of the various physics algorithms' fidelity will be presented.

  11. NTRFACE for MAGIC

    DTIC Science & Technology

    1989-07-31

    Personal author: N. T. Gladd. Abstract (fragment): ... the MAGIC Particle-in-Cell Simulation Code. The NTRFACE system was developed ... made concrete by applying it to a specific application: a mature, highly complex plasma physics particle-in-cell simulation code named MAGIC.

  12. The physics of volume rendering

    NASA Astrophysics Data System (ADS)

    Peters, Thomas

    2014-11-01

    Radiation transfer is an important topic in several physical disciplines, probably most prominently in astrophysics. Computer scientists use radiation transfer, among other things, for the visualization of complex data sets with direct volume rendering. In this article, I point out the connection between physical radiation transfer and volume rendering, and I describe an implementation of direct volume rendering in the astrophysical radiation transfer code RADMC-3D. I show examples for the use of this module on analytical models and simulation data.
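
    The connection the article draws can be made concrete with the standard emission-absorption update used in ray marching; the sketch below is a generic illustration (not RADMC-3D code) with an invented absorption and source profile along one ray:

      # Minimal sketch: integrate I <- I * exp(-dtau) + S * (1 - exp(-dtau)) along a ray.
      import numpy as np

      n_cells, ds = 200, 0.05                           # cells along the ray, path length per cell
      s = (np.arange(n_cells) + 0.5) * ds
      kappa = 0.2 + 1.5 * np.exp(-0.5 * ((s - 5.0) / 0.5) ** 2)   # absorption coefficient
      source = np.where(np.abs(s - 5.0) < 1.0, 2.0, 0.1)          # source function

      I = 0.0                                            # intensity entering the ray
      for k in range(n_cells):
          att = np.exp(-kappa[k] * ds)                   # transmission of one cell
          I = I * att + source[k] * (1.0 - att)          # absorb, then add local emission

      print("emergent intensity:", I)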

  13. ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2008-04-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user-friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

  14. Toward Supersonic Retropropulsion CFD Validation

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Schauerhamer, D. Guy; Trumble, Kerry; Sozer, Emre; Barnhardt, Michael; Carlson, Jan-Renee; Edquist, Karl

    2011-01-01

    This paper begins the process of verifying and validating computational fluid dynamics (CFD) codes for supersonic retropropulsive flows. Four CFD codes (DPLR, FUN3D, OVERFLOW, and US3D) are used to perform various numerical and physical modeling studies toward the goal of comparing predictions with a wind tunnel experiment specifically designed to support CFD validation. Numerical studies run the gamut in rigor from code-to-code comparisons to observed order-of-accuracy tests. Results indicate that for this complex flowfield, involving time-dependent shocks and vortex shedding, the design order of accuracy is not clearly evident. Also explored is the extent of physical modeling necessary to predict the salient flowfield features found in high-speed Schlieren images and surface pressure measurements taken during the validation experiment. Physical modeling studies include geometric items such as wind tunnel wall and sting mount interference, as well as turbulence modeling that ranges from a RANS (Reynolds-Averaged Navier-Stokes) 2-equation model to DES (Detached Eddy Simulation) models. These studies indicate that tunnel wall interference is minimal for the cases investigated; model mounting hardware effects are confined to the aft end of the model; and sparse grid resolution and turbulence modeling can damp or entirely dissipate the unsteadiness of this self-excited flow.
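
    For reference, an observed order-of-accuracy test of the kind named above compares a quantity computed on three systematically refined grids. The sketch below shows the arithmetic with made-up sample values (not data from the paper):

      # Minimal sketch: observed order p from three grid levels with refinement ratio r.
      import math

      f_fine, f_medium, f_coarse = 1.0012, 1.0050, 1.0201   # some scalar output, e.g. a force coefficient
      r = 2.0                                               # grid refinement ratio

      p_obs = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
      print(f"observed order of accuracy: {p_obs:.2f}")     # ~2 would match a second-order design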

  15. Coupled reactors analysis: New needs and advances using Monte Carlo methodology

    DOE PAGES

    Aufiero, M.; Palmiotti, G.; Salvatores, M.; ...

    2016-08-20

    Coupled reactors and the coupling features of large or heterogeneous core reactors can be investigated with the Avery theory, which allows a physics understanding of the main features of these systems. However, the complex geometries that are often encountered in association with coupled reactors require a detailed geometry description that can be easily provided by modern Monte Carlo (MC) codes. This implies a MC calculation of the coupling parameters defined by Avery and of the sensitivity coefficients that allow further detailed physics analysis. The results presented in this paper show that the MC code SERPENT has been successfully modified to meet the required capabilities.

  16. An uncertainty analysis of the hydrogen source term for a station blackout accident in Sequoyah using MELCOR 1.8.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauntt, Randall O.; Bixler, Nathan E.; Wagner, Kenneth Charles

    2014-03-01

    A methodology for using the MELCOR code with the Latin Hypercube Sampling method was developed to estimate uncertainty in various predicted quantities, such as hydrogen generation or release of fission products, under severe accident conditions. In this case, the emphasis was on estimating the range of hydrogen sources in station blackout conditions in the Sequoyah Ice Condenser plant, taking into account uncertainties in the modeled physics known to affect hydrogen generation. The method uses user-specified likelihood distributions for uncertain model parameters, which may include uncertainties of a stochastic nature, to produce a collection of code calculations, or realizations, characterizing the range of possible outcomes. Forty MELCOR code realizations of Sequoyah were conducted that included 10 uncertain parameters, producing a range of in-vessel hydrogen quantities. The range of total hydrogen produced was approximately 583 kg ± 131 kg. Sensitivity analyses revealed expected trends with respect to the parameters of greatest importance; however, considerable scatter in results when plotted against any of the uncertain parameters was observed, with no parameter manifesting dominant effects on hydrogen generation. It is concluded that, with respect to the physics parameters investigated, in order to further reduce the predicted hydrogen uncertainty it would be necessary to reduce all physics parameter uncertainties similarly, bearing in mind that some parameters are inherently uncertain within a range. It is suspected that some residual uncertainty associated with modeling complex, coupled and synergistic phenomena is an inherent aspect of complex systems and cannot be reduced to point value estimates. Probabilistic analyses such as the one demonstrated in this work are important to properly characterize the response of complex systems such as severe accident progression in nuclear power plants.
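
    The sampling step of such a study can be illustrated with a small Latin Hypercube design; the parameter names and ranges below are invented placeholders, not the MELCOR study's actual uncertain inputs:

      # Minimal sketch: a 40-realization Latin Hypercube design over three uniform parameters.
      import numpy as np

      rng = np.random.default_rng(4)
      n_realizations = 40
      params = {                                   # hypothetical uncertain inputs and ranges
          "oxidation_rate_multiplier": (0.5, 2.0),
          "debris_porosity": (0.2, 0.5),
          "failure_temperature_K": (2300.0, 2600.0),
      }

      def latin_hypercube(n, d, rng):
          """One point per stratum in each dimension, with columns shuffled independently."""
          u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
          for j in range(d):
              u[:, j] = rng.permutation(u[:, j])
          return u

      u = latin_hypercube(n_realizations, len(params), rng)
      design = {name: lo + (hi - lo) * u[:, j]
                for j, (name, (lo, hi)) in enumerate(params.items())}
      print({k: v[:3].round(3) for k, v in design.items()})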

  17. Extremely accurate sequential verification of RELAP5-3D

    DOE PAGES

    Mesina, George L.; Aumiller, David L.; Buschman, Francis X.

    2015-11-19

    Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations, and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also tests that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.

  18. The next-generation ESL continuum gyrokinetic edge code

    NASA Astrophysics Data System (ADS)

    Cohen, R.; Dorr, M.; Hittinger, J.; Rognlien, T.; Collela, P.; Martin, D.

    2009-05-01

    The Edge Simulation Laboratory (ESL) project is developing continuum-based approaches to kinetic simulation of edge plasmas. A new code is being developed, based on a conservative formulation and fourth-order discretization of full-f gyrokinetic equations in parallel-velocity, magnetic-moment coordinates. The code exploits mapped multiblock grids to deal with the geometric complexities of the edge region, and utilizes a new flux limiter [P. Colella and M.D. Sekora, JCP 227, 7069 (2008)] to suppress unphysical oscillations about discontinuities while maintaining high-order accuracy elsewhere. The code is just becoming operational; we will report initial tests for neoclassical orbit calculations in closed-flux surface and limiter (closed plus open flux surfaces) geometry. It is anticipated that the algorithmic refinements in the new code will address the slow numerical instability that was observed in some long simulations with the existing TEMPEST code. We will also discuss the status and plans for physics enhancements to the new code.

  19. Features of MCNP6 Relevant to Medical Radiation Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, H. Grady III; Goorley, John T.

    2012-08-29

    MCNP (Monte Carlo N-Particle) is a general-purpose Monte Carlo code for simulating the transport of neutrons, photons, electrons, positrons, and more recently other fundamental particles and heavy ions. Over many years MCNP has found a wide range of applications in many different fields, including medical radiation physics. In this presentation we will describe and illustrate a number of significant recently-developed features in the current version of the code, MCNP6, having particular utility for medical physics. Among these are major extensions of the ability to simulate large, complex geometries, improvement in memory requirements and speed for large lattices, introduction of mesh-based isotopic reaction tallies, advances in radiography simulation, expanded variance-reduction capabilities, especially for pulse-height tallies, and a large number of enhancements in photon/electron transport.

  20. Modeling And Simulation Of Bar Code Scanners Using Computer Aided Design Software

    NASA Astrophysics Data System (ADS)

    Hellekson, Ron; Campbell, Scott

    1988-06-01

    Many optical systems have demanding requirements to package the system in a small three-dimensional space. The use of computer graphics tools can be a tremendous aid to the designer in analyzing the optical problems created by smaller and less costly systems. The Spectra Physics grocery store bar code scanner employs an especially complex three-dimensional scan pattern to read bar code labels. By using a specially written program which interfaces with a computer aided design system, we have simulated many of the functions of this complex optical system. In this paper we will illustrate how a recent version of the scanner has been designed. We will discuss the use of computer graphics in the design process, including interactive tweaking of the scan pattern, analysis of collected light, analysis of the scan pattern density, and analysis of the manufacturing tolerances used to build the scanner.

  1. Computer modeling and simulation in inertial confinement fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCrory, R.L.; Verdon, C.P.

    1989-03-01

    The complex hydrodynamic and transport processes associated with the implosion of an inertial confinement fusion (ICF) pellet place considerable demands on numerical simulation programs. Processes associated with implosion can usually be described using relatively simple models, but their complex interplay requires that programs model most of the relevant physical phenomena accurately. Most hydrodynamic codes used in ICF incorporate a one-fluid, two-temperature model. Electrons and ions are assumed to flow as one fluid (no charge separation). Due to the relatively weak coupling between the ions and electrons, each species is treated separately in terms of its temperature. In this paper we describe some of the major components associated with an ICF hydrodynamics simulation code. To serve as an example we draw heavily on a two-dimensional Lagrangian hydrodynamic code (ORCHID) written at the University of Rochester's Laboratory for Laser Energetics. 46 refs., 19 figs., 1 tab.
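    The one-fluid, two-temperature idea can be sketched in a few lines: electrons and ions share a single velocity field, but their temperatures relax toward each other at a finite rate. The coefficients below are illustrative only and assume equal heat capacities; they are not ORCHID's physics models.

```python
# Toy two-temperature relaxation: one fluid, two temperatures. Electrons and
# ions exchange energy at a finite rate; equal heat capacities are assumed for
# simplicity and the coupling time is arbitrary (not ORCHID's physics).
def relax_two_temperature(Te, Ti, tau_ei, dt, nsteps):
    """Explicit integration of dTe/dt = -(Te - Ti)/tau, dTi/dt = +(Te - Ti)/tau."""
    for _ in range(nsteps):
        dT = (Te - Ti) / tau_ei
        Te, Ti = Te - dt * dT, Ti + dt * dT
    return Te, Ti

# Hot electrons equilibrating with colder ions (arbitrary units)
print(relax_two_temperature(Te=2000.0, Ti=500.0, tau_ei=1.0, dt=0.01, nsteps=500))
```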

  2. From model conception to verification and validation, a global approach to multiphase Navier-Stoke models with an emphasis on volcanic explosive phenomenology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dartevelle, Sebastian

    2007-10-01

    Large-scale volcanic eruptions are hazardous events that cannot be described by detailed and accurate in situ measurements: hence, little to no real-time data exists to rigorously validate current computer models of these events. In addition, such phenomenology involves highly complex, nonlinear, and unsteady physical behaviors over many spatial and time scales. As a result, volcanic explosive phenomenology is poorly understood in terms of its physics, and inadequately constrained in terms of initial, boundary, and inflow conditions. Nevertheless, code verification and validation become even more critical because more and more volcanologists use numerical data for assessment and mitigation of volcanic hazards. In this report, we evaluate the process of model and code development in the context of geophysical multiphase flows. We describe: (1) the conception of a theoretical, multiphase, Navier-Stokes model, (2) its implementation into a numerical code, (3) the verification of the code, and (4) the validation of such a model within the context of turbulent and underexpanded jet physics. Within the validation framework, we suggest focusing on the key physics that control the volcanic clouds, namely the momentum-driven supersonic jet and the buoyancy-driven turbulent plume. For instance, we propose to compare numerical results against a set of simple and well-constrained analog experiments, each of which uniquely and unambiguously represents one of the key phenomena.

  3. Chemical and physical characterization of the first stages of protoplanetary disk formation

    NASA Astrophysics Data System (ADS)

    Hincelin, Ugo

    2012-12-01

    Low mass stars, like our Sun, are born from the collapse of a molecular cloud. The matter falls toward the center of the cloud, creating a protoplanetary disk surrounding a protostar. Planets and other Solar System bodies will be formed in the disk. The chemical composition of the interstellar matter and its evolution during the formation of the disk are important for better understanding the formation process of these objects. I studied the chemical and physical evolution of this matter, from the cloud to the disk, using the chemical gas-grain code Nautilus. A sensitivity study to some parameters of the code (such as elemental abundances and parameters of grain surface chemistry) has been done. In particular, updates to the rate coefficients and branching ratios of the reactions in our chemical network proved important, affecting the abundances of some chemical species and the sensitivity of the code to other parameters. Several physical models of a collapsing dense core have also been considered. The most comprehensive and robust approach has been to interface our chemical code with the radiation-magneto-hydrodynamic stellar formation code RAMSES, in order to model the physical and chemical evolution of a young forming disk in three dimensions. Our study showed that the disk keeps imprints of the past history of the matter, so its chemical composition is sensitive to the initial conditions.

  4. An electrostatic Particle-In-Cell code on multi-block structured meshes

    NASA Astrophysics Data System (ADS)

    Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca; Vernon, Louis J.; Moulton, J. David

    2017-12-01

    We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. Despite the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma-material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.
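    The hybrid mover described above can be illustrated with a toy single block: the particle position is stored in logical (unit-square) coordinates while its velocity stays in physical space, and the inverse Jacobian of the block mapping converts the physical velocity into a rate of change of the logical coordinates. The analytic mapping below is hypothetical, chosen only so that the Jacobian is easy to write down; it is not CPIC's mesh machinery.

```python
# Toy hybrid mover on one curvilinear block: logical-space position, physical-
# space velocity. The analytic block mapping and its Jacobian are hypothetical.
import numpy as np

def mapping(xi, eta):
    """Map logical (xi, eta) in [0, 1]^2 to a gently distorted physical block."""
    return np.array([xi + 0.1 * np.sin(np.pi * eta),
                     eta + 0.1 * np.sin(np.pi * xi)])

def jacobian(xi, eta):
    """Analytic Jacobian d(x, y)/d(xi, eta) of the mapping above."""
    return np.array([[1.0, 0.1 * np.pi * np.cos(np.pi * eta)],
                     [0.1 * np.pi * np.cos(np.pi * xi), 1.0]])

def push(logical, v_phys, dt):
    """Advance the logical position using the physical-space velocity."""
    dlogical_dt = np.linalg.solve(jacobian(*logical), v_phys)
    return logical + dt * dlogical_dt

pos = np.array([0.25, 0.5])     # logical coordinates inside the block
vel = np.array([1.0, 0.2])      # physical-space velocity (constant here)
for _ in range(10):
    pos = push(pos, vel, dt=0.01)
print("logical:", pos, "physical:", mapping(*pos))
```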

  5. An electrostatic Particle-In-Cell code on multi-block structured meshes

    DOE PAGES

    Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca; ...

    2017-09-14

    We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. In spite of the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma–material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.

  6. An electrostatic Particle-In-Cell code on multi-block structured meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca

    We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. In spite of the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma–material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.

  7. A preliminary Monte Carlo study for the treatment head of a carbon-ion radiotherapy facility using TOPAS

    NASA Astrophysics Data System (ADS)

    Liu, Hongdong; Zhang, Lian; Chen, Zhi; Liu, Xinguo; Dai, Zhongying; Li, Qiang; Xu, Xie George

    2017-09-01

    In medical physics it is desirable to have a Monte Carlo code that is reliable and flexible, yet not overly complex, for dose verification, optimization, and component design. TOPAS is a newly developed Monte Carlo simulation tool which combines the extensive radiation physics libraries available in the Geant4 code with easy-to-use geometry and support for visualization. Although TOPAS has been widely tested and verified in simulations of proton therapy, there has been no reported application to carbon ion therapy. To evaluate the feasibility and accuracy of TOPAS simulations for carbon ion therapy, a licensed TOPAS code (version 3_0_p1) was used to carry out a dosimetric study of therapeutic carbon ions. Depth-dose profiles based on different physics models were obtained and compared with measurements. The G4QMD model is found to be at least as accurate as the TOPAS default BIC physics model for carbon ions, and when the energy is increased to relatively high levels such as 400 MeV/u, the G4QMD model shows preferable performance. In addition, simulations of special components used in the treatment head at the Institute of Modern Physics facility were conducted to investigate the spread-out Bragg peak (SOBP) dose distribution in water. The physical dose of the SOBP in water was found to be consistent with the design aim of the 6 cm ridge filter.

  8. CAreDroid: Adaptation Framework for Android Context-Aware Applications

    PubMed Central

    Elmalaki, Salma; Wanner, Lucas; Srivastava, Mani

    2015-01-01

    Context-awareness is the ability of software systems to sense and adapt to their physical environment. Many contemporary mobile applications adapt to changing locations, connectivity states, available computational and energy resources, and proximity to other users and devices. Nevertheless, there is little systematic support for context-awareness in contemporary mobile operating systems. Because of this, application developers must build their own context-awareness adaptation engines, dealing directly with sensors and polluting application code with complex adaptation decisions. In this paper, we introduce CAreDroid, a framework designed to decouple the application logic from the complex adaptation decisions in Android context-aware applications. In this framework, developers are required only to focus on the application logic by providing a list of methods that are sensitive to certain contexts along with the permissible operating ranges under those contexts. At run time, CAreDroid monitors the context of the physical environment and intercepts calls to sensitive methods, activating only the blocks of code that best fit the current physical context. CAreDroid is implemented as part of the Android runtime system. By pushing context monitoring and adaptation into the runtime system, CAreDroid eases the development of context-aware applications and increases their efficiency. In particular, case study applications implemented using CAreDroid are shown to: (1) require at least 50% fewer lines of code and (2) execute at least 10× more efficiently than equivalent context-aware applications that use only standard Android APIs. PMID:26834512
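    The dispatch idea, intercepting calls and activating the variant whose operating range matches the current context, can be sketched in a language-neutral way. The following Python sketch is only a conceptual analogue of CAreDroid (which is implemented inside the Android runtime and intercepts real method calls); the context fields and thresholds are hypothetical.

```python
# Conceptual analogue (in Python) of context-sensitive dispatch: variants of a
# method register the context range in which they may run, and a wrapper picks
# the first variant whose predicate matches the current context. The context
# source and thresholds are hypothetical, not CAreDroid's API.
current_context = {"battery": 0.8, "bandwidth_mbps": 20.0}

class ContextSensitive:
    def __init__(self):
        self._variants = []          # list of (predicate, function) pairs

    def register(self, predicate):
        def decorator(fn):
            self._variants.append((predicate, fn))
            return fn
        return decorator

    def __call__(self, *args, **kwargs):
        for predicate, fn in self._variants:
            if predicate(current_context):
                return fn(*args, **kwargs)
        raise RuntimeError("no variant fits the current context")

fetch = ContextSensitive()

@fetch.register(lambda ctx: ctx["bandwidth_mbps"] >= 10.0)
def fetch_full(url):
    return f"full-quality download of {url}"

@fetch.register(lambda ctx: ctx["bandwidth_mbps"] < 10.0)
def fetch_lite(url):
    return f"reduced-quality download of {url}"

print(fetch("http://example.com/video"))
```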

  9. CAreDroid: Adaptation Framework for Android Context-Aware Applications.

    PubMed

    Elmalaki, Salma; Wanner, Lucas; Srivastava, Mani

    2015-09-01

    Context-awareness is the ability of software systems to sense and adapt to their physical environment. Many contemporary mobile applications adapt to changing locations, connectivity states, available computational and energy resources, and proximity to other users and devices. Nevertheless, there is little systematic support for context-awareness in contemporary mobile operating systems. Because of this, application developers must build their own context-awareness adaptation engines, dealing directly with sensors and polluting application code with complex adaptation decisions. In this paper, we introduce CAreDroid, a framework designed to decouple the application logic from the complex adaptation decisions in Android context-aware applications. In this framework, developers are required only to focus on the application logic by providing a list of methods that are sensitive to certain contexts along with the permissible operating ranges under those contexts. At run time, CAreDroid monitors the context of the physical environment and intercepts calls to sensitive methods, activating only the blocks of code that best fit the current physical context. CAreDroid is implemented as part of the Android runtime system. By pushing context monitoring and adaptation into the runtime system, CAreDroid eases the development of context-aware applications and increases their efficiency. In particular, case study applications implemented using CAreDroid are shown to: (1) require at least 50% fewer lines of code and (2) execute at least 10× more efficiently than equivalent context-aware applications that use only standard Android APIs.

  10. The role of CFD in the design process

    NASA Astrophysics Data System (ADS)

    Jennions, Ian K.

    1994-05-01

    Over the last decade the role played by CFD codes in turbomachinery design has changed remarkably. While convergence/stability or even the existence of unique solutions was discussed fervently ten years ago, CFD codes now form a valuable part of an overall integrated design system and have caused us to re-think much of what we do. The geometric and physical complexities addressed have also evolved, as have the number of software houses competing with in-house developers to provide solutions to daily design problems. This paper reviews how GE Aircraft Engines (GEAE) uses CFD in the turbomachinery design process and examines many of the issues faced in successful code implementation.

  11. OBPR Product Lines, Human Research Initiative, and Physics Roadmap for Exploration

    NASA Technical Reports Server (NTRS)

    Israelsson, Ulf

    2004-01-01

    The pace of change has increased at NASA. OBPR's focus is now on the human interface as it relates to the new Exploration vision. The fundamental physics community must demonstrate how it can contribute. Many opportunities exist for physicists to participate in addressing NASA's cross-disciplinary exploration challenges: a) physicists can contribute to elucidating basic operating principles for complex biological systems; b) physics technologies can contribute to developing miniature sensors and systems required for manned missions to Mars. NASA Codes other than OBPR may be viable sources of funding for physics research.

  12. Toward a CFD nose-to-tail capability - Hypersonic unsteady Navier-Stokes code validation

    NASA Technical Reports Server (NTRS)

    Edwards, Thomas A.; Flores, Jolen

    1989-01-01

    Computational fluid dynamics (CFD) research for hypersonic flows presents new problems in code validation because of the added complexity of the physical models. This paper surveys code validation procedures applicable to hypersonic flow models that include real gas effects. The current status of hypersonic CFD flow analysis is assessed with the Compressible Navier-Stokes (CNS) code as a case study. The methods of code validation discussed go beyond comparison with experimental data to include comparisons with other codes and formulations, component analyses, and estimation of numerical errors. Current results indicate that predicting hypersonic flows of perfect gases and equilibrium air is well in hand. Pressure, shock location, and integrated quantities are relatively easy to predict accurately, while surface quantities such as heat transfer are more sensitive to the solution procedure. Modeling transition to turbulence needs refinement, though preliminary results are promising.

  13. Spacecraft-plasma interaction codes: NASCAP/GEO, NASCAP/LEO, POLAR, DynaPAC, and EPSAT

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Jongeward, G. A.; Cooke, D. L.

    1992-01-01

    Development of a computer code to simulate interactions between the surfaces of a geometrically complex spacecraft and the space plasma environment involves: (1) defining the relevant physical phenomena and formulating them in appropriate levels of approximation; (2) defining a representation for the 3-D space external to the spacecraft and a means for defining the spacecraft surface geometry and embedding it in the surrounding space; (3) packaging the code so that it is easy and practical to use, interpret, and present the results; and (4) validating the code by continual comparison with theoretical models, ground test data, and spaceflight experiments. The physical content, geometrical capabilities, and application of five S-CUBED developed spacecraft plasma interaction codes are discussed. The NASA Charging Analyzer Program/geosynchronous earth orbit (NASCAP/GEO) is used to illustrate the role of electrostatic barrier formation in daylight spacecraft charging. NASCAP/low Earth orbit (LEO) applications to the CHARGE-2 and Space Power Experiment Aboard Rockets (SPEAR)-1 rocket payloads are shown. DynaPAC application to the SPEAR-2 rocket payloads is described. Environment Power System Analysis Tool (EPSAT) is illustrated by application to Tethered Satellite System 1 (TSS-1), SPEAR-3, and Sundance. A detailed description and application of the Potentials of Large Objects in the Auroral Region (POLAR) Code are presented.

  14. Transonic Navier-Stokes wing solutions using a zonal approach. Part 2: High angle-of-attack simulation

    NASA Technical Reports Server (NTRS)

    Chaderjian, N. M.

    1986-01-01

    A computer code is under development whereby the thin-layer Reynolds-averaged Navier-Stokes equations are to be applied to realistic fighter-aircraft configurations. This transonic Navier-Stokes code (TNS) utilizes a zonal approach in order to treat complex geometries and satisfy in-core computer memory constraints. The zonal approach has been applied to isolated wing geometries in order to facilitate code development. Part 1 of this paper addresses the TNS finite-difference algorithm, zonal methodology, and code validation with experimental data. Part 2 of this paper addresses some numerical issues such as code robustness, efficiency, and accuracy at high angles of attack. Special free-stream-preserving metrics proved an effective way to treat H-mesh singularities over a large range of severe flow conditions, including strong leading-edge flow gradients, massive shock-induced separation, and stall. Furthermore, lift and drag coefficients have been computed for a wing up through CLmax. Numerical oil flow patterns and particle trajectories are presented both for subcritical and transonic flow. These flow simulations are rich with complex separated flow physics and demonstrate the efficiency and robustness of the zonal approach.

  15. Flexible Automatic Discretization for Finite Differences: Eliminating the Human Factor

    NASA Astrophysics Data System (ADS)

    Pranger, Casper

    2017-04-01

    In the geophysical numerical modelling community, finite differences are (in part due to their small footprint) a popular spatial discretization method for PDEs in the regular-shaped continuum that is the earth. However, they rapidly become prone to programming mistakes as the physics increases in complexity. To eliminate opportunities for human error, we have designed an automatic discretization algorithm using Wolfram Mathematica, in which the user supplies symbolic PDEs, the number of spatial dimensions, and a choice of symbolic boundary conditions, and the script transforms this information into matrix and right-hand-side rules ready for use in a C++ code that will accept them. The symbolic PDEs are further used to automatically develop and perform manufactured-solution benchmarks, ensuring physical fidelity at all stages while providing pragmatic targets for numerical accuracy. We find that this procedure greatly accelerates code development and provides a great deal of flexibility in one's choice of physics.
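    The same workflow, symbolic PDE in, discrete rule and manufactured-solution source out, can be sketched with a computer algebra system. The snippet below uses SymPy rather than Mathematica (which the authors use) and a hypothetical 1-D Poisson problem purely for illustration.

```python
# SymPy sketch: derive a finite-difference rule's behavior and a manufactured-
# solution source term symbolically. The 1-D Poisson problem -u'' = f with
# u_exact = sin(pi*x) is a hypothetical stand-in for the user-supplied PDEs.
import sympy as sp

x, h = sp.symbols("x h", positive=True)

# Manufactured solution and the source term it implies for -u'' = f
u_exact = sp.sin(sp.pi * x)
f = sp.simplify(-sp.diff(u_exact, x, 2))
print("source term f(x):", f)

# Apply the second-order central-difference rule to the manufactured solution
stencil = (u_exact.subs(x, x + h) - 2 * u_exact + u_exact.subs(x, x - h)) / h**2
consistent_value = sp.limit(stencil, h, 0)                  # recovers u''(x) = -f
truncation = sp.simplify(sp.series(stencil, h, 0, 4).removeO() - consistent_value)
print("limit of the stencil:", consistent_value)
print("leading truncation error:", truncation)              # O(h**2) term
```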

  16. The audiovisual structure of onomatopoeias: An intrusion of real-world physics in lexical creation.

    PubMed

    Taitz, Alan; Assaneo, M Florencia; Elisei, Natalia; Trípodi, Mónica; Cohen, Laurent; Sitt, Jacobo D; Trevisan, Marcos A

    2018-01-01

    Sound-symbolic word classes are found in different cultures and languages worldwide. These words are continuously produced to code complex information about events. Here we explore the capacity of creative language to transport complex multisensory information in a controlled experiment, where our participants improvised onomatopoeias from noisy moving objects in audio, visual and audiovisual formats. We found that consonants communicate movement types (slide, hit or ring) mainly through the manner of articulation in the vocal tract. Vowels communicate shapes in visual stimuli (spiky or rounded) and sound frequencies in auditory stimuli through the configuration of the lips and tongue. A machine learning model was trained to classify movement types and used to validate generalizations of our results across formats. When we applied the classifier to a list of cross-linguistic onomatopoeias, simple actions were correctly classified, while different aspects were selected to build onomatopoeias of complex actions. These results show how the different aspects of complex sensory information are coded and how they interact in the creation of novel onomatopoeias.

  17. Challenges in Developing Models Describing Complex Soil Systems

    NASA Astrophysics Data System (ADS)

    Simunek, J.; Jacques, D.

    2014-12-01

    Quantitative mechanistic models that consider basic physical, mechanical, chemical, and biological processes have the potential to be powerful tools for integrating our understanding of complex soil systems, and the soil science community has often called for models that include a large number of these diverse processes. However, once attempts have been made to develop such models, the response from the community has not always been enthusiastic, especially after it was discovered that such models are consequently highly complex: they require a large number of parameters, not all of which can be easily (or at all) measured or identified and which are often associated with large uncertainties, and they demand from their users deep knowledge of most of the implemented physical, mechanical, chemical, and biological processes. The real, or perceived, complexity of these models then discourages users from applying them even to relatively simple problems for which they would be perfectly adequate. Due to the nonlinear nature and chemical/biological complexity of soil systems, it is also virtually impossible to verify these types of models analytically, raising doubts about their applicability. Code intercomparison, which is then likely the most suitable method for assessing code capabilities and model performance, requires the existence of multiple models with similar or overlapping capabilities, which may not always exist. It is thus a challenge not only to develop models describing complex soil systems, but also to persuade the soil science community to use them. As a result, complex quantitative mechanistic models remain an underutilized tool in soil science research. We will demonstrate some of the challenges discussed above using our own efforts in developing quantitative mechanistic models (such as HP1/2) for complex soil systems.

  18. Introduction to the internal fluid mechanics research session

    NASA Technical Reports Server (NTRS)

    Miller, Brent A.; Povinelli, Louis A.

    1990-01-01

    Internal fluid mechanics research at LeRC is directed toward an improved understanding of the important flow physics affecting aerospace propulsion systems, and applying this improved understanding to formulate accurate predictive codes. To this end, research is conducted involving detailed experimentation and analysis. The following three papers summarize ongoing work and indicate future emphasis in three major research thrusts: inlets, ducts, and nozzles; turbomachinery; and chemically reacting flows. The underlying goal of the research in each of these areas is to bring internal computational fluid mechanics to a state of practical application for aerospace propulsion systems. Achievement of this goal requires that carefully planned and executed experiments be conducted in order to develop and validate useful codes. It is critical that numerical code development work and experimental work be closely coupled. The insights gained are represented by mathematical models that form the basis for code development. The resultant codes are then tested by comparing them with appropriate experiments in order to ensure their validity and determine their applicable range. The ultimate user community must be a part of this process to assure relevancy of the work and to hasten its practical application. Propulsion systems are characterized by highly complex and dynamic internal flows. Many complex, 3-D flow phenomena may be present, including unsteadiness, shocks, and chemical reactions. By focusing on specific portions of a propulsion system, it is often possible to identify the dominant phenomena that must be understood and modeled for obtaining accurate predictive capability. The three major research thrusts serve as a focus leading to greater understanding of the relevant physics and to an improvement in analytic tools. This in turn will hasten continued advancements in propulsion system performance and capability.

  19. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    NASA Astrophysics Data System (ADS)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton Krylov method. A physics based preconditioning technique which can be adjusted to target varying physics is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium, in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to both reproduce behaviour from established and widely-used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.

  20. Development and validation of a low-frequency modeling code for high-moment transmitter rod antennas

    NASA Astrophysics Data System (ADS)

    Jordan, Jared Williams; Sternberg, Ben K.; Dvorak, Steven L.

    2009-12-01

    The goal of this research is to develop and validate a low-frequency modeling code for high-moment transmitter rod antennas to aid in the design of future low-frequency TX antennas with high magnetic moments. To accomplish this goal, a quasi-static modeling algorithm was developed to simulate finite-length, permeable-core, rod antennas. This quasi-static analysis is applicable for low frequencies where eddy currents are negligible, and it can handle solid or hollow cores with winding insulation thickness between the antenna's windings and its core. The theory was programmed in Matlab, and the modeling code has the ability to predict the TX antenna's gain, maximum magnetic moment, saturation current, series inductance, and core series loss resistance, provided the user enters the corresponding complex permeability for the desired core magnetic flux density. In order to utilize the linear modeling code to model the effects of nonlinear core materials, it is necessary to use the correct complex permeability for a specific core magnetic flux density. In order to test the modeling code, we demonstrated that it can accurately predict changes in the electrical parameters associated with variations in the rod length and the core thickness for antennas made out of low carbon steel wire. These tests demonstrate that the modeling code was successful in predicting the changes in the rod antenna characteristics under high-current nonlinear conditions due to changes in the physical dimensions of the rod provided that the flux density in the core was held constant in order to keep the complex permeability from changing.

  1. Assessment of Spacecraft Systems Integration Using the Electric Propulsion Interactions Code (EPIC)

    NASA Technical Reports Server (NTRS)

    Mikellides, Ioannis G.; Kuharski, Robert A.; Mandell, Myron J.; Gardner, Barbara M.; Kauffman, William J. (Technical Monitor)

    2002-01-01

    SAIC is currently developing the Electric Propulsion Interactions Code 'EPIC', an interactive computer tool that allows the construction of a 3-D spacecraft model, and the assessment of interactions between its subsystems and the plume from an electric thruster. EPIC unites different computer tools to address the complexity associated with the interaction processes. This paper describes the overall architecture and capability of EPIC including the physics and algorithms that comprise its various components. Results from selected modeling efforts of different spacecraft-thruster systems are also presented.

  2. Reactive transport simulation via combination of a multiphase-capable transport code for unstructured meshes with a Gibbs energy minimization solver of geochemical equilibria

    NASA Astrophysics Data System (ADS)

    Fowler, S. J.; Driesner, T.; Hingerl, F. F.; Kulik, D. A.; Wagner, T.

    2011-12-01

    We apply a new, C++-based computational model for hydrothermal fluid-rock interaction and scale formation in geothermal reservoirs. The model couples the Complex System Modelling Platform (CSMP++) code for fluid flow in porous and fractured media (Matthai et al., 2007) with the Gibbs energy minimization numerical kernel GEMS3K of the GEM-Selektor (GEMS3) geochemical modelling package (Kulik et al., 2010) in a modular fashion. CSMP++ includes interfaces to commercial file formats, accommodating complex geometry construction using CAD (Rhinoceros) and meshing (ANSYS) software. The CSMP++ approach employs finite element-finite volume spatial discretization, implicit or explicit time discretization, and operator splitting. GEMS3K can calculate complex fluid-mineral equilibria based on a variety of equation of state and activity models. A selection of multi-electrolyte aqueous solution models, such as extended Debye-Huckel, Pitzer (Harvie et al., 1984), EUNIQUAC (Thomsen et al., 1996), and the new ELVIS model (Hingerl et al., this conference), makes it well-suited for application to a wide range of geothermal conditions. An advantage of the GEMS3K solver is simultaneous consideration of complex solid solutions (e.g., clay minerals), gases, fluids, and aqueous solutions. Each coupled simulation results in a thermodynamically-based description of the geochemical and physical state of a hydrothermal system evolving along a complex P-T-X path. The code design allows efficient, flexible incorporation of numerical and thermodynamic database improvements. We demonstrate the coupled code workflow and applicability to compositionally and physically complex natural systems relevant to enhanced geothermal systems, where temporally and spatially varying chemical interactions may take place within diverse lithologies of varying geometry. Engesgaard, P. & Kipp, K. L. (1992). Water Res. Res. 28: 2829-2843. Harvie, C. E.; Møller, N. & Weare, J. H. (1984). Geochim. Cosmochim. Acta 48: 723-751. Kulik, D. A., Wagner, T., Dmytrieva S. V, et al. (2010). GEM-Selektor home page, Paul Scherrer Institut. Available at http://gems.web.psi.ch. Matthäi, S. K., Geiger, S., Roberts, S. G., Paluszny, A., Belayneh, M., Burri, A., Mezentsev, A., Lu, H., Coumou, D., Driesner, T. & Heinrich C. A. (2007). Geol. Soc. London, Spec. Publ. 292: 405-429. Thomsen, K. Rasmussen, P. & Gani, R. (1996). Chem. Eng. Sci. 51: 3675-3683.
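    The operator-splitting structure of such a coupling can be illustrated with a toy problem: each time step first transports the aqueous concentration, then hands every cell to a chemistry step. In the sketch below the chemistry is a hypothetical instantaneous-precipitation rule standing in for a Gibbs energy minimizer such as GEMS3K, and all parameters are arbitrary.

```python
# Operator-splitting toy: each step advects the aqueous concentration, then an
# "equilibrate" step precipitates anything above a solubility limit. The
# chemistry rule and parameters are hypothetical stand-ins for a Gibbs energy
# minimization step such as GEMS3K.
import numpy as np

def advect(c, courant):
    """First-order upwind advection with periodic boundaries."""
    return c - courant * (c - np.roll(c, 1))

def equilibrate(c, solubility=1.0):
    """Chemistry step: concentration above the solubility limit precipitates."""
    precipitated = np.maximum(c - solubility, 0.0)
    return c - precipitated, precipitated

n = 100
aqueous = np.zeros(n)
aqueous[:10] = 2.0                     # inlet slug above the solubility limit
solid = np.zeros(n)

for _ in range(200):                   # split step: transport, then chemistry
    aqueous = advect(aqueous, courant=0.5)
    aqueous, newly_solid = equilibrate(aqueous)
    solid += newly_solid

print("total precipitated mass:", solid.sum())
```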

  3. A review of high-speed, convective, heat-transfer computation methods

    NASA Technical Reports Server (NTRS)

    Tauber, Michael E.

    1989-01-01

    The objective of this report is to provide useful engineering formulations and to instill a modest degree of physical understanding of the phenomena governing convective aerodynamic heating at high flight speeds. Some physical insight is not only essential to the application of the information presented here, but also to the effective use of computer codes which may be available to the reader. A discussion is given of cold-wall, laminar boundary layer heating. A brief presentation of the complex boundary layer transition phenomenon follows. Next, cold-wall turbulent boundary layer heating is discussed. This topic is followed by a brief coverage of separated flow-region and shock-interaction heating. A review of heat protection methods follows, including the influence of mass addition on laminar and turbulent boundary layers. Also discussed are a discussion of finite-difference computer codes and a comparison of some results from these codes. An extensive list of references is also provided from sources such as the various AIAA journals and NASA reports which are available in the open literature.

  4. A review of high-speed, convective, heat-transfer computation methods

    NASA Technical Reports Server (NTRS)

    Tauber, Michael E.

    1989-01-01

    The objective is to provide useful engineering formulations and to instill a modest degree of physical understanding of the phenomena governing convective aerodynamic heating at high flight speeds. Some physical insight is not only essential to the application of the information presented here, but also to the effective use of computer codes which may be available to the reader. Given first is a discussion of cold-wall, laminar boundary layer heating. A brief presentation of the complex boundary layer transition phenomenon follows. Next, cold-wall turbulent boundary layer heating is discussed. This topic is followed by a brief coverage of separated flow-region and shock-interaction heating. A review of heat protection methods follows, including the influence of mass addition on laminar and turbulent boundary layers. Next is a discussion of finite-difference computer codes and a comparison of some results from these codes. An extensive list of references is also provided from sources such as the various AIAA journals and NASA reports which are available in the open literature.

  5. Time Dependent Data Mining in RAVEN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cogliati, Joshua Joseph; Chen, Jun; Patel, Japan Ketan

    RAVEN is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. The goal of this type of analysis is to understand the response of such systems, in particular their probabilistic behavior, and to understand their predictability and its drivers, or the lack thereof. Data mining capabilities are the cornerstone of such in-depth characterization of system responses. For this reason, static data mining capabilities were added in the previous fiscal year (FY15). In real applications, when dealing with complex multi-scale, multi-physics systems, it seems natural that during transients the relevance of the different scales and physics would evolve over time. For these reasons the data mining capabilities have been extended to allow their application over time. This report describes the newly implemented RAVEN capabilities, together with several simple analytical tests that explain their application and demonstrate correct implementation. The report concludes with the application of those newly implemented capabilities to the analysis of a simulation performed with the Bison code.
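    A minimal sketch of what time-dependent data mining can look like: an unsupervised method is applied independently at each time slice of an ensemble of transients, and the evolution of its output is tracked. The example below uses scikit-learn's KMeans on synthetic trajectories and is purely illustrative; it is not RAVEN's implementation.

```python
# Illustrative time-dependent data mining: cluster an ensemble of synthetic
# transients independently at each time slice and watch the cluster centers
# separate as two branches of behavior emerge. Uses scikit-learn's KMeans;
# the data and the choice of method are hypothetical, not RAVEN internals.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_runs, n_times = 200, 50
branch = rng.integers(0, 2, n_runs)                      # two hidden behaviors
drift = 0.02 * (2 * branch - 1)[:, None]                 # one drifts up, one down
trajectories = np.cumsum(rng.normal(drift, 0.1, (n_runs, n_times)), axis=1)

centers_over_time = []
for t in range(n_times):
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(trajectories[:, t:t + 1])
    centers_over_time.append(np.sort(km.cluster_centers_.ravel()))

print("centers at first time slice:", centers_over_time[0])
print("centers at last time slice: ", centers_over_time[-1])
```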

  6. Branson: A Mini-App for Studying Parallel IMC, Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Alex

    This code solves the gray thermal radiative transfer (TRT) equations in parallel using simple opacities and Cartesian meshes. Although Branson solves the TRT equations, it is not designed to model real radiation transport problems: Branson contains simple physics, has no multigroup treatment, and cannot use physical material data. The opacities are simple polynomials in temperature, and there is only a limited ability to specify complex geometries and sources. Branson was designed only to capture the computational demands of production IMC codes, especially in large parallel runs. It was also intended to foster collaboration with vendors, universities, and other DOE partners. Branson is similar in character to the neutron transport proxy-app Quicksilver from LLNL, which was recently open-sourced.

  7. Assessment of Hybrid RANS/LES Turbulence Models for Aeroacoustics Applications

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Lockard, David P.

    2010-01-01

    Predicting the noise from aircraft with exposed landing gear remains a challenging problem for the aeroacoustics community. Although computational fluid dynamics (CFD) has shown promise as a technique that could produce high-fidelity flow solutions, generating grids that can resolve the pertinent physics around complex configurations can be very challenging. Structured grids are often impractical for such configurations. Unstructured grids offer a path forward for simulating complex configurations. However, few unstructured grid codes have been thoroughly tested for unsteady flow problems in the manner needed for aeroacoustic prediction. A widely used unstructured grid code, FUN3D, is examined for resolving the near field in unsteady flow problems. Although the ultimate goal is to compute the flow around complex geometries such as the landing gear, simpler problems that include some of the relevant physics, and are easily amenable to the structured grid approaches are used for testing the unstructured grid approach. The test cases chosen for this study correspond to the experimental work on single and tandem cylinders conducted in the Basic Aerodynamic Research Tunnel (BART) and the Quiet Flow Facility (QFF) at NASA Langley Research Center. These configurations offer an excellent opportunity to assess the performance of hybrid RANS/LES turbulence models that transition from RANS in unresolved regions near solid bodies to LES in the outer flow field. Several of these models have been implemented and tested in both structured and unstructured grid codes to evaluate their dependence on the solver and mesh type. Comparison of FUN3D solutions with experimental data and numerical solutions from a structured grid flow solver are found to be encouraging.

  8. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  9. Verification of low-Mach number combustion codes using the method of manufactured solutions

    NASA Astrophysics Data System (ADS)

    Shunn, Lee; Ham, Frank; Knupp, Patrick; Moin, Parviz

    2007-11-01

    Many computational combustion models rely on tabulated constitutive relations to close the system of equations. As these reactive state-equations are typically multi-dimensional and highly non-linear, their implications on the convergence and accuracy of simulation codes are not well understood. In this presentation, the effects of tabulated state-relationships on the computational performance of low-Mach number combustion codes are explored using the method of manufactured solutions (MMS). Several MMS examples are developed and applied, progressing from simple one-dimensional configurations to problems involving higher dimensionality and solution-complexity. The manufactured solutions are implemented in two multi-physics hydrodynamics codes: CDP developed at Stanford University and FUEGO developed at Sandia National Laboratories. In addition to verifying the order-of-accuracy of the codes, the MMS problems help highlight certain robustness issues in existing variable-density flow-solvers. Strategies to overcome these issues are briefly discussed.
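    The mechanics of the method of manufactured solutions are easy to show symbolically: choose a smooth field, push it through the governing operator, and use the residual as a source term so that the chosen field becomes an exact solution against which a code's order of accuracy can be measured. The sketch below uses a hypothetical 1-D advection-diffusion operator, not the low-Mach combustion equations of CDP or FUEGO.

```python
# MMS sketch: manufacture a smooth field, apply the governing operator to it
# symbolically, and use the residual as a source term so the manufactured field
# is an exact solution. The 1-D advection-diffusion operator is a hypothetical
# stand-in for a low-Mach combustion system.
import sympy as sp

x, t, a, nu = sp.symbols("x t a nu", positive=True)
u_mms = sp.sin(sp.pi * x) * sp.exp(-t)                  # manufactured solution

# Residual of u_t + a*u_x - nu*u_xx applied to the manufactured field
source = sp.simplify(sp.diff(u_mms, t) + a * sp.diff(u_mms, x)
                     - nu * sp.diff(u_mms, x, 2))
print("MMS source term:", source)

# Lambdify so a flow solver could evaluate the forcing term on its grid
S = sp.lambdify((x, t, a, nu), source, "numpy")
print(S(0.25, 0.0, 1.0, 0.01))
```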

  10. Improved design of subcritical and supercritical cascades using complex characteristics and boundary layer correction

    NASA Technical Reports Server (NTRS)

    Sanz, J. M.

    1983-01-01

    The method of complex characteristics and hodograph transformation for the design of shockless airfoils was extended to design supercritical cascades with high solidities and large inlet angles. This capability was achieved by introducing a conformal mapping of the hodograph domain onto an ellipse and expanding the solution in terms of Tchebycheff polynomials. A computer code was developed based on this idea. A number of airfoils designed with the code are presented. Various supercritical and subcritical compressor, turbine, and propeller sections are shown. The lag-entrainment method for the calculation of a turbulent boundary layer was incorporated into the inviscid design code. The results of this calculation are shown for the airfoils described. The elliptic conformal transformation developed to map the hodograph domain onto an ellipse can be used to generate a conformal grid in the physical domain of a cascade of airfoils with open trailing edges with a single transformation. A grid generated with this transformation is shown for the Korn airfoil.

  11. High-frequency CAD-based scattering model: SERMAT

    NASA Astrophysics Data System (ADS)

    Goupil, D.; Boutillier, M.

    1991-09-01

    Specifications for an industrial radar cross section (RCS) calculation code are given: it must be able to exchange data with many computer aided design (CAD) systems, it must be fast, and it must have powerful graphic tools. Classical physical optics (PO) and equivalent currents (EC) techniques have long proven their efficiency on simple objects. Difficult geometric problems occur when objects with very complex shapes have to be computed. Only a specific geometric code can solve these problems. We have established that, once these problems have been solved: (1) PO and EC give good results on complex objects of large size compared to the wavelength; and (2) the implementation of these techniques in a software package (SERMAT) allows RCS calculations that are fast and sufficiently precise to meet industry requirements in the domain of stealth.

  12. Porting of EPICS to Real Time UNIX, and Usage Ported EPICS for FEL Automation

    NASA Astrophysics Data System (ADS)

    Salikova, Tatiana

    This article describes the concepts and mechanisms used in porting the EPICS (Experimental Physics and Industrial Control System) code to the UNIX operating system platform. Without disrupting the EPICS architecture, the new features of EPICS provide support for the real-time operating system LynxOS/x86 and for equipment produced by INP (Budker Institute of Nuclear Physics). Application of the ported EPICS reduces the cost of the software and hardware used for automation of the FEL (Free Electron Laser) complex.

  13. On Chaotic and Hyperchaotic Complex Nonlinear Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Mahmoud, Gamal M.

    Dynamical systems described by real and complex variables are currently one of the most popular areas of scientific research. These systems play an important role in several fields of physics, engineering, and computer science, for example, laser systems, control (or chaos suppression), secure communications, and information science. Basic dynamical properties, chaos (hyperchaos) synchronization, chaos control, and the generation of hyperchaotic behavior in these systems are briefly summarized. The main advantage of introducing complex variables is the reduction of phase space dimensions by a half. They are also used to describe and simulate the physics of detuned lasers and thermal convection of liquid flows, where the electric field and the atomic polarization amplitudes are both complex. Clearly, if the variables of the system are complex, the equations involve twice as many variables and control parameters, thus making it that much harder for a hostile agent to intercept and decipher the coded message. Chaotic and hyperchaotic complex systems are presented as examples. Finally, there are many open problems in the study of chaotic and hyperchaotic complex nonlinear dynamical systems which need further investigation. Some of these open problems are given.

  14. Developing and Implementing the Data Mining Algorithms in RAVEN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics code models the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e. recognizing patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.

  15. Test case for VVER-1000 complex modeling using MCU and ATHLET

    NASA Astrophysics Data System (ADS)

    Bahdanovich, R. B.; Bogdanova, E. V.; Gamtsemlidze, I. D.; Nikonov, S. P.; Tikhomirov, G. V.

    2017-01-01

    The correct modeling of the processes occurring in the core of a reactor is very important. In the design and operation of nuclear reactors it is necessary to cover the entire range of reactor physics. Very often the calculations are carried out within the framework of only one domain, for example structural analysis, neutronics (NT), or thermal hydraulics (TH). However, this is not always adequate, as the impact of related physical processes occurring simultaneously can be significant. It is therefore recommended to perform coupled calculations. This paper provides a test case for the coupled neutronics-thermal hydraulics calculation of a VVER-1000 using the precise neutron code MCU and the system engineering code ATHLET. The model is based on a fuel assembly (type 2M). A test case for the calculation of power distribution, fuel and coolant temperatures, coolant density, etc. has been developed. It is assumed that the test case will be used for simulation of the VVER-1000 reactor and in calculations using other programs, for example for cross-verification of codes. A detailed description of the codes (MCU, ATHLET), the geometry and material composition of the model, and an iterative calculation scheme is given in the paper. A script in the Perl language was written to couple the codes.
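    The iterative coupling scheme amounts to a fixed-point (Picard) loop between a neutronics solve and a thermal-hydraulics solve. The toy sketch below illustrates that loop with made-up feedback coefficients; it is not the MCU/ATHLET coupling itself.

```python
# Toy Picard (fixed-point) coupling between a neutronics solve and a thermal-
# hydraulics solve; the feedback coefficients are invented for illustration and
# do not represent MCU or ATHLET models.
def neutronics(fuel_temp, power_nominal=100.0, doppler=-0.02):
    """Power decreases as fuel temperature rises (Doppler-like feedback)."""
    return power_nominal * (1.0 + doppler * (fuel_temp - 900.0) / 900.0)

def thermal_hydraulics(power, coolant_temp=560.0, resistance=5.0):
    """Fuel temperature rises with power above the coolant temperature."""
    return coolant_temp + resistance * power

power, fuel_temp = 100.0, 900.0
for it in range(1, 51):
    new_power = neutronics(fuel_temp)
    new_fuel_temp = thermal_hydraulics(new_power)
    converged = abs(new_power - power) < 1e-8 and abs(new_fuel_temp - fuel_temp) < 1e-8
    power, fuel_temp = new_power, new_fuel_temp
    if converged:
        break

print(f"converged after {it} iterations: power={power:.3f}, T_fuel={fuel_temp:.1f}")
```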

  16. A test harness for accelerating physics parameterization advancements into operations

    NASA Astrophysics Data System (ADS)

    Firl, G. J.; Bernardet, L.; Harrold, M.; Henderson, J.; Wolff, J.; Zhang, M.

    2017-12-01

    The process of transitioning advances in the parameterization of sub-grid scale processes from initial idea to implementation is often much quicker than the transition from implementation to use in an operational setting. After all, considerable work must be undertaken by operational centers to fully test, evaluate, and implement new physics. The process is complicated by the scarcity of like-to-like comparisons, the availability of HPC resources, and the "tuning problem" whereby advances in physics schemes are difficult to properly evaluate without first undertaking the expensive and time-consuming process of tuning to other schemes within a suite. To address this process shortcoming, the Global Model TestBed (GMTB), supported by the NWS NGGPS project and undertaken by the Developmental Testbed Center, has developed a physics test harness. It implements the concept of hierarchical testing, where the same code can be tested in model configurations of varying complexity, from single column models (SCM) to fully coupled, cycled global simulations. Developers and users may choose at which level of complexity to engage. Several components of the physics test harness have been implemented, including a SCM and an end-to-end workflow that expands upon the one used at NOAA/EMC to run the GFS operationally, although the testbed components will necessarily morph to coincide with changes to the operational configuration (FV3-GFS). A standard, relatively user-friendly interface known as the Interoperable Physics Driver (IPD) is available for physics developers to connect their codes. This prerequisite exercise allows access to the testbed tools and removes a technical hurdle for potential inclusion into the Common Community Physics Package (CCPP). The testbed offers users the opportunity to conduct like-to-like comparisons between the operational physics suite and new developments, as well as among multiple developments. GMTB staff have demonstrated use of the testbed through a comparison between the 2017 operational GFS suite and one containing the Grell-Freitas convective parameterization. An overview of the physics test harness and its early use will be presented.

  17. Towards a supported common NEAMS software stack

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cormac Garvey

    2012-04-01

    The NEAMS IPSCs are developing multidimensional, multiphysics, multiscale simulation codes based on first principles that will be capable of predicting all aspects of current and future nuclear reactor systems. This new breed of simulation codes will include rigorous verification, validation, and uncertainty quantification checks to quantify the accuracy and quality of the simulation results. The resulting NEAMS IPSC simulation codes will be an invaluable tool in designing the next generation of nuclear reactors and will also contribute to a speedier process of acquiring licenses from the NRC for new reactor designs. Due to the high resolution of the models, the complexity of the physics, and the added computational resources needed to quantify the accuracy and quality of the results, the NEAMS IPSC codes will require large HPC resources to carry out production simulation runs.

  18. Knowledge-Base Semantic Gap Analysis for the Vulnerability Detection

    NASA Astrophysics Data System (ADS)

    Wu, Raymond; Seki, Keisuke; Sakamoto, Ryusuke; Hisada, Masayuki

    Web security has become a pressing concern in Internet computing. To cope with ever-rising security complexity, semantic analysis is proposed to fill the gap that current approaches fail to address. Conventional methods limit their focus to the physical source code rather than the abstraction of its semantics. As a result, they miss new types of vulnerability and cause tremendous business loss.

  19. A Steady State and Quasi-Steady Interface Between the Generalized Fluid System Simulation Program and the SINDA/G Thermal Analysis Program

    NASA Technical Reports Server (NTRS)

    Schallhorn, Paul; Majumdar, Alok; Tiller, Bruce

    2001-01-01

    A general-purpose, one-dimensional fluid flow code is currently being interfaced with the thermal analysis program SINDA/G. The flow code, GFSSP, is capable of analyzing steady-state and transient flow in a complex network. The flow code can model several physical phenomena, including compressibility effects, phase changes, body forces (such as gravity and centrifugal forces), and mixture thermodynamics for multiple species. The addition of GFSSP to SINDA/G provides a significant improvement in convective heat transfer modeling for SINDA/G. The interface development is conducted in multiple phases. This paper describes the first phase of the interface, which allows for steady and quasi-steady (unsteady solid, steady fluid) conjugate heat transfer modeling.
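
    As an illustration of the quasi-steady coupling strategy described above, here is a minimal Python sketch in which the solid is marched in time while the fluid network is re-solved as steady at every step. The functions solve_steady_fluid and advance_solid are hypothetical stand-ins for GFSSP and SINDA/G calls, and all parameter values are illustrative.

      # Sketch of quasi-steady conjugate heat transfer: unsteady solid, steady fluid.
      # solve_steady_fluid() and advance_solid() are hypothetical stand-ins for
      # GFSSP and SINDA/G calls; all numbers are illustrative.

      def solve_steady_fluid(wall_temps):
          """Steady fluid network solve: convective heat rate (W) at each wall node."""
          h, t_fluid, area = 250.0, 300.0, 0.01     # W/m2-K, K, m2 (illustrative)
          return [h * area * (tw - t_fluid) for tw in wall_temps]

      def advance_solid(wall_temps, q_conv, dt):
          """Explicit update of lumped solid wall temperatures under convective loads."""
          mc = 50.0                                 # lumped thermal capacitance, J/K
          return [tw - q * dt / mc for tw, q in zip(wall_temps, q_conv)]

      def conjugate_march(wall_temps, dt=1.0, n_steps=100):
          for _ in range(n_steps):
              q_conv = solve_steady_fluid(wall_temps)              # steady fluid solve
              wall_temps = advance_solid(wall_temps, q_conv, dt)   # unsteady solid step
          return wall_temps

      print(conjugate_march([450.0, 430.0, 410.0]))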

  20. Determination of the Complex Elastic Moduli of Materials Using A Free- Free Bar Technique

    DTIC Science & Technology

    1994-03-01

  1. A numerical code for a three-dimensional magnetospheric MHD equilibrium model

    NASA Technical Reports Server (NTRS)

    Voigt, G.-H.

    1992-01-01

    Work began on two-dimensional and three-dimensional MHD equilibrium models for Earth's magnetosphere. The original proposal was motivated by the realization that global, purely data-based models of Earth's magnetosphere are inadequate for studying the underlying plasma-physical principles according to which the magnetosphere evolves on the quasi-static convection time scale. Complex numerical grid generation schemes were established for a 3-D Poisson solver, and a robust Grad-Shafranov solver was coded for high-beta MHD equilibria. The effects of both the magnetopause geometry and the boundary conditions on the magnetotail current distribution were then calculated.
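
    The elliptic solves at the heart of such equilibrium models can be illustrated with a minimal 2-D Poisson relaxation. The sketch below is a plain Jacobi iteration on a uniform grid with an illustrative source term; it is not the magnetospheric Grad-Shafranov solver itself, which uses far more elaborate grids and boundary conditions.

      # Minimal Jacobi relaxation for a 2-D Poisson equation on a uniform grid,
      # illustrating the elliptic solves used by Grad-Shafranov-type equilibrium
      # codes.  Source term and boundary values are illustrative only.

      import numpy as np

      def jacobi_poisson(source, h=1.0, tol=1e-6, max_iter=10000):
          u = np.zeros_like(source)                 # zero Dirichlet boundary values
          for _ in range(max_iter):
              u_new = u.copy()
              u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                          u[1:-1, 2:] + u[1:-1, :-2] -
                                          h * h * source[1:-1, 1:-1])
              if np.max(np.abs(u_new - u)) < tol:
                  return u_new
              u = u_new
          return u

      rhs = np.zeros((65, 65))
      rhs[32, 32] = -1.0                            # point-like source mid-grid
      psi = jacobi_poisson(rhs)
      print(psi[32, 32])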

  2. Physical interactions between bacteriophage and Escherichia coli proteins required for initiation of lambda DNA replication.

    PubMed

    Liberek, K; Osipiuk, J; Zylicz, M; Ang, D; Skorko, J; Georgopoulos, C

    1990-02-25

    The process of initiation of lambda DNA replication requires the assembly of the proper nucleoprotein complex at the origin of replication, ori lambda. The complex is composed of both phage and host-coded proteins. The lambda O initiator protein binds specifically to ori lambda. The lambda P initiator protein binds to both lambda O and the host-coded dnaB helicase, giving rise to an ori lambda DNA.lambda O.lambda P.dnaB structure. The dnaK and dnaJ heat shock proteins have been shown capable of dissociating this complex. The thus freed dnaB helicase unwinds the duplex DNA template at the replication fork. In this report, through cross-linking, size chromatography, and protein affinity chromatography, we document some of the protein-protein interactions occurring at ori lambda. Our results show that the dnaK protein specifically interacts with both lambda O and lambda P, and that the dnaJ protein specifically interacts with the dnaB helicase.

  3. Barriers to success: physical separation optimizes event-file retrieval in shared workspaces.

    PubMed

    Klempova, Bibiana; Liepelt, Roman

    2017-07-08

    Sharing tasks with other persons can simplify our work and life, but seeing and hearing other people's actions may also be very distracting. The joint Simon effect (JSE) is a standard measure of referential response coding when two persons share a Simon task. Sequential modulations of the joint Simon effect (smJSE) are interpreted as a measure of event-file processing containing stimulus information, response information, and information about the currently relevant control state active in a given social situation. This study tested effects of physical (Experiment 1) and virtual (Experiment 2) separation of shared workspaces on referential coding and event-file processing using a joint Simon task. In Experiment 1, participants performed this task in individual (go-nogo), joint, and standard Simon task conditions with and without a transparent curtain (physical separation) placed along the imagined vertical midline of the monitor. In Experiment 2, participants performed the same tasks with and without receiving background music (virtual separation). For response times, physical separation enhanced event-file retrieval, indicated by an enlarged smJSE in the joint Simon task with the curtain than without it (Experiment 1), but did not change referential response coding. In line with this, we also found evidence for enhanced event-file processing through physical separation in the joint Simon task for error rates. Virtual separation neither impacted event-file processing nor referential coding, but generally slowed down response times in the joint Simon task. For errors, virtual separation hampered event-file processing in the joint Simon task. For the cognitively more demanding standard two-choice Simon task, we found music to have a degrading effect on event-file retrieval for response times. Our findings suggest that adding a physical separation optimizes event-file processing in shared workspaces, while music seems to lead to a more relaxed task processing mode under shared task conditions. In addition, music had an interfering impact on joint error processing and more generally when dealing with a more complex task in isolation.

  4. Experimental aerothermodynamic research of hypersonic aircraft

    NASA Technical Reports Server (NTRS)

    Cleary, Joseph W.

    1987-01-01

    The 2-D and 3-D advanced computer codes being developed for use in the design of such hypersonic aircraft as the National Aero-Space Plane require comparison of the computational results with a broad spectrum of experimental data to fully assess the validity of the codes. This is particularly true for complex flow fields with control surfaces present and for flows with separation, such as leeside flow. Therefore, the objective is to provide the hypersonic experimental data base required for validation of advanced computational fluid dynamics (CFD) computer codes and for development of a more thorough understanding of the flow physics necessary for these codes. This is being done by implementing a comprehensive test program for a generic all-body hypersonic aircraft model in the NASA/Ames 3.5-foot Hypersonic Wind Tunnel over a broad range of test conditions to obtain pertinent surface and flowfield data. Results from the flow visualization portion of the investigation are presented.

  5. Improved Algorithms Speed It Up for Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hazi, A

    2005-09-20

    Huge computers, huge codes, complex problems to solve. The longer it takes to run a code, the more it costs. One way to speed things up and save time and money is through hardware improvements--faster processors, different system designs, bigger computers. But another side of supercomputing can reap savings in time and speed: software improvements to make codes--particularly the mathematical algorithms that form them--run faster and more efficiently. Speed up math? Is that really possible? According to Livermore physicist Eugene Brooks, the answer is a resounding yes. "Sure, you get great speed-ups by improving hardware," says Brooks, the deputy leader for Computational Physics in N Division, which is part of Livermore's Physics and Advanced Technologies (PAT) Directorate. "But the real bonus comes on the software side, where improvements in software can lead to orders of magnitude improvement in run times." Brooks knows whereof he speaks. Working with Laboratory physicist Abraham Szoeke and others, he has been instrumental in devising ways to shrink the running time of what has, historically, been a tough computational nut to crack: radiation transport codes based on the statistical or Monte Carlo method of calculation. And Brooks is not the only one. Others around the Laboratory, including physicists Andrew Williamson, Randolph Hood, and Jeff Grossman, have come up with innovative ways to speed up Monte Carlo calculations using pure mathematics.

  6. MCNP capabilities for nuclear well logging calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. This paper discusses how the general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo neutron photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particle or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data.

  7. Bandwidth efficient coding for satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Costello, Daniel J., Jr.; Miller, Warner H.; Morakis, James C.; Poland, William B., Jr.

    1992-01-01

    An error control coding scheme was devised to achieve large coding gain and high reliability by using coded modulation with reduced decoding complexity. To achieve a 3 to 5 dB coding gain and moderate reliability, the decoding complexity is quite modest. In fact, to achieve a 3 dB coding gain, the decoding complexity is quite simple, no matter whether trellis coded modulation or block coded modulation is used. However, to achieve coding gains exceeding 5 dB, the decoding complexity increases drastically, and the implementation of the decoder becomes very expensive and impractical. The use of coded modulation in conjunction with concatenated (or cascaded) coding is proposed. A good, short, bandwidth-efficient modulation code is used as the inner code, and a relatively powerful Reed-Solomon code is used as the outer code. With properly chosen inner and outer codes, a concatenated coded modulation scheme not only can achieve large coding gains and high reliability with good bandwidth efficiency but also can be practically implemented. This combination of coded modulation and concatenated coding offers a way of achieving the best of three worlds: reliability and coding gain, bandwidth efficiency, and decoding complexity.
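
    The ordering of the two codes can be illustrated with a toy example: the outer code is applied first at the transmitter and checked last at the receiver, with the inner code wrapped around it. The sketch below uses a single parity bit as the outer code and a threefold repetition code as the inner code purely for brevity; the scheme in the abstract uses a Reed-Solomon outer code and a bandwidth-efficient coded-modulation inner code instead.

      # Toy illustration of concatenated coding: outer code applied first at the
      # transmitter, decoded last at the receiver.  Parity-bit outer code and 3x
      # repetition inner code stand in for Reed-Solomon and coded modulation.

      def outer_encode(bits):
          return bits + [sum(bits) % 2]                    # append one parity bit

      def inner_encode(bits):
          return [b for b in bits for _ in range(3)]       # repeat each bit 3 times

      def inner_decode(bits):
          return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

      def outer_check(bits):
          data, parity = bits[:-1], bits[-1]
          return data, (sum(data) % 2) == parity           # data plus parity check

      message = [1, 0, 1, 1]
      transmitted = inner_encode(outer_encode(message))
      transmitted[4] ^= 1                                  # flip one channel bit
      decoded, ok = outer_check(inner_decode(transmitted))
      print(decoded, ok)                                   # [1, 0, 1, 1] True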

  8. SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Baes, M.; Camps, P.

    2015-09-01

    The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks to more complex structures. For a number of decorators, e.g., those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
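
    The decorator idea can be sketched in a few lines of Python: each building block exposes a random-position generator drawn from its density, and a decorator wraps any component to alter it, so decorators can be chained into arbitrarily complex models. The class names below are illustrative only and are not the actual SKIRT (C++) classes.

      # Sketch of the decorator-based design: every geometry component can draw
      # random positions from its density, and decorators wrap a component to
      # modify it.  Class names are illustrative, not the actual SKIRT classes.

      import math
      import random

      class PlummerModel:
          """Basic building block: spherical Plummer density with scale radius a."""
          def __init__(self, a=1.0):
              self.a = a
          def random_position(self):
              # Invert the cumulative mass profile M(<r)/M = r^3 / (r^2 + a^2)^(3/2).
              u = random.uniform(1e-9, 1.0 - 1e-9)  # avoid the endpoints
              r = self.a / math.sqrt(u ** (-2.0 / 3.0) - 1.0)
              theta = math.acos(1.0 - 2.0 * random.random())
              phi = 2.0 * math.pi * random.random()
              return (r * math.sin(theta) * math.cos(phi),
                      r * math.sin(theta) * math.sin(phi),
                      r * math.cos(theta))

      class OffsetDecorator:
          """Decorator: shifts any wrapped geometry by a fixed displacement."""
          def __init__(self, component, dx, dy, dz):
              self.component, self.offset = component, (dx, dy, dz)
          def random_position(self):
              x, y, z = self.component.random_position()
              return (x + self.offset[0], y + self.offset[1], z + self.offset[2])

      # Decorators chain, so complex models are built from simple building blocks.
      model = OffsetDecorator(PlummerModel(a=0.5), 2.0, 0.0, 0.0)
      print(model.random_position())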

  9. Aerodynamic simulation on massively parallel systems

    NASA Technical Reports Server (NTRS)

    Haeuser, Jochem; Simon, Horst D.

    1992-01-01

    This paper briefly addresses the computational requirements for the analysis of complete configurations of aircraft and spacecraft currently under design to be used for advanced transportation in commercial applications as well as in space flight. The discussion clearly shows that massively parallel systems are the only alternative that is both cost effective and able to provide the necessary TeraFlops needed to satisfy the narrow design margins of modern vehicles. It is assumed that the solution of the governing physical equations, i.e., the Navier-Stokes equations, which may be complemented by chemistry and turbulence models, is done on multiblock grids. This technique is situated between the fully structured approach of classical boundary-fitted grids and the fully unstructured tetrahedra grids. A fully structured grid best represents the flow physics, while the unstructured grid gives the best geometrical flexibility. The multiblock grid employed is structured within a block, but completely unstructured on the block level. While a completely unstructured grid is not straightforward to parallelize, the above-mentioned multiblock grid is inherently parallel, in particular for multiple instruction multiple datastream (MIMD) machines. In this paper guidelines are provided for setting up or modifying an existing sequential code so that a direct parallelization on a massively parallel system is possible. Results are presented for three parallel systems, namely the Intel hypercube, the Ncube hypercube, and the FPS 500 system. Some preliminary results for an 8K CM2 machine will also be mentioned. The code run is the two-dimensional grid generation module of Grid, which is a general two-dimensional and three-dimensional grid generation code for complex geometries. A system of nonlinear Poisson equations is solved. This code is also a good test case for complex fluid dynamics codes, since the same data structures are used. All systems provided good speedups, but message-passing MIMD systems seem to be best suited for large multiblock applications.
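
    The "one block per processor" parallelism described above can be sketched in a few lines of mpi4py: each rank owns one structured block and exchanges halo rows with its neighbouring blocks every iteration. The per-block relaxation sweep is only a stub, and the periodic neighbour topology is adopted purely to keep the example short; this is not the original Grid code.

      # Sketch of multiblock parallelism: one structured block per MPI rank, with
      # halo (ghost) rows exchanged between neighbouring blocks each iteration.
      # The per-block sweep is a stub and the topology is periodic for brevity.

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      block = np.full((8, 8), float(rank))      # this rank's structured block
      lower = (rank - 1) % size                 # neighbouring block owners
      upper = (rank + 1) % size

      for _ in range(10):
          # Halo exchange: swap the first/last rows with the neighbouring blocks.
          top_halo = comm.sendrecv(block[0, :].copy(), dest=lower, source=lower)
          bot_halo = comm.sendrecv(block[-1, :].copy(), dest=upper, source=upper)

          # Stub for the per-block solver: one Jacobi-like sweep of the interior.
          # A real sweep would also use top_halo/bot_halo to update the edge rows.
          block[1:-1, 1:-1] = 0.25 * (block[:-2, 1:-1] + block[2:, 1:-1] +
                                      block[1:-1, :-2] + block[1:-1, 2:])

      if rank == 0:
          print("finished halo-exchange loop on", size, "blocks")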

  10. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nataf, J.M.; Winkelmann, F.

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.
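
    The symbolic-interface idea, i.e. entering an equation in symbolic form and generating executable solution code from it, can be sketched with SymPy (SPARK itself uses its own computer-algebra tooling; the equation and variable names below are illustrative).

      # Sketch of symbolic solution-code generation: an equation entered in
      # symbolic form is solved for its unknown and turned into executable code.
      # SymPy stands in for the computer-algebra tools actually used by SPARK.

      import sympy as sp

      # Illustrative equation q = U * A * (t_hot - t_cold), unknown t_cold.
      q, U, A, t_hot, t_cold = sp.symbols('q U A t_hot t_cold')
      equation = sp.Eq(q, U * A * (t_hot - t_cold))

      solution = sp.solve(equation, t_cold)[0]      # t_hot - q/(A*U)
      print("generated expression:", solution)

      # Generate an executable solution function from the symbolic result.
      solve_t_cold = sp.lambdify((q, U, A, t_hot), solution)
      print(solve_t_cold(500.0, 25.0, 2.0, 330.0))  # -> 320.0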

  11. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nataf, J.M.; Winkelmann, F.

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.

  12. Temporal parallelization of edge plasma simulations using the parareal algorithm and the SOLPS code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samaddar, Debasmita; Coster, D. P.; Bonnin, X.

    We show that numerical modelling of edge plasma physics may be successfully parallelized in time. The parareal algorithm has been employed for this purpose and the SOLPS code package coupling the B2.5 finite-volume fluid plasma solver with the kinetic Monte-Carlo neutral code Eirene has been used as a test bed. The complex dynamics of the plasma and neutrals in the scrape-off layer (SOL) region makes this a unique application. It is demonstrated that a significant computational gain (more than an order of magnitude) may be obtained with this technique. The use of the IPS framework for event-based parareal implementation optimizes resource utilization and has been shown to significantly contribute to the computational gain.
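
    The parareal iteration itself is compact enough to sketch for a scalar model problem: a cheap coarse propagator G and an expensive fine propagator F are combined as y[n+1] = G(y_new[n]) + F(y_old[n]) - G(y_old[n]), and all of the fine solves are independent, hence parallelizable across time slices. The sketch below uses forward Euler on a simple decay equation and is purely illustrative of the algorithm, not of the SOLPS/IPS implementation.

      # Minimal parareal iteration for dy/dt = -y, combining a coarse propagator G
      # (1 substep per slice) with a fine propagator F (many substeps per slice).
      # Illustrative only; the paper applies the algorithm to the SOLPS package.

      import numpy as np

      def f(y):
          return -y                                   # simple decay problem

      def propagate(y, dt, n_sub):
          """Forward-Euler propagator over one time slice with n_sub substeps."""
          h = dt / n_sub
          for _ in range(n_sub):
              y = y + h * f(y)
          return y

      def parareal(y0=1.0, t_end=5.0, n_slices=10, n_iter=5):
          dt = t_end / n_slices
          coarse = lambda y: propagate(y, dt, 1)      # cheap coarse propagator G
          fine = lambda y: propagate(y, dt, 1000)     # expensive fine propagator F
          y = np.empty(n_slices + 1)
          y[0] = y0
          for n in range(n_slices):                   # initial coarse sweep
              y[n + 1] = coarse(y[n])
          for _ in range(n_iter):
              fine_vals = [fine(y[n]) for n in range(n_slices)]     # parallel in time
              coarse_old = [coarse(y[n]) for n in range(n_slices)]
              y_new = y.copy()
              for n in range(n_slices):               # sequential correction sweep
                  y_new[n + 1] = coarse(y_new[n]) + fine_vals[n] - coarse_old[n]
              y = y_new
          return y

      print(parareal()[-1], np.exp(-5.0))             # parareal result vs exact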

  13. Temporal parallelization of edge plasma simulations using the parareal algorithm and the SOLPS code

    DOE PAGES

    Samaddar, Debasmita; Coster, D. P.; Bonnin, X.; ...

    2017-07-31

    We show that numerical modelling of edge plasma physics may be successfully parallelized in time. The parareal algorithm has been employed for this purpose and the SOLPS code package coupling the B2.5 finite-volume fluid plasma solver with the kinetic Monte-Carlo neutral code Eirene has been used as a test bed. The complex dynamics of the plasma and neutrals in the scrape-off layer (SOL) region makes this a unique application. It is demonstrated that a significant computational gain (more than an order of magnitude) may be obtained with this technique. The use of the IPS framework for event-based parareal implementation optimizes resource utilization and has been shown to significantly contribute to the computational gain.

  14. Interfacing a General Purpose Fluid Network Flow Program with the SINDA/G Thermal Analysis Program

    NASA Technical Reports Server (NTRS)

    Schallhorn, Paul; Popok, Daniel

    1999-01-01

    A general purpose, one dimensional fluid flow code is currently being interfaced with the thermal analysis program Systems Improved Numerical Differencing Analyzer/Gaski (SINDA/G). The flow code, Generalized Fluid System Simulation Program (GFSSP), is capable of analyzing steady state and transient flow in a complex network. The flow code is capable of modeling several physical phenomena including compressibility effects, phase changes, body forces (such as gravity and centrifugal) and mixture thermodynamics for multiple species. The addition of GFSSP to SINDA/G provides a significant improvement in convective heat transfer modeling for SINDA/G. The interface development is conducted in multiple phases. This paper describes the first phase of the interface which allows for steady and quasi-steady (unsteady solid, steady fluid) conjugate heat transfer modeling.

  15. Tinamit: Making coupled system dynamics models accessible to stakeholders

    NASA Astrophysics Data System (ADS)

    Malard, Julien; Inam Baig, Azhar; Rojas Díaz, Marcela; Hassanzadeh, Elmira; Adamowski, Jan; Tuy, Héctor; Melgar-Quiñonez, Hugo

    2017-04-01

    Model coupling is increasingly used as a method of combining the best of two models when representing socio-environmental systems, though barriers to successful model adoption by stakeholders are particularly present with the use of coupled models, due to their high complexity and typically low implementation flexibility. Coupled system dynamics - physically-based modelling is a promising method to improve stakeholder participation in environmental modelling while retaining a high level of complexity for physical process representation, as the system dynamics components are readily understandable and can be built by stakeholders themselves. However, this method is not without limitations in practice, including 1) inflexible and complicated coupling methods, 2) difficult model maintenance after the end of the project, and 3) a wide variety of end-user cultures and languages. We have developed the open-source Python-language software tool Tinamit to overcome some of these limitations to the adoption of stakeholder-based coupled system dynamics - physically-based modelling. The software is unique in 1) its inclusion of both a graphical user interface (GUI) and a library of available commands (API) that allow users with little or no coding abilities to rapidly, effectively, and flexibly couple models, 2) its multilingual support for the GUI, allowing users to couple models in their preferred language (and to add new languages as necessary for their community work), and 3) its modular structure allowing for very easy model coupling and modification without the direct use of code, and to which programming-savvy users can easily add support for new types of physically-based models. We discuss how the use of Tinamit for model coupling can greatly increase the accessibility of coupled models to stakeholders, using an example of a stakeholder-built system dynamics model of soil salinity issues in Pakistan coupled with the physically-based soil salinity and water flow model SAHYSMOD. Different socioeconomic and environmental policies for soil salinity remediation are tested within the coupled model, allowing for the identification of the most efficient actions from an environmental and a farmer economy standpoint while taking into account the complex feedbacks between socioeconomics and the physical environment.
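
    The coupling pattern that Tinamit automates can be illustrated generically: at every coupling step, variables flow from the system dynamics model to the physical model and back. The two classes below are illustrative stand-ins with made-up behavioural rules; they are not the Tinamit API and not the SAHYSMOD model.

      # Generic sketch of coupled system dynamics / physical modelling: variables
      # are exchanged between the two models once per coupling step.  Both model
      # classes are illustrative stand-ins, not Tinamit or SAHYSMOD.

      class SystemDynamicsModel:
          """Stand-in for a stakeholder-built SD model of irrigation policy."""
          def __init__(self):
              self.irrigation = 1.0          # relative irrigation intensity
          def step(self, soil_salinity):
              # Toy behavioural rule: farmers reduce irrigation as salinity rises.
              self.irrigation = max(0.2, 1.0 - 0.1 * soil_salinity)
              return self.irrigation

      class PhysicalModel:
          """Stand-in for a physically based soil salinity / water flow model."""
          def __init__(self):
              self.salinity = 2.0            # dS/m
          def step(self, irrigation):
              # Toy physics: irrigation leaches salts, otherwise they build up.
              self.salinity = max(0.0, self.salinity + 0.5 - 0.6 * irrigation)
              return self.salinity

      sd, phys = SystemDynamicsModel(), PhysicalModel()
      for year in range(10):                 # one coupling exchange per model year
          salinity = phys.step(sd.irrigation)
          irrigation = sd.step(salinity)
          print(year, round(salinity, 2), round(irrigation, 2))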

  16. Benchmarking Geant4 for simulating galactic cosmic ray interactions within planetary bodies

    DOE PAGES

    Mesick, K. E.; Feldman, W. C.; Coupland, D. D. S.; ...

    2018-06-20

    Galactic cosmic rays undergo complex nuclear interactions with nuclei within planetary bodies that have little to no atmosphere. Radiation transport simulations are a key tool used in understanding the neutron and gamma-ray albedo coming from these interactions and tracing these signals back to the geochemical composition of the target. In this paper, we study the validity of the code Geant4 for simulating such interactions by comparing simulation results to data from the Apollo 17 Lunar Neutron Probe Experiment. Different assumptions regarding the physics are explored to demonstrate how these impact the Geant4 simulation results. In general, all of the Geant4 results over-predict the data; however, certain physics lists perform better than others. Finally, we show that results from the radiation transport code MCNP6 are similar to those obtained using Geant4.

  17. Benchmarking Geant4 for simulating galactic cosmic ray interactions within planetary bodies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mesick, K. E.; Feldman, W. C.; Coupland, D. D. S.

    Galactic cosmic rays undergo complex nuclear interactions with nuclei within planetary bodies that have little to no atmosphere. Radiation transport simulations are a key tool used in understanding the neutron and gamma-ray albedo coming from these interactions and tracing these signals back to the geochemical composition of the target. In this paper, we study the validity of the code Geant4 for simulating such interactions by comparing simulation results to data from the Apollo 17 Lunar Neutron Probe Experiment. Different assumptions regarding the physics are explored to demonstrate how these impact the Geant4 simulation results. In general, all of the Geant4 results over-predict the data; however, certain physics lists perform better than others. Finally, we show that results from the radiation transport code MCNP6 are similar to those obtained using Geant4.

  18. Coupling of TRAC-PF1/MOD2, Version 5.4.25, with NESTLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knepper, P.L.; Hochreiter, L.E.; Ivanov, K.N.

    1999-09-01

    A three-dimensional (3-D) spatial kinetics capability within a thermal-hydraulics system code provides a more correct description of the core physics during reactor transients that involve significant variations in the neutron flux distribution. Coupled codes provide the ability to forecast safety margins in a best-estimate manner. The behavior of a reactor core and the feedback to the plant dynamics can be accurately simulated. For each time step, coupled codes are capable of resolving system interaction effects on neutronics feedback and are capable of describing local neutronics effects caused by the thermal hydraulics and neutronics coupling. With the improvements in computational technology, modeling complex reactor behaviors with coupled thermal hydraulics and spatial kinetics is feasible. Previously, reactor analysis codes were limited to either a detailed thermal-hydraulics model with simplified kinetics or multidimensional neutron kinetics with a simplified thermal-hydraulics model. The authors discuss the coupling of the Transient Reactor Analysis Code (TRAC)-PF1/MOD2, Version 5.4.25, with the NESTLE code.

  19. Three-dimensional turbopump flowfield analysis

    NASA Technical Reports Server (NTRS)

    Sharma, O. P.; Belford, K. A.; Ni, R. H.

    1992-01-01

    A program was conducted to develop a flow prediction method applicable to rocket turbopumps. The complex nature of a flowfield in turbopumps is described and examples of flowfields are discussed to illustrate that physics based models and analytical calculation procedures based on computational fluid dynamics (CFD) are needed to develop reliable design procedures for turbopumps. A CFD code developed at NASA ARC was used as the base code. The turbulence model and boundary conditions in the base code were modified, respectively, to: (1) compute transitional flows and account for extra rates of strain, e.g., rotation; and (2) compute surface heat transfer coefficients and allow computation through multistage turbomachines. Benchmark quality data from two and three-dimensional cascades were used to verify the code. The predictive capabilities of the present CFD code were demonstrated by computing the flow through a radial impeller and a multistage axial flow turbine. Results of the program indicate that the present code operated in a two-dimensional mode is a cost effective alternative to full three-dimensional calculations, and that it permits realistic predictions of unsteady loadings and losses for multistage machines.

  20. Perspectives in numerical astrophysics:

    NASA Astrophysics Data System (ADS)

    Reverdy, V.

    2016-12-01

    In this discussion paper, we investigate the current and future status of numerical astrophysics and highlight key questions concerning the transition to the exascale era. We first discuss the fact that one of the main motivations behind high-performance simulations should not be the reproduction of observational or experimental data, but the understanding of the emergence of complexity from fundamental laws. This motivation is put into perspective regarding the quest for more computational power, and we argue that extra computational resources can be used to gain in abstraction. Then, the readiness level of present-day simulation codes in regard to upcoming exascale architectures is examined, and two major challenges are raised concerning both the central role of data movement for performance and the growing complexity of codes. Software architecture is finally presented as a key component to make the most of upcoming architectures while solving original physics problems.

  1. Metrics for comparing dynamic earthquake rupture simulations

    USGS Publications Warehouse

    Barall, Michael; Harris, Ruth A.

    2014-01-01

    Earthquakes are complex events that involve a myriad of interactions among multiple geologic features and processes. One of the tools that is available to assist with their study is computer simulation, particularly dynamic rupture simulation. A dynamic rupture simulation is a numerical model of the physical processes that occur during an earthquake. Starting with the fault geometry, friction constitutive law, initial stress conditions, and assumptions about the condition and response of the near-fault rocks, a dynamic earthquake rupture simulation calculates the evolution of fault slip and stress over time as part of the elastodynamic numerical solution (see the simulation description in the electronic supplement to this article). The complexity of the computations in a dynamic rupture simulation makes it challenging to verify that the computer code is operating as intended, because there are no exact analytic solutions against which these codes' results can be directly compared. One approach for checking if dynamic rupture computer codes are working satisfactorily is to compare each code's results with the results of other dynamic rupture codes running the same earthquake simulation benchmark. To perform such a comparison consistently, it is necessary to have quantitative metrics. In this paper, we present a new method for quantitatively comparing the results of dynamic earthquake rupture computer simulation codes.
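
    As a generic illustration of what a quantitative comparison metric can look like, the sketch below computes a normalized RMS misfit between two codes' fault-slip time series at the same on-fault location. This is only an example metric; the paper defines its own set of metrics.

      # Illustrative code-to-code comparison metric: normalized RMS misfit between
      # two equally sampled fault-slip time series.  The series here are synthetic
      # and the metric is generic, not the specific metrics defined in the paper.

      import numpy as np

      def rms_misfit(series_a, series_b):
          """Normalized RMS difference between two equally sampled time series."""
          a, b = np.asarray(series_a), np.asarray(series_b)
          return np.sqrt(np.mean((a - b) ** 2)) / np.sqrt(np.mean(a ** 2))

      t = np.linspace(0.0, 10.0, 501)
      slip_code1 = 1.0 - np.exp(-t)                  # slip history from code 1
      slip_code2 = 1.0 - np.exp(-1.02 * t)           # slightly different result
      print(f"normalized RMS misfit: {rms_misfit(slip_code1, slip_code2):.4f}")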

  2. Temperature and heat flux datasets of a complex object in a fire plume for the validation of fire and thermal response codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jernigan, Dann A.; Blanchat, Thomas K.

    It is necessary to improve understanding and develop temporally- and spatially-resolved integral scale validation data of the heat flux incident to a complex object in addition to measuring the thermal response of said object located within the fire plume for the validation of the SIERRA/FUEGO/SYRINX fire and SIERRA/CALORE codes. To meet this objective, a complex calorimeter with sufficient instrumentation to allow validation of the coupling between FUEGO/SYRINX/CALORE has been designed, fabricated, and tested in the Fire Laboratory for Accreditation of Models and Experiments (FLAME) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparison between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs into the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. This report presents the data validation steps and processes, the results of the penlight radiant heat experiments (for the purpose of validating the CALORE heat transfer modeling of the complex calorimeter), and the results of the fire tests in FLAME.

  3. One ring to rule them all: storm time ring current and its influence on radiation belts, plasmasphere and global magnetosphere electrodynamics

    NASA Astrophysics Data System (ADS)

    Buzulukova, Natalia; Fok, Mei-Ching; Glocer, Alex; Moore, Thomas E.

    2013-04-01

    We report studies of the storm time ring current and its influence on the radiation belts, plasmasphere, and global magnetospheric dynamics. The near-Earth space environment is described by multiscale physics that reflects a variety of processes and conditions that occur in magnetospheric plasma. For a successful description of such a plasma, a complex solution is needed which allows multiple physics domains to be described using multiple physical models. A key population of the inner magnetosphere is the ring current plasma. Ring current dynamics affects magnetic and electric fields in the entire magnetosphere, the distribution of cold ionospheric plasma (plasmasphere), and radiation belt particles. To study the electrodynamics of the inner magnetosphere, we present an MHD model (BATSRUS code) coupled with an ionospheric solver for the electric field and with a ring current-radiation belt model (CIMI code). The model will be used as a tool to reveal details of the coupling between different regions of the Earth's magnetosphere. A model validation will also be presented, based on comparison with data from the THEMIS, POLAR, GOES, and TWINS missions.

  4. Computer aided design of extrusion forming tools for complex geometry profiles

    NASA Astrophysics Data System (ADS)

    Goncalves, Nelson Daniel Ferreira

    In profile extrusion, the experience of the die designer is crucial for obtaining good results. In industry, several experimental trials of a specific extrusion die are usually needed before a balanced flow distribution is obtained. This experiment-based trial-and-error procedure is time- and money-consuming, but it works, and most profile extrusion companies rely on it. However, competition is forcing the industry to look for more effective procedures, and the design of profile extrusion dies is no exception. For this purpose, computer-aided design seems to be a good route. Nowadays, the available computational rheology codes allow the simulation of complex fluid flows, which permits the die designer to evaluate and optimize the flow channel without the need to build a physical die and perform real extrusion trials. In this work, a finite-volume numerical code was developed for the simulation of non-Newtonian (inelastic) and non-isothermal flows using unstructured meshes. The developed code is able to model the forming and cooling stages of profile extrusion and can be used to aid the design of forming tools used in the production of complex profiles. For code verification, three benchmark problems were tested: flow between parallel plates, flow around a cylinder, and the lid-driven cavity flow. The code was employed to design two extrusion dies for complex cross-section profiles: a medical catheter die and a wood-plastic composite profile for decking applications. The latter was experimentally validated. Simple extrusion dies used to produce L- and T-shaped profiles were studied in detail, allowing a better understanding of the effect of the main geometry parameters on the flow distribution. To model the cooling stage, a new implicit formulation was devised, which yielded better convergence rates and thus reduced computation times. With the solution of large problems in mind, the code was parallelized using graphics processing units (GPUs). Speedups of ten times were obtained, drastically decreasing the time required to obtain results.
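
    The benefit of an implicit formulation for the cooling stage can be illustrated with a minimal 1-D backward-Euler conduction step: because the update is unconditionally stable, much larger time steps can be taken than with an explicit scheme. The sketch below is a generic illustration with made-up parameter values, not the code developed in the thesis.

      # Minimal 1-D backward-Euler (implicit) conduction step, illustrating the
      # kind of implicit formulation used for a cooling stage.  Values are
      # illustrative only.

      import numpy as np

      def implicit_cooling_step(T, alpha, dx, dt, T_wall):
          """Advance temperatures T one time step with backward Euler in time."""
          n = len(T)
          r = alpha * dt / dx ** 2
          A = np.zeros((n, n))
          b = T.copy()
          A[0, 0] = A[-1, -1] = 1.0                 # fixed-temperature boundaries
          b[0] = b[-1] = T_wall
          for i in range(1, n - 1):
              A[i, i - 1] = A[i, i + 1] = -r        # tridiagonal interior rows
              A[i, i] = 1.0 + 2.0 * r
          return np.linalg.solve(A, b)

      T = np.full(21, 200.0)                        # hot profile leaving the die, degC
      for _ in range(50):
          T = implicit_cooling_step(T, alpha=1e-7, dx=1e-3, dt=5.0, T_wall=20.0)
      print(T.round(1))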

  5. Simulation of Laser Cooling and Trapping in Engineering Applications

    NASA Technical Reports Server (NTRS)

    Ramirez-Serrano, Jaime; Kohel, James; Thompson, Robert; Yu, Nan; Lunblad, Nathan

    2005-01-01

    An advanced computer code is undergoing development for numerically simulating laser cooling and trapping of large numbers of atoms. The code is expected to be useful in practical engineering applications and to contribute to understanding of the roles that light, atomic collisions, background pressure, and numbers of particles play in experiments using laser-cooled and -trapped atoms. The code is based on semiclassical theories of the forces exerted on atoms by magnetic and optical fields. Whereas computer codes developed previously for the same purpose account for only a few physical mechanisms, this code incorporates many more physical mechanisms (including atomic collisions, sub-Doppler cooling mechanisms, Stark and Zeeman energy shifts, gravitation, and evanescent-wave phenomena) that affect laser-matter interactions and the cooling of atoms to submillikelvin temperatures. Moreover, whereas the prior codes can simulate the interactions of at most a few atoms with a resonant light field, the number of atoms that can be included in a simulation by the present code is limited only by computer memory. Hence, the present code represents more nearly completely the complex physics involved when using laser-cooled and -trapped atoms in engineering applications. Another advantage that the code incorporates is the possibility to analyze the interaction between cold atoms of different atomic number. Some properties that cold atoms of different atomic species have, like cross sections and the particular excited states they can occupy when interacting with each other and light fields, play important roles not yet completely understood in the new experiments that are under way in laboratories worldwide to form ultracold molecules. Other research efforts use cold atoms as holders of quantum information, and more recent developments in cavity quantum electrodynamics also use ultracold atoms to explore and expand new information-technology ideas. These experiments give a hint on the wide range of applications and technology developments that can be tackled using cold atoms and light fields. From more precise atomic clocks and gravity sensors to the development of quantum computers, there will be a need to completely understand the whole ensemble of physical mechanisms that play a role in the development of such technologies. The code also permits the study of the dynamic and steady-state operations of technologies that use cold atoms. The physical characteristics of lasers and fields can be time-controlled to give a realistic simulation of the processes involved such that the design process can determine the best control features to use. It is expected that with the features incorporated into the code it will become a tool for the useful application of ultracold atoms in engineering applications. Currently, the software is being used for the analysis and understanding of simple experiments using cold atoms, and for the design of a modular compact source of cold atoms to be used in future research and development projects. The results so far indicate that the code is a useful design instrument that shows good agreement with experimental measurements (see figure), and a Windows-based user-friendly interface is also under development.
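
    The semiclassical starting point of such simulations can be illustrated with the textbook optical-molasses (scattering) force on a two-level atom in a pair of counter-propagating beams; the code described above adds many further mechanisms on top of this. The numbers below are illustrative values for the rubidium D2 line.

      # Textbook semiclassical optical-molasses force on a two-level atom in two
      # counter-propagating red-detuned beams.  Illustrative Rb D2 parameters;
      # not the full physics model of the code described in the record.

      import numpy as np

      hbar = 1.054571817e-34          # J*s
      k = 2 * np.pi / 780e-9          # laser wavenumber, 1/m (780 nm)
      gamma = 2 * np.pi * 6.07e6      # natural linewidth, rad/s
      s = 1.0                         # saturation parameter per beam
      delta = -gamma / 2              # detuning, red of resonance

      def scattering_force(v):
          """Net radiation-pressure force (N) versus atomic velocity v (m/s)."""
          rate_plus = (gamma / 2) * s / (1 + s + (2 * (delta - k * v) / gamma) ** 2)
          rate_minus = (gamma / 2) * s / (1 + s + (2 * (delta + k * v) / gamma) ** 2)
          return hbar * k * (rate_plus - rate_minus)

      for v in (0.0, 0.5, 1.0, 2.0):
          print(f"v = {v:4.1f} m/s  F = {scattering_force(v): .3e} N")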

  6. PyNeb: a new tool for analyzing emission lines. I. Code description and validation of results

    NASA Astrophysics Data System (ADS)

    Luridiana, V.; Morisset, C.; Shaw, R. A.

    2015-01-01

    Analysis of emission lines in gaseous nebulae yields direct measures of physical conditions and chemical abundances and is the cornerstone of nebular astrophysics. Although the physical problem is conceptually simple, its practical complexity can be overwhelming since the amount of data to be analyzed steadily increases; furthermore, results depend crucially on the input atomic data, whose determination also improves each year. To address these challenges we created PyNeb, an innovative code for analyzing emission lines. PyNeb computes physical conditions and ionic and elemental abundances and produces both theoretical and observational diagnostic plots. It is designed to be portable, modular, and largely customizable in aspects such as the atomic data used, the format of the observational data to be analyzed, and the graphical output. It gives full access to the intermediate quantities of the calculation, making it possible to write scripts tailored to the specific type of analysis one wants to carry out. In the case of collisionally excited lines, PyNeb works by solving the equilibrium equations for an n-level atom; in the case of recombination lines, it works by interpolation in emissivity tables. The code offers a choice of extinction laws and ionization correction factors, which can be complemented by user-provided recipes. It is entirely written in the python programming language and uses standard python libraries. It is fully vectorized, making it apt for analyzing huge amounts of data. The code is stable and has been benchmarked against IRAF/NEBULAR. It is public, fully documented, and has already been satisfactorily used in a number of published papers.
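
    The core of the collisionally-excited-line calculation, solving the statistical equilibrium of an n-level atom, can be sketched generically as a small linear system. The 3-level rate coefficients below are invented for illustration; this is not the PyNeb API and the numbers are not real atomic data.

      # Generic n-level statistical equilibrium: balance transitions into and out
      # of each level, close with sum(n_i) = 1, and solve the linear system.
      # Rate coefficients are made up for illustration (not real atomic data).

      import numpy as np

      def level_populations(rates):
          """Solve sum_j n_j R_ji = n_i sum_j R_ij with the closure sum_i n_i = 1."""
          n = rates.shape[0]
          A = np.zeros((n, n))
          for i in range(n):
              A[i, :] = rates[:, i]                   # transitions into level i
              A[i, i] -= rates[i, :].sum()            # transitions out of level i
          A[-1, :] = 1.0                              # replace one row by closure
          b = np.zeros(n)
          b[-1] = 1.0
          return np.linalg.solve(A, b)

      # rates[i, j]: total transition rate (1/s) from level i to level j.
      rates = np.array([[0.0, 1e-4, 1e-5],
                        [2e-2, 0.0, 5e-5],
                        [1e-1, 3e-2, 0.0]])
      print(level_populations(rates))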

  7. Using SPARK as a Solver for Modelica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael; Wetter, Michael; Haves, Philip

    Modelica is an object-oriented acausal modeling language that is well positioned to become a de-facto standard for expressing models of complex physical systems. To simulate a model expressed in Modelica, it needs to be translated into executable code. For generating run-time efficient code, such a translation needs to employ algebraic formula manipulations. As the SPARK solver has been shown to be competitive for generating such code but currently cannot be used with the Modelica language, we report in this paper how SPARK's symbolic and numerical algorithms can be implemented in OpenModelica, an open-source implementation of a Modelica modeling and simulation environment. We also report benchmark results that show that for our air flow network simulation benchmark, the SPARK solver is competitive with Dymola, which is believed to provide the best solver for Modelica.

  8. What Does It Take to Produce Interpretation? Informational, Peircean, and Code-Semiotic Views on Biosemiotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brier, Soren; Joslyn, Cliff A.

    2013-04-01

    This paper presents a critical analysis of code-semiotics, which we see as the latest attempt to create a paradigmatic foundation for solving the question of the emergence of life and consciousness. We view code semiotics as an attempt to revise the empirical scientific Darwinian paradigm, and to go beyond the complex systems, emergence, self-organization, and informational paradigms, and also the selfish gene theory of Dawkins and the Peircean pragmaticist semiotic theory built on the simultaneous types of evolution. As such it is a new and bold attempt to use semiotics to solve the problems created by the evolutionary paradigm's commitment to produce a theory of how to connect the two sides of the Cartesian dualistic view of physical reality and consciousness in a consistent way.

  9. Laminar Heating Validation of the OVERFLOW Code

    NASA Technical Reports Server (NTRS)

    Lillard, Randolph P.; Dries, Kevin M.

    2005-01-01

    OVERFLOW, a structured finite-difference code, was applied to the solution of hypersonic laminar flow over several configurations assuming perfect gas chemistry. By testing OVERFLOW's capabilities on several configurations encompassing a variety of flow physics, a validated laminar heating capability was produced. Configurations tested were a flat plate at 0 degrees incidence, a sphere, a compression ramp, and the X-38 re-entry vehicle. This variety of test cases shows the ability of the code to predict boundary layer flow, stagnation heating, laminar separation with re-attachment heating, and complex flow over a three-dimensional body. In addition, grid resolution studies were done to give recommendations for the correct number of off-body points to be applied to generic problems and for wall-spacing values to capture heat transfer and skin friction. Numerical results show good comparison to the test data for all the configurations.

  10. A suite of exercises for verifying dynamic earthquake rupture codes

    USGS Publications Warehouse

    Harris, Ruth A.; Barall, Michael; Aagaard, Brad T.; Ma, Shuo; Roten, Daniel; Olsen, Kim B.; Duan, Benchun; Liu, Dunyu; Luo, Bin; Bai, Kangchen; Ampuero, Jean-Paul; Kaneko, Yoshihiro; Gabriel, Alice-Agnes; Duru, Kenneth; Ulrich, Thomas; Wollherr, Stephanie; Shi, Zheqiang; Dunham, Eric; Bydlon, Sam; Zhang, Zhenguo; Chen, Xiaofei; Somala, Surendra N.; Pelties, Christian; Tago, Josue; Cruz-Atienza, Victor Manuel; Kozdon, Jeremy; Daub, Eric; Aslam, Khurram; Kase, Yuko; Withers, Kyle; Dalguer, Luis

    2018-01-01

    We describe a set of benchmark exercises that are designed to test if computer codes that simulate dynamic earthquake rupture are working as intended. These types of computer codes are often used to understand how earthquakes operate, and they produce simulation results that include earthquake size, amounts of fault slip, and the patterns of ground shaking and crustal deformation. The benchmark exercises examine a range of features that scientists incorporate in their dynamic earthquake rupture simulations. These include implementations of simple or complex fault geometry, off‐fault rock response to an earthquake, stress conditions, and a variety of formulations for fault friction. Many of the benchmarks were designed to investigate scientific problems at the forefronts of earthquake physics and strong ground motions research. The exercises are freely available on our website for use by the scientific community.

  11. Simulation of the microwave heating of a thin multilayered composite material: A parameter analysis

    NASA Astrophysics Data System (ADS)

    Tertrais, Hermine; Barasinski, Anaïs; Chinesta, Francisco

    2018-05-01

    Microwave (MW) technology relies on volumetric heating. Thermal energy is transferred to the material that can absorb it at specific frequencies. The complex physics involved in this process is far from fully understood, which is why a simulation tool has been developed to solve the electromagnetic and thermal equations in such a complex material as a multilayered composite part. The code is based on the in-plane-out-of-plane separated representation within the Proper Generalized Decomposition framework. To improve understanding of the process, a parameter study is carried out in this paper.

  12. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  13. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Pope, Adrian; Finkel, Hal

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  14. PARVMEC: An Efficient, Scalable Implementation of the Variational Moments Equilibrium Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seal, Sudip K; Hirshman, Steven Paul; Wingen, Andreas

    The ability to sustain magnetically confined plasma in a state of stable equilibrium is crucial for optimal and cost-effective operation of fusion devices like tokamaks and stellarators. The Variational Moments Equilibrium Code (VMEC) is the de-facto serial application used by fusion scientists to compute magnetohydrodynamics (MHD) equilibria and study the physics of three-dimensional plasmas in confined configurations. Modern fusion energy experiments have larger system scales with more interactive experimental workflows, both demanding faster analysis turnaround times on computational workloads that are stressing the capabilities of sequential VMEC. In this paper, we present PARVMEC, an efficient, parallel version of its sequential counterpart, capable of scaling to thousands of processors on distributed memory machines. PARVMEC is a non-linear code, with multiple numerical physics modules, each with its own computational complexity. A detailed speedup analysis supported by scaling results on 1,024 cores of a Cray XC30 supercomputer is presented. Depending on the mode of PARVMEC execution, speedup improvements of one to two orders of magnitude are reported. PARVMEC equips fusion scientists for the first time with a state-of-the-art capability for rapid, high-fidelity analyses of magnetically confined plasmas at unprecedented scales.

  15. Application of the High Gradient hydrodynamics code to simulations of a two-dimensional zero-pressure-gradient turbulent boundary layer over a flat plate

    NASA Astrophysics Data System (ADS)

    Kaiser, Bryan E.; Poroseva, Svetlana V.; Canfield, Jesse M.; Sauer, Jeremy A.; Linn, Rodman R.

    2013-11-01

    The High Gradient hydrodynamics (HIGRAD) code is an atmospheric computational fluid dynamics code created by Los Alamos National Laboratory to accurately represent flows characterized by sharp gradients in velocity, concentration, and temperature. HIGRAD uses a fully compressible finite-volume formulation for explicit Large Eddy Simulation (LES) and features an advection scheme that is second-order accurate in time and space. In the current study, boundary conditions implemented in HIGRAD are varied to find those that better reproduce the reduced physics of a flat plate boundary layer to compare with complex physics of the atmospheric boundary layer. Numerical predictions are compared with available DNS, experimental, and LES data obtained by other researchers. High-order turbulence statistics are collected. The Reynolds number based on the free-stream velocity and the momentum thickness is 120 at the inflow and the Mach number for the flow is 0.2. Results are compared at Reynolds numbers of 670 and 1410. A part of the material is based upon work supported by NASA under award NNX12AJ61A and by the Junior Faculty UNM-LANL Collaborative Research Grant.

  16. XPATCH: a high-frequency electromagnetic scattering prediction code using shooting and bouncing rays

    NASA Astrophysics Data System (ADS)

    Hazlett, Michael; Andersh, Dennis J.; Lee, Shung W.; Ling, Hao; Yu, C. L.

    1995-06-01

    This paper describes an electromagnetic computer prediction code for generating radar cross section (RCS), time domain signatures, and synthetic aperture radar (SAR) images of realistic 3-D vehicles. The vehicle, typically an airplane or a ground vehicle, is represented by a computer-aided design (CAD) file with triangular facets, curved surfaces, or solid geometries. The computer code, XPATCH, based on the shooting and bouncing ray technique, is used to calculate the polarimetric radar return from the vehicles represented by these different CAD files. XPATCH computes the first-bounce physical optics plus the physical theory of diffraction contributions and the multi-bounce ray contributions for complex vehicles with materials. It has been found that the multi-bounce contributions are crucial for many aspect angles of all classes of vehicles. Without the multi-bounce calculations, the radar return is typically 10 to 15 dB too low. Examples of predicted range profiles, SAR imagery, and radar cross sections (RCS) for several different geometries are compared with measured data to demonstrate the quality of the predictions. The comparisons are from the UHF through the Ka frequency ranges. Recent enhancements to XPATCH for MMW applications and target Doppler predictions are also presented.

  17. Contributions of the ARM Program to Radiative Transfer Modeling for Climate and Weather Applications

    NASA Technical Reports Server (NTRS)

    Mlawer, Eli J.; Iacono, Michael J.; Pincus, Robert; Barker, Howard W.; Oreopoulos, Lazaros; Mitchell, David L.

    2016-01-01

    Accurate climate and weather simulations must account for all relevant physical processes and their complex interactions. Each of these atmospheric, ocean, and land processes must be considered on an appropriate spatial and temporal scale, which leads these simulations to require a substantial computational burden. One especially critical physical process is the flow of solar and thermal radiant energy through the atmosphere, which controls planetary heating and cooling and drives the large-scale dynamics that moves energy from the tropics toward the poles. Radiation calculations are therefore essential for climate and weather simulations, but are themselves quite complex even without considering the effects of variable and inhomogeneous clouds. Clear-sky radiative transfer calculations have to account for thousands of absorption lines due to water vapor, carbon dioxide, and other gases, which are irregularly distributed across the spectrum and have shapes dependent on pressure and temperature. The line-by-line (LBL) codes that treat these details have a far greater computational cost than can be afforded by global models. Therefore, the crucial requirement for accurate radiation calculations in climate and weather prediction models must be satisfied by fast solar and thermal radiation parameterizations with a high level of accuracy that has been demonstrated through extensive comparisons with LBL codes. See attachment for continuation.

  18. Initial Ada components evaluation

    NASA Technical Reports Server (NTRS)

    Moebes, Travis

    1989-01-01

    SAIC is responsible for independent test and validation of the SSE. They have been using a mathematical functions library package implemented in Ada to test the SSE IV and V process. The library package consists of elementary mathematical functions and is both machine and accuracy independent. The SSE Ada components evaluation includes code complexity metrics based on Halstead's software science metrics and McCabe's measure of cyclomatic complexity. Halstead's metrics are based on the number of operators and operands in a logical unit of code and are compiled from the number of distinct operators, distinct operands, and total number of occurrences of operators and operands. These metrics give an indication of the physical size of a program in terms of operators and operands and are used diagnostically to point to potential problems. McCabe's Cyclomatic Complexity Metrics (CCM) are compiled from flow charts transformed to equivalent directed graphs. The CCM is a measure of the total number of linearly independent paths through the code's control structure. These metrics were computed for the Ada mathematical functions library using Software Automated Verification and Validation (SAVVAS), the SSE IV and V tool. A table with selected results was shown, indicating that most of these routines are of good quality. Thresholds for the Halstead measures indicate poor quality if the length metric exceeds 260 or difficulty is greater than 190. The McCabe CCM indicated that the software products are of high quality.
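    For readers unfamiliar with these metrics, the sketch below (written for this summary, unrelated to SAVVAS or the Ada library) shows how Halstead's length, volume, and difficulty are assembled from operator/operand counts and how McCabe's cyclomatic complexity follows from a control-flow graph; the token set and the example flow-graph sizes are assumptions.

        import math, re

        OPERATORS = {"+", "-", "*", "/", ":=", "=", "<", ">", "and", "or", "not"}

        def halstead(tokens):
            ops = [t for t in tokens if t in OPERATORS]
            opnds = [t for t in tokens if t not in OPERATORS]
            n1, n2 = len(set(ops)), len(set(opnds))     # distinct operators/operands
            N1, N2 = len(ops), len(opnds)               # total occurrences
            length = N1 + N2
            volume = length * math.log2(n1 + n2) if (n1 + n2) > 0 else 0.0
            difficulty = (n1 / 2.0) * (N2 / n2) if n2 else 0.0
            return {"length": length, "volume": volume, "difficulty": difficulty}

        def cyclomatic_complexity(edges, nodes, components=1):
            # McCabe: V(G) = E - N + 2P for a control-flow graph.
            return edges - nodes + 2 * components

        # Naive tokenizer over a made-up statement, for illustration only.
        tokens = re.findall(r"[A-Za-z_]\w*|:=|[-+*/=<>]", "y := a + b * c - a / d")
        print(halstead(tokens))
        print(cyclomatic_complexity(edges=9, nodes=8))   # hypothetical flow graph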

  19. Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerators boards

    NASA Astrophysics Data System (ADS)

    Fonseca, Ricardo

    2014-10-01

    The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest towards Exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and more recently the Xeon Phi accelerators that power the current number 1 system in the world. These cards, also referred to as Intel Many Integrated Core Architecture (MIC), offer peak theoretical performances of >1 TFlop/s for general purpose calculations in a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi architecture system. We will focus on the parallelization and vectorization strategies followed, and present a detailed evaluation of code performance in comparison with the CPU code.

  20. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity for improving its coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme that is chosen through offline statistics is developed at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BD-PSNR is observed for 18 sequences when the target complexity is around 40%.
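    A minimal, hypothetical sketch of the budget-driven idea described above: candidate prediction modes are enabled in a fixed, offline-derived priority order until a per-coding-unit complexity budget is exhausted. The mode names, costs, and ordering are invented for illustration and are not taken from the paper or from the HEVC reference software.

        # Hypothetical complexity-control loop: enable candidate prediction modes
        # in a fixed (offline-derived) order until the time budget is exhausted.
        MODE_TABLE = [            # (mode name, assumed relative cost, assumed RD gain)
            ("merge/skip", 0.10, 1.00),
            ("inter_2Nx2N", 0.25, 0.80),
            ("inter_NxN", 0.35, 0.30),
            ("intra_2Nx2N", 0.30, 0.25),
        ]

        def select_modes(budget):
            """Return the modes to test for one coding unit, given a complexity
            budget expressed as a fraction of full-search cost (illustrative)."""
            chosen, used = [], 0.0
            for name, cost, _gain in MODE_TABLE:     # table already sorted by priority
                if used + cost <= budget:
                    chosen.append(name)
                    used += cost
            return chosen or [MODE_TABLE[0][0]]      # always test at least one mode

        print(select_modes(0.40))   # e.g. at a 40% target complexity
        print(select_modes(0.10))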

  1. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be... basic color for designating caution and for marking physical hazards such as: Striking against...

  2. The Mystery Behind the Code: Differentiated Instruction with Quick Response Codes in Secondary Physical Education

    ERIC Educational Resources Information Center

    Adkins, Megan; Wajciechowski, Misti R.; Scantling, Ed

    2013-01-01

    Quick response codes, better known as QR codes, are small barcodes scanned to receive information about a specific topic. This article explains QR code technology and the utility of QR codes in the delivery of physical education instruction. Consideration is given to how QR codes can be used to accommodate learners of varying ability levels as…

  3. CORESAFE: A Formal Approach against Code Replacement Attacks on Cyber Physical Systems

    DTIC Science & Technology

    2018-04-19

    AFRL-AFOSR-JP-TR-2018-0035 CORESAFE:A Formal Approach against Code Replacement Attacks on Cyber Physical Systems Sandeep Shukla INDIAN INSTITUTE OF...Formal Approach against Code Replacement Attacks on Cyber Physical Systems 5a.  CONTRACT NUMBER 5b.  GRANT NUMBER FA2386-16-1-4099 5c.  PROGRAM ELEMENT...Institute of Technology Kanpur India Final Report for AOARD Grant “CORESAFE: A Formal Approach against Code Replacement Attacks on Cyber Physical

  4. On The Computational Capabilities of Physical Systems. Part 2; Relationship With Conventional Computer Science

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In the first of this pair of papers, it was proven that there cannot be a physical computer to which one can properly pose any and all computational tasks concerning the physical universe. It was then further proven that no physical computer C can correctly carry out all computational tasks that can be posed to C. As a particular example, this result means that no physical computer can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly "processing information faster than the universe does". These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - "physical computation" - is needed to address the issues considered in these papers, which concern real physical computers. While this novel definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. This second paper of the pair presents a preliminary exploration of some of this mathematical structure. Analogues of Chomskian results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, "prediction complexity", is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task, a bound similar to the "encoding" bound governing how much the algorithmic information complexity of a Turing machine calculation can differ for two reference universal Turing machines. Finally, it is proven that either the Hamiltonian of our universe proscribes a certain type of computation, or prediction complexity is unique (unlike algorithmic information complexity), in that there is one and only one version of it that can be applicable throughout our universe.

  5. From policy to practice: implementation of physical activity and food policies in schools

    PubMed Central

    2013-01-01

    Purpose Public policies targeting the school setting are increasingly being used to address childhood obesity; however, their effectiveness depends on their implementation. This study explores the factors which impeded or facilitated the implementation of publicly mandated school-based physical activity and nutrition guidelines in the province of British Columbia (BC), Canada. Methods Semi-structured interviews were conducted with 50 school informants (17 principals - 33 teacher/school informants) to examine the factors associated with the implementation of the mandated Daily Physical Activity (DPA) and Food and Beverage Sales in Schools (FBSS) guidelines. Coding used a constructivist grounded theory approach. The first five transcripts and every fifth transcript thereafter were coded by two independent coders with discrepancies reconciled by a third coder. Data was coded and analysed in the NVivo 9 software. Concept maps were developed and current theoretical perspectives were integrated in the later stages of analysis. Results The Diffusion of Innovations Model provided an organizing framework to present emergent themes. With the exception of triability (not relevant in the context of mandated guidelines/policies), the key attributes of the Diffusion of Innovations Model (relative advantage, compatibility, complexity, and observability) provided a robust framework for understanding themes associated with implementation of mandated guidelines. Specifically, implementation of the DPA and FBSS guidelines was facilitated by perceptions that they: were relatively advantageous compared to status quo; were compatible with school mandates and teaching philosophies; had observable positive impacts and impeded when perceived as complex to understand and implement. In addition, a number of contextual factors including availability of resources facilitated implementation. Conclusions The enactment of mandated policies/guidelines for schools is considered an essential step in improving physical activity and healthy eating. However, policy makers need to: monitor whether schools are able to implement the guidelines, support schools struggling with implementation, and document the impact of the guidelines on students’ behaviors. To facilitate the implementation of mandated guidelines/policies, the Diffusion of Innovations Model provides an organizational framework for planning interventions. Changing the school environment is a process which cannot be undertaken solely by passive means as we know that such approaches have not resulted in adequate implementation. PMID:23731803

  6. From policy to practice: implementation of physical activity and food policies in schools.

    PubMed

    Mâsse, Louise C; Naiman, Daniel; Naylor, Patti-Jean

    2013-06-03

    Public policies targeting the school setting are increasingly being used to address childhood obesity; however, their effectiveness depends on their implementation. This study explores the factors which impeded or facilitated the implementation of publicly mandated school-based physical activity and nutrition guidelines in the province of British Columbia (BC), Canada. Semi-structured interviews were conducted with 50 school informants (17 principals - 33 teacher/school informants) to examine the factors associated with the implementation of the mandated Daily Physical Activity (DPA) and Food and Beverage Sales in Schools (FBSS) guidelines. Coding used a constructivist grounded theory approach. The first five transcripts and every fifth transcript thereafter were coded by two independent coders with discrepancies reconciled by a third coder. Data was coded and analysed in the NVivo 9 software. Concept maps were developed and current theoretical perspectives were integrated in the later stages of analysis. The Diffusion of Innovations Model provided an organizing framework to present emergent themes. With the exception of triability (not relevant in the context of mandated guidelines/policies), the key attributes of the Diffusion of Innovations Model (relative advantage, compatibility, complexity, and observability) provided a robust framework for understanding themes associated with implementation of mandated guidelines. Specifically, implementation of the DPA and FBSS guidelines was facilitated by perceptions that they: were relatively advantageous compared to status quo; were compatible with school mandates and teaching philosophies; had observable positive impacts and impeded when perceived as complex to understand and implement. In addition, a number of contextual factors including availability of resources facilitated implementation. The enactment of mandated policies/guidelines for schools is considered an essential step in improving physical activity and healthy eating. However, policy makers need to: monitor whether schools are able to implement the guidelines, support schools struggling with implementation, and document the impact of the guidelines on students' behaviors. To facilitate the implementation of mandated guidelines/policies, the Diffusion of Innovations Model provides an organizational framework for planning interventions. Changing the school environment is a process which cannot be undertaken solely by passive means as we know that such approaches have not resulted in adequate implementation.

  7. Monte Carlo simulation of electron beams from an accelerator head using PENELOPE.

    PubMed

    Sempau, J; Sánchez-Reyes, A; Salvat, F; ben Tahar, H O; Jiang, S B; Fernández-Varea, J M

    2001-04-01

    The Monte Carlo code PENELOPE has been used to simulate electron beams from a Siemens Mevatron KDS linac with nominal energies of 6, 12 and 18 MeV. Owing to its accuracy, which stems from that of the underlying physical interaction models, PENELOPE is suitable for simulating problems of interest to the medical physics community. It includes a geometry package that allows the definition of complex quadric geometries, such as those of irradiation instruments, in a straightforward manner. Dose distributions in water simulated with PENELOPE agree well with experimental measurements using a silicon detector and a monitoring ionization chamber. Insertion of a lead slab in the incident beam at the surface of the water phantom produces sharp variations in the dose distributions, which are correctly reproduced by the simulation code. Results from PENELOPE are also compared with those of equivalent simulations with the EGS4-based user codes BEAM and DOSXYZ. Angular and energy distributions of electrons and photons in the phase-space plane (at the downstream end of the applicator) obtained from both simulation codes are similar, although significant differences do appear in some cases. These differences, however, are shown to have a negligible effect on the calculated dose distributions. Various practical aspects of the simulations, such as the calculation of statistical uncertainties and the effect of the 'latent' variance in the phase-space file, are discussed in detail.

  8. Direct G-code manipulation for 3D material weaving

    NASA Astrophysics Data System (ADS)

    Koda, S.; Tanaka, H.

    2017-04-01

    The process of conventional 3D printing begins by first building a 3D model, then converting the model to G-code via slicer software, feeding the G-code to the printer, and finally starting the print. The simplest and most popular 3D printing technique is Fused Deposition Modeling. However, in this method, the printing path that the printer head can take is restricted by the G-code. Therefore, printed 3D models with complex patterns have structural errors such as holes or gaps between the printed material lines. In addition, the structural density and the material's position within the printed model are difficult to control. We realized a G-code editing tool, Fabrix, for making more precise and functional printed models with both single and multiple materials. Models with different stiffness are fabricated by controlling the printing density of the filament materials with our method. In addition, multi-material 3D printing has the potential to expand the attainable physical properties through material combination and the corresponding G-code editing. These results show that the new printing method provides more creative and functional 3D printing techniques.
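    The following sketch illustrates direct G-code manipulation of the sort discussed above (it is not the authors' Fabrix tool): G1 moves are parsed and their extrusion (E) values rescaled to change deposition density. The scaling factor and the simple E-parameter rewrite are assumptions made for illustration.

        import re

        def scale_extrusion(gcode_lines, factor):
            """Rescale the E (extrusion) parameter of G1 moves by `factor`,
            leaving all other commands untouched. Illustrative only."""
            out = []
            for line in gcode_lines:
                if line.startswith("G1") and " E" in line:
                    line = re.sub(r"E([-+]?\d*\.?\d+)",
                                  lambda m: "E%.5f" % (float(m.group(1)) * factor),
                                  line)
                out.append(line)
            return out

        sample = [
            "G28 ; home all axes",
            "G1 X10.0 Y10.0 E0.40000 F1800",
            "G1 X20.0 Y10.0 E0.80000",
        ]
        for l in scale_extrusion(sample, 0.5):   # halve the deposition density
            print(l)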

  9. Salvus: A flexible open-source package for waveform modelling and inversion from laboratory to global scales

    NASA Astrophysics Data System (ADS)

    Afanasiev, M.; Boehm, C.; van Driel, M.; Krischer, L.; May, D.; Rietmann, M.; Fichtner, A.

    2016-12-01

    Recent years have been witness to the application of waveform inversion to new and exciting domains, ranging from non-destructive testing to global seismology. Often, each new application brings with it novel wave propagation physics, spatial and temporal discretizations, and models of variable complexity. Adapting existing software to these novel applications often requires a significant investment of time, and acts as a barrier to progress. To combat these problems we introduce Salvus, a software package designed to solve large-scale full-waveform inverse problems, with a focus on both flexibility and performance. Based on a high order finite (spectral) element discretization, we have built Salvus to work on unstructured quad/hex meshes in both 2 or 3 dimensions, with support for P1-P3 bases on triangles and tetrahedra. A diverse (and expanding) collection of wave propagation physics are supported (i.e. coupled solid-fluid). With a focus on the inverse problem, functionality is provided to ease integration with internal and external optimization libraries. Additionally, a python-based meshing package is included to simplify the generation and manipulation of regional to global scale Earth models (quad/hex), with interfaces available to external mesh generators for complex engineering-scale applications (quad/hex/tri/tet). Finally, to ensure that the code remains accurate and maintainable, we build upon software libraries such as PETSc and Eigen, and follow modern software design and testing protocols. Salvus bridges the gap between research and production codes with a design based on C++ mixins and Python wrappers that separates the physical equations from the numerical core. This allows domain scientists to add new equations using a high-level interface, without having to worry about optimized implementation details. Our goal in this presentation is to introduce the code, show several examples across the scales, and discuss some of the extensible design points.

  10. Salvus: A flexible high-performance and open-source package for waveform modelling and inversion from laboratory to global scales

    NASA Astrophysics Data System (ADS)

    Afanasiev, Michael; Boehm, Christian; van Driel, Martin; Krischer, Lion; May, Dave; Rietmann, Max; Fichtner, Andreas

    2017-04-01

    Recent years have been witness to the application of waveform inversion to new and exciting domains, ranging from non-destructive testing to global seismology. Often, each new application brings with it novel wave propagation physics, spatial and temporal discretizations, and models of variable complexity. Adapting existing software to these novel applications often requires a significant investment of time, and acts as a barrier to progress. To combat these problems we introduce Salvus, a software package designed to solve large-scale full-waveform inverse problems, with a focus on both flexibility and performance. Currently based on an abstract implementation of high order finite (spectral) elements, we have built Salvus to work on unstructured quad/hex meshes in both 2 or 3 dimensions, with support for P1-P3 bases on triangles and tetrahedra. A diverse (and expanding) collection of wave propagation physics are supported (i.e. viscoelastic, coupled solid-fluid). With a focus on the inverse problem, functionality is provided to ease integration with internal and external optimization libraries. Additionally, a python-based meshing package is included to simplify the generation and manipulation of regional to global scale Earth models (quad/hex), with interfaces available to external mesh generators for complex engineering-scale applications (quad/hex/tri/tet). Finally, to ensure that the code remains accurate and maintainable, we build upon software libraries such as PETSc and Eigen, and follow modern software design and testing protocols. Salvus bridges the gap between research and production codes with a design based on C++ template mixins and Python wrappers that separates the physical equations from the numerical core. This allows domain scientists to add new equations using a high-level interface, without having to worry about optimized implementation details. Our goal in this presentation is to introduce the code, show several examples across the scales, and discuss some of the extensible design points.
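    The mixin idea described in both Salvus abstracts can be sketched in a few lines of Python (Salvus itself uses C++ template mixins; the class and method names below are invented): a physics mixin supplies the right-hand side of a toy equation, the numerical core supplies the time loop, and a concrete solver is simply their composition.

        # Hypothetical mixin-style separation of physics from the numerical core.
        class TimeStepperCore:
            """Numerical core: knows how to advance a state, not what the physics is."""
            def run(self, state, dt, n_steps):
                for _ in range(n_steps):
                    state = [s + dt * r for s, r in zip(state, self.rhs(state))]
                return state

        class AcousticPhysics:
            """Physics mixin: supplies the right-hand side of a toy oscillator."""
            c = 1.0
            def rhs(self, state):
                u, v = state
                return [v, -self.c ** 2 * u]     # d/dt (u, v) = (v, -c^2 u)

        class AcousticSolver(AcousticPhysics, TimeStepperCore):
            """Concrete solver obtained purely by composing the two pieces."""
            pass

        print(AcousticSolver().run(state=[1.0, 0.0], dt=0.01, n_steps=5))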

  11. Design of convolutional tornado code

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environment and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and reduce computational complexity by abrogating the multi-level structure. The simulation results show that cTN code can provide a better packet loss protection performance with lower computation complexity than tTN code.

  12. Computer Code for Nanostructure Simulation

    NASA Technical Reports Server (NTRS)

    Filikhin, Igor; Vlahovic, Branislav

    2009-01-01

    Due to their small size, nanostructures can have stress and thermal gradients that are larger than any macroscopic analogue. These gradients can lead to specific regions that are susceptible to failure via processes such as plastic deformation by dislocation emission, chemical debonding, and interfacial alloying. A program has been developed that rigorously simulates and predicts optoelectronic properties of nanostructures of virtually any geometrical complexity and material composition. It can be used in simulations of energy level structure, wave functions, density of states of spatially configured phonon-coupled electrons, excitons in quantum dots, quantum rings, quantum ring complexes, and more. The code can be used to calculate stress distributions and thermal transport properties for a variety of nanostructures and interfaces, transport and scattering at nanoscale interfaces and surfaces under various stress states, and alloy compositional gradients. The code allows users to perform modeling of charge transport processes through quantum-dot (QD) arrays as functions of inter-dot distance, array order versus disorder, QD orientation, shape, size, and chemical composition for applications in photovoltaics and physical properties of QD-based biochemical sensors. The code can be used to study the hot exciton formation/relation dynamics in arrays of QDs of different shapes and sizes at different temperatures. It also can be used to understand the relation among the deposition parameters and inherent stresses, strain deformation, heat flow, and failure of nanostructures.

  13. Monte Carlo Simulations of the Formation Flying Dynamics for the Magnetospheric Multiscale (MMS) Mission

    NASA Technical Reports Server (NTRS)

    Schiff, Conrad; Dove, Edwin

    2011-01-01

    The MMS mission is an ambitious space physics mission that will fly 4 spacecraft in a tetrahedron formation in a series of highly elliptical orbits in order to study magnetic reconnection in the Earth's magnetosphere. The mission design comprises a combination of deterministic orbit-adjust and random maintenance maneuvers distributed over the 2.5-year mission life. Formal verification of the requirements is achieved by analysis through the use of the End-to-End (ETE) code, which is a modular simulation of the maneuver operations over the entire mission duration. Error models for navigation accuracy (knowledge) and maneuver execution (control) are incorporated to realistically simulate the possible maneuver scenarios that might be realized. These error models, coupled with the complex formation flying physics, lead to non-trivial effects that must be taken into account by the ETE automation. Using the ETE code, the MMS Flight Dynamics team was able to demonstrate that the current mission design satisfies the mission requirements.

  14. Efficient 3D kinetic Monte Carlo method for modeling of molecular structure and dynamics.

    PubMed

    Panshenskov, Mikhail; Solov'yov, Ilia A; Solov'yov, Andrey V

    2014-06-30

    Self-assembly of molecular systems is an important and general problem that intertwines physics, chemistry, biology, and materials science. Through understanding of the physical principles of self-organization, it often becomes feasible to control the process and to obtain complex structures with tailored properties, for example, colonies of bacterial cells or nanodevices with desired properties. Theoretical studies and simulations provide an important tool for unraveling the principles of self-organization and, therefore, have recently gained increasing interest. The present article features an extension of the popular code MBN EXPLORER (MesoBioNano Explorer) aiming to provide a universal approach to study self-assembly phenomena in biology and nanoscience. In particular, this extension involves a highly parallelized module of MBN EXPLORER that allows simulating stochastic processes using the kinetic Monte Carlo approach in three-dimensional space. We describe the computational side of the developed code, discuss its efficiency, and apply it for studying an exemplary system. Copyright © 2014 Wiley Periodicals, Inc.
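    A minimal kinetic Monte Carlo step of the general kind the extension implements can be sketched as follows; this is a generic rejection-free (Gillespie-type) selection over rate-weighted events, with a hypothetical event list, and is unrelated to MBN EXPLORER's internals.

        import math, random

        def kmc_step(events):
            """One rejection-free KMC step: pick an event with probability
            proportional to its rate and advance the clock by an exponential
            waiting time. `events` is a list of (name, rate) pairs."""
            total = sum(rate for _, rate in events)
            r = random.random() * total
            acc = 0.0
            chosen = events[-1][0]
            for name, rate in events:
                acc += rate
                if r < acc:
                    chosen = name
                    break
            dt = -math.log(1.0 - random.random()) / total
            return chosen, dt

        # Hypothetical event catalogue with made-up rates.
        events = [("diffuse_x", 5.0), ("diffuse_y", 5.0), ("attach", 1.0), ("detach", 0.2)]
        t = 0.0
        for _ in range(5):
            ev, dt = kmc_step(events)
            t += dt
            print(round(t, 4), ev)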

  15. ls1 mardyn: The Massively Parallel Molecular Dynamics Code for Large Systems.

    PubMed

    Niethammer, Christoph; Becker, Stefan; Bernreuther, Martin; Buchholz, Martin; Eckhardt, Wolfgang; Heinecke, Alexander; Werth, Stephan; Bungartz, Hans-Joachim; Glass, Colin W; Hasse, Hans; Vrabec, Jadran; Horsch, Martin

    2014-10-14

    The molecular dynamics simulation code ls1 mardyn is presented. It is a highly scalable code, optimized for massively parallel execution on supercomputing architectures and currently holds the world record for the largest molecular simulation with over four trillion particles. It enables the application of pair potentials to length and time scales that were previously out of scope for molecular dynamics simulation. With an efficient dynamic load balancing scheme, it delivers high scalability even for challenging heterogeneous configurations. Presently, multicenter rigid potential models based on Lennard-Jones sites, point charges, and higher-order polarities are supported. Due to its modular design, ls1 mardyn can be extended to new physical models, methods, and algorithms, allowing future users to tailor it to suit their respective needs. Possible applications include scenarios with complex geometries, such as fluids at interfaces, as well as nonequilibrium molecular dynamics simulation of heat and mass transfer.
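    As a point of reference for the Lennard-Jones pair potentials mentioned above, a minimal 12-6 energy/force evaluation in reduced units looks like the sketch below; it is written independently here and says nothing about the ls1 mardyn data structures.

        def lennard_jones(r, epsilon=1.0, sigma=1.0):
            """12-6 Lennard-Jones potential and force magnitude at separation r
            (reduced units; illustrative, not the ls1 mardyn implementation)."""
            sr6 = (sigma / r) ** 6
            energy = 4.0 * epsilon * (sr6 ** 2 - sr6)
            force = 24.0 * epsilon * (2.0 * sr6 ** 2 - sr6) / r
            return energy, force

        # Sample separations, including the potential minimum at r = 2^(1/6) sigma.
        for r in (0.95, 2 ** (1.0 / 6.0), 1.5, 2.5):
            e, f = lennard_jones(r)
            print(round(r, 3), round(e, 4), round(f, 4))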

  16. DualSPHysics: A numerical tool to simulate real breakwaters

    NASA Astrophysics Data System (ADS)

    Zhang, Feng; Crespo, Alejandro; Altomare, Corrado; Domínguez, José; Marzeddu, Andrea; Shang, Shao-ping; Gómez-Gesteira, Moncho

    2018-02-01

    The open-source code DualSPHysics is used in this work to compute the wave run-up on an existing dike on the Chinese coast using realistic dimensions, bathymetry and wave conditions. The GPU computing power of DualSPHysics allows simulating real engineering problems that involve complex geometries with a high resolution in a reasonable computational time. The code is first validated by comparing the numerical free-surface elevation, the wave orbital velocities and the time series of the run-up with physical data in a wave flume. Those experiments include a smooth dike and an armored dike with two layers of cubic blocks. After validation, the code is applied to a real case to obtain the wave run-up under different incident wave conditions. In order to simulate the real open sea, the spurious reflections from the wavemaker are removed by using an active wave absorption technique.

  17. Code Samples Used for Complexity and Control

    NASA Astrophysics Data System (ADS)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents

  18. An introduction to the spectrum, symmetries, and dynamics of spin-1/2 Heisenberg chains

    NASA Astrophysics Data System (ADS)

    Joel, Kira; Kollmar, Davida; Santos, Lea F.

    2013-06-01

    Quantum spin chains are prototype quantum many-body systems that are employed in the description of various complex physical phenomena. We provide an introduction to this subject by focusing on the time evolution of a Heisenberg spin-1/2 chain and interpreting the results based on the analysis of the eigenvalues, eigenstates, and symmetries of the system. We make available online all computer codes used to obtain our data.
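    In the same spirit as the computer codes the authors make available (though written independently here), a few lines of Python suffice to build and diagonalize a small open-boundary Heisenberg spin-1/2 chain; the chain length and coupling constant are arbitrary choices.

        import numpy as np

        # Spin-1/2 operators (Pauli matrices divided by two).
        sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
        sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
        sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
        identity = np.eye(2, dtype=complex)

        def site_operator(op, site, n):
            """Embed a single-site operator at position `site` in an n-site chain."""
            mats = [identity] * n
            mats[site] = op
            out = mats[0]
            for m in mats[1:]:
                out = np.kron(out, m)
            return out

        def heisenberg_chain(n, J=1.0):
            """Open-boundary Heisenberg Hamiltonian H = J * sum_i S_i . S_{i+1}."""
            dim = 2 ** n
            H = np.zeros((dim, dim), dtype=complex)
            for i in range(n - 1):
                for op in (sx, sy, sz):
                    H += J * site_operator(op, i, n) @ site_operator(op, i + 1, n)
            return H

        H = heisenberg_chain(4)                 # 4 sites chosen arbitrarily
        energies = np.linalg.eigvalsh(H)
        print(np.round(energies[:4], 4))        # lowest few eigenvalues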

  19. Development of a Space Radiation Monte-Carlo Computer Simulation Based on the FLUKA and ROOT Codes

    NASA Technical Reports Server (NTRS)

    Pinsky, L. S.; Wilson, T. L.; Ferrari, A.; Sala, Paola; Carminati, F.; Brun, R.

    2001-01-01

    The radiation environment in space is a complex problem to model. Trying to extrapolate the projections of that environment into all areas of the internal spacecraft geometry is even more daunting. With the support of our CERN colleagues, our research group in Houston is embarking on a project to develop a radiation transport tool that is tailored to the problem of taking the external radiation flux incident on any particular spacecraft and simulating the evolution of that flux through a geometrically accurate model of the spacecraft material. The output will be a prediction of the detailed nature of the resulting internal radiation environment within the spacecraft as well as its secondary albedo. Beyond doing the physics transport of the incident flux, the software tool we are developing will provide a self-contained stand-alone object-oriented analysis and visualization infrastructure. It will also include a graphical user interface and a set of input tools to facilitate the simulation of space missions in terms of nominal radiation models and mission trajectory profiles. The goal of this project is to produce a code that is considerably more accurate and user-friendly than existing Monte-Carlo-based tools for the evaluation of the space radiation environment. Furthermore, the code will be an essential complement to the currently existing analytic codes in the BRYNTRN/HZETRN family for the evaluation of radiation shielding. The code will be directly applicable to the simulation of environments in low earth orbit, on the lunar surface, on planetary surfaces (including the Earth) and in the interplanetary medium such as on a transit to Mars (and even in the interstellar medium). The software will include modules whose underlying physics base can continue to be enhanced and updated for physics content, as future data become available beyond the timeframe of the initial development now foreseen. This future maintenance will be available from the authors of FLUKA as part of their continuing efforts to support the users of the FLUKA code within the particle physics community. In keeping with the spirit of developing an evolving physics code, we are planning, as part of this project, to participate in the efforts to validate the core FLUKA physics in ground-based accelerator test runs. The emphasis of these test runs will be the physics of greatest interest in the simulation of the space radiation environment. Such a tool will be of great value to planners, designers and operators of future space missions, as well as for the design of the vehicles and habitats to be used on such missions. It will also be of aid to future experiments of various kinds that may be affected at some level by the ambient radiation environment, or in the analysis of hybrid experiment designs that have been discussed for space-based astronomy and astrophysics. The tool will be of value to the Life Sciences personnel involved in the prediction and measurement of radiation doses experienced by the crewmembers on such missions. In addition, the tool will be of great use to the planners of experiments to measure and evaluate the space radiation environment itself. It can likewise be useful in the analysis of safe havens, hazard mitigation plans, and NASA's call for new research in composites and to NASA engineers modeling the radiation exposure of electronic circuits.
This code will provide an important complementary check on the predictions of analytic codes such as BRYNTRN/HZETRN that are presently used for many similar applications, and which have shortcomings that are more easily overcome with Monte Carlo-type simulations. Finally, it is acknowledged that there are similar efforts based around the use of the GEANT4 Monte-Carlo transport code currently under development at CERN. It is our intention to make our software modular and sufficiently flexible to allow the parallel use of either FLUKA or GEANT4 as the physics transport engine.

  20. Methods of treating complex space vehicle geometry for charged particle radiation transport

    NASA Technical Reports Server (NTRS)

    Hill, C. W.

    1973-01-01

    Current methods of treating complex geometry models for space radiation transport calculations are reviewed. The geometric techniques used in three computer codes are outlined. Evaluations of geometric capability and speed are provided for these codes. Although no code development work is included, several suggestions for significantly improving complex geometry codes are offered.

  1. A Benchmarking Initiative for Reactive Transport Modeling Applied to Subsurface Environmental Applications

    NASA Astrophysics Data System (ADS)

    Steefel, C. I.

    2015-12-01

    Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences. As a result, the complexity and detail of subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed-form solutions are necessary and useful, but limited to situations that are far simpler than typical applications that combine many physical and chemical processes, in many cases in coupled form. In the absence of closed-form and yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable to qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code (the reactive transport codes play a supporting role in this regard), but rather to use the codes to verify that a common solution of the problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally-relevant benchmark problem that tests the conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include 1) microbially-mediated reactions, 2) isotopes, 3) multi-component diffusion, 4) uranium fate and transport, 5) metal mobility in mining affected systems, and 6) waste repositories and related aspects.

  2. 29 CFR 1915.90 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 7 2013-07-01 2013-07-01 false Safety color code for marking physical hazards. 1915.90 Section 1915.90 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... General Working Conditions § 1915.90 Safety color code for marking physical hazards. The requirements...

  3. 29 CFR 1915.90 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 7 2014-07-01 2014-07-01 false Safety color code for marking physical hazards. 1915.90 Section 1915.90 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... General Working Conditions § 1915.90 Safety color code for marking physical hazards. The requirements...

  4. 29 CFR 1915.90 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 7 2012-07-01 2012-07-01 false Safety color code for marking physical hazards. 1915.90 Section 1915.90 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... General Working Conditions § 1915.90 Safety color code for marking physical hazards. The requirements...

  5. Conditional Probabilities of Large Earthquake Sequences in California from the Physics-based Rupture Simulator RSQSim

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.

    2017-12-01

    Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.

  6. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
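    To make the "trellis edges per encoded bit" complexity measure concrete, the sketch below encodes and Viterbi-decodes a small rate-1/2 convolutional code (constraint length 3, generators 7 and 5 in octal). It is a generic textbook construction, not code from the article, and the test message is arbitrary.

        G = (0b111, 0b101)            # generator polynomials (7, 5 octal), constraint length 3

        def conv_encode(bits, k=3):
            """Rate-1/2 convolutional encoder; the register holds the last k-1 bits."""
            state, out = 0, []
            for b in bits:
                reg = (b << (k - 1)) | state
                out.extend([bin(reg & g).count("1") % 2 for g in G])
                state = reg >> 1
            return out

        def viterbi_decode(received, k=3):
            """Hard-decision Viterbi decoding over the code's minimal trellis."""
            n_states = 1 << (k - 1)
            INF = float("inf")
            metrics = [0.0] + [INF] * (n_states - 1)      # encoder starts in state 0
            paths = [[] for _ in range(n_states)]
            pairs = [received[i:i + 2] for i in range(0, len(received), 2)]
            for r in pairs:
                new_metrics = [INF] * n_states
                new_paths = [None] * n_states
                for state in range(n_states):
                    if metrics[state] == INF:
                        continue
                    for b in (0, 1):                       # two trellis edges per state
                        reg = (b << (k - 1)) | state
                        expected = [bin(reg & g).count("1") % 2 for g in G]
                        branch = sum(x != y for x, y in zip(expected, r))
                        nxt = reg >> 1
                        cand = metrics[state] + branch
                        if cand < new_metrics[nxt]:
                            new_metrics[nxt] = cand
                            new_paths[nxt] = paths[state] + [b]
                metrics, paths = new_metrics, new_paths
            best = min(range(n_states), key=lambda s: metrics[s])
            return paths[best]

        message = [1, 0, 1, 1, 0, 0, 1]     # arbitrary test message
        coded = conv_encode(message)
        coded[3] ^= 1                       # flip one channel bit
        print(viterbi_decode(coded) == message)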

  7. Meta-Modeling: A Knowledge-Based Approach to Facilitating Model Construction and Reuse

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Dungan, Jennifer L.

    1997-01-01

    In this paper, we introduce a new modeling approach called meta-modeling and illustrate its practical applicability to the construction of physically-based ecosystem process models. As a critical adjunct to modeling codes, meta-modeling requires explicit specification of certain background information related to the construction and conceptual underpinnings of a model. This information formalizes the heretofore tacit relationship between the mathematical modeling code and the underlying real-world phenomena being investigated, and gives insight into the process by which the model was constructed. We show how the explicit availability of such information can make models more understandable and reusable and less subject to misinterpretation. In particular, background information enables potential users to better interpret an implemented ecosystem model without direct assistance from the model author. Additionally, we show how the discipline involved in specifying background information leads to improved management of model complexity and fewer implementation errors. We illustrate the meta-modeling approach in the context of the Scientists' Intelligent Graphical Modeling Assistant (SIGMA), a new model construction environment. As the user constructs a model using SIGMA, the system adds appropriate background information that ties the executable model to the underlying physical phenomena under investigation. Not only does this information improve the understandability of the final model, it also serves to reduce the overall time and programming expertise necessary to initially build and subsequently modify models. Furthermore, SIGMA's use of background knowledge helps eliminate coding errors resulting from scientific and dimensional inconsistencies that are otherwise difficult to avoid when building complex models. As a demonstration of SIGMA's utility, the system was used to reimplement and extend a well-known forest ecosystem dynamics model: Forest-BGC.

  8. On complexity of trellis structure of linear block codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1990-01-01

    The trellis structure of linear block codes (LBCs) is discussed. The state and branch complexities of a trellis diagram (TD) for a LBC is investigated. The TD with the minimum number of states is said to be minimal. The branch complexity of a minimal TD for a LBC is expressed in terms of the dimensions of specific subcodes of the given code. Then upper and lower bounds are derived on the number of states of a minimal TD for a LBC, and it is shown that a cyclic (or shortened cyclic) code is the worst in terms of the state complexity among the LBCs of the same length and dimension. Furthermore, it is shown that the structural complexity of a minimal TD for a LBC depends on the order of its bit positions. This fact suggests that an appropriate permutation of the bit positions of a code may result in an equivalent code with a much simpler minimal TD. Boolean polynomial representation of codewords of a LBC is also considered. This representation helps in study of the trellis structure of the code. Boolean polynomial representation of a code is applied to construct its minimal TD. Particularly, the construction of minimal trellises for Reed-Muller codes and the extended and permuted binary primitive BCH codes which contain Reed-Muller as subcodes is emphasized. Finally, the structural complexity of minimal trellises for the extended and permuted, and double-error-correcting BCH codes is analyzed and presented. It is shown that these codes have relatively simple trellis structure and hence can be decoded with the Viterbi decoding algorithm.

  9. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 5 2013-07-01 2013-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii...

  10. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 5 2014-07-01 2014-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii...

  11. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 5 2012-07-01 2012-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii...

  12. Transition to international classification of disease version 10, clinical modification: the impact on internal medicine and internal medicine subspecialties.

    PubMed

    Caskey, Rachel N; Abutahoun, Angelos; Polick, Anne; Barnes, Michelle; Srivastava, Pavan; Boyd, Andrew D

    2018-05-04

    The US health care system uses diagnostic codes for billing and reimbursement as well as quality assessment and measuring clinical outcomes. The US transitioned to the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) in October 2015. Little is known about the impact of ICD-10-CM on internal medicine and medicine subspecialists. We used a state-wide data set from Illinois Medicaid specified for Internal Medicine providers and subspecialists. A total of 3191 ICD-9-CM codes were used for 51,078 patient encounters, for a total cost of US $26,022,022 for all internal medicine. We categorized all of the ICD-9-CM codes based on the complexity of mapping to ICD-10-CM, since codes with complex mapping could result in billing or administrative errors during the transition. Codes found to have complex mapping and frequently used codes (n = 295) were analyzed for clinical accuracy of mapping to ICD-10-CM. Each subspecialty was analyzed for complexity of codes used and proportion of reimbursement associated with complex codes. Twenty-five percent of internal medicine codes have convoluted mapping to ICD-10-CM, which represent 22% of Illinois Medicaid patients and 30% of reimbursements. Rheumatology and Endocrinology had the greatest proportion of visits and reimbursement associated with complex codes. We found that 14.5% of ICD-9-CM codes used by internists, when mapped to ICD-10-CM, resulted in potential clinical inaccuracies. We identified that 43% of the diagnostic codes evaluated and used by internists, accounting for 14% of internal medicine reimbursements, are associated with codes that could result in administrative errors.
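    A toy sketch of how diagnosis codes might be flagged by mapping complexity is given below; the mapping table is a small invented stand-in (the real ICD-9-CM to ICD-10-CM General Equivalence Mappings are far larger), and the classification rule is only illustrative.

        # Hypothetical ICD-9 -> ICD-10 mapping table; a code is treated as
        # "complex" when it does not map one-to-one. Entries are illustrative
        # stand-ins, not the official GEMs.
        MAPPINGS = {
            "250.00": ["E11.9"],                    # illustrative one-to-one mapping
            "493.90": ["J45.909", "J45.998"],       # illustrative one-to-many mapping
            "V58.69": [],                           # illustrative code with no single equivalent
        }

        def classify(icd9_code):
            targets = MAPPINGS.get(icd9_code, [])
            return "simple" if len(targets) == 1 else "complex"

        for code in MAPPINGS:
            print(code, classify(code), MAPPINGS[code])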

  13. Emerging/changing fashion trends and their impact on conduct of anaesthesia.

    PubMed

    Zaidi, Nadeem

    2017-11-01

    One of the innate features of human behaviour is to enhance personal image in order to look different from the rest of the crowd and to satisfy a need for individualism. People use different dress codes, body makeup and artificial gadgets to improve their personal and physical appearance. The main motive behind all these efforts is personal satisfaction, to appear attractive to others and to overcome phobias and complexes. Copyright the Association for Perioperative Practice.

  14. Implementing a modeling software for animated protein-complex interactions using a physics simulation library.

    PubMed

    Ueno, Yutaka; Ito, Shuntaro; Konagaya, Akihiko

    2014-12-01

    To better understand the behaviors and structural dynamics of proteins within a cell, novel software tools are being developed that can create molecular animations based on the findings of structural biology. This study proposes our method, developed from our prototypes, for detecting collisions and examining the soft-body dynamics of molecular models. The code was implemented with a software development toolkit for rigid-body dynamics simulation and a three-dimensional graphics library. The essential functions of the target software system included the basic molecular modeling environment, collision detection in the molecular models, and physical simulations of the movement of the model. Taking advantage of recent software technologies such as physics simulation modules and interpreted scripting languages, the functions required for accurate and meaningful molecular animation were implemented efficiently.
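    The collision-detection ingredient described above can be illustrated with a minimal sphere-sphere overlap test over atoms treated as rigid spheres; this is a generic sketch with made-up coordinates and radii, and does not reproduce the physics SDK used by the authors.

        import itertools
        import math

        def detect_collisions(atoms, radii):
            """Return pairs of atom indices whose bounding spheres overlap.
            `atoms` is a list of (x, y, z) centers; `radii` gives each radius."""
            hits = []
            for (i, a), (j, b) in itertools.combinations(enumerate(atoms), 2):
                dist = math.dist(a, b)
                if dist < radii[i] + radii[j]:
                    hits.append((i, j, round(radii[i] + radii[j] - dist, 3)))  # overlap depth
            return hits

        # Made-up atom positions and radii for illustration.
        atoms = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (5.0, 0.0, 0.0)]
        radii = [1.0, 1.0, 1.2]
        print(detect_collisions(atoms, radii))   # only the first two spheres overlap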

  15. A Massively Parallel Particle Code for Rarefied Ionized and Neutral Gas Flows in Earth and Planetary Atmospheres, Ionospheres and Magnetospheres

    NASA Technical Reports Server (NTRS)

    Combi, Michael R.

    2004-01-01

    In order to understand the global structure, dynamics, and physical and chemical processes occurring in the upper atmospheres, exospheres, and ionospheres of the Earth, the other planets, comets and planetary satellites and their interactions with their outer particles and fields environs, it is often necessary to address the fundamentally non-equilibrium aspects of the physical environment. These are regions where complex chemistry, energetics, and electromagnetic field influences are important. Traditional approaches are based largely on hydrodynamic or magnetohydrodynamic MHD) formulations and are very important and highly useful. However, these methods often have limitations in rarefied physical regimes where the molecular collision rates and ion gyrofrequencies are small and where interactions with ionospheres and upper neutral atmospheres are important.

  16. Comparisons of CTH simulations with measured wave profiles for simple flyer plate experiments

    DOE PAGES

    Thomas, S. A.; Veeser, L. R.; Turley, W. D.; ...

    2016-06-13

    We conducted detailed 2-dimensional hydrodynamics calculations to assess the quality of simulations commonly used to design and analyze simple shock compression experiments. Such simple shock experiments also contain data where dynamic properties of materials are integrated together. We wished to assess how well the chosen computer hydrodynamic code could do at capturing both the simple parts of the experiments and the integral parts. We began with very simple shock experiments, in which we examined the effects of the equation of state and the compressional and tensile strength models. We increased complexity to include spallation in copper and iron and a solid-solid phase transformation in iron to assess the quality of the damage and phase transformation simulations. For experiments with a window, the response of both the sample and the window are integrated together, providing a good test of the material models. While CTH physics models are not perfect and do not reproduce all experimental details well, we find the models are useful; the simulations are adequate for understanding much of the dynamic process and for planning experiments. However, higher complexity in the simulations, such as adding in spall, led to greater differences between simulation and experiment. Lastly, this comparison of simulation to experiment may help guide future development of hydrodynamics codes so that they better capture the underlying physics.

  17. An efficient hybrid technique in RCS predictions of complex targets at high frequencies

    NASA Astrophysics Data System (ADS)

    Algar, María-Jesús; Lozano, Lorena; Moreno, Javier; González, Iván; Cátedra, Felipe

    2017-09-01

    Most computer codes for Radar Cross Section (RCS) prediction use Physical Optics (PO) and the Physical Theory of Diffraction (PTD) combined with Geometrical Optics (GO) and the Geometrical Theory of Diffraction (GTD). The latter approaches are computationally cheaper and much more accurate for curved surfaces, but they are not applicable to the computation of the RCS of all surfaces of a complex object, due to the presence of caustic problems in the analysis of concave surfaces or flat surfaces in the far field. The main contribution of this paper is the development of a hybrid method based on a new combination of two asymptotic techniques: GTD and PO, considering the advantages and avoiding the disadvantages of each of them. A very efficient and accurate method to analyze the RCS of complex structures at high frequencies is obtained with the new combination. The proposed new method has been validated by comparing RCS results obtained for some simple cases using the proposed approach with RCS computed using the rigorous technique of the Method of Moments (MoM). Some complex cases have been examined at high frequencies, contrasting the results with PO. This study shows the accuracy and the efficiency of the hybrid method and its suitability for the computation of the RCS of really large and complex targets at high frequencies.

  18. Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics

    DOE PAGES

    Laney, Daniel; Langer, Steven; Weber, Christopher; ...

    2014-01-01

    This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
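
    As an illustration of judging compression with a physics-motivated metric rather than a signal-processing norm, the sketch below applies a crude uniform quantizer (a stand-in for the real lossy compressors, which are not reproduced here) to hypothetical density and velocity fields and reports the relative change in total kinetic energy.

      import numpy as np

      def lossy_compress(field, bits=8):
          """Crude uniform quantizer as a stand-in for a real lossy compressor."""
          lo, hi = field.min(), field.max()
          if hi == lo:
              return field.copy(), 1.0
          levels = 2 ** bits - 1
          q = np.round((field - lo) / (hi - lo) * levels)
          ratio = field.itemsize * 8 / bits          # nominal compression ratio
          return lo + q / levels * (hi - lo), ratio

      rng = np.random.default_rng(0)
      rho = rng.lognormal(size=(64, 64))             # hypothetical density field
      vel = rng.normal(size=(64, 64))                # hypothetical velocity field

      rho_c, r1 = lossy_compress(rho, bits=10)
      vel_c, r2 = lossy_compress(vel, bits=10)

      # Physics-motivated metric: relative change in total kinetic energy.
      ke = 0.5 * np.sum(rho * vel ** 2)
      ke_c = 0.5 * np.sum(rho_c * vel_c ** 2)
      print(f"compression ~{r1:.1f}x, kinetic-energy change {abs(ke_c - ke) / ke:.2e}")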

  19. National Combustion Code Validated Against Lean Direct Injection Flow Field Data

    NASA Technical Reports Server (NTRS)

    Iannetti, Anthony C.

    2003-01-01

    Most combustion processes have, in some way or another, a recirculating flow field. This recirculation stabilizes the reaction zone, or flame, but an unnecessarily large recirculation zone can result in high nitrogen oxide (NOx) values for combustion systems. The size of this recirculation zone is crucial to the performance of state-of-the-art, low-emissions hardware. If this is a large-scale combustion process, the flow field will probably be turbulent and, therefore, three-dimensional. This research dealt primarily with flow fields resulting from lean direct injection (LDI) concepts, as described in Research & Technology 2001. LDI is a concept that depends heavily on the design of the swirler. The LDI concept has the potential to reduce NOx values from 50 to 70 percent of current values, with good flame stability characteristics. It is cost effective and (hopefully) beneficial to do most of the design work for an LDI swirler using computer-aided design (CAD) and computer-aided engineering (CAE) tools. Computational fluid dynamics (CFD) codes are CAE tools that can calculate three-dimensional flows in complex geometries. However, CFD codes are only beginning to correctly calculate the flow fields for complex devices, and the related combustion models usually remove a large portion of the flow physics.

  20. A Secure Information Framework with APRQ Properties

    NASA Astrophysics Data System (ADS)

    Rupa, Ch.

    2017-08-01

    The Internet of Things is one of the most trending topics in the digital world. Security issues are rampant. In the corporate or institutional setting, security risks are apparent from the outset. Market leaders are unable to use the cryptographic techniques due to their complexities. Hence many bits of private information, including ID, are readily available for third parties to see and to utilize. There is a need to decrease the complexity and increase the robustness of the cryptographic approaches. In view of this, a new cryptographic technique, a good encryption pact with adjacency, random prime number and quantum code properties, has been proposed. Here, encryption can be done by using quantum photons with gray code. This approach uses the concepts of physics and mathematics with no external key exchange to improve the security of the data. It also reduces key attacks by generating a key at the party side instead of sharing it. This method makes the security more robust than the existing approach. Important properties of gray code and quantum are the adjacency property and different photons for a single bit (0 or 1). These can reduce the avalanche effect. Cryptanalysis of the proposed method shows that it is resistant to various attacks and stronger than the existing approaches.
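
    The adjacency property mentioned above, namely that consecutive Gray codewords differ in exactly one bit, can be demonstrated in a few lines; the sketch below shows only the reflected Gray code conversion, not the proposed encryption scheme.

      def to_gray(n: int) -> int:
          """Convert a binary integer to its reflected Gray code."""
          return n ^ (n >> 1)

      def from_gray(g: int) -> int:
          """Invert the Gray coding."""
          n = 0
          while g:
              n ^= g
              g >>= 1
          return n

      # Adjacency property: consecutive values differ in exactly one bit.
      for i in range(8):
          a, b = to_gray(i), to_gray(i + 1)
          assert bin(a ^ b).count("1") == 1
          assert from_gray(a) == i
      print([format(to_gray(i), "03b") for i in range(8)])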

  1. The importance of physical function to people with osteoporosis.

    PubMed

    Kerr, C; Bottomley, C; Shingler, S; Giangregorio, L; de Freitas, H M; Patel, C; Randall, S; Gold, D T

    2017-05-01

    There is increasing need to understand patient outcomes in osteoporosis. This article discusses that fracture in osteoporosis can lead to a cycle of impairment, driven by complex psychosocial factors, having a profound impact on physical function/activity which accumulates over time. More information is required on how treatments impact physical function. There is increasing need to understand patient-centred outcomes in osteoporosis (OP) clinical research and management. This multi-method paper provides insight on the effect of OP on patients' physical function and everyday activity. Data were collected from three sources: (1) targeted literature review on OP and physical function, conducted in MEDLINE, Embase and PsycINFO; (2) secondary thematic analysis of transcripts from patient interviews, conducted to develop a patient-reported outcome instrument. Transcripts were re-coded to focus on OP impact on daily activities and physical function for those with and without fracture history; and (3) discussions of the literature review and secondary qualitative analysis results with three clinical experts to review and interpret the importance and implications of the findings. Results suggest that OP, particularly with fracture, can have profound impacts on physical function/activity. These impacts accumulate over time through a cycle of impairment, as fracture leads to longer term detriments in physical function, including loss of muscle, activity avoidance and reduced physical capacity, which in turn leads to greater risk of fracture and potential for further physical restrictions. The cycle of impairment is complex, as other physical, psychosocial and treatment-related factors, such as comorbidities, fears and beliefs about physical activity and fracture risk influence physical function and everyday activity. More information on how treatments impact physical function would benefit healthcare professionals and persons with OP in making treatment decisions and improving treatment compliance/persistence, as these impacts may be more salient to patients than fracture incidence.

  2. Adaptive decoding of convolutional codes

    NASA Astrophysics Data System (ADS)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm depends only on the code used, not on the number of transmission errors. To reduce the complexity of the decoding process under good transmission conditions, an alternative syndrome-based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity to different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
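
    For context, the baseline decoder whose complexity is fixed by the code rather than by the channel quality is the Viterbi algorithm. The sketch below is a minimal hard-decision Viterbi decoder for a toy rate-1/2, constraint-length-3 convolutional code with generators (7, 5) octal; the syndrome-based adaptation proposed in the paper is not reproduced.

      # Toy rate-1/2, constraint-length-3 convolutional code, generators (7, 5) octal.
      G = (0b111, 0b101)

      def conv_encode(bits):
          state, out = 0, []
          for b in bits:
              reg = (b << 2) | state                         # newest bit in the MSB
              out += [bin(reg & g).count("1") & 1 for g in G]
              state = reg >> 1                               # keep two most recent bits
          return out

      def viterbi_decode(received, nbits):
          """Hard-decision Viterbi decoding with register-exchange survivor storage."""
          INF = float("inf")
          metrics = [0.0, INF, INF, INF]                     # start in the all-zero state
          paths = [[], [], [], []]
          for t in range(nbits):
              r = received[2 * t:2 * t + 2]
              new_metrics, new_paths = [INF] * 4, [None] * 4
              for state in range(4):
                  if metrics[state] == INF:
                      continue
                  for b in (0, 1):
                      reg = (b << 2) | state
                      out = [bin(reg & g).count("1") & 1 for g in G]
                      nxt = reg >> 1
                      m = metrics[state] + sum(x != y for x, y in zip(out, r))
                      if m < new_metrics[nxt]:
                          new_metrics[nxt], new_paths[nxt] = m, paths[state] + [b]
              metrics, paths = new_metrics, new_paths
          best = min(range(4), key=lambda s: metrics[s])
          return paths[best]

      msg = [1, 0, 1, 1, 0, 0, 1, 0]
      code = conv_encode(msg)
      code[3] ^= 1                                           # inject one channel error
      print("corrected single error:", viterbi_decode(code, len(msg)) == msg)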

  3. Magnetic Feature Tracking in the SDO Era: Past Sacrifices, Recent Advances, and Future Possibilities

    NASA Astrophysics Data System (ADS)

    Lamb, D. A.; DeForest, C. E.; Van Kooten, S.

    2014-12-01

    When implementing computer vision codes, a common reaction to the high angular resolution and the high cadence of SDO's image products has been to reduce the resolution and cadence of the data so that it "looks like" SOHO data. This can be partially justified on physical grounds: if the phenomenon that a computer vision code is trying to detect was characterized in low-resolution, low cadence data, then the higher quality data may not be needed. But sacrificing at least two, and sometimes all four main advantages of SDO's imaging data (the other two being a higher duty cycle and additional data products) threatens to also discard the perhaps more subtle discoveries waiting to be made: a classic baby-with-the-bath-water situation. In this presentation, we discuss some of the sacrifices made in implementing SWAMIS-EF, an automatic emerging magnetic flux region detection code for SDO/HMI, and how those sacrifices simultaneously simplified and complicated development of the code. SWAMIS-EF is a feature-finding code, and we will describe some situations and analyses in which a feature-finding code excels, and some in which a different type of algorithm may produce more favorable results. In particular, because the solar magnetic field is irreducibly complex at the currently observed spatial scales, searching for phenomena such as flux emergence using even semi-strict physical criteria often leads to large numbers of false or missed detections. This undesirable behavior can be mitigated by relaxing the imposed physical criteria, but here too there are tradeoffs: decreased numbers of missed detections may increase the number of false detections if the selection criteria are not both sensitive and specific to the searched-for phenomenon. Finally, we describe some recent steps we have taken to overcome these obstacles, by fully embracing the high resolution, high cadence SDO data, optimizing and partially parallelizing our existing code as a first step to allow fast magnetic feature tracking of full resolution HMI magnetograms. Even with the above caveats, if used correctly such a tool can provide a wealth of information on the positions, motions, and patterns of features, enabling large, cross-scale analyses that can answer important questions related to the solar dynamo and to coronal heating.

  4. Software Tools for Stochastic Simulations of Turbulence

    DTIC Science & Technology

    2015-08-28

    Client interface to FTI. Specific client programs using this interface include the weather forecasting code WRF; the high energy physics code, FLASH; and two locally constructed fluid ...

  5. Bounds on Block Error Probability for Multilevel Concatenated Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Moorthy, Hari T.; Stojanovic, Diana

    1996-01-01

    Maximum likelihood decoding of long block codes is not feasible due to large complexity. Some classes of codes are shown to be decomposable into multilevel concatenated codes (MLCC). For these codes, multistage decoding provides a good trade-off between performance and complexity. In this paper, we derive an upper bound on the probability of block error for MLCC. We use this bound to evaluate the difference in performance for different decompositions of some codes. Examples given show that a significant reduction in complexity can be achieved when increasing the number of decoding stages. The resulting performance degradation varies for different decompositions. A guideline is given for finding good m-level decompositions.

  6. Processing Motion: Using Code to Teach Newtonian Physics

    NASA Astrophysics Data System (ADS)

    Massey, M. Ryan

    Prior to instruction, students often possess a common-sense view of motion, which is inconsistent with Newtonian physics. Effective physics lessons therefore involve conceptual change. To provide a theoretical explanation for concepts and how they change, the triangulation model brings together key attributes of prototypes, exemplars, theories, Bayesian learning, ontological categories, and the causal model theory. The triangulation model provides a theoretical rationale for why coding is a viable method for physics instruction. As an experiment, thirty-two adolescent students participated in summer coding academies to learn how to design Newtonian simulations. Conceptual and attitudinal data was collected using the Force Concept Inventory and the Colorado Learning Attitudes about Science Survey. Results suggest that coding is an effective means for teaching Newtonian physics.
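
    The kind of simulation the students built can be sketched in a few lines of Python (shown here independently of any particular classroom environment): projectile motion under constant gravity, integrated with explicit Euler steps.

      import numpy as np

      def simulate_projectile(v0=20.0, angle_deg=45.0, dt=0.01, g=9.81):
          """Integrate 2D projectile motion with explicit Euler steps."""
          vx = v0 * np.cos(np.radians(angle_deg))
          vy = v0 * np.sin(np.radians(angle_deg))
          x = y = 0.0
          trajectory = [(x, y)]
          while y >= 0.0:
              x += vx * dt          # no horizontal force, so vx stays constant
              y += vy * dt
              vy -= g * dt          # Newton's second law: constant downward acceleration
              trajectory.append((x, y))
          return trajectory

      path = simulate_projectile()
      print(f"range ~ {path[-1][0]:.1f} m after {len(path)} steps")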

  7. Efficient Network Coding-Based Loss Recovery for Reliable Multicast in Wireless Networks

    NASA Astrophysics Data System (ADS)

    Chi, Kaikai; Jiang, Xiaohong; Ye, Baoliu; Horiguchi, Susumu

    Recently, network coding has been applied to the loss recovery of reliable multicast in wireless networks [19], where multiple lost packets are XOR-ed together as one packet and forwarded via single retransmission, resulting in a significant reduction of bandwidth consumption. In this paper, we first prove that maximizing the number of lost packets for XOR-ing, which is the key part of the available network coding-based reliable multicast schemes, is actually a complex NP-complete problem. To address this limitation, we then propose an efficient heuristic algorithm for finding an approximately optimal solution of this optimization problem. Furthermore, we show that the packet coding principle of maximizing the number of lost packets for XOR-ing sometimes cannot fully exploit the potential coding opportunities, and we then further propose new heuristic-based schemes with a new coding principle. Simulation results demonstrate that the heuristic-based schemes have very low computational complexity and can achieve almost the same transmission efficiency as the current coding-based high-complexity schemes. Furthermore, the heuristic-based schemes with the new coding principle not only have very low complexity, but also slightly outperform the current high-complexity ones.
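
    The core idea, XOR-ing several lost packets into one retransmission so that each receiver missing at most one of them can recover its own loss, can be sketched as follows; the loss table and the greedy grouping rule are illustrative stand-ins, not the heuristics proposed in the paper.

      from functools import reduce

      # loss_table[receiver] = set of packet ids that receiver failed to get (hypothetical)
      loss_table = {"r1": {1, 4}, "r2": {2}, "r3": {1, 3}, "r4": {4}}

      def greedy_xor_group(loss_table):
          """Greedily pick lost packets such that no receiver is missing two of them,
          so one XOR-ed retransmission lets every affected receiver recover its packet."""
          lost = sorted({p for s in loss_table.values() for p in s})
          group, covered = [], set()
          for p in lost:
              needing = {r for r, s in loss_table.items() if p in s}
              if not (needing & covered):     # no receiver already missing a chosen packet
                  group.append(p)
                  covered |= needing
          return group

      packets = {p: bytes([p] * 4) for p in range(1, 5)}   # toy packet payloads
      group = greedy_xor_group(loss_table)
      xor_bytes = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
      coded = reduce(xor_bytes, (packets[p] for p in group))
      print("XOR-ed packets:", group, "-> one retransmission of", coded.hex())

      # A receiver missing exactly one packet in `group` XORs the coded packet with
      # the ones it already holds to recover the missing one, e.g. r1 missing packet 1:
      recovered = reduce(xor_bytes, [coded] + [packets[p] for p in group if p != 1])
      print("r1 recovers packet 1:", recovered == packets[1])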

  8. Advanced Multi-Physics (AMP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philip, Bobby

    2012-06-01

    The Advanced Multi-Physics (AMP) code, in its present form, will allow a user to build a multi-physics application code for existing mechanics and diffusion operators and extend them with user-defined material models and new physics operators. There are examples that demonstrate mechanics, thermo-mechanics, coupled diffusion, and mechanical contact. The AMP code is designed to leverage a variety of mathematical solvers (PETSc, Trilinos, SUNDIALS, and AMP solvers) and mesh databases (LibMesh and AMP) in a consistent interchangeable approach.

  9. A low-complexity and high performance concatenated coding scheme for high-speed satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Rhee, Dojun; Rajpal, Sandeep

    1993-01-01

    This report presents a low-complexity and high performance concatenated coding scheme for high-speed satellite communications. In this proposed scheme, the NASA Standard Reed-Solomon (RS) code over GF(2^8) is used as the outer code and the second-order Reed-Muller (RM) code of Hamming distance 8 is used as the inner code. The RM inner code has a very simple trellis structure and is decoded with the soft-decision Viterbi decoding algorithm. It is shown that the proposed concatenated coding scheme achieves an error performance which is comparable to that of the NASA TDRS concatenated coding scheme in which the NASA Standard rate-1/2 convolutional code of constraint length 7 and d_free = 10 is used as the inner code. However, the proposed RM inner code has much smaller decoding complexity, less decoding delay, and much higher decoding speed. Consequently, the proposed concatenated coding scheme is suitable for reliable high-speed satellite communications, and it may be considered as an alternate coding scheme for the NASA TDRS system.

  10. Broadband transmission-type coding metamaterial for wavefront manipulation for airborne sound

    NASA Astrophysics Data System (ADS)

    Li, Kun; Liang, Bin; Yang, Jing; Yang, Jun; Cheng, Jian-chun

    2018-07-01

    The recent advent of coding metamaterials, as a new class of acoustic metamaterials, substantially reduces the complexity in the design and fabrication of acoustic functional devices capable of manipulating sound waves in exotic manners by arranging coding elements with discrete phase states in specific sequences. It is therefore intriguing, both physically and practically, to pursue a mechanism for realizing broadband acoustic coding metamaterials that control transmitted waves with a fine resolution of the phase profile. Here, we propose the design of a transmission-type acoustic coding device and demonstrate its metamaterial-based implementation. The mechanism is that, instead of relying on resonant coding elements that are necessarily narrow-band, we build weak-resonant coding elements with a helical-like metamaterial with a continuously varying pitch that effectively expands the working bandwidth while maintaining the sub-wavelength resolution of the phase profile that is vital for the production of complicated wave fields. The effectiveness of our proposed scheme is numerically verified via the demonstration of three distinctive examples of acoustic focusing, anomalous refraction, and vortex beam generation in the prescribed frequency band on the basis of 1- and 2-bit coding sequences. Simulation results agree well with theoretical predictions, showing that the designed coding devices with discrete phase profiles are efficient in engineering the wavefront of outcoming waves to form the desired spatial pattern. We anticipate the realization of coding metamaterials with broadband functionality and design flexibility to open up possibilities for novel acoustic functional devices for the special manipulation of transmitted waves and underpin diverse applications ranging from medical ultrasound imaging to acoustic detections.
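
    To illustrate what a discrete coding sequence looks like, the sketch below quantizes the textbook hyperbolic phase profile that focuses a normally incident plane wave with a hypothetical linear array to 1-bit (0 or pi) states; the frequency, element pitch and quantization rule are assumptions and do not describe the paper's helical metamaterial elements.

      import numpy as np

      c = 343.0                  # speed of sound in air [m/s]
      f = 3000.0                 # working frequency [Hz] (hypothetical)
      k = 2 * np.pi * f / c      # wavenumber
      focal = 0.20               # desired focal distance [m]
      pitch = 0.01               # element spacing [m]
      x = (np.arange(40) - 19.5) * pitch   # positions of a 40-element array

      # Ideal (continuous) phase profile that focuses a plane wave at distance `focal`
      phi = k * (np.sqrt(x ** 2 + focal ** 2) - focal)

      # 1-bit coding: each element is assigned either the 0 or the pi phase state
      bits = (np.mod(phi, 2 * np.pi) > np.pi).astype(int)
      print("".join(str(b) for b in bits))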

  11. The effect of gas physics on the halo mass function

    NASA Astrophysics Data System (ADS)

    Stanek, R.; Rudd, D.; Evrard, A. E.

    2009-03-01

    Cosmological tests based on cluster counts require accurate calibration of the space density of massive haloes, but most calibrations to date have ignored complex gas physics associated with halo baryons. We explore the sensitivity of the halo mass function to baryon physics using two pairs of gas-dynamic simulations that are likely to bracket the true behaviour. Each pair consists of a baseline model involving only gravity and shock heating, and a refined physics model aimed at reproducing the observed scaling of the hot, intracluster gas phase. One pair consists of billion-particle resimulations of the original 500 h^-1 Mpc Millennium Simulation of Springel et al., run with the smoothed particle hydrodynamics (SPH) code GADGET-2 and using a refined physics treatment approximated by pre-heating (PH) at high redshift. The other pair are high-resolution simulations from the adaptive-mesh refinement code ART, for which the refined treatment includes cooling, star formation and supernova feedback (CSF). We find that, although the mass functions of the gravity-only (GO) treatments are consistent with the recent calibration of Tinker et al. (2008), both pairs of simulations with refined baryon physics show significant deviations. Relative to the GO case, the masses of ~10^14 h^-1 M_solar haloes in the PH and CSF treatments are shifted by the averages of -15 +/- 1 and +16 +/- 2 per cent, respectively. These mass shifts cause ~30 per cent deviations in number density relative to the Tinker function, significantly larger than the 5 per cent statistical uncertainty of that calibration.

  12. VoxelMages: a general-purpose graphical interface for designing geometries and processing DICOM images for PENELOPE.

    PubMed

    Giménez-Alventosa, V; Ballester, F; Vijande, J

    2016-12-01

    The design and construction of geometries for Monte Carlo calculations is an error-prone, time-consuming, and complex step in simulations describing particle interactions and transport in the field of medical physics. The software VoxelMages has been developed to help the user in this task. It allows the user to design complex geometries and to process DICOM image files for simulations with the general-purpose Monte Carlo code PENELOPE in an easy and straightforward way. VoxelMages also allows importing DICOM-RT structure contour information as delivered by a treatment planning system. Its main characteristics, usage and performance benchmarking are described in detail.

  13. The QuakeSim Project: Numerical Simulations for Active Tectonic Processes

    NASA Technical Reports Server (NTRS)

    Donnellan, Andrea; Parker, Jay; Lyzenga, Greg; Granat, Robert; Fox, Geoffrey; Pierce, Marlon; Rundle, John; McLeod, Dennis; Grant, Lisa; Tullis, Terry

    2004-01-01

    In order to develop a solid earth science framework for understanding and studying active tectonic and earthquake processes, this task develops simulation and analysis tools to study the physics of earthquakes using state-of-the-art modeling, data manipulation, and pattern recognition technologies. We develop clearly defined, accessible data formats and code protocols as inputs to the simulations. These are adapted to high-performance computers because the solid earth system is extremely complex and nonlinear, resulting in computationally intensive problems with millions of unknowns. With these tools it will be possible to construct the more complex models and simulations necessary to develop hazard assessment systems critical for reducing future losses from major earthquakes.

  14. A methodology for the rigorous verification of plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Riva, Fabio

    2016-10-01

    The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: the verification, which is a mathematical issue targeted to assess that the physical model is correctly solved, and the validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on the verification, which in turn is composed of the code verification, targeted to assess that a physical model is correctly implemented in a simulation code, and the solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for the code verification, based on the method of manufactured solutions, as well as a solution verification based on the Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate the plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
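
    As a reminder of what solution verification via Richardson extrapolation involves, the sketch below takes a quantity computed on three systematically refined grids, estimates the observed order of accuracy, and forms an extrapolated value; the toy convergence data are synthetic and unrelated to GBS.

      import numpy as np

      def richardson(f_coarse, f_medium, f_fine, r=2.0):
          """Observed order of accuracy and Richardson-extrapolated value from
          solutions on three systematically refined grids (refinement ratio r)."""
          p = np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)
          f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
          return p, f_exact

      # Toy data: a quantity converging as f(h) = 1 + 0.5 * h**2 on h = 0.4, 0.2, 0.1
      f = lambda h: 1.0 + 0.5 * h ** 2
      p, f_star = richardson(f(0.4), f(0.2), f(0.1))
      print(f"observed order ~ {p:.2f}, extrapolated value ~ {f_star:.4f}")  # ~2, ~1.0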

  15. Physics and biophysics experiments needed for improved risk assessment in space

    NASA Astrophysics Data System (ADS)

    Sihver, L.

    To improve the risk assessment of radiation carcinogenesis, late degenerative tissue effects, acute syndromes, synergistic effects of radiation and microgravity or other spacecraft factors, and hereditary effects, on future LEO and interplanetary space missions, the radiobiological effects of cosmic radiation before and after shielding must be well understood. However, cosmic radiation is very complex and includes low and high LET components of many different neutral and charged particles. The understanding of the radiobiology of the heavy ions, from GCRs and SPEs, is still a subject of great concern due to the complicated dependence of their biological effects on the type of ion and energy, and its interaction with various targets both outside and within the spacecraft and the human body. In order to estimate the biological effects of cosmic radiation, accurate knowledge of the physics of the interactions of both charged and non-charged high-LET particles is necessary. Since it is practically impossible to measure all primary and secondary particles from all projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy ion transport codes might be a helpful instrument to overcome those difficulties. These codes have to be carefully validated to make sure they fulfill preset accuracy criteria, e.g. to be able to predict particle fluence and energy distributions within a certain accuracy. When validating the accuracy of the transport codes, both space and ground-based accelerator experiments are needed. In this paper current and future physics and biophysics experiments needed for improved risk assessment in space will be discussed. The cyclotron HIRFL (heavy ion research facility in Lanzhou) and the new synchrotron CSR (cooling storage ring), which can be used to provide ion beams for space related experiments at the Institute of Modern Physics, Chinese Academy of Sciences (IMP-CAS), will be presented together with the physical and biomedical research performed at IMP-CAS.

  16. Final Technical Report - Center for Technology for Advanced Scientific Component Software (TASCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sussman, Alan

    2014-10-21

    This is a final technical report for the University of Maryland work in the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS). The Maryland work focused on software tools for coupling parallel software components built using the Common Component Architecture (CCA) APIs. Those tools are based on the Maryland InterComm software framework that has been used in multiple computational science applications to build large-scale simulations of complex physical systems that employ multiple separately developed codes.

  17. Standard interface files and procedures for reactor physics codes, version III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carmichael, B.M.

    Standards and procedures for promoting the exchange of reactor physics codes are updated to Version-III status. Standards covering program structure, interface files, file handling subroutines, and card input format are included. The implementation status of the standards in codes and the extension of the standards to new code areas are summarized.

  18. ALE3D: An Arbitrary Lagrangian-Eulerian Multi-Physics Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noble, Charles R.; Anderson, Andrew T.; Barton, Nathan R.

    ALE3D is a multi-physics numerical simulation software tool utilizing arbitrary-Lagrangian- Eulerian (ALE) techniques. The code is written to address both two-dimensional (2D plane and axisymmetric) and three-dimensional (3D) physics and engineering problems using a hybrid finite element and finite volume formulation to model fluid and elastic-plastic response of materials on an unstructured grid. As shown in Figure 1, ALE3D is a single code that integrates many physical phenomena.

  19. Calculations vs. measurements of remnant dose rates for SNS spent structures

    NASA Astrophysics Data System (ADS)

    Popova, I. I.; Gallmeier, F. X.; Trotter, S.; Dayton, M.

    2018-06-01

    Residual dose rate measurements were conducted on target vessel #13 and proton beam window #5 after extraction from their service locations. These measurements were used to verify calculation methods of radionuclide inventory assessment that are typically performed for nuclear waste characterization and transportation of these structures. Neutronics analyses for predicting residual dose rates were carried out using the transport code MCNPX and the transmutation code CINDER90. For the transport analyses, complex and rigorous geometry models of the structures and their surroundings were applied. The neutronics analyses were carried out using Bertini and CEM high-energy physics models for simulating particle interactions. The preliminary calculated results were analysed and compared to the measured dose rates, and overall show good agreement to within 40% on average.

  20. Online Tools for Astronomy and Cosmochemistry

    NASA Technical Reports Server (NTRS)

    Meyer, B. S.

    2005-01-01

    Over the past year, the Webnucleo Group at Clemson University has been developing a web site with a number of interactive online tools for astronomy and cosmochemistry applications. The site uses SHP (Simplified Hypertext Preprocessor), which, because of its flexibility, allows us to embed almost any computer language into our web pages. For a description of SHP, please see http://www.joeldenny.com/ At our web site, an internet user may mine large and complex data sets, such as our stellar evolution models, and make graphs or tables of the results. The user may also run some of our detailed nuclear physics and astrophysics codes, such as our nuclear statistical equilibrium code, which is written in fortran and C. Again, the user may make graphs and tables and download the results.

  1. Calculations vs. measurements of remnant dose rates for SNS spent structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popova, Irina I.; Gallmeier, Franz X.; Trotter, Steven M.

    Residual dose rate measurements were conducted on target vessel #13 and proton beam window #5 after extraction from their service locations. These measurements were used to verify calculation methods of radionuclide inventory assessment that are typically performed for nuclear waste characterization and transportation of these structures. Neutronics analyses for predicting residual dose rates were carried out using the transport code MCNPX and the transmutation code CINDER90. For the transport analyses, complex and rigorous geometry models of the structures and their surroundings were applied. The neutronics analyses were carried out using Bertini and CEM high-energy physics models for simulating particle interactions. The preliminary calculated results were analysed and compared to the measured dose rates, and overall show good agreement to within 40% on average.

  2. Moving the Barricades to Physical Activity: A Qualitative Analysis of Open Streets Initiatives Across the United States.

    PubMed

    Eyler, Amy A; Hipp, J Aaron; Lokuta, Julie

    2015-01-01

    Ciclovía, or Open Streets initiatives, are events where streets are opened for physical activity and closed to motorized traffic. Although the initiatives are gaining popularity in the United States, little is known about planning and implementing them. The goals of this paper are to explore the development and implementation of Open Streets initiatives and make recommendations for increasing the capacity of organizers to enhance initiative success. Phenomenology with qualitative analysis of structured interviews was used. Study setting was urban and suburban communities in the United States. Study participants were organizers of Open Streets initiatives in U.S. cities. Using a list of 47 events held in 2011, 27 lead organizers were interviewed by telephone about planning, implementation, and lessons learned. The interviews were digitally recorded and transcribed. A phenomenologic approach was used, an initial coding tool was developed after reviewing a sample of transcripts, and constant comparative coding methodology was applied. Themes and subthemes were generated from codes. The most common reasons for initiation were to highlight or improve health and transportation. Most initiatives aimed to reach the general population, but some targeted families, children, or specific neighborhoods. Getting people to understand the concept of Open Streets was an important challenge. Other challenges included lack of funding and personnel, and complex logistics. These initiatives democratize public space for citizens while promoting physical activity, social connectedness, and other broad agendas. There are opportunities for the research community to contribute to the expanse and sustainability of Open Streets, particularly in evaluation and dissemination.

  3. Recent advances in non-LTE stellar atmosphere models

    NASA Astrophysics Data System (ADS)

    Sander, Andreas A. C.

    2017-11-01

    In the last decades, stellar atmosphere models have become a key tool in understanding massive stars. Applied for spectroscopic analysis, these models provide quantitative information on stellar wind properties as well as fundamental stellar parameters. The intricate non-LTE conditions in stellar winds dictate the development of adequately sophisticated model atmosphere codes. The increase in both computational power and our understanding of physical processes in stellar atmospheres has led to increasing complexity in the models. As a result, codes emerged that can tackle a wide range of stellar and wind parameters. After a brief address of the fundamentals of stellar atmosphere modeling, the current stage of clumped and line-blanketed model atmospheres will be discussed. Finally, the path for the next generation of stellar atmosphere models will be outlined. Apart from discussing multi-dimensional approaches, I will emphasize the coupling of hydrodynamics with a sophisticated treatment of the radiative transfer. This next generation of models will be able to predict wind parameters from first principles, which could open new doors for our understanding of the various facets of massive star physics, evolution, and death.

  4. PODIO: An Event-Data-Model Toolkit for High Energy Physics Experiments

    NASA Astrophysics Data System (ADS)

    Gaede, F.; Hegner, B.; Mato, P.

    2017-10-01

    PODIO is a C++ library that supports the automatic creation of event data models (EDMs) and efficient I/O code for HEP experiments. It is developed as a new EDM Toolkit for future particle physics experiments in the context of the AIDA2020 EU programme. Experience from LHC and the linear collider community shows that existing solutions partly suffer from overly complex data models with deep object-hierarchies or unfavorable I/O performance. The PODIO project was created in order to address these problems. PODIO is based on the idea of employing plain-old-data (POD) data structures wherever possible, while avoiding deep object-hierarchies and virtual inheritance. At the same time it provides the necessary high-level interface towards the developer physicist, such as the support for inter-object relations and automatic memory-management, as well as a Python interface. To simplify the creation of efficient data models PODIO employs code generation from a simple yaml-based markup language. In addition, it was developed with concurrency in mind in order to support the use of modern CPU features, for example giving basic support for vectorization techniques.
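
    The code-generation idea can be sketched with a hypothetical datatype description and a toy generator; the yaml layout and the emitted C++ below are simplified illustrations, not PODIO's actual schema or output.

      import yaml  # pip install pyyaml

      # Hypothetical datatype description, loosely inspired by PODIO's yaml markup;
      # the real PODIO schema has more fields (relations, descriptions, etc.).
      EDM_YAML = """
      datatypes:
        ExampleHit:
          members:
            - double x
            - double y
            - double z
            - double energy
      """

      def generate_pod_structs(yaml_text):
          """Emit one plain-old-data C++ struct per declared datatype."""
          model = yaml.safe_load(yaml_text)
          out = []
          for name, spec in model["datatypes"].items():
              fields = "\n".join(f"  {m};" for m in spec["members"])
              out.append(f"struct {name} {{\n{fields}\n}};")
          return "\n\n".join(out)

      print(generate_pod_structs(EDM_YAML))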

  5. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Procassini, R.J.

    1997-12-31

    The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.

  6. Volcano Modelling and Simulation gateway (VMSg): A new web-based framework for collaborative research in physical modelling and simulation of volcanic phenomena

    NASA Astrophysics Data System (ADS)

    Esposti Ongaro, T.; Barsotti, S.; de'Michieli Vitturi, M.; Favalli, M.; Longo, A.; Nannipieri, L.; Neri, A.; Papale, P.; Saccorotti, G.

    2009-12-01

    Physical and numerical modelling is becoming of increasing importance in volcanology and volcanic hazard assessment. However, new interdisciplinary problems arise when dealing with complex mathematical formulations, numerical algorithms and their implementations on modern computer architectures. Therefore new frameworks are needed for sharing knowledge, software codes, and datasets among scientists. Here we present the Volcano Modelling and Simulation gateway (VMSg, accessible at http://vmsg.pi.ingv.it), a new electronic infrastructure for promoting knowledge growth and transfer in the field of volcanological modelling and numerical simulation. The new web portal, developed in the framework of former and ongoing national and European projects, is based on a dynamic Content Management System (CMS) and was developed to host and present numerical models of the main volcanic processes and relationships including magma properties, magma chamber dynamics, conduit flow, plume dynamics, pyroclastic flows, lava flows, etc. Model applications, numerical code documentation, simulation datasets as well as model validation and calibration test-cases are also part of the gateway material.

  7. Good Trellises for IC Implementation of Viterbi Decoders for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Moorthy, Hari T.; Lin, Shu; Uehara, Gregory T.

    1997-01-01

    This paper investigates trellis structures of linear block codes for the integrated circuit (IC) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper-bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code without exceeding the maximum state complexity of the minimal trellis of the code is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called add-compare-select (ACS)-connectivity which is related to state connectivity is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters namely: (1) effective computational complexity; (2) complexity of the ACS-circuit; (3) traceback complexity; (4) ACS-connectivity; and (5) branch complexity of a trellis diagram on the very large scale integration (VLSI) complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a nonminimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.

  8. Good trellises for IC implementation of viterbi decoders for linear block codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Moorthy, Hari T.; Uehara, Gregory T.

    1996-01-01

    This paper investigates trellis structures of linear block codes for the IC (integrated circuit) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code without exceeding the maximum state complexity of the minimal trellis of the code is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called ACS-connectivity which is related to state connectivity is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters namely: (1) effective computational complexity; (2) complexity of the ACS-circuit; (3) traceback complexity; (4) ACS-connectivity; and (5) branch complexity of a trellis diagram on the VLSI complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a non-minimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.

  9. Physics Verification Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William

    The purpose of the verification project is to (1) establish, through rigorous convergence analysis, that each ASC computational physics code correctly implements a set of physics models and algorithms (code verification); (2) evaluate and analyze the uncertainties of code outputs associated with the choice of temporal and spatial discretization (solution or calculation verification); and (3) develop and maintain the capability to expand and update these analyses on demand. This presentation describes project milestones.

  10. Simulations of Coherent Synchrotron Radiation Effects in Electron Machines

    NASA Astrophysics Data System (ADS)

    Migliorati, M.; Schiavi, A.; Dattoli, G.

    2007-09-01

    Coherent synchrotron radiation (CSR) generated by high intensity electron beams can be a source of undesirable effects limiting the performance of storage rings. The complexity of the physical mechanisms underlying the interplay between the electron beam and the CSR demands reliable simulation codes. In the past, codes based on Lie algebraic techniques have been very efficient to treat transport problems in accelerators. The extension of these methods to the non linear case is ideally suited to treat wakefield-beam interaction. In this paper we report on the development of a numerical code, based on the solution of the Vlasov equation, which includes the non linear contribution due to wakefields. The proposed solution method exploits an algebraic technique that uses exponential operators. We show that, in the case of CSR wakefields, the integration procedure is capable of reproducing the onset of an instability which leads to microbunching of the beam, thus increasing the CSR at short wavelengths. In addition, considerations on the threshold of the instability for Gaussian bunches are also reported.

  11. Simulations of Coherent Synchrotron Radiation Effects in Electron Machines

    NASA Astrophysics Data System (ADS)

    Migliorati, M.; Schiavi, A.; Dattoli, G.

    Coherent synchrotron radiation (CSR) generated by high intensity electron beams can be a source of undesirable effects limiting the performance of storage rings. The complexity of the physical mechanisms underlying the interplay between the electron beam and the CSR demands reliable simulation codes. In the past, codes based on Lie algebraic techniques have been very efficient to treat transport problems in accelerators. The extension of these methods to the non linear case is ideally suited to treat wakefield-beam interaction. In this paper we report on the development of a numerical code, based on the solution of the Vlasov equation, which includes the non linear contribution due to wakefields. The proposed solution method exploits an algebraic technique that uses exponential operators. We show that, in the case of CSR wakefields, the integration procedure is capable of reproducing the onset of an instability which leads to microbunching of the beam, thus increasing the CSR at short wavelengths. In addition, considerations on the threshold of the instability for Gaussian bunches are also reported.

  12. BRD4 assists elongation of both coding and enhancer RNAs guided by histone acetylation

    PubMed Central

    Kanno, Tomohiko; Kanno, Yuka; LeRoy, Gary; Campos, Eric; Sun, Hong-Wei; Brooks, Stephen R; Vahedi, Golnaz; Heightman, Tom D; Garcia, Benjamin A; Reinberg, Danny; Siebenlist, Ulrich; O’Shea, John J; Ozato, Keiko

    2016-01-01

    Small-molecule BET inhibitors interfere with the epigenetic interactions between acetylated histones and the bromodomains of the BET family proteins, including BRD4, and they potently inhibit growth of malignant cells by targeting cancer-promoting genes. BRD4 interacts with the pause-release factor P-TEFb, and has been proposed to release Pol II from promoter-proximal pausing. We show that BRD4 occupied widespread genomic regions in mouse cells, and directly stimulated elongation of both protein-coding transcripts and non-coding enhancer RNAs (eRNAs), dependent on the function of bromodomains. BRD4 interacted physically with elongating Pol II complexes, and assisted Pol II progression through hyper-acetylated nucleosomes by interacting with acetylated histones via bromodomains. On active enhancers, the BET inhibitor JQ1 antagonized BRD4-associated eRNA synthesis. Thus, BRD4 is involved in multiple steps of the transcription hierarchy, primarily by assisting transcript elongation both at enhancers and on gene bodies. PMID:25383670

  13. Numerical Simulations of Spacecraft Charging: Selected Applications

    NASA Astrophysics Data System (ADS)

    Moulton, J. D.; Delzanno, G. L.; Meierbachtol, C.; Svyatskiy, D.; Vernon, L.; Borovsky, J.; Thomsen, M. F.

    2016-12-01

    The electrical charging of spacecraft due to bombarding charged particles affects their performance and operation. We study this charging using CPIC, a particle-in-cell code specifically designed for studying plasma-material interactions. CPIC is based on multi-block curvilinear meshes, resulting in near-optimal computational performance while maintaining geometric accuracy. It is interfaced to a mesh generator that creates a computational mesh conforming to complex objects like a spacecraft. Relevant plasma parameters can be imported from the SHIELDS framework (currently under development at LANL), which simulates geomagnetic storms and substorms in the Earth's magnetosphere. Selected physics results will be presented, together with an overview of the code. The physics results include spacecraft-charging simulations with geometry representative of the Van Allen Probes spacecraft, focusing on the conditions that can lead to significant spacecraft charging events. Second, results from a recent study that investigates the conditions for which a high-power (>keV) electron beam could be emitted from a magnetospheric spacecraft will be presented. The latter study proposes a spacecraft-charging mitigation strategy based on plasma contactor technology that might allow beam experiments to operate in the low-density magnetosphere. High-power electron beams could be used, for instance, to establish magnetic-field-line connectivity between ionosphere and magnetosphere and help solve long-standing questions in ionospheric/magnetospheric physics.

  14. Preparing a cost analysis for the section of medical physics-guidelines and methods.

    PubMed

    Mills, M D; Spanos, W J; Jose, B O; Kelly, B A; Brill, J P

    2000-01-01

    Radiation oncology is a highly complex medical specialty, involving many varied routine and special procedures. To assure cost-effectiveness and maintain support for the medical physics program, managers are obligated to analyze and defend all aspects of an institutional billing and cost-reporting program. Present standards of practice require that each patient's radiation treatments be customized to fit his/her particular condition. Since the use of personnel time and other resources is highly variable among patients, graduated levels of charges have been established to allow for more precise billing. Some radiation oncology special procedures have no specific code descriptors; so existing codes are modified or additional information attached in order to avoid payment denial. Recent publications have explored the manpower needs, salaries, and other resources required to perform radiation oncology "physics" procedures. This information is used to construct a model cost-based resource use profile for a radiation oncology center. This profile can be used to help the financial officer prepare a cost report for the institution. Both civil and criminal penalties for Medicare fraud and abuse (intentional or unintentional) are included in the False Claims Act and other statutes. Compliance guidelines require managers to train all personnel in correct billing procedures and to review continually billing performance.

  15. Computation of Steady and Unsteady Laminar Flames: Theory

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas; Radhakrishnan, Krishnan; Zhou, Ruhai

    1999-01-01

    In this paper we describe the numerical analysis underlying our efforts to develop an accurate and reliable code for simulating flame propagation using complex physical and chemical models. We discuss our spatial and temporal discretization schemes, which in our current implementations range in order from two to six. In space we use staggered meshes to define discrete divergence and gradient operators, allowing us to approximate complex diffusion operators while maintaining ellipticity. Our temporal discretization is based on the use of preconditioning to produce a highly efficient linearly implicit method with good stability properties. High order for time accurate simulations is obtained through the use of extrapolation or deferred correction procedures. We also discuss our techniques for computing stationary flames. The primary issue here is the automatic generation of initial approximations for the application of Newton's method. We use a novel time-stepping procedure, which allows the dynamic updating of the flame speed and forces the flame front towards a specified location. Numerical experiments are presented, primarily for the stationary flame problem. These illustrate the reliability of our techniques, and the dependence of the results on various code parameters.
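
    The staggered-mesh construction, with scalars at cell centres and fluxes at faces so that the discrete gradient and divergence are negative adjoints of each other, can be sketched in one dimension as follows; the grid, boundary treatment and random fields are illustrative assumptions.

      import numpy as np

      n, L = 64, 1.0
      h = L / n

      def grad(p):
          """Gradient at the n-1 interior faces from cell-centred values."""
          return (p[1:] - p[:-1]) / h

      def div(u):
          """Divergence at cell centres from face values (zero-flux boundary faces)."""
          flux = np.concatenate(([0.0], u, [0.0]))
          return (flux[1:] - flux[:-1]) / h

      # Adjointness check: <div u, p> = -<u, grad p> for zero-flux boundaries
      rng = np.random.default_rng(1)
      p = rng.normal(size=n)
      u = rng.normal(size=n - 1)
      lhs = np.sum(div(u) * p) * h
      rhs = -np.sum(u * grad(p)) * h
      print(f"<div u, p> = {lhs:.6f},  -<u, grad p> = {rhs:.6f}")

      # A diffusion operator built from them annihilates constants, as it should
      lap = lambda q: div(grad(q))
      print("max |Laplacian of constant| =", np.max(np.abs(lap(np.ones(n)))))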

  16. Active Learning for Directed Exploration of Complex Systems

    NASA Technical Reports Server (NTRS)

    Burl, Michael C.; Wang, Esther

    2009-01-01

    Physics-based simulation codes are widely used in science and engineering to model complex systems that would be infeasible to study otherwise. Such codes provide the highest-fidelity representation of system behavior, but are often so slow to run that insight into the system is limited. For example, conducting an exhaustive sweep over a d-dimensional input parameter space with k steps along each dimension requires k^d simulation trials (translating into k^d CPU-days for one of our current simulations). An alternative is directed exploration in which the next simulation trials are cleverly chosen at each step. Given the results of previous trials, supervised learning techniques (SVM, KDE, GP) are applied to build up simplified predictive models of system behavior. These models are then used within an active learning framework to identify the most valuable trials to run next. Several active learning strategies are examined including a recently-proposed information-theoretic approach. Performance is evaluated on a set of thirteen synthetic oracles, which serve as surrogates for the more expensive simulations and enable the experiments to be replicated by other researchers.
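
    A minimal sketch of the directed-exploration loop is shown below, using a cheap analytic function as a stand-in oracle and a Gaussian process queried where its predictive standard deviation is largest; this is plain uncertainty sampling, not the information-theoretic criterion examined in the paper.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def oracle(x):
          """Cheap analytic stand-in for an expensive physics simulation."""
          return np.sin(5 * x) * np.exp(-x)

      candidates = np.linspace(0.0, 2.0, 201).reshape(-1, 1)   # 1D parameter grid
      X = np.array([[0.1], [1.9]])                             # two seed trials
      y = oracle(X).ravel()

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-6)
      for step in range(10):
          gp.fit(X, y)
          mean, std = gp.predict(candidates, return_std=True)
          nxt = candidates[np.argmax(std)]          # query where the model is least certain
          X = np.vstack([X, nxt])
          y = np.append(y, oracle(nxt)[0])

      print(f"ran {len(X)} trials instead of {len(candidates)} exhaustive ones")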

  17. Optimization of lattice surgery is NP-hard

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Nori, Franco; Devitt, Simon J.

    2017-09-01

    The traditional method for computation in either the surface code or in the Raussendorf model is the creation of holes or "defects" within the encoded lattice of qubits that are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work, we focus on the lattice surgery representation, which realizes transversal logic operations without destroying the intrinsic 2D nearest-neighbor properties of the braid-based surface code and achieves universality without defects and braid-based logic. For both techniques there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest resource requirements in terms of physical qubits and computational time, and prove that the complexity of optimizing a quantum circuit in the lattice surgery model is NP-hard.

  18. Fast H.264/AVC FRExt intra coding using belief propagation.

    PubMed

    Milani, Simone

    2011-01-01

    In the H.264/AVC FRExt coder, the coding performance of Intra coding significantly overcomes the previous still image coding standards, like JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase of the computational complexity required by the rate-distortion optimization routine. The paper presents a complexity reduction strategy that aims at reducing the computational load of the Intra coding with a small loss in the compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated adopting a belief-propagation procedure. Experimental results show that the proposed method permits saving up to 60 % of the coding time required by an exhaustive rate-distortion optimization method with a negligible loss in performance. Moreover, it permits an accurate control of the computational complexity unlike other methods where the computational complexity depends upon the coded sequence.

  19. Current and Future Critical Issues in Rocket Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Navaz, Homayun K.; Dix, Jeff C.

    1998-01-01

    The objective of this research was to tackle several problems that are currently of great importance to NASA. In a liquid rocket engine several complex processes take place that are not thoroughly understood. Droplet evaporation, turbulence, finite rate chemistry, instability, and injection/atomization phenomena are some of the critical issues being encountered in a liquid rocket engine environment. Pulse Detonation Engines (PDE) performance, combustion chamber instability analysis, 60K motor flowfield pattern from hydrocarbon fuel combustion, and 3D flowfield analysis for the Combined Cycle engine were of special interest to NASA. During the summer of 1997, we made an attempt to generate computational results for all of the above problems and shed some light on understanding some of the complex physical phenomena. For this purpose, the Liquid Thrust Chamber Performance (LTCP) code, mainly designed for liquid rocket engine applications, was utilized. The following test cases were considered: (1) Characterization of a detonation wave in a Pulse Detonation Tube; (2) 60K Motor wall temperature studies; (3) Propagation of a pressure pulse in a combustion chamber (under single and two-phase flow conditions); (4) Transonic region flowfield analysis affected by viscous effects; (5) Exploring the viscous differences between a smooth and a corrugated wall; and (6) 3D thrust chamber flowfield analysis of the Combined Cycle engine. It was shown that the LTCP-2D and LTCP-3D codes are capable of solving complex and stiff conservation equations for gaseous and droplet phases in a very robust and efficient manner. These codes can be run on a workstation and personal computers (PC's).

  20. Multi-Dimensional Simulation of LWR Fuel Behavior in the BISON Fuel Performance Code

    NASA Astrophysics Data System (ADS)

    Williamson, R. L.; Capps, N. A.; Liu, W.; Rashid, Y. R.; Wirth, B. D.

    2016-11-01

    Nuclear fuel operates in an extreme environment that induces complex multiphysics phenomena occurring over distances ranging from inter-atomic spacing to meters, and time scales ranging from microseconds to years. To simulate this behavior requires a wide variety of material models that are often complex and nonlinear. The recently developed BISON code represents a powerful fuel performance simulation tool based on its material and physical behavior capabilities, finite-element versatility of spatial representation, and use of parallel computing. The code can operate in full three dimensional (3D) mode, as well as in reduced two dimensional (2D) modes, e.g., axisymmetric radial-axial (R-Z) or plane radial-circumferential (R-θ), to suit the application and to allow treatment of global and local effects. A BISON case study was used to illustrate analysis of Pellet Clad Mechanical Interaction failures from manufacturing defects using combined 2D and 3D analyses. The analysis involved commercial fuel rods and demonstrated successful computation of metrics of interest to fuel failures, including cladding peak hoop stress and strain energy density. In comparison with a failure threshold derived from power ramp tests, results corroborate industry analyses of the root cause of the pellet-clad interaction failures and illustrate the importance of modeling 3D local effects around fuel pellet defects, which can produce complex effects including cold spots in the cladding, stress concentrations, and hot spots in the fuel that can lead to enhanced cladding degradation such as hydriding, oxidation, CRUD formation, and stress corrosion cracking.

  1. Multi-Dimensional Simulation of LWR Fuel Behavior in the BISON Fuel Performance Code

    DOE PAGES

    Williamson, R. L.; Capps, N. A.; Liu, W.; ...

    2016-09-27

    Nuclear fuel operates in an extreme environment that induces complex multiphysics phenomena occurring over distances ranging from inter-atomic spacing to meters, and time scales ranging from microseconds to years. To simulate this behavior requires a wide variety of material models that are often complex and nonlinear. The recently developed BISON code represents a powerful fuel performance simulation tool based on its material and physical behavior capabilities, finite-element versatility of spatial representation, and use of parallel computing. The code can operate in full three dimensional (3D) mode, as well as in reduced two dimensional (2D) modes, e.g., axisymmetric radial-axial (R-Z) or plane radial-circumferential (R-θ), to suit the application and to allow treatment of global and local effects. A BISON case study was used in this paper to illustrate analysis of Pellet Clad Mechanical Interaction failures from manufacturing defects using combined 2D and 3D analyses. The analysis involved commercial fuel rods and demonstrated successful computation of metrics of interest to fuel failures, including cladding peak hoop stress and strain energy density. Finally, in comparison with a failure threshold derived from power ramp tests, results corroborate industry analyses of the root cause of the pellet-clad interaction failures and illustrate the importance of modeling 3D local effects around fuel pellet defects, which can produce complex effects including cold spots in the cladding, stress concentrations, and hot spots in the fuel that can lead to enhanced cladding degradation such as hydriding, oxidation, CRUD formation, and stress corrosion cracking.

  2. Breaking the Code: The Creative Use of QR Codes to Market Extension Events

    ERIC Educational Resources Information Center

    Hill, Paul; Mills, Rebecca; Peterson, GaeLynn; Smith, Janet

    2013-01-01

    The use of smartphones has drastically increased in recent years, heralding an explosion in the use of QR codes. The black and white square barcodes that link the physical and digital world are everywhere. These simple codes can provide many opportunities to connect people in the physical world with many of Extension's online resources. The…

  3. A Lossless Multichannel Bio-Signal Compression Based on Low-Complexity Joint Coding Scheme for Portable Medical Devices

    PubMed Central

    Kim, Dong-Sun; Kwon, Jin-San

    2014-01-01

    Research on real-time health systems has received great attention during recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for a low-complexity joint coder. The proposed joint coding decision method determines the optimized joint-coding operation based on the relationship between the cross correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92%, compared to the single channel based bio-signal lossless data compressor. PMID:25237900
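
    A toy sketch of the kind of decision rule described above, assuming (purely for illustration) that a simple correlation threshold stands in for the paper's relationship between residual cross-correlation and compression ratio; the signals, the threshold, and the difference-coding payload are made up:

      import numpy as np

      def joint_coding_decision(res_a, res_b, threshold=0.5):
          """Illustrative rule: if residuals are strongly correlated, joint (difference)
          coding is expected to compress better than independent coding."""
          r = np.corrcoef(res_a, res_b)[0, 1]
          return ("joint", res_a - res_b) if abs(r) >= threshold else ("independent", res_b)

      rng = np.random.default_rng(1)
      ch_a = rng.normal(size=1000)
      ch_b = 0.8 * ch_a + 0.2 * rng.normal(size=1000)   # correlated second channel
      mode, payload = joint_coding_decision(ch_a, ch_b)
      print(mode, payload.std())   # the joint payload has lower variance, hence fewer bits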

  4. Coupling physically based and data-driven models for assessing freshwater inflow into the Small Aral Sea

    NASA Astrophysics Data System (ADS)

    Ayzel, Georgy; Izhitskiy, Alexander

    2018-06-01

    The Aral Sea desiccation and the related changes in regional hydroclimatic conditions have been a hot topic for the past decades. The key problem of scientific research projects devoted to investigating the modern Aral Sea basin hydrological regime is their discontinuous nature - only a limited number of papers take the complex runoff formation system into account in its entirety. Addressing this challenge, we have developed a continuous prediction system for assessing freshwater inflow into the Small Aral Sea based on coupling a stack of hydrological and data-driven models. Results show good prediction skill and demonstrate the possibility of developing a valuable water assessment tool that utilizes the power of both classical physically based and modern machine learning models for territories with a complex water management system and strong water-related data scarcity. The source code and data of the proposed system are available on a GitHub page (https://github.com/SMASHIproject/IWRM2018).
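
    A conceptual sketch of coupling a physically based estimate with a data-driven model, in the spirit of the system described above but not its actual pipeline: a made-up bucket-style runoff term is fed, together with the meteorological forcings, into a machine-learning regressor that predicts inflow. The data, the conceptual_runoff() function, and the model choice are illustrative assumptions.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(2)
      precip, temp = rng.gamma(2.0, 2.0, 365), rng.normal(10, 8, 365)   # synthetic daily forcings

      def conceptual_runoff(p, t):
          """Hypothetical bucket-style runoff: rain minus a temperature-driven loss."""
          return np.maximum(p - 0.05 * np.maximum(t, 0), 0)

      q_phys = conceptual_runoff(precip, temp)
      inflow_obs = 0.7 * q_phys + 0.1 * precip + rng.normal(0, 0.3, 365)  # synthetic "observations"

      # The data-driven layer learns the residual relationship on top of the physical estimate.
      X = np.column_stack([q_phys, precip, temp])
      model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, inflow_obs)
      print("coupled-model R^2 (on training data, for illustration only):", model.score(X, inflow_obs))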

  5. Noncoherent Physical-Layer Network Coding with FSK Modulation: Relay Receiver Design Issues

    DTIC Science & Technology

    2011-03-01

    Published in IEEE Transactions on Communications, vol. 59, no. 9, September 2011, p. 2595. Index terms include noncoherent reception and channel estimation. The record's abstract is truncated; it opens: "In the two-way relay channel (TWRC), a pair of source terminals exchange information..." The remainder of the entry consists of report-documentation-page fields with no further recoverable content.

  6. Coupled Hydrodynamic and Wave Propagation Modeling for the Source Physics Experiment: Study of Rg Wave Sources for SPE and DAG series.

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Delorey, A.; Rougier, E.; Knight, E. E.; Steedman, D. W.; Bradley, C. R.

    2017-12-01

    This presentation reports numerical modeling efforts to improve knowledge of the processes that affect seismic wave generation and propagation from underground explosions, with a focus on Rg waves. The numerical model is based on the coupling of hydrodynamic simulation codes (Abaqus, CASH and HOSS) with a 3D full waveform propagation code, SPECFEM3D. Validation datasets are provided by the Source Physics Experiment (SPE), which is a series of highly instrumented chemical explosions at the Nevada National Security Site with yields from 100kg to 5000kg. A first series of explosions in a granite emplacement has just been completed and a second series in alluvium emplacement is planned for 2018. The long-term goal of this research is to review and improve existing seismic source models (e.g. Mueller & Murphy, 1971; Denny & Johnson, 1991) by means of first-principles calculations provided by the coupled-code capability. The hydrodynamic codes, Abaqus, CASH and HOSS, model the shocked, hydrodynamic region via equations of state for the explosive, borehole stemming and jointed/weathered granite. A new material model for unconsolidated alluvium materials has been developed and validated with past nuclear explosions, including the 10 kT 1965 Merlin event (Perret, 1971; Perret and Bass, 1975). We use the efficient Spectral Element Method code, SPECFEM3D (e.g. Komatitsch, 1998; 2002), and Geologic Framework Models to model the evolution of the wavefield as it propagates across 3D complex structures. The coupling interface is a series of grid points of the SEM mesh situated at the edge of the hydrodynamic code domain. We will present validation tests and waveforms modeled for several SPE tests which provide evidence that the damage processes happening in the vicinity of the explosions create secondary seismic sources. These sources interfere with the original explosion moment and reduce the apparent seismic moment at the origin of Rg waves by up to 20%.

  7. Assessment of the prevailing physics codes: LEOPARD, LASER, and EPRI-CELL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lan, J.S.

    1981-01-01

    In order to analyze core performance and fuel management, it is necessary to verify reactor physics codes in great detail. This kind of work not only serves the purpose of understanding and controlling the characteristics of each code, but also ensures the reliability as codes continually change due to constant modifications and machine transfers. This paper will present the results of a comprehensive verification of three code packages - LEOPARD, LASER, and EPRI-CELL.

  8. Low complexity Reed-Solomon-based low-density parity-check design for software defined optical transmission system based on adaptive puncturing decoding algorithm

    NASA Astrophysics Data System (ADS)

    Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua

    2016-08-01

    We propose and demonstrate a low complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Parts of the received code and the corresponding columns of the parity-check matrix can be punctured, via an adaptive parity-check matrix, to reduce the calculation complexity during the decoding process. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.
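
    A minimal sketch of what puncturing a parity-check matrix means in practice: columns corresponding to symbols excluded from decoding are dropped so the decoder works on a smaller matrix. The toy matrix and the choice of punctured positions below are illustrative; the actual adaptive rule and RS-LDPC construction are those described in the paper.

      import numpy as np

      def puncture(H, punctured_cols):
          """Drop the columns of H for punctured symbol positions; return the reduced
          matrix and the indices of the symbols that remain in the decoding problem."""
          keep = np.setdiff1d(np.arange(H.shape[1]), punctured_cols)
          return H[:, keep], keep

      H = (np.random.default_rng(3).random((4, 12)) < 0.3).astype(int)   # toy sparse parity-check matrix
      H_reduced, kept = puncture(H, punctured_cols=[1, 5, 9])
      print(H.shape, "->", H_reduced.shape)   # fewer columns -> fewer check-node updates per iteration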

  9. Concatenated coding for low date rate space communications.

    NASA Technical Reports Server (NTRS)

    Chen, C. H.

    1972-01-01

    In deep space communications with distant planets, the data rate as well as the operating SNR may be very low. To keep the error rate at a correspondingly low level, it is necessary to use a sophisticated coding system (a longer code) without excessive decoding complexity. Concatenated coding has been shown to meet such requirements in that the error rate decreases exponentially with the overall length of the code while the decoder complexity increases only algebraically. Three methods of concatenating an inner code with an outer code are considered. A performance comparison of the three concatenated codes is made.
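
    As a toy illustration of the concatenation principle (not the specific inner/outer codes considered in the study), the sketch below wraps a Hamming(7,4) outer code around a rate-1/3 repetition inner code and sends the result over a binary symmetric channel; all parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      H = np.array([[(j >> i) & 1 for j in range(1, 8)] for i in range(3)])  # 3x7 Hamming parity checks
      DATA_POS, PARITY_POS = [2, 4, 5, 6], [0, 1, 3]      # 1-based positions 3,5,6,7 carry data; 1,2,4 carry parity

      def outer_encode(d):
          c = np.zeros(7, dtype=int)
          c[DATA_POS] = d
          for p in PARITY_POS:                            # set each parity bit so its check sums to zero
              c[p] = (H[int(np.log2(p + 1))] @ c) % 2
          return c

      def outer_decode(r):
          s = (H @ r) % 2
          pos = s[0] + 2 * s[1] + 4 * s[2]                # syndrome = 1-based position of a single error
          if pos:
              r = r.copy(); r[pos - 1] ^= 1
          return r[DATA_POS]

      def inner_encode(c):  return np.repeat(c, 3)                              # rate-1/3 repetition
      def inner_decode(x):  return (x.reshape(-1, 3).sum(1) > 1).astype(int)    # majority vote

      def bsc(x, p):        return x ^ (rng.random(x.size) < p)                 # binary symmetric channel

      errors, trials, p_flip = 0, 2000, 0.05
      for _ in range(trials):
          d = rng.integers(0, 2, 4)
          received = inner_decode(bsc(inner_encode(outer_encode(d)), p_flip))
          errors += np.any(outer_decode(received) != d)
      print("block error rate with concatenation:", errors / trials)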

  10. Some partial-unit-memory convolutional codes

    NASA Technical Reports Server (NTRS)

    Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.

    1991-01-01

    The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well-developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes is compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.

  11. Multi-threading performance of Geant4, MCNP6, and PHITS Monte Carlo codes for tetrahedral-mesh geometry.

    PubMed

    Han, Min Cheol; Yeom, Yeon Soo; Lee, Hyun Su; Shin, Bangho; Kim, Chan Hyeong; Furuta, Takuya

    2018-05-04

    In this study, the multi-threading performance of the Geant4, MCNP6, and PHITS codes was evaluated as a function of the number of threads (N) and the complexity of the tetrahedral-mesh phantom. For this, three tetrahedral-mesh phantoms of varying complexity (simple, moderately complex, and highly complex) were prepared and implemented in the three different Monte Carlo codes, in photon and neutron transport simulations. Subsequently, for each case, the initialization time, calculation time, and memory usage were measured as a function of the number of threads used in the simulation. It was found that for all codes, the initialization time significantly increased with the complexity of the phantom, but not with the number of threads. Geant4 exhibited a much longer initialization time than the other codes, especially for the complex phantom (MRCP). The improvement of computation speed due to the use of a multi-threaded code was calculated as the speed-up factor, the ratio of the computation speed on a multi-threaded code to the computation speed on a single-threaded code. Geant4 showed the best multi-threading performance among the codes considered in this study, with the speed-up factor almost linearly increasing with the number of threads, reaching ~30 when N = 40. PHITS and MCNP6 showed a much smaller increase of the speed-up factor with the number of threads. For PHITS, the speed-up factors were low when N = 40. For MCNP6, the increase of the speed-up factors was better, but they were still less than ~10 when N = 40. As for memory usage, Geant4 was found to use more memory than the other codes. In addition, compared to that of the other codes, the memory usage of Geant4 more rapidly increased with the number of threads, reaching as high as ~74 GB when N = 40 for the complex phantom (MRCP). It is notable that compared to that of the other codes, the memory usage of PHITS was much lower, regardless of both the complexity of the phantom and the number of threads, hardly increasing with the number of threads for the MRCP.
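
    A small worked example of the speed-up metric defined above, using made-up timings rather than the measured values reported in the paper: the speed-up factor is simply the single-threaded time divided by the multi-threaded time for the same workload.

      # Hypothetical timings in seconds for the same simulation workload.
      single_thread_time = 3600.0
      multi_thread_times = {1: 3600.0, 8: 520.0, 40: 130.0}

      for n, t in multi_thread_times.items():
          speed_up = single_thread_time / t            # ratio of computation speeds
          print(f"N = {n:2d} threads: speed-up factor = {speed_up:.1f}")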

  12. Infrastructure for Multiphysics Software Integration in High Performance Computing-Aided Science and Engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Michael T.; Safdari, Masoud; Kress, Jessica E.

    The project described in this report constructed and exercised an innovative multiphysics coupling toolkit called the Illinois Rocstar MultiPhysics Application Coupling Toolkit (IMPACT). IMPACT is an open source, flexible, natively parallel infrastructure for coupling multiple uniphysics simulation codes into multiphysics computational systems. IMPACT works with codes written in several high-performance-computing (HPC) programming languages, and is designed from the beginning for HPC multiphysics code development. It is designed to be minimally invasive to the individual physics codes being integrated, and has few requirements on those physics codes for integration. The goal of IMPACT is to provide the support needed to enable coupling existing tools together in unique and innovative ways to produce powerful new multiphysics technologies without extensive modification and rewrite of the physics packages being integrated. There are three major outcomes from this project: 1) construction, testing, application, and open-source release of the IMPACT infrastructure, 2) production of example open-source multiphysics tools using IMPACT, and 3) identification and engagement of interested organizations in the tools and applications resulting from the project. This last outcome represents the incipient development of a user community and application ecosystem being built using IMPACT. Multiphysics coupling standardization can only come from organizations working together to define needs and processes that span the space of necessary multiphysics outcomes, which Illinois Rocstar plans to continue driving toward. The IMPACT system, including source code, documentation, and test problems, is now available through the public GitHub system to anyone interested in multiphysics code coupling. Many of the basic documents explaining the use and architecture of IMPACT are also attached as appendices to this document. Online HTML documentation is available through the GitHub site. There are over 100 unit tests provided that run through the Illinois Rocstar Application Development (IRAD) lightweight testing infrastructure that is also supplied along with IMPACT. The package as a whole provides an excellent base for developing high-quality multiphysics applications using modern software development practices. To facilitate understanding of how to utilize IMPACT effectively, two multiphysics systems have been developed and are available open-source through GitHub. The simpler of the two systems, named ElmerFoamFSI in the repository, is a multiphysics, fluid-structure-interaction (FSI) coupling of the solid mechanics package Elmer with a fluid dynamics module from OpenFOAM. This coupling illustrates how to take software packages that are unrelated by either author or architecture and combine them into a robust, parallel multiphysics system. A more complex multiphysics tool is the Illinois Rocstar Rocstar Multiphysics code that was rebuilt during the project around IMPACT. Rocstar Multiphysics was already an HPC multiphysics tool, but now that it has been rearchitected around IMPACT, it can be readily expanded to capture new and different physics in the future. In fact, during this project, the Elmer and OpenFOAM tools were also coupled into Rocstar Multiphysics and demonstrated. The full Rocstar Multiphysics codebase is also available on GitHub, and licensed for any organization to use as they wish. Finally, the new IMPACT product is already being used in several multiphysics code coupling projects for the Air Force, NASA and the Missile Defense Agency, and initial work on expansion of the IMPACT-enabled Rocstar Multiphysics has begun in support of a commercial company. These initiatives promise to expand the interest and reach of IMPACT and Rocstar Multiphysics, ultimately leading to the envisioned standardization and consortium of users that was one of the goals of this project.

  13. Numerical uncertainty in computational engineering and physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemez, Francois M

    2009-01-01

    Obtaining a solution that approximates ordinary or partial differential equations on a computational mesh or grid does not necessarily mean that the solution is accurate or even 'correct'. Unfortunately, assessing the quality of discrete solutions by questioning the role played by spatial and temporal discretizations generally comes as a distant third to test-analysis comparison and model calibration. This publication aims to raise awareness of the fact that discrete solutions introduce numerical uncertainty. This uncertainty may, in some cases, overwhelm in complexity and magnitude other sources of uncertainty that include experimental variability, parametric uncertainty and modeling assumptions. The concepts of consistency, convergence and truncation error are overviewed to explain the articulation between the exact solution of continuous equations, the solution of modified equations and discrete solutions computed by a code. The current state-of-the-practice of code and solution verification activities is discussed. An example in the discipline of hydro-dynamics illustrates the significant effect that meshing can have on the quality of code predictions. A simple method is proposed to derive bounds of solution uncertainty in cases where the exact solution of the continuous equations, or its modified equations, is unknown. It is argued that numerical uncertainty originating from mesh discretization should always be quantified and accounted for in the overall uncertainty 'budget' that supports decision-making for applications in computational physics and engineering.
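
    One generic way to bound discretization error when the exact solution is unknown is a Richardson-extrapolation-style estimate from solutions on three systematically refined grids; it is shown here only as an illustration and is not necessarily the method proposed in the report. All mesh sizes and solution values below are hypothetical.

      import math

      h = [0.04, 0.02, 0.01]           # hypothetical mesh sizes, refinement ratio r = 2
      f = [1.0520, 1.0132, 1.0034]     # hypothetical computed values of a scalar output

      r = h[0] / h[1]
      p_obs = math.log((f[0] - f[1]) / (f[1] - f[2])) / math.log(r)   # observed order of convergence
      f_exact_est = f[2] + (f[2] - f[1]) / (r**p_obs - 1)             # extrapolated "grid-converged" value
      error_bound = abs(f_exact_est - f[2])                           # rough error estimate on finest grid

      print(f"observed order ~ {p_obs:.2f}, discretization error on finest grid ~ {error_bound:.2e}")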

  14. An Interactive and Comprehensive Working Environment for High-Energy Physics Software with Python and Jupyter Notebooks

    NASA Astrophysics Data System (ADS)

    Braun, N.; Hauth, T.; Pulvermacher, C.; Ritter, M.

    2017-10-01

    Today’s analyses for high-energy physics (HEP) experiments involve processing a large amount of data with highly specialized algorithms. The contemporary workflow from recorded data to final results is based on the execution of small scripts - often written in Python or ROOT macros which call complex compiled algorithms in the background - to perform fitting procedures and generate plots. During recent years, interactive programming environments such as Jupyter have become popular. Jupyter allows the development of Python-based applications, so-called notebooks, which bundle code, documentation and results, e.g. plots. Advantages over classical script-based approaches are the ability to recompute only parts of the analysis code, which allows for fast and iterative development, and a web-based user frontend, which can be hosted centrally and only requires a browser on the user side. In our novel approach, Python and Jupyter are tightly integrated into the Belle II Analysis Software Framework (basf2), currently being developed for the Belle II experiment in Japan. This makes it possible to develop code in Jupyter notebooks for every aspect of the event simulation, reconstruction and analysis chain. These interactive notebooks can be hosted as a centralized web service via jupyterhub with docker and used by all scientists of the Belle II Collaboration. Because of its generality and encapsulation, the setup can easily be scaled to large installations.

  15. Dissemination and support of ARGUS for accelerator applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The ARGUS code is a three-dimensional code system for simulating interactions between charged particles, electric and magnetic fields, and complex structures. It is a system of modules that share common utilities for grid and structure input, data handling, memory management, diagnostics, and other specialized functions. The code includes the fields due to the space charge and current density of the particles to achieve a self-consistent treatment of the particle dynamics. The physics modules in ARGUS include three-dimensional field solvers for electrostatics and electromagnetics, a three-dimensional electromagnetic frequency-domain module, a full particle-in-cell (PIC) simulation module, and a steady-state PIC model. These are described in the Appendix to this report. This project has a primary mission of developing the capabilities of ARGUS in accelerator modeling for release to the accelerator design community. Five major activities are being pursued in parallel during the first year of the project: to improve the code and/or add new modules that provide capabilities needed for accelerator design; to produce a User's Guide that documents the use of the code for all users; to release the code and the User's Guide to accelerator laboratories for their own use, and to obtain feedback from them; to build an interactive user interface for setting up ARGUS calculations; and to explore the use of ARGUS on high-power workstation platforms.

  16. Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.

    PubMed

    Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio

    2015-01-27

    Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the inter-play of the different theories.

  17. Coding for Language Complexity: The Interplay among Methodological Commitments, Tools, and Workflow in Writing Research

    ERIC Educational Resources Information Center

    Geisler, Cheryl

    2018-01-01

    Coding, the analytic task of assigning codes to nonnumeric data, is foundational to writing research. A rich discussion of methodological pluralism has established the foundational importance of systematicity in the task of coding, but less attention has been paid to the equally important commitment to language complexity. Addressing the interplay…

  18. NSR&D FY17 Report: CartaBlanca Capability Enhancements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Christopher Curtis; Dhakal, Tilak Raj; Zhang, Duan Zhong

    Over the last several years, the particle technology in the CartaBlanca code has matured and has been successfully applied to a wide variety of physical problems. It has been shown that the particle methods, especially Los Alamos's dual domain material point method, are capable of computing many problems involving complex physics and chemistry accompanied by large material deformations, where traditional finite element or Eulerian methods encounter significant difficulties. In FY17, the CartaBlanca code was enhanced with new physical models and numerical algorithms. We started out to compute penetration and HE safety problems. Most of the year we focused on improving the TEPLA model and testing it against the sweeping wave experiment by Gray et al., because it was found that pore growth and material failure are essential for our tasks and needed to be understood in order to model the penetration and the can experiments efficiently. We extended the TEPLA model from the point of view of ensemble phase averaging to include the effects of finite deformation. It is shown that the assumed pore growth model in TEPLA is actually an exact result from the theory. Along this line, we then generalized the model to include finite deformations in order to consider the nonlinear dynamics of large deformation. The interaction between the HE product gas and the solid metal is based on the multi-velocity formulation. Our preliminary numerical results suggest good agreement between the experiment and the numerical results, pending further verification. To improve the parallel processing capabilities of the CartaBlanca code, we are actively working with the Next Generation Code (NGC) project to rewrite selected packages using C++. This work is expected to continue in the following years. This effort also makes the particle technology developed within the CartaBlanca project available to other parts of the laboratory. Working with the NGC project and rewriting some parts of the code has also given us an opportunity to improve our numerical implementations of the method and to take advantage of recent advances in numerical methods, such as multiscale algorithms.

  19. Diet and Physical Activity Intervention Strategies for College Students

    PubMed Central

    Martinez, Yannica Theda S.; Harmon, Brook E.; Bantum, Erin O.; Strayhorn, Shaila

    2016-01-01

    Objectives To understand perceived barriers of a diverse sample of college students and their suggestions for interventions aimed at healthy eating, cooking, and physical activity. Methods Forty students (33% Asian American, 30% mixed ethnicity) were recruited. Six focus groups were audio-recorded, transcribed, and coded. Coding began with a priori codes, but allowed for additional codes to emerge. Analysis of questionnaires on participants’ dietary and physical activity practices and behaviors provided context for qualitative findings. Results Barriers included time, cost, facility quality, and intimidation. Tailoring towards a college student’s lifestyle, inclusion of hands-on skill building, and online support and resources were suggested strategies. Conclusions Findings provide direction for diet and physical activity interventions and policies aimed at college students. PMID:28480225

  20. ALICE: A non-LTE plasma atomic physics, kinetics and lineshape package

    NASA Astrophysics Data System (ADS)

    Hill, E. G.; Pérez-Callejo, G.; Rose, S. J.

    2018-03-01

    All three parts of an atomic physics, atomic kinetics and lineshape code, ALICE, are described. Examples of the code being used to model the emissivity and opacity of plasmas are discussed and interesting features of the code which build on the existing corpus of models are shown throughout.

  1. PopCORN: Hunting down the differences between binary population synthesis codes

    NASA Astrophysics Data System (ADS)

    Toonen, S.; Claeys, J. S. W.; Mennekens, N.; Ruiter, A. J.

    2014-02-01

    Context. Binary population synthesis (BPS) modelling is a very effective tool to study the evolution and properties of various types of close binary systems. The uncertainty in the parameters of the model and their effect on a population can be tested in a statistical way, which then leads to a deeper understanding of the underlying (sometimes poorly understood) physical processes involved. Several BPS codes exist that have been developed with different philosophies and aims. Although BPS has been very successful for studies of many populations of binary stars, in the particular case of the study of the progenitors of supernovae Type Ia, the predicted rates and ZAMS progenitors vary substantially between different BPS codes. Aims: To understand the predictive power of BPS codes, we study the similarities and differences in the predictions of four different BPS codes for low- and intermediate-mass binaries. We investigate the differences in the characteristics of the predicted populations, and whether they are caused by different assumptions made in the BPS codes or by numerical effects, e.g. a lack of accuracy in BPS codes. Methods: We compare a large number of evolutionary sequences for binary stars, starting with the same initial conditions following the evolution until the first (and when applicable, the second) white dwarf (WD) is formed. To simplify the complex problem of comparing BPS codes that are based on many (often different) assumptions, we equalise the assumptions as much as possible to examine the inherent differences of the four BPS codes. Results: We find that the simulated populations are similar between the codes. Regarding the population of binaries with one WD, there is very good agreement between the physical characteristics, the evolutionary channels that lead to the birth of these systems, and their birthrates. Regarding the double WD population, there is a good agreement on which evolutionary channels exist to create double WDs and a rough agreement on the characteristics of the double WD population. Regarding which progenitor systems lead to a single and double WD system and which systems do not, the four codes agree well. Most importantly, we find that for these two populations, the differences in the predictions from the four codes are not due to numerical differences, but because of different inherent assumptions. We identify critical assumptions for BPS studies that need to be studied in more detail. Appendices are available in electronic form at http://www.aanda.org

  2. Comparison of PASCAL and FORTRAN for solving problems in the physical sciences

    NASA Technical Reports Server (NTRS)

    Watson, V. R.

    1981-01-01

    The paper compares PASCAL and FORTRAN for problem solving in the physical sciences, prompted by requests NASA has received to make PASCAL available on the Numerical Aerodynamic Simulator (scheduled to be operational in 1986). PASCAL disadvantages include the lack of scientific utility procedures equivalent to the IBM scientific subroutine package or the IMSL package, which are available in FORTRAN. Advantages include well-organized code that is easy to read and maintain, range checking to prevent errors, and a broad selection of data types. It is concluded that FORTRAN may be the better language, although ADA (patterned after PASCAL) may surpass FORTRAN due to its ability to add complex and vector math and to specify the precision and range of variables.

  3. High accuracy mantle convection simulation through modern numerical methods - II: realistic models and problems

    NASA Astrophysics Data System (ADS)

    Heister, Timo; Dannberg, Juliane; Gassmöller, Rene; Bangerth, Wolfgang

    2017-08-01

    Computations have helped elucidate the dynamics of Earth's mantle for several decades already. The numerical methods that underlie these simulations have greatly evolved within this time span, and today include dynamically changing and adaptively refined meshes, sophisticated and efficient solvers, and parallelization to large clusters of computers. At the same time, many of the methods - discussed in detail in a previous paper in this series - were developed and tested primarily using model problems that lack many of the complexities that are common to the realistic models our community wants to solve today. With several years of experience solving complex and realistic models, we here revisit some of the algorithm designs of the earlier paper and discuss the incorporation of more complex physics. In particular, we re-consider time stepping and mesh refinement algorithms, evaluate approaches to incorporate compressibility, and discuss dealing with strongly varying material coefficients, latent heat, and how to track chemical compositions and heterogeneities. Taken together and implemented in a high-performance, massively parallel code, the techniques discussed in this paper then allow for high resolution, 3-D, compressible, global mantle convection simulations with phase transitions, strongly temperature dependent viscosity and realistic material properties based on mineral physics data.

  4. When Homoplasy Is Not Homoplasy: Dissecting Trait Evolution by Contrasting Composite and Reductive Coding.

    PubMed

    Torres-Montúfar, Alejandro; Borsch, Thomas; Ochoterena, Helga

    2018-05-01

    The conceptualization and coding of characters is a difficult issue in phylogenetic systematics, no matter which inference method is used when reconstructing phylogenetic trees or if the characters are just mapped onto a specific tree. Complex characters are groups of features that can be divided into simpler hierarchical characters (reductive coding), although the implied hierarchical relational information may change depending on the type of coding (composite vs. reductive). Up to now, there is no common agreement to either code characters as complex or simple. Phylogeneticists have discussed which coding method is best but have not incorporated the heuristic process of reciprocal illumination to evaluate the coding. Composite coding allows to test whether 1) several characters were linked resulting in a structure described as a complex character or trait or 2) independently evolving characters resulted in the configuration incorrectly interpreted as a complex character. We propose that complex characters or character states should be decomposed iteratively into simpler characters when the original homology hypothesis is not corroborated by a phylogenetic analysis, and the character or character state is retrieved as homoplastic. We tested this approach using the case of fruit types within subfamily Cinchonoideae (Rubiaceae). The iterative reductive coding of characters associated with drupes allowed us to unthread fruit evolution within Cinchonoideae. Our results show that drupes and berries are not homologous. As a consequence, a more precise ontology for the Cinchonoideae drupes is required.

  5. Hybrid and concatenated coding applications.

    NASA Technical Reports Server (NTRS)

    Hofman, L. B.; Odenwalder, J. P.

    1972-01-01

    Results of a study to evaluate the performance and implementation complexity of a concatenated and a hybrid coding system for moderate-speed deep-space applications. It is shown that with a total complexity of less than three times that of the basic Viterbi decoder, concatenated coding improves a constraint length 8 rate 1/3 Viterbi decoding system by 1.1 and 2.6 dB at bit error probabilities of 0.0001 and one hundred millionth, respectively. With a somewhat greater total complexity, the hybrid coding system is shown to obtain a 0.9-dB computational performance improvement over the basic rate 1/3 sequential decoding system. Although substantial, these complexities are much less than those required to achieve the same performances with more complex Viterbi or sequential decoder systems.

  6. Optimal Codes for the Burst Erasure Channel

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure protection. As can be seen, the simple interleaved RS codes have substantially lower inefficiency over a wide range of transmission lengths.
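
    A toy sketch of the block-interleaving idea described above, using a single-parity-check (SPC) code as the MDS constituent: each codeword can recover one erasure, so interleaving to depth I lets the stream absorb a contiguous burst of up to I erased symbols. The code length, interleaving depth, and burst position below are illustrative, not the designs evaluated in the article.

      import numpy as np

      rng = np.random.default_rng(5)
      n, I = 8, 16                                   # SPC(n, n-1) code, interleave depth I
      data = rng.integers(0, 2, size=(I, n - 1))
      codewords = np.hstack([data, data.sum(1, keepdims=True) % 2])   # append parity bit per codeword

      stream = codewords.T.reshape(-1)               # column-wise interleaving, total length I*n
      erased = np.zeros(stream.size, dtype=bool)
      erased[37:37 + I] = True                       # a burst of I consecutive erasures

      rx = stream.copy()
      rx[erased] = 0                                 # erased symbols are unknown at the receiver
      cw_rx  = rx.reshape(n, I).T                    # de-interleave (view onto rx)
      cw_err = erased.reshape(n, I).T
      for cw, col in zip(*np.nonzero(cw_err)):       # at most one erasure per codeword
          others = np.arange(n) != col
          cw_rx[cw, col] = cw_rx[cw, others].sum() % 2   # the parity check restores the erasure

      print("burst recovered:", np.array_equal(rx, stream))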

  7. Unobtrusive Software and System Health Management with R2U2 on a Parallel MIMD Coprocessor

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Moosbrugger, Patrick

    2017-01-01

    Dynamic monitoring of software and system health of a complex cyber-physical system requires observers that continuously monitor variables of the embedded software in order to detect anomalies and reason about root causes. There exists a variety of techniques for code instrumentation, but instrumentation might change runtime behavior and could require costly software re-certification. In this paper, we present R2U2E, a novel realization of our real-time, Realizable, Responsive, and Unobtrusive Unit (R2U2). The R2U2E observers are executed in parallel on a dedicated 16-core EPIPHANY co-processor, thereby avoiding additional computational overhead to the system under observation. A DMA-based shared memory access architecture allows R2U2E to operate without any code instrumentation or program interference.

  8. Systems science and obesity policy: a novel framework for analyzing and rethinking population-level planning.

    PubMed

    Johnston, Lee M; Matteson, Carrie L; Finegood, Diane T

    2014-07-01

    We demonstrate the use of a systems-based framework to assess solutions to complex health problems such as obesity. We coded 12 documents published between 2004 and 2013 aimed at influencing obesity planning for complex systems design (9 reports from US and Canadian governmental or health authorities, 1 Cochrane review, and 2 Institute of Medicine reports). We sorted data using the intervention-level framework (ILF), a novel solutions-oriented approach to complex problems. An in-depth comparison of 3 documents provides further insight into complexity and systems design in obesity policy. The majority of strategies focused mainly on changing the determinants of energy imbalance (food intake and physical activity). ILF analysis brings to the surface actions aimed at higher levels of system function and points to a need for more innovative policy design. Although many policymakers acknowledge obesity as a complex problem, many strategies stem from the paradigm of individual choice and are limited in scope. The ILF provides a template to encourage natural systems thinking and more strategic policy design grounded in complexity science.

  9. Enabling large-scale viscoelastic calculations via neural network acceleration

    NASA Astrophysics Data System (ADS)

    Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.

    2017-12-01

    One of the most significant challenges involved in efforts to understand the effects of repeated earthquake cycle activity are the computational costs of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have been previously possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
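
    A conceptual sketch of the surrogate idea on a toy problem (the response function, parameter ranges, and network size are made up and bear no relation to the authors' viscoelastic models): a small neural network is fit to samples of an "expensive" function so that later evaluations become cheap.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(6)

      def expensive_response(params):
          """Stand-in for a costly viscoelastic calculation: params = (eta, mu, t)."""
          eta, mu, t = params.T
          return np.exp(-mu / eta * t) + 0.3 * np.exp(-0.1 * mu / eta * t)

      X_train = rng.uniform([1.0, 1.0, 0.0], [10.0, 10.0, 5.0], size=(5000, 3))
      y_train = expensive_response(X_train)

      surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                               random_state=0).fit(X_train, y_train)

      X_test = rng.uniform([1.0, 1.0, 0.0], [10.0, 10.0, 5.0], size=(1000, 3))
      err = np.abs(surrogate.predict(X_test) - expensive_response(X_test)).mean()
      print("mean absolute surrogate error:", err)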

  10. Dietary assessment of British police force employees: a description of diet record coding procedures and cross-sectional evaluation of dietary energy intake reporting (The Airwave Health Monitoring Study)

    PubMed Central

    Gibson, Rachel; Eriksen, Rebeca; Lamb, Kathryn; McMeel, Yvonne; Vergnaud, Anne-Claire; Spear, Jeanette; Aresu, Maria; Chan, Queenie; Elliott, Paul; Frost, Gary

    2017-01-01

    Objectives Dietary intake is a key aspect of occupational health. To capture the characteristics of dietary behaviour that is affected by occupational environment that may affect disease risk, a collection of prospective multiday dietary records is required. The aims of this paper are to: (1) collect multiday dietary data in the Airwave Health Monitoring Study, (2) describe the dietary coding procedures applied and (3) investigate the plausibility of dietary reporting in this occupational cohort. Design A dietary coding protocol for this large-scale study was developed to minimise coding error rate. Participants (n = 4412) who completed 7-day food records were included for cross-sectional analyses. Energy intake (EI) misreporting was estimated using the Goldberg method. Multivariate logistic regression models were applied to determine participant characteristics associated with EI misreporting. Setting British police force employees enrolled (2007–2012) into the Airwave Health Monitoring Study. Results The mean code error rate per food diary was 3.7% (SD 3.2%). The strongest predictors of EI under-reporting were body mass index (BMI) and physical activity. Compared with participants with BMI<25 kg/m2, those with BMI>30 kg/m2 had increased odds of being classified as under-reporting EI (men OR 5.20 95% CI 3.92 to 6.89; women OR 2.66 95% CI 1.85 to 3.83). Men and women in the highest physical activity category compared with the lowest were also more likely to be classified as under-reporting (men OR 3.33 95% CI 2.46 to 4.50; women OR 4.34 95% CI 2.91 to 6.55). Conclusions A reproducible dietary record coding procedure has been developed to minimise coding error in complex 7-day diet diaries. The prevalence of EI under-reporting is comparable with existing national UK cohorts and, in agreement with previous studies, classification of under-reporting was biased towards specific subgroups of participants. PMID:28377391
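
    A heavily simplified sketch of a Goldberg-type plausibility check (the 1.1 cut-off below is a purely illustrative placeholder, not the study's calculated confidence limits, which depend on sample size, physical activity level and the variances involved): reported energy intake is compared with estimated basal metabolic rate, and low ratios flag probable under-reporting.

      def flag_under_reporting(reported_ei_kcal, bmr_kcal, lower_cutoff=1.1):
          """Return (flag, ratio): flag is True when EI:BMR falls below the cut-off."""
          ratio = reported_ei_kcal / bmr_kcal
          return ratio < lower_cutoff, ratio

      flag, ratio = flag_under_reporting(reported_ei_kcal=1500, bmr_kcal=1600)
      print(f"EI:BMR = {ratio:.2f}, probable under-reporter: {flag}")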

  11. Dependency graph for code analysis on emerging architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shashkov, Mikhail Jurievich; Lipnikov, Konstantin

    A directed acyclic dependency graph (DAG) is becoming the standard for modern multi-physics codes. The ideal DAG is the true block-scheme of a multi-physics code. Therefore, it is a convenient object for in situ analysis of the cost of computations and of algorithmic bottlenecks related to statistically frequent data motion and the dynamical machine state.
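
    A minimal sketch of such a dependency DAG for in situ cost analysis (package names, dependencies, and costs are hypothetical): nodes are physics packages, edges are data dependencies, and a topological sort exposes a valid execution order over which per-cycle costs can be aggregated.

      from graphlib import TopologicalSorter

      edges = {                       # package -> set of packages it depends on
          "hydro":      set(),
          "eos":        {"hydro"},
          "radiation":  {"hydro", "eos"},
          "output":     {"radiation"},
      }
      cost = {"hydro": 10.0, "eos": 2.0, "radiation": 25.0, "output": 1.0}  # CPU-s per cycle, made up

      order = list(TopologicalSorter(edges).static_order())
      print("execution order:", order)
      print("total per-cycle cost:", sum(cost[n] for n in order), "CPU-s")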

  12. Wind-US Users Guide Version 3.0

    NASA Technical Reports Server (NTRS)

    Yoder, Dennis A.

    2016-01-01

    Wind-US is a computational platform which may be used to numerically solve various sets of equations governing physical phenomena. Currently, the code supports the solution of the Euler and Navier-Stokes equations of fluid mechanics, along with supporting equation sets governing turbulent and chemically reacting flows. Wind-US is a product of the NPARC Alliance, a partnership between the NASA Glenn Research Center (GRC) and the Arnold Engineering Development Complex (AEDC) dedicated to the establishment of a national, applications-oriented flow simulation capability. The Boeing Company has also been closely associated with the Alliance since its inception, and represents the interests of the NPARC User's Association. The "Wind-US User's Guide" describes the operation and use of Wind-US, including: a basic tutorial; the physical and numerical models that are used; the boundary conditions; monitoring convergence; the files that are read and/or written; parallel execution; and a complete list of input keywords and test options. For current information about Wind-US and the NPARC Alliance, please see the Wind-US home page at http://www.grc.nasa.gov/WWW/winddocs/ and the NPARC Alliance home page at http://www.grc.nasa.gov/WWW/wind/. This manual describes the operation and use of Wind-US, a computational platform which may be used to numerically solve various sets of equations governing physical phenomena. Wind-US represents a merger of the capabilities of four CFD codes - NASTD (a structured grid flow solver developed at McDonnell Douglas, now part of Boeing), NPARC (the original NPARC Alliance structured grid flow solver), NXAIR (an AEDC structured grid code used primarily for store separation analysis), and ICAT (an unstructured grid flow solver developed at the Rockwell Science Center and Boeing).

  13. A software framework for pipelined arithmetic algorithms in field programmable gate arrays

    NASA Astrophysics Data System (ADS)

    Kim, J. B.; Won, E.

    2018-03-01

    Pipelined algorithms implemented in field programmable gate arrays are extensively used for hardware triggers in the modern experimental high energy physics field and the complexity of such algorithms increases rapidly. For development of such hardware triggers, algorithms are developed in C++, ported to hardware description language for synthesizing firmware, and then ported back to C++ for simulating the firmware response down to the single bit level. We present a C++ software framework which automatically simulates and generates hardware description language code for pipelined arithmetic algorithms.

  14. Recent improvements of reactor physics codes in MHI

    NASA Astrophysics Data System (ADS)

    Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki

    2015-12-01

    This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representative model for the core simulator have been developed so that the system can be applied to a wider range of core conditions corresponding to severe accident status, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide calculation applicability with good accuracy for any core condition as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.

  15. Recent improvements of reactor physics codes in MHI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosaka, Shinya, E-mail: shinya-kosaka@mhi.co.jp; Yamaji, Kazuya; Kirimura, Kazuki

    2015-12-31

    This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO’s Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representative model for the core simulator have been developed so that the system can be applied to a wider range of core conditions corresponding to severe accident status, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide calculation applicability with good accuracy for any core condition as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.

  16. Inter-view prediction of intra mode decision for high-efficiency video coding-based multiview video coding

    NASA Astrophysics Data System (ADS)

    da Silva, Thaísa Leal; Agostini, Luciano Volcan; da Silva Cruz, Luis A.

    2014-05-01

    Intra prediction is a very important tool in current video coding standards. High-efficiency video coding (HEVC) intra prediction presents relevant gains in encoding efficiency when compared to previous standards, but with a very important increase in the computational complexity since 33 directional angular modes must be evaluated. Motivated by this high complexity, this article presents a complexity reduction algorithm developed to reduce the HEVC intra mode decision complexity targeting multiview videos. The proposed algorithm presents an efficient fast intra prediction compliant with singleview and multiview video encoding. This fast solution defines a reduced subset of intra directions according to the video texture and it exploits the relationship between prediction units (PUs) of neighbor depth levels of the coding tree. This fast intra coding procedure is used to develop an inter-view prediction method, which exploits the relationship between the intra mode directions of adjacent views to further accelerate the intra prediction process in multiview video encoding applications. When compared to HEVC simulcast, our method achieves a complexity reduction of up to 47.77%, at the cost of an average BD-PSNR loss of 0.08 dB.

  17. SHARP User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Y. Q.; Shemon, E. R.; Thomas, J. W.

    SHARP is an advanced modeling and simulation toolkit for the analysis of nuclear reactors. It comprises several components, including physical modeling tools, tools to integrate the physics codes for multi-physics analyses, and a set of tools to couple the codes within the MOAB framework. Physics modules currently include the neutronics code PROTEUS, the thermal-hydraulics code Nek5000, and the structural mechanics code Diablo. This manual focuses on performing multi-physics calculations with the SHARP ToolKit. Manuals for the three individual physics modules are available with the SHARP distribution to help the user either carry out the primary multi-physics calculation with basic knowledge or perform further advanced development with in-depth knowledge of these codes. This manual provides step-by-step instructions on employing SHARP, including how to download and install the code, how to build the drivers for a test case, how to perform a calculation, and how to visualize the results. Since SHARP has some specific library and environment dependencies, it is highly recommended that the user read this manual prior to installing SHARP. Verification test cases are included to check proper installation of each module. It is suggested that a new user first follow the step-by-step instructions provided for a test problem in this manual to understand the basic procedure of using SHARP before using SHARP for his/her own analysis. Both reference output and scripts are provided along with the test cases in order to verify correct installation and execution of the SHARP package. At the end of this manual, detailed instructions are provided on how to create a new test case so that the user can perform novel multi-physics calculations with SHARP. Frequently asked questions are listed at the end of this manual to help the user troubleshoot issues.

  18. Path Toward a Unified Geometry for Radiation Transport

    NASA Astrophysics Data System (ADS)

    Lee, Kerry

    The Direct Accelerated Geometry for Radiation Analysis and Design (DAGRAD) element of the RadWorks Project under Advanced Exploration Systems (AES) within the Space Technology Mission Directorate (STMD) of NASA will enable new designs and concepts of operation for radiation risk assessment, mitigation and protection. This element is designed to produce a solution that will allow NASA to calculate the transport of space radiation through complex CAD models using the state-of-the-art analytic and Monte Carlo radiation transport codes. Due to the inherent hazard of astronaut and spacecraft exposure to ionizing radiation in low-Earth orbit (LEO) or in deep space, risk analyses must be performed for all crew vehicles and habitats. Incorporating these analyses into the design process can minimize the mass needed solely for radiation protection. Transport of the radiation fields as they pass through shielding and body materials can be simulated using Monte Carlo techniques or described by the Boltzmann equation, which is obtained by balancing changes in particle fluxes as they traverse a small volume of material with the gains and losses caused by atomic and nuclear collisions. Deterministic codes that solve the Boltzmann transport equation, such as HZETRN (high charge and energy transport code developed by NASA LaRC), are generally computationally faster than Monte Carlo codes such as FLUKA, GEANT4, MCNP(X) or PHITS; however, they are currently limited to transport in one dimension, which poorly represents the secondary light ion and neutron radiation fields. NASA currently uses HZETRN space radiation transport software, both because it is computationally efficient and because proven methods have been developed for using this software to analyze complex geometries. Although Monte Carlo codes describe the relevant physics in a fully three-dimensional manner, their computational costs have thus far prevented their widespread use for analysis of complex CAD models, leading to the creation and maintenance of toolkit specific simplistic geometry models. The work presented here builds on the Direct Accelerated Geometry Monte Carlo (DAGMC) toolkit developed for use with the Monte Carlo N-Particle (MCNP) transport code. The work-flow for doing radiation transport on CAD models using MCNP and FLUKA has been demonstrated and the results of analyses on realistic spacecraft/habitats will be presented. Future work is planned that will further automate this process and enable the use of multiple radiation transport codes on identical geometry models imported from CAD. This effort will enhance the modeling tools used by NASA to accurately evaluate the astronaut space radiation risk and accurately determine the protection provided by as-designed exploration mission vehicles and habitats.
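
    For reference, deterministic codes such as HZETRN solve a straight-ahead form of the Boltzmann transport equation; schematically (notation and operator details vary between implementations), the flux \phi_j of particle type j at shield depth x and energy E obeys

      \left[ \frac{\partial}{\partial x} - \frac{\partial}{\partial E}\,\tilde{S}_j(E) + \sigma_j(E) \right] \phi_j(x,E)
          = \sum_{k} \int_{E}^{\infty} \sigma_{jk}(E,E')\, \phi_k(x,E')\, dE'

    where \tilde{S}_j is the stopping power for charged particles, \sigma_j the total macroscopic cross section, and \sigma_{jk} the cross section for producing particle j in collisions of particle k.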

  19. The design of dual-mode complex signal processors based on quadratic modular number codes

    NASA Astrophysics Data System (ADS)

    Jenkins, W. K.; Krogmeier, J. V.

    1987-04-01

    It has been known for a long time that quadratic modular number codes admit an unusual representation of complex numbers which leads to complete decoupling of the real and imaginary channels, thereby simplifying complex multiplication and providing error isolation between the real and imaginary channels. This paper first presents a tutorial review of the theory behind the different types of complex modular rings (fields) that result from particular parameter selections, and then presents a theory for a 'dual-mode' complex signal processor based on the choice of augmented power-of-2 moduli. It is shown how a diminished-1 binary code, used by previous designers for the realization of Fermat number transforms, also leads to efficient realizations for dual-mode complex arithmetic for certain augmented power-of-2 moduli. Then a design is presented for a recursive complex filter based on a ROM/ACCUMULATOR architecture and realized in an augmented power-of-2 quadratic code, and a computer-generated example of a complex recursive filter is shown to illustrate the principles of the theory.
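
    A small worked example of the decoupling property described above: for a prime modulus p ≡ 1 (mod 4) there is a j with j² ≡ −1 (mod p), and mapping a + bi to the channel pair (a + jb, a − jb) mod p turns complex multiplication into two independent modular products. The sketch below (Python; p = 13 and j = 5 are illustrative choices, not the augmented power-of-2 moduli of the paper) demonstrates the idea.

      P, J = 13, 5                      # 5*5 = 25 = -1 (mod 13)

      def encode(a, b):
          """Map a + b*i to the decoupled channel pair (z, z*)."""
          return ((a + J * b) % P, (a - J * b) % P)

      def decode(z, zs):
          """Recover (a, b) from the channel pair (Python 3.8+ modular inverse)."""
          inv2 = pow(2, -1, P)
          inv2j = pow(2 * J, -1, P)
          return ((z + zs) * inv2 % P, (z - zs) * inv2j % P)

      def channel_mul(x, y):
          """Complex multiplication becomes two independent modular products."""
          return (x[0] * y[0] % P, x[1] * y[1] % P)

      # (2 + 3i) * (4 + 1i) = 5 + 14i  ->  (5 mod 13, 14 mod 13) = (5, 1)
      prod = decode(*channel_mul(encode(2, 3), encode(4, 1)))
      assert prod == (5, 1)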

  20. Physical Activity and Influenza-Coded Outpatient Visits, a Population-Based Cohort Study

    PubMed Central

    Siu, Eric; Campitelli, Michael A.; Kwong, Jeffrey C.

    2012-01-01

    Background Although the benefits of physical activity in preventing chronic medical conditions are well established, its impacts on infectious diseases, and seasonal influenza in particular, are less clearly defined. We examined the association between physical activity and influenza-coded outpatient visits, as a proxy for influenza infection. Methodology/Principal Findings We conducted a cohort study of Ontario respondents to Statistics Canada’s population health surveys over 12 influenza seasons. We assessed physical activity levels through survey responses, and influenza-coded physician office and emergency department visits through physician billing claims. We used logistic regression to estimate the risk of influenza-coded outpatient visits during influenza seasons. The cohort comprised 114,364 survey respondents who contributed 357,466 person-influenza seasons of observation. Compared to inactive individuals, moderately active (OR 0.83; 95% CI 0.74–0.94) and active (OR 0.87; 95% CI 0.77–0.98) individuals were less likely to experience an influenza-coded visit. Stratifying by age, the protective effect of physical activity remained significant for individuals <65 years (active OR 0.86; 95% CI 0.75–0.98, moderately active: OR 0.85; 95% CI 0.74–0.97) but not for individuals ≥65 years. The main limitations of this study were the use of influenza-coded outpatient visits rather than laboratory-confirmed influenza as the outcome measure, the reliance on self-report for assessing physical activity and various covariates, and the observational study design. Conclusion/Significance Moderate to high amounts of physical activity may be associated with reduced risk of influenza for individuals <65 years. Future research should use laboratory-confirmed influenza outcomes to confirm the association between physical activity and influenza. PMID:22737242

  1. Status of LANL Efforts to Effectively Use Sequoia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nystrom, William David

    2015-05-14

    Los Alamos National Laboratory (LANL) is currently working on three new production applications: VPIC, xRage, and Pagosa. VPIC is a 3D relativistic, electromagnetic particle-in-cell code for plasma simulation. xRage is a 3D adaptive-mesh-refinement (AMR), multi-physics hydrodynamics code. Pagosa is a 3D structured-mesh, multi-physics hydrodynamics code.

  2. Braiding by Majorana tracking and long-range CNOT gates with color codes

    NASA Astrophysics Data System (ADS)

    Litinski, Daniel; von Oppen, Felix

    2017-11-01

    Color-code quantum computation seamlessly combines Majorana-based hardware with topological error correction. Specifically, as Clifford gates are transversal in two-dimensional color codes, they enable the use of the Majoranas' non-Abelian statistics for gate operations at the code level. Here, we discuss the implementation of color codes in arrays of Majorana nanowires that avoid branched networks such as T junctions, thereby simplifying their realization. We show that, in such implementations, non-Abelian statistics can be exploited without ever performing physical braiding operations. Physical braiding operations are replaced by Majorana tracking, an entirely software-based protocol which appropriately updates the Majoranas involved in the color-code stabilizer measurements. This approach minimizes the required hardware operations for single-qubit Clifford gates. For Clifford completeness, we combine color codes with surface codes, and use color-to-surface-code lattice surgery for long-range multitarget CNOT gates which have a time overhead that grows only logarithmically with the physical distance separating control and target qubits. With the addition of magic state distillation, our architecture describes a fault-tolerant universal quantum computer in systems such as networks of tetrons, hexons, or Majorana box qubits, but can also be applied to nontopological qubit platforms.

  3. AnisoVis: a MATLAB™ toolbox for the visualisation of elastic anisotropy

    NASA Astrophysics Data System (ADS)

    Healy, D.; Timms, N.; Pearce, M. A.

    2016-12-01

    The elastic properties of rocks and minerals vary with direction, and this has significant consequences for their physical response to acoustic waves and natural or imposed stresses. This anisotropy of elasticity is well described mathematically by 4th rank tensors of stiffness or compliance. These tensors are not easy to visualise in a single diagram or graphic, and visualising Poisson's ratio and shear modulus presents a further challenge in that their anisotropy depends on two principal directions. Students and researchers can easily underestimate the importance of elastic anisotropy. This presentation describes an open source toolbox of MATLAB scripts that aims to visualise elastic anisotropy in rocks and minerals. The code produces linked 2-D and 3-D representations of the standard elastic constants, such as Young's modulus, Poisson's ratio and shear modulus, all from a simple GUI. The 3-D plots can be manipulated by the user (rotated, panned, zoomed), to encourage investigation and a deeper understanding of directional variations in the fundamental properties. Examples are presented of common rock forming minerals, including those with negative Poisson's ratio (auxetic behaviour). We hope that an open source code base will encourage further enhancements from the rock physics and wider geoscience communities. Eventually, we hope to generate 3-D prints of these complex and beautiful natural surfaces to provide a tactile link to the underlying physics of elastic anisotropy.
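
    To illustrate the kind of directional property such a toolbox visualises, the sketch below (Python rather than MATLAB; it uses the standard cubic-crystal relation 1/E(n) = S11 − 2(S11 − S12 − S44/2)(n1²n2² + n2²n3² + n3²n1²), and the compliance values are illustrative numbers, not measured data) evaluates Young's modulus along a few crystallographic directions.

      import numpy as np

      S11, S12, S44 = 7.68e-12, -2.14e-12, 12.6e-12   # Pa^-1, illustrative cubic compliances

      def youngs_modulus(n):
          # directional Young's modulus for a cubic crystal in Voigt notation
          n = np.asarray(n, dtype=float)
          n /= np.linalg.norm(n)
          aniso = n[0]**2 * n[1]**2 + n[1]**2 * n[2]**2 + n[2]**2 * n[0]**2
          return 1.0 / (S11 - 2.0 * (S11 - S12 - 0.5 * S44) * aniso)

      for direction in ([1, 0, 0], [1, 1, 0], [1, 1, 1]):
          print(direction, f"{youngs_modulus(direction) / 1e9:.1f} GPa")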

  4. Beyond Molecular Codes: Simple Rules to Wire Complex Brains

    PubMed Central

    Hassan, Bassem A.; Hiesinger, P. Robin

    2015-01-01

    Summary Molecular codes, like postal zip codes, are generally considered a robust way to ensure the specificity of neuronal target selection. However, a code capable of unambiguously generating complex neural circuits is difficult to conceive. Here, we re-examine the notion of molecular codes in the light of developmental algorithms. We explore how molecules and mechanisms that have been considered part of a code may alternatively implement simple pattern formation rules sufficient to ensure wiring specificity in neural circuits. This analysis delineates a pattern-based framework for circuit construction that may contribute to our understanding of brain wiring. PMID:26451480

  5. "They put you on your toes": Physical Therapists' Perceived Benefits from and Barriers to Supervising Students in the Clinical Setting.

    PubMed

    Davies, Robyn; Hanna, Elizabeth; Cott, Cheryl

    2011-01-01

    To identify the perceived benefits of and barriers to clinical supervision of physical therapy (PT) students. In this qualitative descriptive study, three focus groups and six key-informant interviews were conducted with clinical physical therapists or administrators working in acute care, orthopaedic rehabilitation, or complex continuing care. Data were coded and analyzed for common ideas using a constant comparison approach. Perceived barriers to supervising students tended to be extrinsic: time and space constraints, challenging or difficult students, and decreased autonomy or flexibility for the clinical physical therapists. Benefits tended to be intrinsic: teaching provided personal gratification by promoting reflective practice and exposing clinical educators to current knowledge. The culture of different health care institutions was an important factor in therapists' perceptions of student supervision. Despite different disciplines and models of supervision, there is considerable synchronicity in the issues reported by physical therapists and other disciplines. Embedding the value of clinical teaching in the institution, along with strong communication links among academic partners, institutions, and potential clinical faculty, may mitigate barriers and increase the commitment and satisfaction of teaching staff.

  6. Metrics and tools for consistent cohort discovery and financial analyses post-transition to ICD-10-CM

    PubMed Central

    Boyd, Andrew D; ‘John’ Li, Jianrong; Kenost, Colleen; Joese, Binoy; Min Yang, Young; Kalagidis, Olympia A; Zenku, Ilir; Saner, Donald; Bahroos, Neil; Lussier, Yves A

    2015-01-01

    In the United States, International Classification of Disease Clinical Modification (ICD-9-CM, the ninth revision) diagnosis codes are commonly used to identify patient cohorts and to conduct financial analyses related to disease. In October 2015, the healthcare system of the United States will transition to ICD-10-CM (the tenth revision) diagnosis codes. One challenge posed to clinical researchers and other analysts is conducting diagnosis-related queries across datasets containing both coding schemes. Further, healthcare administrators will manage growth, trends, and strategic planning with these dually-coded datasets. The majority of the ICD-9-CM to ICD-10-CM translations are complex and nonreciprocal, creating convoluted representations and meanings. Similarly, mapping back from ICD-10-CM to ICD-9-CM is equally complex, yet different from mapping forward, as relationships are likewise nonreciprocal. Indeed, 10 of the 21 top clinical categories are complex as 78% of their diagnosis codes are labeled as “convoluted” by our analyses. Analysis and research related to external causes of morbidity, injury, and poisoning will face the greatest challenges due to 41 745 (90%) convolutions and a decrease in the number of codes. We created a web portal tool and translation tables to list all ICD-9-CM diagnosis codes related to the specific input of ICD-10-CM diagnosis codes and their level of complexity: “identity” (reciprocal), “class-to-subclass,” “subclass-to-class,” “convoluted,” or “no mapping.” These tools provide guidance on ambiguous and complex translations to reveal where reports or analyses may be challenging to impossible. Web portal: http://www.lussierlab.org/transition-to-ICD9CM/ Tables annotated with levels of translation complexity: http://www.lussierlab.org/publications/ICD10to9 PMID:25681260
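
    A rough sketch of the kind of complexity classification described above (Python; the mapping tables and the category logic are simplified toy stand-ins for the actual GEM-derived translation tables and the paper's precise definitions).

      def classify(code9, fwd, bwd):
          """fwd: ICD-9 -> set of ICD-10 codes; bwd: ICD-10 -> set of ICD-9 codes."""
          targets = fwd.get(code9, set())
          if not targets:
              return "no mapping"
          sources = set().union(*(bwd.get(t, set()) for t in targets))
          if len(targets) == 1 and sources == {code9}:
              return "identity"                # reciprocal one-to-one
          if len(targets) > 1 and sources == {code9}:
              return "class-to-subclass"       # one ICD-9 splits into several ICD-10
          if len(targets) == 1 and len(sources) > 1:
              return "subclass-to-class"       # several ICD-9 collapse into one ICD-10
          return "convoluted"

      # toy example mappings (illustrative codes only)
      fwd = {"250.00": {"E11.9"}, "410.9": {"I21.3", "I21.9"}}
      bwd = {"E11.9": {"250.00"}, "I21.3": {"410.9"}, "I21.9": {"410.9"}}
      print(classify("250.00", fwd, bwd))   # -> identity
      print(classify("410.9", fwd, bwd))    # -> class-to-subclass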

  7. Validation of numerical codes for impact and explosion cratering: Impacts on strengthless and metal targets

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Artemieva, N.; Asphaug, E.; Baldwin, E. C.; Cazamias, J.; Coker, R.; Collins, G. S.; Crawford, D. A.; Davison, T.; Elbeshausen, D.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.

    2008-12-01

    Over the last few decades, rapid improvement of computer capabilities has allowed impact cratering to be modeled with increasing complexity and realism, and has paved the way for a new era of numerical modeling of the impact process, including full, three-dimensional (3D) simulations. When properly benchmarked and validated against observation, computer models offer a powerful tool for understanding the mechanics of impact crater formation. This work presents results from the first phase of a project to benchmark and validate shock codes. A variety of 2D and 3D codes were used in this study, from commercial products like AUTODYN, to codes developed within the scientific community like SOVA, SPH, ZEUS-MP, iSALE, and codes developed at U.S. National Laboratories like CTH, SAGE/RAGE, and ALE3D. Benchmark calculations of shock wave propagation in aluminum-on-aluminum impacts were performed to examine the agreement between codes for simple idealized problems. The benchmark simulations show that variability in code results is to be expected due to differences in the underlying solution algorithm of each code, artificial stability parameters, spatial and temporal resolution, and material models. Overall, the inter-code variability in peak shock pressure as a function of distance is around 10 to 20%. In general, if the impactor is resolved by at least 20 cells across its radius, the underestimation of peak shock pressure due to spatial resolution is less than 10%. In addition to the benchmark tests, three validation tests were performed to examine the ability of the codes to reproduce the time evolution of crater radius and depth observed in vertical laboratory impacts in water and two well-characterized aluminum alloys. Results from these calculations are in good agreement with experiments. There appears to be a general tendency of shock physics codes to underestimate the radius of the forming crater. Overall, the discrepancy between the model and experiment results is between 10 and 20%, similar to the inter-code variability.

  8. Towards measuring the semantic capacity of a physical medium demonstrated with elementary cellular automata.

    PubMed

    Dittrich, Peter

    2018-02-01

    The organic code concept and its operationalization by molecular codes have been introduced to study the semiotic nature of living systems. This contribution develops further the idea that the semantic capacity of a physical medium can be measured by assessing its ability to implement a code as a contingent mapping. For demonstration and evaluation, the approach is applied to a formal medium: elementary cellular automata (ECA). The semantic capacity is measured by counting the number of ways codes can be implemented. Additionally, a link to information theory is established by taking multivariate mutual information for quantifying contingency. It is shown how ECAs differ in their semantic capacities, how this is related to various ECA classifications, and how this depends on how a meaning is defined. Interestingly, if the meaning should persist for a certain while, the highest semantic capacity is found in CAs with apparently simple behavior, i.e., the fixed-point and two-cycle class. Synergy as a predictor for a CA's ability to implement codes can only be used if contexts implementing codes are common. For large context spaces with sparse coding contexts, synergy is a weak predictor. Concluding, the approach presented here can distinguish CA-like systems with respect to their ability to implement contingent mappings. Applying it to physical systems appears straightforward and might lead to a novel physical property indicating how suitable a physical medium is to implement a semiotic system. Copyright © 2017 Elsevier B.V. All rights reserved.
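
    As a concrete illustration of the formal medium used above, the sketch below (Python, assuming binary states and periodic boundaries; the rule numbering follows the standard Wolfram convention) performs one update step of an elementary cellular automaton for an arbitrary rule.

      def eca_step(cells, rule):
          # one synchronous update of an elementary CA with periodic boundaries
          n = len(cells)
          out = []
          for i in range(n):
              left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
              neighbourhood = (left << 2) | (centre << 1) | right   # 0..7
              out.append((rule >> neighbourhood) & 1)               # look up the rule bit
          return out

      state = [0, 0, 0, 1, 0, 0, 0]
      for _ in range(3):
          state = eca_step(state, 110)      # rule 110, a classic "complex" ECA
          print(state)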

  9. The origin of neutron biological effectiveness as a function of energy.

    PubMed

    Baiocco, G; Barbieri, S; Babini, G; Morini, J; Alloni, D; Friedland, W; Kundrát, P; Schmitt, E; Puchalska, M; Sihver, L; Ottolenghi, A

    2016-09-22

    The understanding of the impact of radiation quality in early and late responses of biological targets to ionizing radiation exposure necessarily grounds on the results of mechanistic studies starting from physical interactions. This is particularly true when, already at the physical stage, the radiation field is mixed, as it is the case for neutron exposure. Neutron Relative Biological Effectiveness (RBE) is energy dependent, maximal for energies ~1 MeV, varying significantly among different experiments. The aim of this work is to shed light on neutron biological effectiveness as a function of field characteristics, with a comprehensive modeling approach: this brings together transport calculations of neutrons through matter (with the code PHITS) and the predictive power of the biophysical track structure code PARTRAC in terms of DNA damage evaluation. Two different energy dependent neutron RBE models are proposed: the first is phenomenological and based only on the characterization of linear energy transfer on a microscopic scale; the second is purely ab-initio and based on the induction of complex DNA damage. Results for the two models are compared and found in good qualitative agreement with current standards for radiation protection factors, which are agreed upon on the basis of RBE data.
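
    For reference, the quantity being modeled has the standard iso-effect definition (not specific to this paper): the neutron RBE at energy E is the ratio of the reference (low-LET) dose to the neutron dose producing the same biological effect,

      \mathrm{RBE}(E) = \left. \frac{D_{\gamma,\mathrm{ref}}}{D_{\mathrm{n}}(E)} \right|_{\text{iso-effect}}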

  10. The origin of neutron biological effectiveness as a function of energy

    NASA Astrophysics Data System (ADS)

    Baiocco, G.; Barbieri, S.; Babini, G.; Morini, J.; Alloni, D.; Friedland, W.; Kundrát, P.; Schmitt, E.; Puchalska, M.; Sihver, L.; Ottolenghi, A.

    2016-09-01

    The understanding of the impact of radiation quality in early and late responses of biological targets to ionizing radiation exposure necessarily grounds on the results of mechanistic studies starting from physical interactions. This is particularly true when, already at the physical stage, the radiation field is mixed, as it is the case for neutron exposure. Neutron Relative Biological Effectiveness (RBE) is energy dependent, maximal for energies ~1 MeV, varying significantly among different experiments. The aim of this work is to shed light on neutron biological effectiveness as a function of field characteristics, with a comprehensive modeling approach: this brings together transport calculations of neutrons through matter (with the code PHITS) and the predictive power of the biophysical track structure code PARTRAC in terms of DNA damage evaluation. Two different energy dependent neutron RBE models are proposed: the first is phenomenological and based only on the characterization of linear energy transfer on a microscopic scale; the second is purely ab-initio and based on the induction of complex DNA damage. Results for the two models are compared and found in good qualitative agreement with current standards for radiation protection factors, which are agreed upon on the basis of RBE data.

  11. Xpatch prediction improvements to support multiple ATR applications

    NASA Astrophysics Data System (ADS)

    Andersh, Dennis J.; Lee, Shung W.; Moore, John T.; Sullivan, Douglas P.; Hughes, Jeff A.; Ling, Hao

    1998-08-01

    This paper describes an electromagnetic computer prediction code for generating radar cross section (RCS), time-domain signatures, and synthetic aperture radar (SAR) images of realistic 3D vehicles. The vehicle, typically an airplane or a ground vehicle, is represented by a computer-aided design (CAD) file with triangular facets, IGES curved surfaces, or solid geometries. The computer code, Xpatch, based on the shooting-and-bouncing-ray technique, is used to calculate the polarimetric radar return from vehicles represented by these different CAD files. Xpatch computes the first-bounce physical optics (PO) plus physical theory of diffraction (PTD) contributions, and calculates the multi-bounce ray contributions using geometric optics and PO for complex vehicles with materials. It has been found that without the multi-bounce calculations, the radar return is typically 10 to 15 dB too low. Examples of predicted range profiles, SAR imagery, and RCS for several different geometries are compared with measured data to demonstrate the quality of the predictions. Recent enhancements to Xpatch include improvements for millimeter-wave applications, hybridization with the finite element method for small geometric features, and support for additional IGES entities, including trimmed and untrimmed surfaces.

  12. The origin of neutron biological effectiveness as a function of energy

    PubMed Central

    Baiocco, G.; Barbieri, S.; Babini, G.; Morini, J.; Alloni, D.; Friedland, W.; Kundrát, P.; Schmitt, E.; Puchalska, M.; Sihver, L.; Ottolenghi, A.

    2016-01-01

    The understanding of the impact of radiation quality in early and late responses of biological targets to ionizing radiation exposure necessarily grounds on the results of mechanistic studies starting from physical interactions. This is particularly true when, already at the physical stage, the radiation field is mixed, as it is the case for neutron exposure. Neutron Relative Biological Effectiveness (RBE) is energy dependent, maximal for energies ~1 MeV, varying significantly among different experiments. The aim of this work is to shed light on neutron biological effectiveness as a function of field characteristics, with a comprehensive modeling approach: this brings together transport calculations of neutrons through matter (with the code PHITS) and the predictive power of the biophysical track structure code PARTRAC in terms of DNA damage evaluation. Two different energy dependent neutron RBE models are proposed: the first is phenomenological and based only on the characterization of linear energy transfer on a microscopic scale; the second is purely ab-initio and based on the induction of complex DNA damage. Results for the two models are compared and found in good qualitative agreement with current standards for radiation protection factors, which are agreed upon on the basis of RBE data. PMID:27654349

  13. First shock tuning and backscatter measurements for large case-to-capsule ratio beryllium targets

    NASA Astrophysics Data System (ADS)

    Loomis, Eric; Yi, Austin; Kline, John; Kyrala, George; Simakov, Andrei; Wilson, Doug; Ralph, Joe; Dewald, Eduard; Strozzi, David; Celliers, Peter; Millot, Marius; Tommasini, Riccardo

    2016-10-01

    The current underperformance of target implosions on the National Ignition Facility (NIF) has necessitated scaling back from high convergence ratio to access regimes of reduced physics uncertainty. These regimes, we expect, are more predictable by existing radiation hydrodynamics codes, giving us a better starting point for isolating key physics questions. One key question is the lack of predictable in-flight and hot-spot shape due to a complex hohlraum radiation environment. To achieve more predictable, shape-tunable implosions we have designed and fielded a large 4.2 case-to-capsule ratio (CCR) target at the NIF using 6.72 mm diameter Au hohlraums and 1.6 mm diameter Cu-doped Be capsules. Simulations show that at these dimensions, during a 10 ns 3-shock laser pulse reaching 270 eV hohlraum temperatures, the interaction between hohlraum and capsule plasma, which at lower CCR impedes beam propagation through artificial plasma stagnation, is reduced. In this talk we will present measurements of early-time drive symmetry using two-axis line-imaging velocimetry (VISAR) and streaked radiography measuring the velocity of the imploding shell, and their comparisons to post-shot calculations using the code HYDRA (Lawrence Livermore National Laboratory).

  14. Porting plasma physics simulation codes to modern computing architectures using the libmrc framework

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Abbott, Stephen

    2015-11-01

    Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source libmrc framework that has been used to modularize and port three plasma physics codes: the extended MHD code MRCv3 with implicit time integration and curvilinear grids; the OpenGGCM global magnetosphere model; and the particle-in-cell code PSC. libmrc consolidates basic functionality needed for simulations based on structured grids (I/O, load balancing, time integrators), and also introduces a parallel object model that makes it possible to maintain multiple implementations of computational kernels, on e.g. conventional processors and GPUs. It handles data layout conversions and enables us to port performance-critical parts of a code to a new architecture step-by-step, while the rest of the code can remain unchanged. We will show examples of the performance gains and some physics applications.
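
    A minimal sketch of the "multiple kernel implementations behind one object" idea described above (Python for illustration; libmrc itself is a C framework, and the registration API shown here is invented for the example): the same logical kernel can be dispatched to whichever backend implementation is requested, while calling code stays unchanged.

      class KernelObject:
          """Holds several implementations of one kernel, keyed by backend name."""
          def __init__(self):
              self._impls = {}

          def register(self, backend):
              def deco(fn):
                  self._impls[backend] = fn
                  return fn
              return deco

          def __call__(self, backend, *args):
              return self._impls[backend](*args)

      axpy = KernelObject()

      @axpy.register("c")
      def axpy_c(a, x, y):
          return [a * xi + yi for xi, yi in zip(x, y)]

      @axpy.register("gpu")
      def axpy_gpu(a, x, y):
          # stand-in: a real port would launch a device kernel here
          return [a * xi + yi for xi, yi in zip(x, y)]

      print(axpy("c", 2.0, [1.0, 2.0], [0.5, 0.5]))   # -> [2.5, 4.5]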

  15. VERAIn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan

    2015-02-16

    CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete a LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA Input into an XML file that is used as input to different VERA codes.
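
    To illustrate the kind of front-end translation described above, the sketch below (Python; the deck syntax and XML element names are hypothetical stand-ins, not the real VERA common input or its schema) converts a simple block-structured input into XML.

      import xml.etree.ElementTree as ET

      deck = """
      [CORE]
        rated_power 3411.0
        num_assemblies 193
      """

      root = ET.Element("ParameterList")
      block = None
      for line in deck.splitlines():
          line = line.strip()
          if not line:
              continue
          if line.startswith("[") and line.endswith("]"):
              # a new block becomes a nested parameter list
              block = ET.SubElement(root, "ParameterList", name=line[1:-1])
          else:
              # a "key value" line becomes a parameter entry
              key, value = line.split(maxsplit=1)
              ET.SubElement(block, "Parameter", name=key, value=value)

      print(ET.tostring(root, encoding="unicode"))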

  16. Interfacing the Generalized Fluid System Simulation Program with the SINDA/G Thermal Program

    NASA Technical Reports Server (NTRS)

    Schallhorn, Paul; Palmiter, Christopher; Farmer, Jeffery; Lycans, Randall; Tiller, Bruce

    2000-01-01

    A general purpose, one-dimensional fluid flow code has been interfaced with the thermal analysis program SINDA/G. The flow code, GFSSP, is capable of analyzing steady state and transient flow in a complex network. The flow code is capable of modeling several physical phenomena including compressibility effects, phase changes, body forces (such as gravity and centrifugal) and mixture thermodynamics for multiple species. The addition of GFSSP to SINDA/G provides a significant improvement in convective heat transfer modeling for SINDA/G. The interface development was conducted in two phases. This paper describes the first, which allows for steady and quasi-steady (unsteady solid, steady fluid) conjugate heat transfer modeling. The second phase of the interface development, full transient conjugate heat transfer modeling, will be addressed in a later paper. Phase 1 development has been benchmarked to an analytical solution with excellent agreement. Additional test cases for each development phase demonstrate desired features of the interface. The results of the benchmark case, three additional test cases, and a practical application are presented herein.

  17. Computation of the tip vortex flowfield for advanced aircraft propellers

    NASA Technical Reports Server (NTRS)

    Tsai, Tommy M.; Dejong, Frederick J.; Levy, Ralph

    1988-01-01

    The tip vortex flowfield plays a significant role in the performance of advanced aircraft propellers. The flowfield in the tip region is complex, three-dimensional and viscous with large secondary velocities. An analysis is presented using an approximate set of equations which contains the physics required by the tip vortex flowfield, but which does not require the resources of the full Navier-Stokes equations. A computer code was developed to predict the tip vortex flowfield of advanced aircraft propellers. A grid generation package was developed to allow specification of a variety of advanced aircraft propeller shapes. Calculations of the tip vortex generation on an SR3 type blade at high Reynolds numbers were made using this code and a parametric study was performed to show the effect of tip thickness on tip vortex intensity. In addition, calculations of the tip vortex generation on a NACA 0012 type blade were made, including the flowfield downstream of the blade trailing edge. Comparison of flowfield calculations with experimental data from an F4 blade was made. A user's manual was also prepared for the computer code (NASA CR-182178).

  18. A new tool for coding and interpreting injuries in fatal airplane crashes: the crash injury pattern assessment tool application to the Air France Flight AF447 disaster (Rio de Janeiro-Paris), 1st of June 2009.

    PubMed

    Schuliar, Yves; Chapenoire, Stéphane; Miras, Alain; Contrand, Benjamin; Lagarde, Emmanuel

    2014-09-01

    For investigation of air disasters, crash reconstruction is obtained using data from flight recorders, physical evidence from the site, and injuries patterns of the victims. This article describes a new software, Crash Injury Pattern Assessment Tool (CIPAT), to code and analyze injuries. The coding system was derived from the Abbreviated Injury Score (AIS). Scores were created corresponding to the amount of energy required causing the trauma (ER), and the software was developed to compute summary variables related to the position (assigned seat) of victims. A dataset was built from the postmortem examination of 154/228 victims of the Air France disaster (June 2009), recovered from the Atlantic Ocean after a complex and difficult task at a depth of 12790 ft. The use of CIPAT allowed to precise cause and circumstances of deaths and confirmed major dynamics parameters of the crash event established by the French Civil Aviation Safety Investigation Authority. © 2014 American Academy of Forensic Sciences.

  19. Dissemination and support of ARGUS for accelerator applications. Technical progress report, April 24, 1991--January 20, 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The ARGUS code is a three-dimensional code system for simulating interactions between charged particles, electric and magnetic fields, and complex structures. It is a system of modules that share common utilities for grid and structure input, data handling, memory management, diagnostics, and other specialized functions. The code includes the fields due to the space charge and current density of the particles to achieve a self-consistent treatment of the particle dynamics. The physics modules in ARGUS include three-dimensional field solvers for electrostatics and electromagnetics, a three-dimensional electromagnetic frequency-domain module, a full particle-in-cell (PIC) simulation module, and a steady-state PIC model. These are described in the Appendix to this report. The primary mission of this project is to develop the capabilities of ARGUS in accelerator modeling for release to the accelerator design community. Five major activities are being pursued in parallel during the first year of the project: improving the code and/or adding new modules that provide capabilities needed for accelerator design; producing a User's Guide that documents the use of the code for all users; releasing the code and the User's Guide to accelerator laboratories for their own use and obtaining feedback from them; building an interactive user interface for setting up ARGUS calculations; and exploring the use of ARGUS on high-power workstation platforms.

  20. Aspect-Oriented Programming

    NASA Technical Reports Server (NTRS)

    Elrad, Tzilla (Editor); Filman, Robert E. (Editor); Bader, Atef (Editor)

    2001-01-01

    Computer science has experienced an evolution in programming languages and systems from the crude assembly and machine codes of the earliest computers through concepts such as formula translation, procedural programming, structured programming, functional programming, logic programming, and programming with abstract data types. Each of these steps in programming technology has advanced our ability to achieve clear separation of concerns at the source code level. Currently, the dominant programming paradigm is object-oriented programming - the idea that one builds a software system by decomposing a problem into objects and then writing the code of those objects. Such objects abstract together behavior and data into a single conceptual and physical entity. Object-orientation is reflected in the entire spectrum of current software development methodologies and tools - we have OO methodologies, analysis and design tools, and OO programming languages. Writing complex applications such as graphical user interfaces, operating systems, and distributed applications while maintaining comprehensible source code has been made possible with OOP. Success at developing simpler systems leads to aspirations for greater complexity. Object orientation is a clever idea, but has certain limitations. We are now seeing that many requirements do not decompose neatly into behavior centered on a single locus. Object technology has difficulty localizing concerns invoking global constraints and pandemic behaviors, appropriately segregating concerns, and applying domain-specific knowledge. Post-object programming (POP) mechanisms that look to increase the expressiveness of the OO paradigm are a fertile arena for current research. Examples of POP technologies include domain-specific languages, generative programming, generic programming, constraint languages, reflection and metaprogramming, feature-oriented development, views/viewpoints, and asynchronous message brokering. (Czarneclu and Eisenecker s book includes a good survey of many of these technologies).
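
    To make the cross-cutting-concern argument concrete, here is a minimal sketch in Python (a plain decorator, not an AOP language extension such as AspectJ) that weaves a timing/logging "aspect" around ordinary code without touching the body of the business logic.

      import functools
      import time

      def traced(fn):
          """Cross-cutting concern (timing/logging) kept separate from the core code."""
          @functools.wraps(fn)
          def wrapper(*args, **kwargs):
              start = time.perf_counter()
              result = fn(*args, **kwargs)
              print(f"{fn.__name__} took {time.perf_counter() - start:.6f} s")
              return result
          return wrapper

      @traced
      def solve(n):
          # the "business logic" stays free of any logging code
          return sum(i * i for i in range(n))

      solve(100000)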

  1. Sputtering, Plasma Chemistry, and RF Sheath Effects in Low-Temperature and Fusion Plasma Modeling

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas G.; Kruger, Scott E.; McGugan, James M.; Pankin, Alexei Y.; Roark, Christine M.; Smithe, David N.; Stoltz, Peter H.

    2016-09-01

    A new sheath boundary condition has been implemented in VSim, a plasma modeling code which makes use of both PIC/MCC and fluid FDTD representations. It enables physics effects associated with DC and RF sheath formation - local sheath potential evolution, heat/particle fluxes, and sputtering effects on complex plasma-facing components - to be included in macroscopic-scale plasma simulations that need not resolve sheath scale lengths. We model these effects in typical ICRF antenna operation scenarios on the Alcator C-Mod fusion device, and present comparisons of our simulation results with experimental data together with detailed 3D animations of antenna operation. Complex low-temperature plasma chemistry modeling in VSim is facilitated by MUNCHKIN, a standalone python/C++/SQL code that identifies possible reaction paths for a given set of input species, solves 1D rate equations for the ensuing system's chemical evolution, and generates VSim input blocks with appropriate cross-sections/reaction rates. These features, as well as principal path analysis (to reduce the number of simulated chemical reactions while retaining accuracy) and reaction rate calculations from user-specified distribution functions, will also be demonstrated. Supported by the U.S. Department of Energy's SBIR program, Award DE-SC0009501.
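
    As an illustration of the rate-equation step described above, the sketch below (Python/SciPy; a hypothetical two-reaction system with made-up rate coefficients, not MUNCHKIN output) integrates 0-D species densities in time for a toy ionization/recombination chemistry.

      from scipy.integrate import solve_ivp

      k_ion, k_rec = 1.0e-9, 1.0e-7          # cm^3/s, illustrative rate coefficients

      def rhs(t, y):
          ne, na, nion = y                    # electron, neutral, ion densities (cm^-3)
          ionization = k_ion * ne * na        # e + A -> A+ + 2e
          recombination = k_rec * ne * nion   # A+ + e -> A
          return [ionization - recombination,
                  -ionization + recombination,
                  ionization - recombination]

      sol = solve_ivp(rhs, (0.0, 1.0e-3), [1.0e8, 1.0e13, 1.0e8],
                      method="LSODA", rtol=1e-8)
      print(sol.y[:, -1])                     # densities at the final time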

  2. Research Prototype: Automated Analysis of Scientific and Engineering Semantics

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.; Follen, Greg (Technical Monitor)

    2001-01-01

    Physical and mathematical formulae and concepts are fundamental elements of scientific and engineering software. These classical equations and methods are time tested, universally accepted, and relatively unambiguous. The existence of this classical ontology suggests an ideal problem for automated comprehension. This problem is further motivated by the pervasive use of scientific code and high code development costs. To investigate code comprehension in this classical knowledge domain, a research prototype has been developed. The prototype incorporates scientific domain knowledge to recognize code properties (including units, physical, and mathematical quantity). Also, the procedure implements programming language semantics to propagate these properties through the code. This prototype's ability to elucidate code and detect errors will be demonstrated with state of the art scientific codes.
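
    A toy illustration of the property-propagation idea described above (Python; a minimal dimensional-analysis class, far simpler than the prototype's source-code analysis): physical units ride along with values through arithmetic, and mismatched additions are flagged as errors.

      class Quantity:
          def __init__(self, value, units):
              self.value, self.units = value, dict(units)   # units: dimension -> exponent

          def _combine(self, other, sign):
              units = dict(self.units)
              for dim, exp in other.units.items():
                  units[dim] = units.get(dim, 0) + sign * exp
              return units

          def __mul__(self, other):
              return Quantity(self.value * other.value, self._combine(other, +1))

          def __truediv__(self, other):
              return Quantity(self.value / other.value, self._combine(other, -1))

          def __add__(self, other):
              if self.units != other.units:
                  raise TypeError(f"unit mismatch: {self.units} vs {other.units}")
              return Quantity(self.value + other.value, self.units)

      distance = Quantity(100.0, {"m": 1})
      time = Quantity(9.6, {"s": 1})
      speed = distance / time
      print(speed.value, speed.units)          # -> 10.41... {'m': 1, 's': -1}
      # distance + time                        # would raise: unit mismatch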

  3. PelePhysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-05-17

    PelePhysics is a suite of physics packages that provides functionality of use to reacting hydrodynamics CFD codes. The initial release includes an interface to reaction rate mechanism evaluation, transport coefficient evaluation, and a generalized equation of state (EOS) facility. Both generic evaluators and interfaces to code from externally available tools (Fuego for chemical rates, EGLib for transport coefficients) are provided.
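
    As an example of the kind of quantity such a package evaluates for a reacting-flow code, the sketch below computes a modified Arrhenius forward rate coefficient (Python; the rate parameters are hypothetical, and this is not the PelePhysics/Fuego interface).

      import math

      R_UNIV = 8.314462618        # J / (mol K)

      def arrhenius(A, b, Ea, T):
          """Modified Arrhenius rate k(T) = A * T^b * exp(-Ea / (R*T))."""
          return A * T**b * math.exp(-Ea / (R_UNIV * T))

      # hypothetical parameters for a single elementary reaction
      for T in (800.0, 1200.0, 1600.0):
          print(T, arrhenius(A=1.0e10, b=0.5, Ea=1.2e5, T=T))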

  4. Grid generation about complex three-dimensional aircraft configurations

    NASA Technical Reports Server (NTRS)

    Klopfer, Goetz H.

    1991-01-01

    The problem of obtaining three dimensional grids with sufficient resolution to resolve all the flow or other physical features of interest is addressed. The generation of a computational grid involves a series of compromises to resolve several conflicting requirements. On one hand, one would like the grid to be fine enough and not too skewed to reduce the numerical errors and to adequately resolve the pertinent physical features of the flow field about the aircraft. On the other hand, the capabilities of present or even future supercomputers are finite and the number of mesh points must be limited to a reasonable number: one which is usually much less than desired for numerical accuracy. One technique to overcome this limitation is the 'zonal' grid approach. In this method, the overall field is subdivided into smaller zones or blocks in each of which an independent grid is generated with enough grid density to resolve the flow features in that zone. The zonal boundaries or interfaces require special boundary conditions such that the conservation properties of the governing equations are observed. Much work was done in 3-D zonal approaches with nonconservative zonal interfaces. A 3-D zonal conservative interfacing method that is efficient and easy to implement was developed during the past year. During the course of the work, it became apparent that it would be much more feasible to do the conservative interfacing with cell-centered finite volume codes instead of the originally planned finite difference codes. Accordingly, the CNS code was converted to finite volume form. This new version of the code is named CNSFV. The original multi-zonal interfacing capability of the CNS code was enhanced by generalizing the procedure to allow for completely arbitrarily shaped zones with no mesh continuity between the zones. While this zoning capability works well for most flow situations, it is, however, still nonconservative. The conservative interface algorithm was also implemented but was not completely validated.

  5. Encoded physics knowledge in checking codes for nuclear cross section libraries at Los Alamos

    NASA Astrophysics Data System (ADS)

    Parsons, D. Kent

    2017-09-01

    Checking procedures for processed nuclear data at Los Alamos are described. Both continuous energy and multi-group nuclear data are verified by locally developed checking codes which use basic physics knowledge and common-sense rules. A list of nuclear data problems which have been identified with help of these checking codes is also given.
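
    Two common-sense rules of the kind mentioned above can be illustrated with a small sketch (Python; the rules and data are illustrative only, not the actual Los Alamos checking codes): cross sections must be non-negative, and the partial reactions should sum to the declared total within a tolerance.

      def check_multigroup(total, partials, tol=1.0e-6):
          """Return a list of problems found in a multi-group cross-section set."""
          problems = []
          for g, sig_t in enumerate(total):
              if sig_t < 0.0:
                  problems.append(f"group {g}: negative total cross section")
              sum_partials = sum(p[g] for p in partials)
              if abs(sum_partials - sig_t) > tol * max(abs(sig_t), 1.0):
                  problems.append(f"group {g}: partials sum {sum_partials} != total {sig_t}")
          return problems

      total = [12.0, 8.5, 3.2]
      partials = [[10.0, 7.0, 3.0], [2.0, 1.5, 0.2]]   # e.g. scattering + absorption
      print(check_multigroup(total, partials) or "no problems found")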

  6. Design applications for supercomputers

    NASA Technical Reports Server (NTRS)

    Studerus, C. J.

    1987-01-01

    The complexity of codes for solutions of real aerodynamic problems has progressed from simple two-dimensional models to three-dimensional inviscid and viscous models. As the algorithms used in the codes increased in accuracy, speed and robustness, the codes were steadily incorporated into standard design processes. The highly sophisticated codes, which provide solutions to the truly complex flows, require computers with large memory and high computational speed. The advent of high-speed supercomputers, such that the solutions of these complex flows become more practical, permits the introduction of the codes into the design system at an earlier stage. The results of several codes which either were already introduced into the design process or are rapidly in the process of becoming so, are presented. The codes fall into the area of turbomachinery aerodynamics and hypersonic propulsion. In the former category, results are presented for three-dimensional inviscid and viscous flows through nozzle and unducted fan bladerows. In the latter category, results are presented for two-dimensional inviscid and viscous flows for hypersonic vehicle forebodies and engine inlets.

  7. HERCULES: A Pattern Driven Code Transformation System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kartsaklis, Christos; Hernandez, Oscar R; Hsu, Chung-Hsing

    2012-01-01

    New parallel computers are emerging, but developing efficient scientific code for them remains difficult. A scientist must manage not only the science-domain complexity but also the performance-optimization complexity. HERCULES is a code transformation system designed to help the scientist to separate the two concerns, which improves code maintenance, and facilitates performance optimization. The system combines three technologies, code patterns, transformation scripts and compiler plugins, to provide the scientist with an environment to quickly implement code transformations that suit his needs. Unlike existing code optimization tools, HERCULES is unique in its focus on user-level accessibility. In this paper we discuss the design, implementation and an initial evaluation of HERCULES.
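
    A toy illustration of pattern-driven transformation (Python, with a regular-expression "pattern" standing in for HERCULES' compiler-plugin machinery; the directive syntax is invented for the example): a tiling annotation is matched and the following loop header is rewritten, leaving the science code itself untouched.

      import re

      source = """
      !$HERC tile(32)
      do i = 1, n
         a(i) = b(i) + c(i)
      end do
      """

      def apply_tiling(src):
          # pattern: a directive followed immediately by a simple "do" loop header
          pattern = re.compile(r"[ \t]*!\$HERC tile\((\d+)\)[ \t]*\n(\s*)do i = 1, n")
          # rewrite the header into a tiled loop nest (the extra closing "end do"
          # is omitted in this toy; a real tool would rewrite the whole nest)
          repl = r"\2do ii = 1, n, \1\n\2do i = ii, min(ii + \1 - 1, n)"
          return pattern.sub(repl, src)

      print(apply_tiling(source))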

  8. Emodiversity and the emotional ecosystem.

    PubMed

    Quoidbach, Jordi; Gruber, June; Mikolajczak, Moïra; Kogan, Alexsandr; Kotsou, Ilios; Norton, Michael I

    2014-12-01

    [Correction Notice: An Erratum for this article was reported in Vol 143(6) of Journal of Experimental Psychology: General (see record 2014-49316-001). There is a color coding error in Figure 2. The correct color coding is explained in the erratum.] Bridging psychological research exploring emotional complexity and research in the natural sciences on the measurement of biodiversity, we introduce--and demonstrate the benefits of--emodiversity: the variety and relative abundance of the emotions that humans experience. Two cross-sectional studies across more than 37,000 respondents demonstrate that emodiversity is an independent predictor of mental and physical health--such as decreased depression and doctor's visits--over and above mean levels of positive and negative emotion. These results remained robust after controlling for gender, age, and the 5 main dimensions of personality. Emodiversity is a practically important and previously unidentified metric for assessing the health of the human emotional ecosystem. PsycINFO Database Record (c) 2014 APA, all rights reserved.
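
    As an illustration of how such an index can be computed, the sketch below uses a Shannon-entropy-style diversity over reported emotion counts; this is one common operationalization of diversity and is not claimed to be the authors' exact formula.

      import math

      def emodiversity(counts):
          """Shannon-type diversity over the relative abundance of reported emotions."""
          total = sum(counts)
          shares = [c / total for c in counts if c > 0]
          return -sum(p * math.log(p) for p in shares)

      # counts of how often each of six emotions was reported by one respondent
      print(emodiversity([10, 9, 8, 7, 6, 5]))   # varied and even -> higher score
      print(emodiversity([40, 1, 1, 1, 1, 1]))   # dominated by one emotion -> lower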

  9. The eukaryotic genome is structurally and functionally more like a social insect colony than a book.

    PubMed

    Qiu, Guo-Hua; Yang, Xiaoyan; Zheng, Xintian; Huang, Cuiqin

    2017-11-01

    Traditionally, the genome has been described as the 'book of life'. However, the metaphor of a book may not reflect the dynamic nature of the structure and function of the genome. In the eukaryotic genome, the number of centrally located protein-coding sequences is relatively constant across species, but the amount of noncoding DNA increases considerably with the increase of organismal evolutional complexity. Therefore, it has been hypothesized that the abundant peripheral noncoding DNA protects the genome and the central protein-coding sequences in the eukaryotic genome. Upon comparison with the habitation, sociality and defense mechanisms of a social insect colony, it is found that the genome is similar to a social insect colony in various aspects. A social insect colony may thus be a better metaphor than a book to describe the spatial organization and physical functions of the genome. The potential implications of the metaphor are also discussed.

  10. Computational analysis of Variable Thrust Engine (VTE) performance

    NASA Technical Reports Server (NTRS)

    Giridharan, M. G.; Krishnan, A.; Przekwas, A. J.

    1993-01-01

    The Variable Thrust Engine (VTE) of the Orbital Maneuvering Vehicle (OMV) uses a hypergolic propellant combination of Monomethyl Hydrazine (MMH) and Nitrogen Tetroxide (NTO) as fuel and oxidizer, respectively. The performance of the VTE depends on a number of complex interacting phenomena such as atomization, spray dynamics, vaporization, turbulent mixing, convective/radiative heat transfer, and hypergolic combustion. This study involved the development of a comprehensive numerical methodology to facilitate detailed analysis of the VTE. An existing Computational Fluid Dynamics (CFD) code was extensively modified to include the following models: a two-liquid, two-phase Eulerian-Lagrangian spray model; a chemical equilibrium model; and a discrete ordinate radiation heat transfer model. The modified code was used to conduct a series of simulations to assess the effects of various physical phenomena and boundary conditions on the VTE performance. The details of the models and the results of the simulations are presented.

  11. Enhancements to TetrUSS for NASA Constellation Program

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Frink, Neal T.; Abdol-Hamid, Khaled S.; Samareh, Jamshid A.; Parlete, Edward B.; Taft, James R.

    2011-01-01

    The NASA Constellation program is utilizing Computational Fluid Dynamics (CFD) predictions for generating aerodynamic databases and design loads for the Ares I, Ares I-X, and Ares V launch vehicles and for aerodynamic databases for the Orion crew exploration vehicle and its launch abort system configuration. This effort presents several challenges to applied aerodynamicists due to complex geometries and flow physics, as well as from the juxtaposition of short schedule program requirements with high fidelity CFD simulations. NASA TetrUSS codes (GridTool/VGRID/USM3D) have been making extensive contributions in this effort. This paper will provide an overview of several enhancements made to the various elements of TetrUSS suite of codes. Representative TetrUSS solutions for selected Constellation program elements will be shown. Best practices guidelines and scripting developed for generating TetrUSS solutions in a production environment will also be described.

  12. Internet calculations of thermodynamic properties of substances: Some problems and results

    NASA Astrophysics Data System (ADS)

    Ustyuzhanin, E. E.; Ochkov, V. F.; Shishakov, V. V.; Rykov, S. V.

    2016-11-01

    Internet resources (databases, web sites, and others) on the thermodynamic properties R = (p, T, s, ...) of technologically important substances are analyzed. These databases, put online by a number of organizations (the Joint Institute for High Temperatures of the Russian Academy of Sciences, Standartinform, the National Institute of Standards and Technology (USA), the Institute for Thermal Physics of the Siberian Branch of the Russian Academy of Sciences, etc.), are investigated. Software codes are developed in this work in the form of "client functions" with the following characteristics: (i) they are placed on a remote server, and (ii) they serve as open interactive Internet resources. A client can use them to calculate the R properties of substances. "Complex client functions" are also considered; they are focused on jointly using (i) software codes developed for the design of power plants (PP) and (ii) client functions that calculate the R properties of working fluids for PP.

  13. User-Defined Data Distributions in High-Level Programming Languages

    NASA Technical Reports Server (NTRS)

    Diaconescu, Roxana E.; Zima, Hans P.

    2006-01-01

    One of the characteristic features of today's high performance computing systems is a physically distributed memory. Efficient management of locality is essential for meeting key performance requirements for these architectures. The standard technique for dealing with this issue has involved the extension of traditional sequential programming languages with explicit message passing, in the context of a processor-centric view of parallel computation. This has resulted in complex and error-prone assembly-style codes in which algorithms and communication are inextricably interwoven. This paper presents a high-level approach to the design and implementation of data distributions. Our work is motivated by the need to improve the current parallel programming methodology by introducing a paradigm supporting the development of efficient and reusable parallel code. This approach is currently being implemented in the context of a new programming language called Chapel, which is designed in the HPCS project Cascade.
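
    A minimal sketch of a user-defined data distribution (Python for illustration, not Chapel syntax): a block distribution that maps a global index space onto "locales", the locality abstraction the paper argues should be expressible at a high level rather than hand-coded with message passing.

      class BlockDist:
          def __init__(self, n, num_locales):
              self.n, self.p = n, num_locales
              self.block = -(-n // num_locales)          # ceiling division

          def owner(self, i):
              """Locale that owns global element i."""
              return i // self.block

          def local_range(self, locale):
              """Contiguous slice of the global index space held by one locale."""
              lo = locale * self.block
              return range(lo, min(lo + self.block, self.n))

      dist = BlockDist(n=1000, num_locales=4)
      print(dist.owner(742))            # -> 2
      print(dist.local_range(3))        # -> range(750, 1000)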

  14. Cloud Computing for Complex Performance Codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin

    This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 was to demonstrate that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  15. Large calculation of the flow over a hypersonic vehicle using a GPU

    NASA Astrophysics Data System (ADS)

    Elsen, Erich; LeGresley, Patrick; Darve, Eric

    2008-12-01

    Graphics processing units are capable of impressive computing performance up to 518 Gflops peak performance. Various groups have been using these processors for general purpose computing; most efforts have focussed on demonstrating relatively basic calculations, e.g. numerical linear algebra, or physical simulations for visualization purposes with limited accuracy. This paper describes the simulation of a hypersonic vehicle configuration with detailed geometry and accurate boundary conditions using the compressible Euler equations. To the authors' knowledge, this is the most sophisticated calculation of this kind in terms of complexity of the geometry, the physical model, the numerical methods employed, and the accuracy of the solution. The Navier-Stokes Stanford University Solver (NSSUS) was used for this purpose. NSSUS is a multi-block structured code with a provably stable and accurate numerical discretization which uses a vertex-based finite-difference method. A multi-grid scheme is used to accelerate the solution of the system. Based on a comparison of the Intel Core 2 Duo and NVIDIA 8800GTX, speed-ups of over 40× were demonstrated for simple test geometries and 20× for complex geometries.

  16. The small stellated dodecahedron code and friends.

    PubMed

    Conrad, J; Chamberland, C; Breuckmann, N P; Terhal, B M

    2018-07-13

    We explore a distance-3 homological CSS quantum code, namely the small stellated dodecahedron code, for dense storage of quantum information and we compare its performance with the distance-3 surface code. The data and ancilla qubits of the small stellated dodecahedron code can be located on the edges and vertices, respectively, of a small stellated dodecahedron, making this code suitable for three-dimensional connectivity. This code encodes eight logical qubits into 30 physical qubits (plus 22 ancilla qubits for parity check measurements) in contrast with one logical qubit into nine physical qubits (plus eight ancilla qubits) for the surface code. We develop fault-tolerant parity check circuits and a decoder for this code, allowing us to numerically assess the circuit-based pseudo-threshold. This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'. © 2018 The Authors.
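
    The storage-density comparison quoted above can be summarized as an encoding rate (our arithmetic from the numbers in the abstract, counting data qubits only):

      \[
      \frac{k}{n}\bigg|_{\text{small stellated dodecahedron}} = \frac{8}{30} \approx 0.27,
      \qquad
      \frac{k}{n}\bigg|_{\text{distance-3 surface code}} = \frac{1}{9} \approx 0.11 .
      \]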

  17. Fast parametric relationships for the large-scale reservoir simulation of mixed CH4-CO2 gas hydrate systems

    DOE PAGES

    Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.

    2017-03-27

    A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed-hydrate systems is not yet a fully solved problem. Limited quantitative laboratory data leads to the use of various ab initio, statistical mechanical, or other mathematical representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with a large number of grid elements, 3D systems, or systems with complex geometric configurations. In this paper, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. Finally, the mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.

  18. Fast parametric relationships for the large-scale reservoir simulation of mixed CH4-CO2 gas hydrate systems

    NASA Astrophysics Data System (ADS)

    Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.

    2017-06-01

    A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed-hydrate systems is not yet a fully solved problem. Limited quantitative laboratory data leads to the use of various ab initio, statistical mechanical, or other mathematical representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with a large number of grid elements, 3D systems, or systems with complex geometric configurations. In this work, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. The mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.

  19. Fast parametric relationships for the large-scale reservoir simulation of mixed CH4-CO2 gas hydrate systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.

    A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed-hydrate systems is not yet a fully solved problem. Limited quantitative laboratory data leads to the use of various ab initio, statistical mechanical, or other mathematical representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with a large number of grid elements, 3D systems, or systems with complex geometric configurations. In this paper, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. Finally, the mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.

  20. Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacFarlane, Joseph J.; Golovkin, I. E.; Woodruff, P. R.

    2009-08-07

    This Final Report summarizes work performed under DOE STTR Phase II Grant No. DE-FG02-05ER86258 during the project period from August 2006 to August 2009. The project, “Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments,” was led by Prism Computational Sciences (Madison, WI), and involved collaboration with subcontractors University of Nevada-Reno and Voss Scientific (Albuquerque, NM). In this project, we have: (1) developed and implemented a multi-dimensional, multi-frequency radiation transport model in the LSP hybrid fluid-PIC (particle-in-cell) code [1,2]; (2) updated the LSP code to support the use of accurate equation-of-state (EOS) tables generated by Prism’s PROPACEOS [3] code to compute more accurate temperatures in high energy density physics (HEDP) plasmas; (3) updated LSP to support the use of Prism’s multi-frequency opacity tables; (4) generated equation of state and opacity data for LSP simulations for several materials being used in plasma jet experimental studies; (5) developed and implemented parallel processing techniques for the radiation physics algorithms in LSP; (6) benchmarked the new radiation transport and radiation physics algorithms in LSP and compared simulation results with analytic solutions and results from numerical radiation-hydrodynamics calculations; (7) performed simulations using Prism radiation physics codes to address issues related to radiative cooling and ionization dynamics in plasma jet experiments; (8) performed simulations to study the effects of radiation transport and radiation losses due to electrode contaminants in plasma jet experiments; (9) updated the LSP code to generate output using NetCDF to provide a better, more flexible interface to SPECT3D [4] in order to post-process LSP output; (10) updated the SPECT3D code to better support the post-processing of large-scale 2-D and 3-D datasets generated by simulation codes such as LSP; (11) updated atomic physics modeling to provide for more comprehensive and accurate atomic databases that feed into the radiation physics modeling (spectral simulations and opacity tables); and (12) developed polarization spectroscopy modeling techniques suitable for diagnosing energetic particle characteristics in HEDP experiments. A description of these items is provided in this report. The above efforts lay the groundwork for utilizing the LSP and SPECT3D codes in providing simulation support for DOE-sponsored HEDP experiments, such as plasma jet and fast ignition physics experiments. We believe that taken together, the LSP and SPECT3D codes have unique capabilities for advancing our understanding of the physics of these HEDP plasmas. Based on conversations early in this project with our DOE program manager, Dr. Francis Thio, our efforts emphasized developing radiation physics and atomic modeling capabilities that can be utilized in the LSP PIC code, and performing radiation physics studies for plasma jets. A relatively minor component focused on the development of methods to diagnose energetic particle characteristics in short-pulse laser experiments related to fast ignition physics. The period of performance for the grant was extended by one year to August 2009 with a one-year no-cost extension, at the request of subcontractor University of Nevada-Reno.

  1. Study of coherent synchrotron radiation effects by means of a new simulation code based on the non-linear extension of the operator splitting method

    NASA Astrophysics Data System (ADS)

    Dattoli, G.; Migliorati, M.; Schiavi, A.

    2007-05-01

    Coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of instabilities due to CSR demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of these types of problems should be fast and reliable, conditions that are rarely achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient in treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treating CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of the non-linear contribution due to wake field effects. The proposed solution method exploits an algebraic technique that uses exponential operators. We show that the integration procedure is capable of reproducing the onset of instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed.

  2. Implicit time-integration method for simultaneous solution of a coupled non-linear system

    NASA Astrophysics Data System (ADS)

    Watson, Justin Kyle

    Historically, large physical problems have been divided into smaller problems based on the physics involved. This is no different in reactor safety analysis. The analysis of a nuclear reactor for design basis accidents is performed by a handful of computer codes, each solving a portion of the problem. The reactor thermal hydraulic response to an event is determined using a system code like the TRAC RELAP Advanced Computational Engine (TRACE). The core power response to the same accident scenario is determined using a core physics code like the Purdue Advanced Core Simulator (PARCS). Containment response to the reactor depressurization in a Loss Of Coolant Accident (LOCA) type event is calculated by a separate code. Sub-channel analysis is performed with yet another computer code. This is just a sample of the computer codes used to solve the overall problem of nuclear reactor design basis accidents. Traditionally, each of these codes operates independently of the others, using only the global results from one calculation as boundary conditions to another. Industry's drive to uprate reactor power has motivated analysts to move from a conservative approach to design basis accidents towards a best estimate method. To achieve a best estimate calculation, efforts have been aimed at coupling the individual physics models to improve the accuracy of the analysis and reduce margins. The current coupling techniques are sequential in nature: during a calculation time-step, data are passed between the two codes, and the individual codes solve their portion of the calculation and converge to a solution before the calculation is allowed to proceed to the next time-step. This thesis presents a fully implicit method of simultaneously solving the neutron balance equations, heat conduction equations and the constitutive fluid dynamics equations. It discusses the problems involved in coupling different physics phenomena within multi-physics codes and presents a solution to these problems. The thesis also outlines the basic concepts behind the nodal balance equations, heat transfer equations and the thermal hydraulic equations, which will be coupled to form a fully implicit nonlinear system of equations. The coupling of separate physics models to solve a larger problem and improve the accuracy and efficiency of a calculation is not a new idea; however, implementing it in an implicit manner and solving the system simultaneously is. The application to reactor safety codes is also new and has not been done with thermal hydraulics and neutronics codes on realistic applications in the past. The coupling technique described in this thesis is applicable to other similar coupled thermal hydraulic and core physics reactor safety codes. This technique is demonstrated using coupled input decks to show that the system is solved correctly, and is then verified using two derivative test problems based on international benchmark problems: the OECD/NRC Three Mile Island (TMI) Main Steam Line Break (MSLB) problem (representative of pressurized water reactor analysis) and the OECD/NRC Peach Bottom (PB) Turbine Trip (TT) benchmark (representative of boiling water reactor analysis).
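
    The contrast the thesis draws, between passing boundary data sequentially and assembling one nonlinear system solved at each time-step, can be illustrated with a deliberately tiny sketch. The two-equation “power/temperature” model below is invented for illustration and is not the thesis's equation set; it only shows a backward-Euler residual for both fields being driven to zero simultaneously by Newton's method.

      # Illustrative sketch only: a toy two-field "coupled" system solved implicitly
      # with Newton's method, standing in for the simultaneous neutronics/
      # thermal-hydraulics solve described above. The model equations are invented.
      import numpy as np

      def residual(u, u_old, dt):
          """Backward-Euler residual for a toy coupled system:
             dP/dt = (rho(T) - beta) * P      (power with temperature feedback)
             dT/dt = a * P - b * (T - T_inf)  (temperature)"""
          P, T = u
          P_old, T_old = u_old
          rho = -1e-4 * (T - 600.0)          # invented feedback coefficient
          r1 = (P - P_old) / dt - (rho - 1e-3) * P
          r2 = (T - T_old) / dt - (1e-2 * P - 0.5 * (T - 550.0))
          return np.array([r1, r2])

      def newton_step(u_old, dt, tol=1e-10, max_iter=20):
          """Advance one time-step by solving the fully coupled residual with Newton."""
          u = u_old.copy()
          for _ in range(max_iter):
              F = residual(u, u_old, dt)
              if np.linalg.norm(F) < tol:
                  break
              # Finite-difference Jacobian of the coupled residual
              J = np.zeros((2, 2))
              eps = 1e-7
              for j in range(2):
                  du = np.zeros(2)
                  du[j] = eps
                  J[:, j] = (residual(u + du, u_old, dt) - F) / eps
              u = u - np.linalg.solve(J, F)
          return u

      u = np.array([1.0, 600.0])             # initial power (arbitrary units), temperature (K)
      for step in range(10):
          u = newton_step(u, dt=0.1)
      print("P, T after 1 s:", u)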

  3. Tailored Codes for Small Quantum Memories

    NASA Astrophysics Data System (ADS)

    Robertson, Alan; Granade, Christopher; Bartlett, Stephen D.; Flammia, Steven T.

    2017-12-01

    We demonstrate that small quantum memories, realized via quantum error correction in multiqubit devices, can benefit substantially by choosing a quantum code that is tailored to the relevant error model of the system. For a biased noise model, with independent bit and phase flips occurring at different rates, we show that a single code greatly outperforms the well-studied Steane code across the full range of parameters of the noise model, including for unbiased noise. In fact, this tailored code performs almost optimally when compared with 10 000 randomly selected stabilizer codes of comparable experimental complexity. Tailored codes can even outperform the Steane code with realistic experimental noise, and without any increase in the experimental complexity, as we demonstrate by comparison with the observed error model of a recent seven-qubit trapped ion experiment.

  4. Coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    Analysis of quantum error correcting (QEC) codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. We present analytic results for the logical error as a function of concatenation level and code distance for coherent errors under the repetition code. For data-only coherent errors, we find that the logical error is partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels or at fewer than ɛ^{-(d-1)} error correction cycles. Here ɛ << 1 is the rotation angle error per cycle for a single physical qubit and d is the code distance. These results support the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation. We discuss extensions to imperfect syndrome extraction and implications for general QEC.
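
    As a rough numerical reading of that bound (our arithmetic, not a result from the paper): for a per-cycle rotation angle error of ɛ = 10^{-2} and code distance d = 5,

      \[
      \varepsilon^{-(d-1)} = \left(10^{-2}\right)^{-4} = 10^{8}\ \text{error-correction cycles},
      \]

    so the coherent contribution is negligible over any realistic run length at this ɛ and d.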

  5. Solar Proton Transport within an ICRU Sphere Surrounded by a Complex Shield: Combinatorial Geometry

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2015-01-01

    The 3DHZETRN code, with improved neutron and light ion (Z ≤ 2) transport procedures, was recently developed and compared to Monte Carlo (MC) simulations using simplified spherical geometries. It was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in general combinatorial geometry. A more complex shielding structure with internal parts surrounding a tissue sphere is considered and compared against MC simulations. It is shown that even in the more complex geometry, 3DHZETRN agrees well with the MC codes and maintains a high degree of computational efficiency.

  6. Fast ITTBC using pattern code on subband segmentation

    NASA Astrophysics Data System (ADS)

    Koh, Sung S.; Kim, Hanchil; Lee, Kooyoung; Kim, Hongbin; Jeong, Hun; Cho, Gangseok; Kim, Chunghwa

    2000-06-01

    Iterated Transformation Theory-Based Coding (ITTBC) suffers from very high computational complexity in the encoding phase, due to its exhaustive search. In this paper, our proposed image coding algorithm preprocesses an original image into a subband segmentation image by wavelet transform before image coding, in order to reduce encoding complexity. A similar block is searched for using the 24 block pattern codes, which are coded from the edge information in the image block, on the domain pool of the subband segmentation. As a result, numerical data show that the encoding time of the proposed coding method can be reduced to 98.82% of that of Jacquin's method, while the loss in quality relative to Jacquin's is about 0.28 dB in PSNR, which is visually negligible.

  7. User's Manual for PCSMS (Parallel Complex Sparse Matrix Solver). Version 1.

    NASA Technical Reports Server (NTRS)

    Reddy, C. J.

    2000-01-01

    PCSMS (Parallel Complex Sparse Matrix Solver) is a computer code written to make use of existing real sparse direct solvers to solve complex, sparse matrix linear equations. PCSMS converts complex matrices into real matrices and uses real, sparse direct matrix solvers to factor and solve the real matrices. The solution vector is reconverted to complex numbers. Though this utility is written for Silicon Graphics (SGI) real sparse matrix solution routines, it is general in nature and can be easily modified to work with any real sparse matrix solver. The User's Manual is written to make the user acquainted with the installation and operation of the code. Driver routines are given to aid users in integrating PCSMS routines into their own codes.
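
    The underlying equivalence is standard: with M = A + iB and z = x + iy, the complex system M z = b is equivalent to a real system of twice the size. A minimal NumPy/SciPy sketch of that conversion is shown below; it stands in for, and is not taken from, the PCSMS routines, and uses SciPy's sparse LU factorization in place of the SGI solvers.

      # Sketch of the complex-to-real conversion described above, using SciPy's
      # real sparse LU factorization in place of the vendor solver PCSMS targets.
      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      def solve_complex_as_real(M, rhs):
          """Solve M z = rhs for complex sparse M via the equivalent real system
             [[A, -B], [B, A]] [x; y] = [Re(rhs); Im(rhs)], where M = A + iB."""
          M = sp.csr_matrix(M)
          A = sp.csr_matrix((M.data.real, M.indices, M.indptr), shape=M.shape)
          B = sp.csr_matrix((M.data.imag, M.indices, M.indptr), shape=M.shape)
          K = sp.bmat([[A, -B], [B, A]], format="csc")
          b = np.concatenate([rhs.real, rhs.imag])
          sol = spla.splu(K).solve(b)
          n = M.shape[0]
          return sol[:n] + 1j * sol[n:]          # reconvert to complex numbers

      # Usage example with a small random, well-conditioned complex sparse matrix
      n = 200
      rng = np.random.default_rng(0)
      A = sp.random(n, n, density=0.05, format="csr", random_state=rng) + 10 * sp.identity(n, format="csr")
      B = sp.random(n, n, density=0.05, format="csr", random_state=rng)
      M = (A + 1j * B).tocsr()
      rhs = rng.standard_normal(n) + 1j * rng.standard_normal(n)
      z = solve_complex_as_real(M, rhs)
      print("residual:", np.linalg.norm(M @ z - rhs))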

  8. Pre-shot simulations of far-field ground motion for the Source Physics Experiment (SPE) Explosions at the Climax Stock, Nevada National Security Site: SPE2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mellors, R J; Rodgers, A; Walter, W

    2011-10-18

    The Source Physics Experiment (SPE) is planning a 1000 kg (TNT equivalent) shot (SPE2) at the Nevada National Security Site (NNSS) in a granite borehole at a depth (canister centroid) of 45 meters. This shot follows an earlier shot of 100 kg in the same borehole at a depth of 60 m. Surrounding the shotpoint is an extensive array of seismic sensors arrayed in 5 radial lines extending out 2 km to the north and east and approximately 10-15 to the south and west. Prior to SPE1, simulations using a finite difference code and a 3D numerical model based on the geologic setting were conducted, which predicted higher amplitudes to the south and east in the alluvium of Yucca Flat along with significant energy on the transverse components caused by scattering within the 3D volume along with some contribution by topographic scattering. Observations from the SPE1 shot largely confirmed these predictions, although the ratio of transverse energy relative to the vertical and radial components was in general larger than predicted. A new set of simulations has been conducted for the upcoming SPE2 shot. These include improvements to the velocity model based on SPE1 observations as well as new capabilities added to the simulation code. The most significant is the addition of a new source model within the finite difference code, using the predicted ground velocities from a hydrodynamic code (GEODYN) as driving conditions on the boundaries of a cube embedded within WPP, which provides a more sophisticated source modeling capability linked directly to source site materials (e.g. granite) and the type and size of the source. Two sets of SPE2 simulations are conducted, one with a GEODYN source and 3D complex media (no topography, node spacing of 5 m) and one with a standard isotropic pre-defined time function (3D complex media with topography, node spacing of 5 m). Results were provided as time series at specific points corresponding to sensor locations for both translational (x,y,z) and rotational components. Estimates of spectral scaling for SPE2 are provided using a modified version of the Mueller-Murphy model. An estimate of expected aftershock probabilities was also provided, based on the methodology of Ford and Walter [2010].

  9. A Subband Coding Method for HDTV

    NASA Technical Reports Server (NTRS)

    Chung, Wilson; Kossentini, Faouzi; Smith, Mark J. T.

    1995-01-01

    This paper introduces a new HDTV coder based on motion compensation, subband coding, and high order conditional entropy coding. The proposed coder exploits the temporal and spatial statistical dependencies inherent in the HDTV signal by using intra- and inter-subband conditioning for coding both the motion coordinates and the residual signal. The new framework provides an easy way to control the system complexity and performance, and inherently supports multiresolution transmission. Experimental results show that the coder outperforms MPEG-2, while still maintaining relatively low complexity.

  10. GPELab, a Matlab toolbox to solve Gross-Pitaevskii equations II: Dynamics and stochastic simulations

    NASA Astrophysics Data System (ADS)

    Antoine, Xavier; Duboscq, Romain

    2015-08-01

    GPELab is a free Matlab toolbox for modeling and numerically solving large classes of systems of Gross-Pitaevskii equations that arise in the physics of Bose-Einstein condensates. The aim of this second paper, which follows (Antoine and Duboscq, 2014), is to first present the various pseudospectral schemes available in GPELab for computing the deterministic and stochastic nonlinear dynamics of Gross-Pitaevskii equations (Antoine, et al., 2013). Next, the corresponding GPELab functions are explained in detail. Finally, some numerical examples are provided to show how the code works for the complex dynamics of BEC problems.

  11. Code Verification Capabilities and Assessments in Support of ASC V&V Level 2 Milestone #6035

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William; Budzien, Joanne Louise; Ferguson, Jim Michael

    This document provides a summary of the code verification activities supporting the FY17 Level 2 V&V milestone entitled “Deliver a Capability for V&V Assessments of Code Implementations of Physics Models and Numerical Algorithms in Support of Future Predictive Capability Framework Pegposts.” The physics validation activities supporting this milestone are documented separately. The objectives of this portion of the milestone are: 1) Develop software tools to support code verification analysis; 2) Document standard definitions of code verification test problems; and 3) Perform code verification assessments (focusing on error behavior of algorithms). This report and a set of additional standalone documents serve as the compilation of results demonstrating accomplishment of these objectives.

  12. MODTRAN6: a major upgrade of the MODTRAN radiative transfer code

    NASA Astrophysics Data System (ADS)

    Berk, Alexander; Conforti, Patrick; Kennett, Rosemary; Perkins, Timothy; Hawes, Frederick; van den Bosch, Jeannette

    2014-06-01

    The MODTRAN6 radiative transfer (RT) code is a major advancement over earlier versions of the MODTRAN atmospheric transmittance and radiance model. This version of the code incorporates modern software architecture including an application programming interface, enhanced physics features including a line-by-line algorithm, a supplementary physics toolkit, and new documentation. The application programming interface has been developed for ease of integration into user applications. The MODTRAN code has been restructured towards a modular, object-oriented architecture to simplify upgrades as well as facilitate integration with other developers' codes. MODTRAN now includes a line-by-line algorithm for high resolution RT calculations as well as coupling to optical scattering codes for easy implementation of custom aerosols and clouds.

  13. Optimization and parallelization of the thermal–hydraulic subchannel code CTF for high-fidelity multi-physics applications

    DOE PAGES

    Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.

    2014-11-23

    This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis.

  14. Statistical physics inspired energy-efficient coded-modulation for optical communications.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2012-04-15

    Because Shannon's entropy can be obtained by Stirling's approximation of thermodynamics entropy, the statistical physics energy minimization methods are directly applicable to the signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose the discrete-time implementation of D-dimensional transceiver and corresponding EE polarization-division multiplexed system. © 2012 Optical Society of America
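
    A toy sketch of the energy-minimization idea (not the authors' algorithm): treat the constellation points as particles with a repulsive pairwise potential plus a confining term that bounds average symbol energy, and relax by gradient descent. The potential, softening, step size, and constellation size below are arbitrary choices made for illustration.

      # Illustrative sketch (not the authors' algorithm): design a 2-D signal
      # constellation by gradient descent on a pairwise "energy", mimicking the
      # statistical-physics energy-minimization idea described above.
      import numpy as np

      rng = np.random.default_rng(0)
      M = 16                                   # constellation size (assumption)
      pts = rng.standard_normal((M, 2))

      def energy_grad(p, soft=1e-2):
          """Gradient of sum_{i<j} 1/|p_i - p_j| (softened) plus sum_i |p_i|^2."""
          g = 2.0 * p                          # confinement keeps average power bounded
          for i in range(len(p)):
              d = p[i] - p                     # differences to all points
              r2 = np.sum(d * d, axis=1) + soft   # softening avoids blow-up at r -> 0
              r2[i] = np.inf                   # skip self-interaction
              g[i] += -np.sum(d / r2[:, None] ** 1.5, axis=0)
          return g

      for step in range(5000):
          pts -= 1e-3 * energy_grad(pts)
      pts /= np.sqrt(np.mean(np.sum(pts ** 2, axis=1)))   # normalize average symbol energy
      print(pts)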

  15. Space Applications of the FLUKA Monte-Carlo Code: Lunar and Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Anderson, V.; Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Elkhayari, N.; Empl, A.; Fasso, A.; Ferrari, A.; hide

    2004-01-01

    NASA has recognized the need for making additional heavy-ion collision measurements at the U.S. Brookhaven National Laboratory in order to support further improvement of several particle physics transport-code models for space exploration applications. FLUKA has been identified as one of these codes and we will review the nature and status of this investigation as it relates to high-energy heavy-ion physics.

  16. A Continuum Diffusion Model for Viscoelastic Materials

    DTIC Science & Technology

    1988-11-01

    ... these studies, which involved experimental, analytical, and materials science aspects, were conducted by researchers in the fields of physical and ... thermodynamics, with irreversibility stemming from the foregoing variables through "growth laws" that correspond to viscous resistance. The physical ageing of

  17. General Electromagnetic Model for the Analysis of Complex Systems (GEMACS) Computer Code Documentation (Version 3). Volume 3, Part 4.

    DTIC Science & Technology

    1983-09-01

    General Electromagnetic Model for the Analysis of Complex Systems (GEMACS) Computer Code Documentation (Version 3). Final Technical Report, February 81 - July 83. ... the t1 and t2 directions on the source patch. METHOD: The electric field at a segment observation point due to the source patch j is given by ...

  18. Metrics and tools for consistent cohort discovery and financial analyses post-transition to ICD-10-CM.

    PubMed

    Boyd, Andrew D; Li, Jianrong John; Kenost, Colleen; Joese, Binoy; Yang, Young Min; Kalagidis, Olympia A; Zenku, Ilir; Saner, Donald; Bahroos, Neil; Lussier, Yves A

    2015-05-01

    In the United States, International Classification of Disease Clinical Modification (ICD-9-CM, the ninth revision) diagnosis codes are commonly used to identify patient cohorts and to conduct financial analyses related to disease. In October 2015, the healthcare system of the United States will transition to ICD-10-CM (the tenth revision) diagnosis codes. One challenge posed to clinical researchers and other analysts is conducting diagnosis-related queries across datasets containing both coding schemes. Further, healthcare administrators will manage growth, trends, and strategic planning with these dually-coded datasets. The majority of the ICD-9-CM to ICD-10-CM translations are complex and nonreciprocal, creating convoluted representations and meanings. Similarly, mapping back from ICD-10-CM to ICD-9-CM is equally complex, yet different from mapping forward, as relationships are likewise nonreciprocal. Indeed, 10 of the 21 top clinical categories are complex as 78% of their diagnosis codes are labeled as "convoluted" by our analyses. Analysis and research related to external causes of morbidity, injury, and poisoning will face the greatest challenges due to 41 745 (90%) convolutions and a decrease in the number of codes. We created a web portal tool and translation tables to list all ICD-9-CM diagnosis codes related to the specific input of ICD-10-CM diagnosis codes and their level of complexity: "identity" (reciprocal), "class-to-subclass," "subclass-to-class," "convoluted," or "no mapping." These tools provide guidance on ambiguous and complex translations to reveal where reports or analyses may be challenging to impossible. Web portal: http://www.lussierlab.org/transition-to-ICD9CM/ and tables annotated with levels of translation complexity: http://www.lussierlab.org/publications/ICD10to9. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
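
    A small sketch of the kind of classification the authors describe is given below: given forward (ICD-9-CM to ICD-10-CM) and backward mapping tables, each code's translation is labeled "identity", "class-to-subclass", "subclass-to-class", "convoluted", or "no mapping". The decision rules and the toy mappings are simplified illustrations; the authors' published web portal and tables should be used for real analyses.

      # Illustrative sketch only: classify ICD-9-CM -> ICD-10-CM translation complexity
      # from forward and backward mapping tables. The example mappings are invented;
      # the authors' web portal and translation tables provide the real data.
      def classify(icd9, forward, backward):
          """Label the translation complexity of one ICD-9-CM code."""
          targets = forward.get(icd9, set())
          if not targets:
              return "no mapping"
          if len(targets) == 1:
              target = next(iter(targets))
              if backward.get(target, set()) == {icd9}:
                  return "identity"          # one-to-one, reciprocal
              return "subclass-to-class"     # several ICD-9 codes collapse into one ICD-10 code
          # icd9 maps to several ICD-10 codes
          if all(backward.get(t, set()) == {icd9} for t in targets):
              return "class-to-subclass"     # one ICD-9 code splits cleanly
          return "convoluted"                # overlapping, non-reciprocal relationships

      # Invented toy mappings for illustration only
      forward = {"250.00": {"E11.9"}, "805.2": {"S22.000A", "S22.001A"}}
      backward = {"E11.9": {"250.00"}, "S22.000A": {"805.2"}, "S22.001A": {"805.2", "805.3"}}
      for code in ["250.00", "805.2", "999.99"]:
          print(code, "->", classify(code, forward, backward))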

  19. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR CODING AND CODING VERIFICATION (HAND ENTRY) (UA-D-14.0)

    EPA Science Inventory

    The purpose of this SOP is to define the coding strategy for coding and coding verification of hand-entered data. It applies to the coding of all physical forms, especially those coded by hand. The strategy was developed for use in the Arizona NHEXAS project and the "Border" st...

  20. Exploring Physics with Computer Animation and PhysGL

    NASA Astrophysics Data System (ADS)

    Bensky, T. J.

    2016-10-01

    This book shows how the web-based PhysGL programming environment (http://physgl.org) can be used to teach and learn elementary mechanics (physics) using simple coding exercises. The book's theme is that the lessons encountered in such a course can be used to generate physics-based animations, providing students with compelling and self-made visuals to aid their learning. Topics presented are parallel to those found in a traditional physics text, making for straightforward integration into a typical lecture-based physics course. Users will appreciate the ease with which compelling OpenGL-based graphics and animations can be produced using PhysGL, as well as its clean, simple language constructs. The author argues that coding should be a standard part of lower-division STEM courses, and provides many anecdotal experiences and observations that include observed benefits of the coding work.

  1. An Analysis of Naval Officer Student Academic Performance in the Operations Analysis Curriculum in Relationship to Academic Profile Codes and other Factors.

    DTIC Science & Technology

    1985-09-01

    Code 0 Physics (Calculus-Based) or Physical Science discipline ... opportunity for officers with inadequate mathematical and physical science backgrounds to establish a good math foundation to be able to qualify for a ... technical curriculum [Ref. 5: page 36]. There is also a six-week refresher available that is designed to rapidly cover the calculus and physics

  2. Three-dimensional fuel pin model validation by prediction of hydrogen distribution in cladding and comparison with experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aly, A.; Avramova, Maria; Ivanov, Kostadin

    To correctly describe and predict this hydrogen distribution, there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. High-fidelity reactor-physics codes coupled with a sub-channel code, as well as with a computational fluid dynamics (CFD) tool, have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code through a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated utilizing calculations of hydrogen distribution using models informed by data from hydrogen experiments and PIE data.

  3. The ZPIC educational code suite

    NASA Astrophysics Data System (ADS)

    Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.

    2017-10-01

    Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing, among many other areas. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to be run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter space exploration. We also invite contributions to this repository of test problems, which will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.

  4. Improving performance of DS-CDMA systems using chaotic complex Bernoulli spreading codes

    NASA Astrophysics Data System (ADS)

    Farzan Sabahi, Mohammad; Dehghanfard, Ali

    2014-12-01

    The most important goal of a spread-spectrum communication system is to protect communication signals against interference and against exploitation of the information by unintended listeners. Low probability of detection and low probability of intercept are two important parameters for increasing the performance of such a system. In Direct Sequence Code Division Multiple Access (DS-CDMA) systems, these properties are achieved by multiplying the data by spreading sequences. Chaotic sequences, with their particular properties, have numerous applications in constructing spreading codes. The use of one-dimensional Bernoulli chaotic sequences as spreading codes has been proposed in the literature previously. The main feature of such a sequence is its negative auto-correlation at lag 1, which, with proper design, increases the efficiency of communication systems based on these codes. Employing complex chaotic sequences as spreading sequences has also been discussed in several papers. In this paper, the use of two-dimensional Bernoulli chaotic sequences as spreading codes is proposed. The performance of multi-user synchronous and asynchronous DS-CDMA systems is evaluated by applying these sequences under Additive White Gaussian Noise (AWGN) and fading channels. Simulation results indicate improved performance in comparison with conventional spreading codes such as Gold codes, as well as with similar complex chaotic spreading sequences. Like one-dimensional Bernoulli chaotic sequences, the proposed sequences also have negative auto-correlation. In addition, the construction of complex sequences with lower average cross-correlation is possible with the proposed method.
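
    A minimal sketch of direct-sequence spreading with a chaotic chip sequence is shown below. It uses a generalized Bernoulli (beta-shift) map and maps the samples to ±1 chips; the paper's construction, with two-dimensional complex Bernoulli sequences and tailored correlation properties, is more elaborate, so this only illustrates the spreading/despreading idea.

      # Minimal DS-CDMA spreading/despreading sketch with a chaotic chip sequence.
      # Illustration only: the paper uses two-dimensional (complex) Bernoulli
      # sequences; here a generalized Bernoulli (beta-shift) map is used, with
      # beta = 1.99 rather than 2.0 to avoid the finite-precision collapse of the
      # exact dyadic map in floating point.
      import numpy as np

      def chaotic_sequence(x0, length, beta=1.99):
          """Iterate x_{n+1} = (beta * x_n) mod 1."""
          x = np.empty(length)
          x[0] = x0
          for n in range(1, length):
              x[n] = (beta * x[n - 1]) % 1.0
          return x

      def spread(bits, chips_per_bit, x0):
          """Spread +/-1 data bits by a +/-1 chaotic chip sequence."""
          samples = chaotic_sequence(x0, chips_per_bit * len(bits))
          code = np.where(samples < 0.5, -1.0, 1.0)
          return np.repeat(bits, chips_per_bit) * code, code

      def despread(rx, code, chips_per_bit):
          """Correlate the received signal with the chip sequence, bit by bit."""
          corr = (rx * code).reshape(-1, chips_per_bit).sum(axis=1)
          return np.sign(corr)

      bits = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
      tx, code = spread(bits, chips_per_bit=31, x0=0.3971)
      rx = tx + 0.5 * np.random.default_rng(1).standard_normal(tx.size)  # AWGN channel
      print(despread(rx, code, chips_per_bit=31))                        # recovers the bits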

  5. High Order Schemes in BATS-R-US: Is it OK to Simplify Them?

    NASA Astrophysics Data System (ADS)

    Tóth, G.; Chen, Y.; van der Holst, B.; Daldorff, L. K. S.

    2014-09-01

    We describe a number of high order schemes and their simplified variants that have been implemented into the University of Michigan global magnetohydrodynamics code BATS-R-US. We compare the various schemes with each other and the legacy 2nd order TVD scheme for various test problems and two space physics applications. We find that the simplified schemes are often quite competitive with the more complex and expensive full versions, despite the fact that the simplified versions are only high order accurate for linear systems of equations. We find that all the high order schemes require some fixes to ensure positivity in the space physics applications. On the other hand, they produce superior results as compared with the second order scheme and/or produce the same quality of solution at a much reduced computational cost.

  6. Nuclear Physics Meets the Sources of the Ultra-High Energy Cosmic Rays.

    PubMed

    Boncioli, Denise; Fedynitch, Anatoli; Winter, Walter

    2017-07-07

    The determination of the injection composition of cosmic ray nuclei within astrophysical sources requires sufficiently accurate descriptions of the source physics and the propagation - apart from controlling astrophysical uncertainties. We therefore study the implications of nuclear data and models for cosmic ray astrophysics, which involves the photo-disintegration of nuclei up to iron in astrophysical environments. We demonstrate that the impact of nuclear model uncertainties is potentially larger in environments with non-thermal radiation fields than in the cosmic microwave background. We also study the impact of nuclear models on the nuclear cascade in a gamma-ray burst radiation field, simulated at a level of complexity comparable to the most precise cosmic ray propagation code. We conclude with an isotope chart describing which information is in principle necessary to describe nuclear interactions in cosmic ray sources and propagation.

  7. SENR /NRPy + : Numerical relativity in singular curvilinear coordinate systems

    NASA Astrophysics Data System (ADS)

    Ruchlin, Ian; Etienne, Zachariah B.; Baumgarte, Thomas W.

    2018-03-01

    We report on a new open-source, user-friendly numerical relativity code package called SENR /NRPy + . Our code extends previous implementations of the BSSN reference-metric formulation to a much broader class of curvilinear coordinate systems, making it ideally suited to modeling physical configurations with approximate or exact symmetries. In the context of modeling black hole dynamics, it is orders of magnitude more efficient than other widely used open-source numerical relativity codes. NRPy + provides a Python-based interface in which equations are written in natural tensorial form and output at arbitrary finite difference order as highly efficient C code, putting complex tensorial equations at the scientist's fingertips without the need for an expensive software license. SENR provides the algorithmic framework that combines the C codes generated by NRPy + into a functioning numerical relativity code. We validate against two other established, state-of-the-art codes, and achieve excellent agreement. For the first time—in the context of moving puncture black hole evolutions—we demonstrate nearly exponential convergence of constraint violation and gravitational waveform errors to zero as the order of spatial finite difference derivatives is increased, while fixing the numerical grids at moderate resolution in a singular coordinate system. Such behavior outside the horizons is remarkable, as numerical errors do not converge to zero near punctures, and all points along the polar axis are coordinate singularities. The formulation addresses such coordinate singularities via cell-centered grids and a simple change of basis that analytically regularizes tensor components with respect to the coordinates. Future plans include extending this formulation to allow dynamical coordinate grids and bispherical-like distribution of points to efficiently capture orbiting compact binary dynamics.
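
    The code-generation idea can be sketched with plain SymPy (this is illustrative only and is not NRPy+'s actual interface): write the right-hand side symbolically, insert a finite-difference stencil for the derivative, and let the C code printer emit the kernel line.

      # Illustrative sketch of symbolic-to-C code generation (not NRPy+'s API):
      # build an expression with a 4th-order centered finite-difference stencil
      # and emit a C assignment with SymPy's code printer.
      import sympy as sp

      N = sp.Symbol("N", integer=True)          # grid size (symbolic)
      u = sp.IndexedBase("u", shape=(N,))       # grid function
      i = sp.Idx("i", N)
      dx, c = sp.symbols("dx c", positive=True)

      # 4th-order centered finite-difference stencil for du/dx at grid point i
      du_dx = (-u[i + 2] + 8 * u[i + 1] - 8 * u[i - 1] + u[i - 2]) / (12 * dx)

      # Toy right-hand side: linear advection, rhs = -c * du/dx
      rhs = -c * du_dx

      print(sp.ccode(rhs, assign_to="rhs_i"))   # emits the C kernel line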

  8. Path Toward a Unified Geometry for Radiation Transport

    NASA Technical Reports Server (NTRS)

    Lee, Kerry; Barzilla, Janet; Davis, Andrew; Zachmann

    2014-01-01

    The Direct Accelerated Geometry for Radiation Analysis and Design (DAGRAD) element of the RadWorks Project under Advanced Exploration Systems (AES) within the Space Technology Mission Directorate (STMD) of NASA will enable new designs and concepts of operation for radiation risk assessment, mitigation and protection. This element is designed to produce a solution that will allow NASA to calculate the transport of space radiation through complex computer-aided design (CAD) models using the state-of-the-art analytic and Monte Carlo radiation transport codes. Due to the inherent hazard of astronaut and spacecraft exposure to ionizing radiation in low-Earth orbit (LEO) or in deep space, risk analyses must be performed for all crew vehicles and habitats. Incorporating these analyses into the design process can minimize the mass needed solely for radiation protection. Transport of the radiation fields as they pass through shielding and body materials can be simulated using Monte Carlo techniques or described by the Boltzmann equation, which is obtained by balancing changes in particle fluxes as they traverse a small volume of material with the gains and losses caused by atomic and nuclear collisions. Deterministic codes that solve the Boltzmann transport equation, such as HZETRN [high charge and energy transport code developed by NASA Langley Research Center (LaRC)], are generally computationally faster than Monte Carlo codes such as FLUKA, GEANT4, MCNP(X) or PHITS; however, they are currently limited to transport in one dimension, which poorly represents the secondary light ion and neutron radiation fields. NASA currently uses HZETRN space radiation transport software, both because it is computationally efficient and because proven methods have been developed for using this software to analyze complex geometries. Although Monte Carlo codes describe the relevant physics in a fully three-dimensional manner, their computational costs have thus far prevented their widespread use for analysis of complex CAD models, leading to the creation and maintenance of toolkit-specific simplistic geometry models. The work presented here builds on the Direct Accelerated Geometry Monte Carlo (DAGMC) toolkit developed for use with the Monte Carlo N-Particle (MCNP) transport code. The workflow for achieving radiation transport on CAD models using MCNP and FLUKA has been demonstrated and the results of analyses on realistic spacecraft/habitats will be presented. Future work is planned that will further automate this process and enable the use of multiple radiation transport codes on identical geometry models imported from CAD. This effort will enhance the modeling tools used by NASA to accurately evaluate the astronaut space radiation risk and accurately determine the protection provided by as-designed exploration mission vehicles and habitats

  9. Digitized forensics: retaining a link between physical and digital crime scene traces using QR-codes

    NASA Astrophysics Data System (ADS)

    Hildebrandt, Mario; Kiltz, Stefan; Dittmann, Jana

    2013-03-01

    The digitization of physical traces from crime scenes in forensic investigations in effect creates a digital chain-of-custody and entrains the challenge of creating a link between the two or more representations of the same trace. In order to be forensically sound, the two security aspects of integrity and authenticity in particular need to be maintained at all times. Maintaining authenticity by technical means proves especially challenging at the boundary between the physical object and its digital representations. In this article we propose a new method of linking physical objects with their digital counterparts using two-dimensional bar codes and additional meta-data accompanying the acquired data, for integration in the conventional documentation of the collection of items of evidence (the bagging and tagging process). Using the QR code as an exemplary implementation of a bar code, together with a model of the forensic process, we also supply a means to integrate our suggested approach into forensically sound proceedings as described by Holder et al. [1]. We use digital dactyloscopy as an example of a forensic discipline where progress is currently being made by digitizing some of the processing steps. We show an exemplary demonstrator of the suggested approach using a smartphone as a mobile device for the verification of the physical trace, to extend the chain-of-custody from the physical to the digital domain. Our evaluation of the demonstrator addresses the readability of the bar code and the verification of its contents. We can read the bar code with various devices despite its limited size of 42 x 42 mm and the rather large amount of embedded data. Furthermore, the QR-code's error correction features help to recover the contents of damaged codes. Finally, our appended digital signature allows for detecting malicious manipulations of the embedded data.
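
    For a flavor of the approach, the sketch below serializes invented evidence metadata, appends an HMAC as a stand-in for the digital signature the authors append, and writes the result into a QR code with the Python qrcode package. The field names, key handling, and use of HMAC rather than a public-key signature are all assumptions made for illustration.

      # Illustrative sketch of the bagging-and-tagging link described above: evidence
      # metadata plus an authentication tag encoded into a QR code. Field names and
      # the HMAC-based tag are assumptions; the authors use a digital signature.
      import hashlib, hmac, json
      import qrcode                                # pip install qrcode[pil]

      SECRET_KEY = b"replace-with-a-real-key"      # placeholder for illustration

      record = {
          "case_id": "2013-0042",                  # invented example values
          "item_id": "latent-print-07",
          "acquired_by": "examiner-3",
          "sha256_of_scan": hashlib.sha256(b"...digitized trace data...").hexdigest(),
      }
      payload = json.dumps(record, sort_keys=True)
      tag = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()

      img = qrcode.make(json.dumps({"record": record, "tag": tag}))
      img.save("evidence_label.png")               # print and attach to the evidence bag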

  10. Solar proton exposure of an ICRU sphere within a complex structure Part I: Combinatorial geometry.

    PubMed

    Wilson, John W; Slaba, Tony C; Badavi, Francis F; Reddell, Brandon D; Bahadori, Amir A

    2016-06-01

    The 3DHZETRN code, with improved neutron and light ion (Z≤2) transport procedures, was recently developed and compared to Monte Carlo (MC) simulations using simplified spherical geometries. It was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in general combinatorial geometry. A more complex shielding structure with internal parts surrounding a tissue sphere is considered and compared against MC simulations. It is shown that even in the more complex geometry, 3DHZETRN agrees well with the MC codes and maintains a high degree of computational efficiency. Published by Elsevier Ltd.

  11. Empirical validation of the triple-code model of numerical processing for complex math operations using functional MRI and group Independent Component Analysis of the mental addition and subtraction of fractions.

    PubMed

    Schmithorst, Vincent J; Brown, Rhonda Douglas

    2004-07-01

    The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.

  12. Numerical Simulation of Complex Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Chernobrovkin, A. A.; Lakshiminarayana, B.

    1999-01-01

    An unsteady, multiblock, Reynolds Averaged Navier Stokes solver based on Runge-Kutta scheme and Pseudo-time step for turbo-machinery applications was developed. The code was validated and assessed against analytical and experimental data. It was used to study a variety of physical mechanisms of unsteady, three-dimensional, turbulent, transitional, and cooling flows in compressors and turbines. Flow over a cylinder has been used to study effects of numerical aspects on accuracy of prediction of wake decay and transition, and to modify K-epsilon models. The following simulations have been performed: (a) Unsteady flow in a compressor cascade: Three low Reynolds number turbulence models have been assessed and data compared with Euler/boundary layer predictions. Major flow features associated with wake induced transition were predicted and studied; (b) Nozzle wake-rotor interaction in a turbine: Results compared to LDV data in design and off-design conditions, and cause and effect of unsteady flow in turbine rotors were analyzed; (c) Flow in the low-pressure turbine: Assessed capability of the code to predict transitional, attached and separated flows at a wide range of low Reynolds numbers and inlet freestream turbulence intensity. Several turbulence and transition models have been employed and comparisons made to experiments; (d) leading edge film cooling at compound angle: Comparisons were made with experiments, and the flow physics of the associated vortical structures were studied; and (e) Tip leakage flow in a turbine. The physics of the secondary flow in a rotor was studied and sources of loss identified.

  13. Flexible Generation of Kalman Filter Code

    NASA Technical Reports Server (NTRS)

    Richardson, Julian; Wilson, Edward

    2006-01-01

    Domain-specific program synthesis can automatically generate high quality code in complex domains from succinct specifications, but the range of programs which can be generated by a given synthesis system is typically narrow. Obtaining code which falls outside this narrow scope necessitates either 1) extension of the code generator, which is usually very expensive, or 2) manual modification of the generated code, which is often difficult and which must be redone whenever changes are made to the program specification. In this paper, we describe adaptations and extensions of the AUTOFILTER Kalman filter synthesis system which greatly extend the range of programs which can be generated. Users augment the input specification with a specification of code fragments and how those fragments should interleave with or replace parts of the synthesized filter. This allows users to generate a much wider range of programs without needing to modify the synthesis system or edit generated code. We demonstrate the usefulness of the approach by applying it to the synthesis of a complex state estimator which combines code from several Kalman filters with user-specified code. The work described in this paper allows the complex design decisions necessary for real-world applications to be reflected in the synthesized code. When executed on simulated input data, the generated state estimator was found to produce estimates comparable to those produced by a hand-coded estimator.
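
    For orientation, the sketch below is a generic linear Kalman filter predict/update step of the kind AUTOFILTER synthesizes; it is hand-written here for illustration and is not generated code or the paper's state estimator.

      # Generic linear Kalman filter predict/update step, shown for orientation only;
      # AUTOFILTER synthesizes domain-specific variants of this computation.
      import numpy as np

      def kalman_step(x, P, z, F, H, Q, R):
          """One predict/update cycle.
             x, P : state estimate and covariance
             z    : measurement
             F, H : state transition and measurement models
             Q, R : process and measurement noise covariances"""
          # Predict
          x_pred = F @ x
          P_pred = F @ P @ F.T + Q
          # Update
          S = H @ P_pred @ H.T + R                     # innovation covariance
          K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
          x_new = x_pred + K @ (z - H @ x_pred)
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      # Tiny 1-D constant-velocity example
      dt = 1.0
      F = np.array([[1.0, dt], [0.0, 1.0]])
      H = np.array([[1.0, 0.0]])
      Q = 1e-3 * np.eye(2)
      R = np.array([[0.25]])
      x, P = np.zeros(2), np.eye(2)
      for z in [1.1, 2.0, 2.9, 4.2]:
          x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
      print("position, velocity estimate:", x)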

  14. A Non-Degenerate Code of Deleterious Variants in Mendelian Loci Contributes to Complex Disease Risk

    PubMed Central

    Blair, David R.; Lyttle, Christopher S.; Mortensen, Jonathan M.; Bearden, Charles F.; Jensen, Anders Boeck; Khiabanian, Hossein; Melamed, Rachel; Rabadan, Raul; Bernstam, Elmer V.; Brunak, Søren; Jensen, Lars Juhl; Nicolae, Dan; Shah, Nigam H.; Grossman, Robert L.; Cox, Nancy J.; White, Kevin P.; Rzhetsky, Andrey

    2013-01-01

    Whereas countless highly penetrant variants have been associated with Mendelian disorders, the genetic etiologies underlying complex diseases remain largely unresolved. Here, we examine the extent to which Mendelian variation contributes to complex disease risk by mining the medical records of over 110 million patients. We detect thousands of associations between Mendelian and complex diseases, revealing a non-degenerate, phenotypic code that links each complex disorder to a unique collection of Mendelian loci. Using genome-wide association results, we demonstrate that common variants associated with complex diseases are enriched in the genes indicated by this “Mendelian code.” Finally, we detect hundreds of comorbidity associations among Mendelian disorders, and we use probabilistic genetic modeling to demonstrate that Mendelian variants likely contribute non-additively to the risk for a subset of complex diseases. Overall, this study illustrates a complementary approach for mapping complex disease loci and provides unique predictions concerning the etiologies of specific diseases. PMID:24074861

  15. Experimental benchmark for an improved simulation of absolute soft-x-ray emission from polystyrene targets irradiated with the Nike laser.

    PubMed

    Weaver, J L; Busquet, M; Colombant, D G; Mostovych, A N; Feldman, U; Klapisch, M; Seely, J F; Brown, C; Holland, G

    2005-02-04

    Absolutely calibrated, time-resolved spectral intensity measurements of soft-x-ray emission (hν ≈ 0.1-1.0 keV) from laser-irradiated polystyrene targets are compared to radiation-hydrodynamic simulations that include our new postprocessor, Virtual Spectro. This new capability allows a unified, detailed treatment of atomic physics and radiative transfer in nonlocal thermodynamic equilibrium conditions for simple spectra from low-Z materials as well as complex spectra from high-Z materials. The excellent agreement (within a factor of approximately 1.5) demonstrates the powerful predictive capability of the codes for the complex conditions in the ablating plasma. A comparison to data with high spectral resolution (E/ΔE ≈ 1000) emphasizes the importance of including radiation coupling in the quantitative simulation of emission spectra.

  16. Simulation Based Earthquake Forecasting with RSQSim

    NASA Astrophysics Data System (ADS)

    Gilchrist, J. J.; Jordan, T. H.; Dieterich, J. H.; Richards-Dinger, K. B.

    2016-12-01

    We are developing a physics-based forecasting model for earthquake ruptures in California. We employ the 3D boundary element code RSQSim to generate synthetic catalogs with millions of events that span up to a million years. The simulations incorporate rate-state fault constitutive properties in complex, fully interacting fault systems. The Unified California Earthquake Rupture Forecast Version 3 (UCERF3) model and data sets are used for calibration of the catalogs and specification of fault geometry. Fault slip rates match the UCERF3 geologic slip rates, and catalogs are tuned such that earthquake recurrence matches the UCERF3 model. Utilizing the Blue Waters Supercomputer, we produce a suite of million-year catalogs to investigate the epistemic uncertainty in the physical parameters used in the simulations. In particular, values of the rate- and state-friction parameters a and b, the initial shear and normal stress, as well as the earthquake slip speed, are varied over several simulations. In addition to testing multiple models with homogeneous values of the physical parameters, the parameters a, b, and the normal stress are varied with depth as well as in heterogeneous patterns across the faults. Cross validation of UCERF3 and RSQSim is performed within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM) to determine the effect of the uncertainties in physical parameters observed in the field and measured in the lab on the uncertainties in probabilistic forecasting. We are particularly interested in the short-term hazards of multi-event sequences due to complex faulting and multi-fault ruptures.
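
    For reference, the a and b parameters varied in these simulations enter through the standard Dieterich-Ruina rate-and-state friction law (the specific formulation and approximations used inside RSQSim may differ in detail):

      \[
      \mu = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c},
      \qquad
      \frac{d\theta}{dt} = 1 - \frac{V\,\theta}{D_c},
      \]

    where V is the slip speed, θ the state variable, and D_c the characteristic slip distance.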

  17. The GBS code for tokamak scrape-off layer simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halpern, F.D., E-mail: federico.halpern@epfl.ch; Ricci, P.; Jolliet, S.

    2016-06-15

    We describe a new version of GBS, a 3D global, flux-driven plasma turbulence code to simulate the turbulent dynamics in the tokamak scrape-off layer (SOL), superseding the code presented by Ricci et al. (2012) [14]. The present work is driven by the objective of studying SOL turbulent dynamics in medium-size tokamaks and beyond with a high-fidelity physics model. We emphasize the interplay between the improved physics models and the computational advances that make them possible. The model extensions include neutral atom physics, finite ion temperature, the addition of a closed field line region, and a non-Boussinesq treatment of the polarization drift. GBS has been completely refactored with the introduction of a 3-D Cartesian communicator and a scalable parallel multigrid solver. We report dramatically enhanced parallel scalability, with the possibility of treating electromagnetic fluctuations very efficiently. Verification by the method of manufactured solutions has been carried out for this new code version, demonstrating the correct implementation of the physical model.
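
    The method of manufactured solutions cited as the verification process works by postulating an analytic solution, deriving the source term that forces the governing equation to admit it, and checking that the discrete solution converges to the manufactured one at the expected rate. The sketch below applies the idea to a 1-D heat equation, not to the GBS drift-reduced fluid model; all names and values are illustrative.

        # Method of manufactured solutions (MMS) sketch on u_t = D*u_xx + S(x,t); not the GBS model.
        import numpy as np

        D = 1.0
        u_exact = lambda x, t: np.sin(np.pi * x) * np.exp(-t)          # manufactured solution
        source  = lambda x, t: (D * np.pi**2 - 1.0) * u_exact(x, t)    # S chosen so u_exact satisfies the PDE

        def solve(nx, t_end=0.1):
            """Explicit finite-difference solve with the manufactured source; returns max error."""
            x = np.linspace(0.0, 1.0, nx + 1)
            dx = x[1] - x[0]
            dt = 0.25 * dx**2 / D                     # stable explicit time step
            u = u_exact(x, 0.0)
            t = 0.0
            while t < t_end:
                dt_eff = min(dt, t_end - t)
                lap = np.zeros_like(u)
                lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
                u = u + dt_eff * (D * lap + source(x, t))
                u[0] = u[-1] = 0.0                    # boundary values of the manufactured solution
                t += dt_eff
            return np.max(np.abs(u - u_exact(x, t_end)))

        # Halving the grid spacing should reduce the error by ~4x for a second-order scheme.
        for nx in (20, 40, 80):
            print(nx, solve(nx))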

  18. Enhanced Verification Test Suite for Physics Simulation Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, J R; Brock, J S; Brandon, S T

    2008-10-10

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) Hydrodynamics; (b) Transport processes; and (c) Dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code be evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary--but not sufficient--step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of greater sophistication or other physics regimes (e.g., energetic material response, magneto-hydrodynamics), would represent a scientifically desirable complement to the fundamental test cases discussed in this report. The authors believe that this document can be used to enhance the verification analyses undertaken at the DOE WP Laboratories and, thus, to improve the quality, credibility, and usefulness of the simulation codes that are analyzed with these problems.
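
    A typical quantitative acceptance criterion in such a verification benchmark is the observed order of convergence, estimated from errors against the exact (or manufactured) solution on successively refined grids. A minimal sketch of that calculation, with made-up error values, follows.

        # Observed order of convergence from errors on successively refined grids (illustrative numbers only).
        import math

        # (grid spacing h, discretization error e) pairs; these values are hypothetical.
        levels = [(0.10, 4.1e-3), (0.05, 1.05e-3), (0.025, 2.7e-4)]

        for (h1, e1), (h2, e2) in zip(levels, levels[1:]):
            p = math.log(e1 / e2) / math.log(h1 / h2)   # p = log(e1/e2) / log(h1/h2)
            print(f"h: {h1} -> {h2}   observed order p = {p:.2f}")

        # An acceptance criterion might require p to be within, say, 0.2 of the formal order of the scheme.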

  19. Mastery motivation in children with complex communication needs: longitudinal data analysis.

    PubMed

    Medeiros, Kara F; Cress, Cynthia J; Lambert, Matthew C

    2016-09-01

    This study compared longitudinal changes in mastery motivation during parent-child free play for 37 children with complex communication needs. Mastery motivation manifests as a willingness to work hard at tasks that are challenging, which is an important quality for overcoming the challenges involved in successful expressive communication using AAC. Unprompted parent-child play episodes were identified in three assessment sessions over an 18-month period and coded for nine categories of mastery motivation in social and object play. All of the object-oriented mastery motivation categories and one social mastery motivation category showed an influence of motor skills after controlling for receptive language. Object play elicited significantly more of all of the object-focused mastery motivation categories than social play, and social play elicited more of one type of social-focused mastery motivation behavior than object play. Mastery motivation variables did not differ significantly over time for the children. Potential physical and interpersonal influences on mastery motivation for parents and children with complex communication needs are discussed, including broadening the procedures and definitions of mastery motivation beyond object-oriented measurements for children with complex communication needs.

  20. Multispectral Terrain Background Simulation Techniques For Use In Airborne Sensor Evaluation

    NASA Astrophysics Data System (ADS)

    Weinberg, Michael; Wohlers, Ronald; Conant, John; Powers, Edward

    1988-08-01

    A background simulation code developed at Aerodyne Research, Inc., called AERIE is designed to reflect the major sources of clutter that are of concern to staring and scanning sensors of the type being considered for various airborne threat warning (both aircraft and missile) sensors. The code is a first-principles model that can be used to produce a consistent image of the terrain for various spectral bands, i.e., provide the proper scene correlation both spectrally and spatially. The code utilizes both topographic and cultural features to model terrain, typically from DMA data, with a statistical overlay of the critical underlying surface properties (reflectance, emittance, and thermal factors) to simulate the resulting texture in the scene. Strong solar scattering from water surfaces is included with allowance for wind-driven surface roughness. Clouds can be superimposed on the scene using physical cloud models and an analytical representation of the reflectivity obtained from scattering off spherical particles. The scene generator is augmented by collateral codes that allow for the generation of images at finer resolution. These codes provide interpolation of the basic DMA databases using fractal procedures that preserve the high-frequency power spectral density behavior of the original scene. Scenes are presented illustrating variations in altitude, radiance, resolution, material, thermal factors, and emissivities. The basic models utilized for simulating the various scene components are described, and various "engineering level" approximations are incorporated to reduce the computational complexity of the simulation.
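
    The fractal interpolation mentioned above rests on synthesizing (or preserving) a power-law power spectral density, |F(k)|^2 proportional to k^(-beta). The sketch below generates a random fractal height field with that property using spectral synthesis; it is a generic illustration, not the AERIE code's actual procedure, and the parameter values are arbitrary.

        # Spectral synthesis of a fractal surface with power-law PSD ~ k^(-beta); generic illustration only.
        import numpy as np

        def fractal_surface(n=256, beta=3.0, seed=0):
            rng = np.random.default_rng(seed)
            kx = np.fft.fftfreq(n)[:, None]
            ky = np.fft.fftfreq(n)[None, :]
            k = np.sqrt(kx**2 + ky**2)
            k[0, 0] = 1.0                                    # avoid division by zero at the DC component
            amplitude = k ** (-beta / 2.0)                   # |F| ~ k^(-beta/2)  =>  PSD ~ k^(-beta)
            amplitude[0, 0] = 0.0                            # zero-mean surface
            phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))    # random phases
            spectrum = amplitude * np.exp(1j * phase)
            surface = np.real(np.fft.ifft2(spectrum))
            return surface / surface.std()

        z = fractal_surface()
        print(z.shape, z.min(), z.max())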

  1. “They put you on your toes”: Physical Therapists' Perceived Benefits from and Barriers to Supervising Students in the Clinical Setting

    PubMed Central

    Hanna, Elizabeth; Cott, Cheryl

    2011-01-01

    ABSTRACT Purpose: To identify the perceived benefits of and barriers to clinical supervision of physical therapy (PT) students. Method: In this qualitative descriptive study, three focus groups and six key-informant interviews were conducted with clinical physical therapists or administrators working in acute care, orthopaedic rehabilitation, or complex continuing care. Data were coded and analyzed for common ideas using a constant comparison approach. Results: Perceived barriers to supervising students tended to be extrinsic: time and space constraints, challenging or difficult students, and decreased autonomy or flexibility for the clinical physical therapists. Benefits tended to be intrinsic: teaching provided personal gratification by promoting reflective practice and exposing clinical educators to current knowledge. The culture of different health care institutions was an important factor in therapists' perceptions of student supervision. Conclusions: Despite different disciplines and models of supervision, there is considerable synchronicity in the issues reported by physical therapists and other disciplines. Embedding the value of clinical teaching in the institution, along with strong communication links among academic partners, institutions, and potential clinical faculty, may mitigate barriers and increase the commitment and satisfaction of teaching staff. PMID:22379263

  2. Methodology of decreasing software complexity using ontology

    NASA Astrophysics Data System (ADS)

    Dąbrowska-Kubik, Katarzyna

    2015-09-01

    In this paper a model of a web application's source code, based on the OSD ontology (Ontology for Software Development), is proposed. This model is applied to the implementation and maintenance phases of the software development process through the DevOntoCreator tool [5]. The aim of this solution is to decrease the software complexity of the source code using many different maintenance techniques, such as creation of documentation and elimination of dead code, cloned code, or previously known bugs [1][2]. With this approach, savings on the software maintenance costs of web applications will be possible.

  3. GCKP84-general chemical kinetics code for gas-phase flow and batch processes including heat transfer effects

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.; Scullin, V. J.

    1984-01-01

    A general chemical kinetics code is described for complex, homogeneous ideal-gas reactions in any chemical system. The main features of the GCKP84 code are flexibility, convenience, and speed of computation for many different reaction conditions. The code, which replaces the previously published GCKP code, numerically solves the differential equations for complex reactions in a batch system or one-dimensional inviscid flow. It also numerically solves the nonlinear algebraic equations describing the well-stirred reactor. A new state-of-the-art numerical integration method is used for greatly increased speed in handling systems of stiff differential equations. The theory and the computer program, including details of input preparation and a guide to using the code, are given.
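
    The stiff-ODE aspect of such a kinetics code can be illustrated with a small batch-reactor system, A -> B -> C with widely separated rate constants, integrated with an implicit (BDF) method. This is a generic sketch, not the GCKP84 integrator, and the rate constants are arbitrary.

        # Stiff batch-reactor kinetics sketch (A -> B -> C), solved with an implicit BDF method.
        # Generic illustration; not the GCKP84 algorithm itself.
        import numpy as np
        from scipy.integrate import solve_ivp

        k1, k2 = 1.0e4, 1.0    # widely separated rate constants make the system stiff

        def rhs(t, y):
            A, B, C = y
            return [-k1 * A,
                     k1 * A - k2 * B,
                     k2 * B]

        sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0], method="BDF", rtol=1e-8, atol=1e-10)
        print("final concentrations A, B, C:", sol.y[:, -1])
        print("mass conservation check:", sol.y[:, -1].sum())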

  4. Integrated Predictive Tools for Customizing Microstructure and Material Properties of Additively Manufactured Aerospace Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radhakrishnan, Balasubramaniam; Fattebert, Jean-Luc; Gorti, Sarma B.

    Additive Manufacturing (AM) refers to a process by which digital three-dimensional (3-D) design data is converted to build up a component by depositing material layer-by-layer. United Technologies Corporation (UTC) is currently involved in fabrication and certification of several AM aerospace structural components made from aerospace materials. This is accomplished by using optimized process parameters determined through numerous design-of-experiments (DOE)-based studies. Certification of these components is broadly recognized as a significant challenge, with long lead times, very expensive new product development cycles and very high energy consumption. Because of these challenges, United Technologies Research Center (UTRC), together with UTC business units, have been developing and validating an advanced physics-based process model. The specific goal is to develop a physics-based framework of an AM process and reliably predict fatigue properties of built-up structures based on detailed solidification microstructures. Microstructures are predicted using process control parameters including energy source power, scan velocity, deposition pattern, and powder properties. The multi-scale multi-physics model requires solution and coupling of governing physics that will allow prediction of the thermal field and enable solution at the microstructural scale. The state-of-the-art approach to solve these problems requires a huge computational framework, and this kind of resource is only available within academia and national laboratories. The project utilized the parallel phase-field codes at Oak Ridge National Laboratory (ORNL) and Lawrence Livermore National Laboratory (LLNL), along with the high-performance computing (HPC) capabilities existing at the two labs, to demonstrate the simulation of multiple dendrite growth in three dimensions (3-D). The LLNL code AMPE was used to implement the UTRC phase field model that was previously developed for a model binary alloy, and the simulation results were compared against the UTRC simulation results, followed by extension of the UTRC model to simulate multiple dendrite growth in 3-D. The ORNL MEUMAPPS code was used to simulate dendritic growth in a model ternary alloy with the same equilibrium solidification range as the Ni-base alloy 718 using realistic model parameters, including thermodynamic integration with a Calphad-based model for the ternary alloy. Implementation of the UTRC model in AMPE met with several numerical and parametric issues that were resolved, and good comparison between the simulation results obtained by the two codes was demonstrated for two-dimensional (2-D) dendrites. 3-D dendrite growth was then demonstrated with the AMPE code using nondimensional parameters obtained in 2-D simulations. Multiple dendrite growth in 2-D and 3-D was demonstrated using ORNL’s MEUMAPPS code using simple thermal boundary conditions. MEUMAPPS was then modified to incorporate the complex, time-dependent thermal boundary conditions obtained by UTRC’s thermal modeling of single-track AM experiments to drive the phase field simulations. The results were in good agreement with UTRC’s experimental measurements.
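
    As a flavor of what a phase-field solidification code evolves, the sketch below advances a much-simplified, isotropic Allen-Cahn order-parameter equation on a 2-D grid with an explicit scheme. It omits the thermal and solutal coupling, interface anisotropy, and thermodynamic (Calphad) integration that AMPE and MEUMAPPS include, and every parameter value is illustrative.

        # Much-simplified, isotropic Allen-Cahn phase-field sketch (no thermal/solutal coupling, no anisotropy).
        import numpy as np

        n, dx, dt = 128, 1.0, 0.05
        eps2, W, M = 4.0, 1.0, 1.0          # gradient-energy coefficient, barrier height, mobility (illustrative)

        # Order parameter phi: 1 = solid, 0 = liquid; start from a small solid seed.
        phi = np.zeros((n, n))
        yy, xx = np.mgrid[0:n, 0:n]
        phi[(xx - n // 2) ** 2 + (yy - n // 2) ** 2 < 10 ** 2] = 1.0

        def laplacian(f):
            """Five-point Laplacian with periodic boundaries."""
            return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                    np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

        driving = 0.4                        # constant bulk driving force favoring the solid phase
        for step in range(2000):
            # dphi/dt = M * (eps^2 * lap(phi) - W * g'(phi) + driving * p'(phi))
            dg = 2.0 * phi * (1.0 - phi) * (1.0 - 2.0 * phi)    # derivative of double-well g = phi^2 (1-phi)^2
            dp = 30.0 * phi**2 * (1.0 - phi)**2                 # derivative of interpolation p = phi^3 (6 phi^2 - 15 phi + 10)
            phi += dt * M * (eps2 * laplacian(phi) - W * dg + driving * dp)

        print("solid fraction after growth:", phi.mean())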

  5. Study of no-man's land physics in the total-f gyrokinetic code XGC1

    NASA Astrophysics Data System (ADS)

    Ku, Seung Hoe; Chang, C. S.; Lang, J.

    2014-10-01

    While the "transport shortfall" in the "no-man's land" has been observed often in delta-f codes, it has not yet been observed in the global total-f gyrokinetic particle code XGC1. Since understanding the interaction between the edge and core transport appears to be a critical element in the prediction for ITER performance, understanding the no-man's land issue is an important physics research topic. Simulation results using the Holland case will be presented and the physics causing the shortfall phenomenon will be discussed. Nonlinear nonlocal interaction of turbulence, secondary flows, and transport appears to be the key.

  6. The Oceanographic Multipurpose Software Environment (OMUSE v1.0)

    NASA Astrophysics Data System (ADS)

    Pelupessy, Inti; van Werkhoven, Ben; van Elteren, Arjen; Viebahn, Jan; Candy, Adam; Portegies Zwart, Simon; Dijkstra, Henk

    2017-08-01

    In this paper we present the Oceanographic Multipurpose Software Environment (OMUSE). OMUSE aims to provide a homogeneous environment for existing or newly developed numerical ocean simulation codes, simplifying their use and deployment. In this way, numerical experiments that combine ocean models representing different physics or spanning different ranges of physical scales can be easily designed. Rapid development of simulation models is made possible through the creation of simple high-level scripts. The low-level core of the abstraction in OMUSE is designed to deploy these simulations efficiently on heterogeneous high-performance computing resources. Cross-verification of simulation models with different codes and numerical methods is facilitated by the unified interface that OMUSE provides. Reproducibility in numerical experiments is fostered by allowing complex numerical experiments to be expressed in portable scripts that conform to a common OMUSE interface. Here, we present the design of OMUSE as well as the modules and model components currently included, which range from a simple conceptual quasi-geostrophic solver to the global circulation model POP (Parallel Ocean Program). The uniform access to the codes' simulation state and the extensive automation of data transfer and conversion operations aids the implementation of model couplings. We discuss the types of couplings that can be implemented using OMUSE. We also present example applications that demonstrate the straightforward model initialization and the concurrent use of data analysis tools on a running model. We give examples of multiscale and multiphysics simulations by embedding a regional ocean model into a global ocean model and by coupling a surface wave propagation model with a coastal circulation model.

  7. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murata, K.K.; Williams, D.C.; Griffith, R.O.

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  8. Traumatic eye injuries as a result of blunt impact: computational issues

    NASA Astrophysics Data System (ADS)

    Clemente, C.; Esposito, L.; Bonora, N.; Limido, J.; Lacome, J. L.; Rossi, T.

    2014-05-01

    The detachment or tearing of the retina in the human eye as a result of a collision is a frequently occurring phenomenon. Reliable numerical simulations of eye impact can be very useful tools to understand the physical mechanisms responsible for traumatic eye injuries accompanying blunt impact. The complexity and variability of the physical and mechanical properties of the biological materials, the lack of agreement on the related experimental data, as well as the unsuitability of specific numerical codes and models, are only some of the difficulties in dealing with this matter. All these challenging issues must be solved to obtain accurate numerical analyses involving the dynamic behavior of biological soft tissues. To this purpose, a numerical and experimental investigation of the dynamic response of the eye during an impact event was performed. Numerical simulations were performed with IMPETUS-AFEA, a general non-linear finite element (FE) software package that offers non-uniform rational B-spline (NURBS) FE technology for the simulation of large deformation and fracture in materials. The IMPETUS code was selected in order to solve hourglass and locking problems typical of nearly incompressible materials like eye tissues. Computational results were compared with the experimental results on fresh enucleated porcine eyes impacted with airsoft pellets.

  9. Coupling of Noah-MP and the High Resolution CI-WATER ADHydro Hydrological Model

    NASA Astrophysics Data System (ADS)

    Moreno, H. A.; Goncalves Pureza, L.; Ogden, F. L.; Steinke, R. C.

    2014-12-01

    ADHydro is a physics-based, high-resolution, distributed hydrological model suitable for simulating large watersheds in a massively parallel computing environment. It simulates important processes such as: rainfall and infiltration, snowfall and snowmelt in complex terrain, vegetation and evapotranspiration, soil heat flux and freezing, overland flow, channel flow, groundwater flow and water management. For the vegetation and evapotranspiration processes, ADHydro uses the validated community land surface model (LSM) Noah-MP. Noah-MP uses multiple options for key land-surface hydrology and was developed to facilitate climate predictions with physically based ensembles. This presentation discusses the lessons learned in coupling Noah-MP to ADHydro. Noah-MP is delivered with a main driver program and not as a library with a clear interface to be called from other codes. This required some investigation to determine the correct functions to call and the appropriate parameter values. ADHydro runs Noah-MP as a point process on each mesh element and provides initialization and forcing data for each element. Modeling data are acquired from various sources including the Soil Survey Geographic Database (SSURGO), the Weather Research and Forecasting (WRF) model, and internal ADHydro simulation states. Despite these challenges in coupling Noah-MP to ADHydro, the use of Noah-MP provides the benefits of a supported community code.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    BRISC is a developmental prototype for a next-generation “systems-level” integrated performance and safety code (IPSC) for nuclear reactors. Its development served to demonstrate how a lightweight multi-physics coupling approach can be used to tightly couple the physics models in several different physics codes (written in a variety of languages) into one integrated package for simulating accident scenarios in a liquid-sodium-cooled “burner” nuclear reactor. For example, the RIO Fluid Flow and Heat Transfer code developed at Sandia (SNL: Chris Moen, Dept. 08005) is used in BRISC to model fluid flow and heat transfer, as well as conduction heat transfer in solids. Because BRISC is a prototype, its most practical application is as a foundation or starting point for developing a true production code. The sub-codes and the associated models and correlations currently employed within BRISC were chosen to cover the required application space and demonstrate feasibility, but were not optimized or validated against experimental data within the context of their use in BRISC.

  11. ecode - Electron Transport Algorithm Testing v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene

    2016-10-05

    ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.
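
    One of the simple electron transport models named above, screened Rutherford angular scattering, has an analytically invertible distribution, f(mu) proportional to (1 - mu + 2*eta)^(-2) for the scattering-angle cosine mu. The sketch below samples it by inverse transform; it is a stand-alone illustration, not ecode itself, and eta is a hypothetical screening parameter.

        # Inverse-transform sampling of the screened Rutherford angular distribution,
        # f(mu) ~ (1 - mu + 2*eta)^(-2) on mu in [-1, 1].  Stand-alone illustration, not ecode.
        import numpy as np

        def sample_screened_rutherford(eta, size, rng):
            """Return 'size' samples of mu = cos(theta) for screening parameter eta > 0."""
            xi = rng.random(size)
            # Closed-form inverse CDF: mu = 1 - 2*eta*xi / (1 - xi + eta)
            return 1.0 - 2.0 * eta * xi / (1.0 - xi + eta)

        rng = np.random.default_rng(42)
        eta = 1.0e-3                     # hypothetical screening parameter (small -> strongly forward-peaked)
        mu = sample_screened_rutherford(eta, 1_000_000, rng)
        print("mean mu:", mu.mean())
        print("fraction with theta < 5 deg:", np.mean(mu > np.cos(np.radians(5.0))))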

  12. FPGA implementation of low complexity LDPC iterative decoder

    NASA Astrophysics Data System (ADS)

    Verma, Shivani; Sharma, Sanjay

    2016-07-01

    Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained importance due to their capacity-achieving property and excellent performance on noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message-passing algorithm and a partially parallel decoder architecture. The simplified message-passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance. It greatly reduces the routing and check-node complexity of the decoder. The partially parallel decoder architecture possesses high speed and reduced complexity. The improved design of the decoder possesses a maximum symbol throughput of 92.95 Mbps and a maximum of 18 decoding iterations. The article presents the implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on a Xilinx XC3D3400A device from the Spartan-3A DSP family.
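
    The min-sum approximation referred to above replaces the check-node computation of belief propagation with a sign product and a magnitude minimum over the other incoming messages. The sketch below runs min-sum message passing on a toy parity-check matrix (the (7,4) Hamming code); it illustrates the algorithm only and is not the article's FPGA architecture.

        # Min-sum message-passing decoder sketch on a toy parity-check matrix; not the FPGA design.
        import numpy as np

        H = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])

        def min_sum_decode(llr, H, max_iter=20):
            m, n = H.shape
            msg_vc = H * llr                       # variable-to-check messages, initialized to channel LLRs
            for _ in range(max_iter):
                msg_cv = np.zeros((m, n))
                for i in range(m):                 # check-node update: sign product x magnitude minimum
                    cols = np.flatnonzero(H[i])
                    for j in cols:
                        others = cols[cols != j]
                        signs = np.prod(np.sign(msg_vc[i, others]))
                        msg_cv[i, j] = signs * np.min(np.abs(msg_vc[i, others]))
                total = llr + msg_cv.sum(axis=0)   # posterior LLR per bit
                hard = (total < 0).astype(int)
                if not np.any(H @ hard % 2):       # all parity checks satisfied -> stop
                    return hard
                for j in range(n):                 # variable-node update (exclude own check's message)
                    rows = np.flatnonzero(H[:, j])
                    for i in rows:
                        msg_vc[i, j] = llr[j] + msg_cv[rows[rows != i], j].sum()
            return hard

        # All-zeros codeword sent over a noisy channel; bit 2 received unreliably wrong.
        llr = np.array([2.5, 1.8, -0.4, 2.2, 1.5, 2.0, 1.7])   # positive LLR favors bit 0
        print("decoded bits:", min_sum_decode(llr, H))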

  13. Solution of nonlinear flow equations for complex aerodynamic shapes

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed

    1992-01-01

    Solution-adaptive CFD codes based on unstructured methods for 3-D complex geometries in subsonic to supersonic regimes were investigated, and the computed solution data were analyzed in conjunction with experimental data obtained from wind tunnel measurements in order to assess and validate the predictability of the code. Specifically, the FELISA code was assessed and improved in cooperation with NASA Langley and Imperial College, Swansea, U.K.

  14. Convolution Operations on Coding Metasurface to Reach Flexible and Continuous Controls of Terahertz Beams.

    PubMed

    Liu, Shuo; Cui, Tie Jun; Zhang, Lei; Xu, Quan; Wang, Qiu; Wan, Xiang; Gu, Jian Qiang; Tang, Wen Xuan; Qing Qi, Mei; Han, Jia Guang; Zhang, Wei Li; Zhou, Xiao Yang; Cheng, Qiang

    2016-10-01

    The concept of the coding metasurface links physical metamaterial particles to digital codes, and hence it is possible to perform digital signal processing on the coding metasurface to realize unusual physical phenomena. Here, this study performs Fourier operations on coding metasurfaces and proposes a principle called scattering-pattern shift, based on the convolution theorem, which allows steering of the scattering pattern to an arbitrarily predesigned direction. Owing to the constant reflection amplitude of the coding particles, the required coding pattern can be achieved simply by the modulo addition of two coding matrices. This study demonstrates that the scattering patterns calculated directly from the coding pattern using the Fourier transform are in excellent agreement with numerical simulations based on realistic coding structures, providing an efficient method for optimizing coding patterns to achieve predesigned scattering beams. The most important advantage of this approach over previous schemes for producing anomalous single-beam scattering is its flexible and continuous control of the beam toward arbitrary directions. This work opens a new route to study metamaterials from a fully digital perspective, predicting the possibility of combining conventional theorems of digital signal processing with the coding metasurface to realize more powerful manipulations of electromagnetic waves.
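
    The scattering-pattern-shift principle rests on the fact that the far field is, to a good approximation, the Fourier transform of the aperture's reflection-phase distribution, so adding a gradient coding sequence to a coding pattern (modulo the number of phase states) shifts its scattering pattern, by the convolution theorem. The sketch below demonstrates the idea numerically for an idealized 2-bit coding aperture; it is an illustration of the principle, not the article's simulation setup.

        # Scattering-pattern shift on an idealized 2-bit coding aperture (phases 0, 90, 180, 270 degrees).
        # Far field approximated by the 2-D FFT of the aperture reflection coefficients; illustrative only.
        import numpy as np

        n_states, n = 4, 64                                  # 2-bit coding, 64 x 64 elements
        base = np.zeros((n, n), dtype=int)                   # uniform coding -> single broadside beam
        gradient = np.tile(np.arange(n) % n_states, (n, 1))  # periodic gradient coding sequence along x

        def far_field(code):
            """|FFT| of the element reflection coefficients exp(i * 2*pi*code/n_states)."""
            aperture = np.exp(1j * 2.0 * np.pi * code / n_states)
            return np.abs(np.fft.fftshift(np.fft.fft2(aperture)))

        mixed = (base + gradient) % n_states                 # modulo addition of the two coding matrices

        col = lambda f: np.unravel_index(np.argmax(f), f.shape)[1]
        f_base, f_mixed = far_field(base), far_field(mixed)
        print("main-lobe column: base =", col(f_base), " mixed =", col(f_mixed),
              " expected shift =", n // n_states)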

  15. Initial development of 5D COGENT

    NASA Astrophysics Data System (ADS)

    Cohen, R. H.; Lee, W.; Dorf, M.; Dorr, M.

    2015-11-01

    COGENT is a continuum gyrokinetic edge code being developed by the Edge Simulation Laboratory (ESL) collaboration. Work to date has been primarily focused on a 4D (axisymmetric) version that models transport properties of edge plasmas. We have begun development of an initial 5D version to study edge turbulence, with initial focus on kinetic effects on blob dynamics and drift-wave instability in a shearless magnetic field. We are employing compiler directives and preprocessor macros to create a single source code that can be compiled in 4D or 5D, which helps to ensure consistency of physics representation between the two versions. A key aspect of COGENT is the employment of mapped multi-block grid capability to handle the complexity of divertor geometry. It is planned to eventually exploit this capability to handle magnetic shear, through a series of successively skewed unsheared grid blocks. The initial version has an unsheared grid and will be used to explore the degree to which a radial domain must be block decomposed. We report on the status of code development and initial tests. Work performed for USDOE, at LLNL under contract DE-AC52-07NA27344.

  16. Report on Automated Semantic Analysis of Scientific and Engineering Codes

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.; Follen, Greg (Technical Monitor)

    2001-01-01

    The loss of the Mars Climate Orbiter due to a software error reveals what insiders know: software development is difficult and risky because, in part, current practices do not readily handle the complex details of software. Yet, for scientific software development, the MCO mishap represents the tip of the iceberg; few errors are so public, and many errors are avoided with a combination of expertise, care, and testing during development and modification. Further, this effort consumes valuable time and resources even when hardware costs and execution time continually decrease. Software development could use better tools! This lack of tools has motivated the semantic analysis work explained in this report. However, this work has a distinguishing emphasis; the tool focuses on automated recognition of the fundamental mathematical and physical meaning of scientific code. Further, its comprehension is measured by quantitatively evaluating overall recognition with practical codes. This emphasis is necessary if software errors, like the MCO error, are to be quickly and inexpensively avoided in the future. This report evaluates the progress made with this problem. It presents recommendations, describes the approach, the tool's status, the challenges, related research, and a development strategy.
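
    The kind of unit and dimension error behind the MCO mishap is exactly what automated semantic analysis aims to catch. Below is a minimal, self-contained sketch of dimension checking with a toy Quantity class; it is an illustration of the idea, not the semantic-analysis tool described in the report.

        # Toy dimension-checking Quantity class; illustrates the kind of unit error the MCO software missed.
        # This is not the semantic-analysis tool described in the report.

        class Quantity:
            def __init__(self, value, dims):
                self.value = value
                self.dims = dims                     # e.g. {"kg": 1, "m": 1, "s": -2} for a force

            def __add__(self, other):
                if self.dims != other.dims:
                    raise TypeError(f"dimension mismatch: {self.dims} + {other.dims}")
                return Quantity(self.value + other.value, self.dims)

            def __mul__(self, other):
                dims = dict(self.dims)
                for d, p in other.dims.items():
                    dims[d] = dims.get(d, 0) + p
                return Quantity(self.value * other.value, {d: p for d, p in dims.items() if p})

        newton      = {"kg": 1, "m": 1, "s": -2}     # SI force
        pound_force = {"lbf": 1}                     # treated here as a distinct, unconverted dimension

        thrust_si = Quantity(450.0, newton)
        thrust_us = Quantity(101.0, pound_force)

        try:
            total = thrust_si + thrust_us            # mixing unconverted unit systems is flagged, not silently summed
        except TypeError as err:
            print("caught:", err)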

  17. Interactive, process-oriented climate modeling with CLIMLAB

    NASA Astrophysics Data System (ADS)

    Rose, B. E. J.

    2016-12-01

    Global climate is a complex emergent property of the rich interactions between simpler components of the climate system. We build scientific understanding of this system by breaking it down into component process models (e.g. radiation, large-scale dynamics, boundary layer turbulence), understanding each component, and putting them back together. Hands-on experience and freedom to tinker with climate models (whether simple or complex) are invaluable for building physical understanding. CLIMLAB is an open-ended software engine for interactive, process-oriented climate modeling. With CLIMLAB you can interactively mix and match model components, or combine simpler process models together into a more comprehensive model. It was created primarily to support classroom activities, using hands-on modeling to teach fundamentals of climate science at both undergraduate and graduate levels. CLIMLAB is written in Python and ties in with the rich ecosystem of open-source scientific Python tools for numerics and graphics. The Jupyter Notebook format provides an elegant medium for distributing interactive example code. I will give an overview of the current capabilities of CLIMLAB, the curriculum we have developed thus far, and plans for the future. Using CLIMLAB requires some basic Python coding skills. We consider this an educational asset, as we are targeting upper-level undergraduates and Python is an increasingly important language in STEM fields.
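
    In the spirit of the hands-on, process-oriented modeling described above, the following stand-alone sketch time-steps a zero-dimensional energy-balance model, C dT/dt = (1 - alpha) S0/4 - epsilon sigma T^4. It is written in plain NumPy and deliberately does not use CLIMLAB's actual API; the parameter values are standard textbook estimates.

        # Zero-dimensional energy-balance model in plain NumPy (not CLIMLAB's API).
        # C dT/dt = (1 - alpha) * S0 / 4 - epsilon * sigma * T**4
        S0, alpha, epsilon = 1365.0, 0.3, 0.61     # solar constant (W/m^2), planetary albedo, effective emissivity
        sigma = 5.67e-8                            # Stefan-Boltzmann constant (W m^-2 K^-4)
        C = 4.0e8                                  # heat capacity of a ~100 m ocean mixed layer (J m^-2 K^-1)

        dt = 86400.0                               # one-day time step (s)
        T = 255.0                                  # initial temperature (K)
        for step in range(int(50 * 365)):          # integrate for ~50 years
            net_flux = (1.0 - alpha) * S0 / 4.0 - epsilon * sigma * T**4
            T += dt * net_flux / C

        # Equilibrium check: T_eq = [ (1 - alpha) * S0 / (4 * epsilon * sigma) ]^(1/4)
        T_eq = ((1.0 - alpha) * S0 / (4.0 * epsilon * sigma)) ** 0.25
        print(f"time-stepped T = {T:.2f} K,  analytic equilibrium = {T_eq:.2f} K")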

  18. Scheduling observational and physical practice: influence on the coding of simple motor sequences.

    PubMed

    Ellenbuerger, Thomas; Boutin, Arnaud; Blandin, Yannick; Shea, Charles H; Panzer, Stefan

    2012-01-01

    The main purpose of the present experiment was to determine the coordinate system used in the development of movement codes when observational and physical practice are scheduled across practice sessions. The task was to reproduce a 1,300-ms spatial-temporal pattern of elbow flexions and extensions. An intermanual transfer paradigm with a retention test and two effector (contralateral limb) transfer tests was used. The mirror effector transfer test required the same pattern of homologous muscle activation and sequence of limb joint angles as that performed or observed during practice, and the non-mirror effector transfer test required the same spatial pattern movements as that performed or observed. The test results following the first acquisition session replicated the findings of Gruetzmacher, Panzer, Blandin, and Shea (2011) . The results following the second acquisition session indicated a strong advantage for participants who received physical practice in both practice sessions or received observational practice followed by physical practice. This advantage was found on both the retention and the mirror transfer tests compared to the non-mirror transfer test. These results demonstrate that codes based in motor coordinates can be developed relatively quickly and effectively for a simple spatial-temporal movement sequence when participants are provided with physical practice or observation followed by physical practice, but physical practice followed by observational practice or observational practice alone limits the development of codes based in motor coordinates.

  19. Development of high-fidelity multiphysics system for light water reactor analysis

    NASA Astrophysics Data System (ADS)

    Magedanz, Jeffrey W.

    There has been a tendency in recent years toward greater heterogeneity in reactor cores, due to the use of mixed-oxide (MOX) fuel, burnable absorbers, and longer cycles with consequently higher fuel burnup. The resulting asymmetry of the neutron flux and energy spectrum between regions with different compositions causes a need to account for the directional dependence of the neutron flux, instead of the traditional diffusion approximation. Furthermore, the presence of both MOX and high-burnup fuel in the core increases the complexity of the heat conduction. The heat transfer properties of the fuel pellet change with irradiation, and the thermal and mechanical expansion of the pellet and cladding strongly affect the size of the gap between them, and its consequent thermal resistance. These operational tendencies require higher fidelity multi-physics modeling capabilities, and this need is addressed by the developments performed within this PhD research. The dissertation describes the development of a High-Fidelity Multi-Physics System for Light Water Reactor Analysis. It consists of three coupled codes -- CTF for Thermal Hydraulics, TORT-TD for Neutron Kinetics, and FRAPTRAN for Fuel Performance. It is meant to address these modeling challenges in three ways: (1) by resolving the state of the system at the level of each fuel pin, rather than homogenizing entire fuel assemblies, (2) by using the multi-group Discrete Ordinates method to account for the directional dependence of the neutron flux, and (3) by using a fuel-performance code, rather than a Thermal Hydraulics code's simplified fuel model, to account for the material behavior of the fuel and its feedback to the hydraulic and neutronic behavior of the system. While the first two are improvements, the third, the use of a fuel-performance code for feedback, constitutes an innovation in this PhD project. Also important to this work is the manner in which such coupling is written. While coupling involves combining codes into a single executable, they are usually still developed and maintained separately. It should thus be a design objective to minimize the changes to those codes, and keep the changes to each code free of dependence on the details of the other codes. This will ease the incorporation of new versions of the code into the coupling, as well as re-use of parts of the coupling to couple with different codes. In order to fulfill this objective, an interface for each code was created in the form of an object-oriented abstract data type. Object-oriented programming is an effective method for enforcing a separation between different parts of a program, and clarifying the communication between them. The interfaces enable the main program to control the codes in terms of high-level functionality. This differs from the established practice of a master/slave relationship, in which the slave code is incorporated into the master code as a set of subroutines. While this PhD research continues previous work with a coupling between CTF and TORT-TD, it makes two major original contributions: (1) using a fuel-performance code, instead of a thermal-hydraulics code's simplified built-in models, to model the feedback from the fuel rods, and (2) the design of an object-oriented interface as an innovative method to interact with a coupled code in a high-level, easily-understandable manner. 
The resulting code system will serve as a tool to study the question of under what conditions, and to what extent, these higher-fidelity methods will provide benefits to reactor core analysis. (Abstract shortened by UMI.)
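
    The object-oriented interface idea described in the dissertation, each physics code wrapped behind an abstract data type that the driver controls at a high level, can be sketched as follows. Class and method names here are hypothetical illustrations, not the actual CTF, TORT-TD, or FRAPTRAN wrappers, and the toy physics is a stand-in.

        # Sketch of an object-oriented coupling interface (hypothetical names, not the actual
        # CTF/TORT-TD/FRAPTRAN wrappers): each physics code is driven through the same abstract type.
        from abc import ABC, abstractmethod

        class PhysicsCode(ABC):
            """High-level handle to one coupled code; the driver never sees code-specific details."""

            @abstractmethod
            def advance(self, dt: float) -> None: ...          # advance this code's state by dt

            @abstractmethod
            def get_field(self, name: str) -> list: ...        # export a field (e.g. fuel temperature)

            @abstractmethod
            def set_field(self, name: str, data: list) -> None: ...   # import feedback from another code

        class ToyFuelPerformance(PhysicsCode):
            """Stand-in for a fuel-performance code: relaxes fuel temperature toward the coolant."""
            def __init__(self, n_pins):
                self.fuel_temp = [900.0] * n_pins
                self.coolant_temp = [560.0] * n_pins

            def advance(self, dt):
                self.fuel_temp = [tf + 0.1 * dt * (tc - tf)
                                  for tf, tc in zip(self.fuel_temp, self.coolant_temp)]

            def get_field(self, name):
                return list(self.fuel_temp)

            def set_field(self, name, data):
                self.coolant_temp = list(data)

        # Driver: operator-split coupling loop over generic PhysicsCode handles.
        fuel = ToyFuelPerformance(n_pins=3)
        for step in range(10):
            fuel.set_field("coolant_temperature", [565.0, 570.0, 560.0])  # would come from the T/H code
            fuel.advance(dt=0.1)
        print(fuel.get_field("fuel_temperature"))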

  20. [Hand surgery in the German DRG System 2007].

    PubMed

    Franz, D; Windolf, J; Kaufmann, M; Siebert, C H; Roeder, N

    2007-05-01

    Hand surgery often requires only a short hospital stay, and patients' comorbidity is low. Many hand surgery procedures do not need inpatient structures. Up until 2006, special hand surgery procedures could not be coded, and the DRG structure did not separate very complex from less complex operations. Specialized hospitals needed a proper case allocation of their patients within the G-DRG system. The differentiation of the DRG structure for hand surgery increased in version 2007 of the G-DRG system. The main parameter for DRG splitting is the complexity of the operation. Furthermore, additional criteria such as more than one significant OR procedure, the patient's age, or special diagnoses influence case allocation. A special OPS code for complex cases treated with hand surgery was implemented. The changes in the DRG structure and the implementation of the new OPS code for complex cases establish a strong basis for the identification of differing patient costs. Different case allocation leads to different economic impacts on departments of hand surgery. Whether the new OPS code becomes a DRG splitting parameter has to be calculated by the German DRG Institute for future DRG versions.

  1. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm. It is a maximum likelihood decoding algorithm. Convolutional codes with the Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. This algorithm is particularly efficient for rate 1/n antipodal convolutional codes and their high-rate punctured codes. It reduces computational complexity by one-third compared with the Viterbi algorithm.
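
    The add-compare-select recursion at the heart of the Viterbi algorithm can be shown in a few lines. The sketch below is a hard-decision Viterbi decoder for the familiar rate-1/2, constraint-length-3 convolutional code with generators (7, 5) octal; it illustrates add-compare-select only and is neither the block-code trellises nor the compare-select-add variant discussed in the chapter.

        # Hard-decision Viterbi decoder sketch for the rate-1/2, constraint-length-3 convolutional
        # code with generators (7, 5) octal.  Illustrates the add-compare-select principle.
        import numpy as np

        def encode(bits):
            s1 = s2 = 0                               # s1 = previous bit, s2 = bit before that
            out = []
            for b in bits:
                out += [b ^ s1 ^ s2, b ^ s2]          # generator polynomials 111 and 101
                s1, s2 = b, s1
            return out

        def viterbi_decode(received, n_bits):
            n_states, INF = 4, 10**9                  # state = (s1, s2)
            metric = [0] + [INF] * (n_states - 1)     # start in the all-zero state
            paths = [[] for _ in range(n_states)]
            for t in range(n_bits):
                r = received[2 * t: 2 * t + 2]
                new_metric = [INF] * n_states
                new_paths = [[] for _ in range(n_states)]
                for state in range(n_states):
                    s1, s2 = state >> 1, state & 1
                    for b in (0, 1):                  # hypothesized input bit
                        expected = [b ^ s1 ^ s2, b ^ s2]
                        branch = sum(x != y for x, y in zip(expected, r))   # Hamming branch metric
                        nxt = (b << 1) | s1
                        cand = metric[state] + branch                       # add
                        if cand < new_metric[nxt]:                          # compare, select
                            new_metric[nxt] = cand
                            new_paths[nxt] = paths[state] + [b]
                metric, paths = new_metric, new_paths
            return paths[int(np.argmin(metric))]

        msg = [1, 0, 1, 1, 0, 0, 1, 0]
        coded = encode(msg + [0, 0])                  # two tail bits terminate the trellis
        coded[3] ^= 1                                 # inject two channel bit errors
        coded[9] ^= 1
        decoded = viterbi_decode(coded, len(msg) + 2)[:len(msg)]
        print("decoded:", decoded, " original:", msg)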

  2. Multi-stage decoding for multi-level block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao

    1991-01-01

    Various types of multistage decoding for multilevel block modulation codes, in which the decoding of a component code at each stage can be either soft decision or hard decision, maximum likelihood or bounded distance, are discussed. Error performance for codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. It was found that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. It was also found that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small, only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes really offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  3. On decoding of multi-level MPSK modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

    The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metric and path metric, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that the soft-decision MSD reduces the decoding complexity drastically, though it is suboptimum. The hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.

  4. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models at sufficient spatial and temporal resolution. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational time than those in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with many thousands of processors have become available in scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolution within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively in general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolutions to simulate the growth of small convective fingers of CO2-dissolved water to larger ones at the reservoir scale. The performance measurements confirmed that both simulators exhibit excellent scalability, showing almost linear speedup with the number of processors up to over ten thousand cores. Generally this allows us to perform coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million-cell grids in a practical time (e.g., less than a second per time step).

  5. Underworld results as a triple (shopping list, posterior, priors)

    NASA Astrophysics Data System (ADS)

    Quenette, S. M.; Moresi, L. N.; Abramson, D.

    2013-12-01

    When studying long-term lithosphere deformation and other such large-scale, spatially distinct and behaviour-rich problems, there is a natural trade-off between the meaning of a model, the observations used to validate the model, and the ability to compute over this space. For example, many models of varying lithologies, rheological properties and underlying physics may reasonably match (or not match) observables. To compound this problem, each realisation is computationally intensive, requiring high resolution, algorithm tuning and code tuning to contemporary computer hardware. It is often intractable to use sampling-based assimilation methods, but with better optimisation, the window of tractability becomes wider. The ultimate goal is to find a sweet-spot where a formal assimilation method is used, and where a model aligns with the observations. It's natural to think of this as an inverse problem, in which the underlying physics may be fixed and the rheological properties and possibly the lithologies themselves are unknown. What happens when we push this approach and treat some portion of the underlying physics as an unknown? At its extreme this is an intractable problem. However, there is an analogy here with how we develop software for these scientific problems. What happens when we treat the changing part of a largely complete code as an unknown, where the changes are working towards this sweet-spot? When posed as a Bayesian inverse problem the result is a triple: the model changes, the real priors, and the real posterior. Not only does this give meaning to the process by which a code changes, it forms a mathematical bridge from an inverse problem to compiler optimisations given such changes. As a stepping-stone example we show a regional-scale heat flow model with constraining observations, and the inverse process including increasing complexity in the software. The implementation uses Underworld-GT (Underworld plus research extras to import geology and export geothermic measures, etc). Underworld uses StGermain, an early (partial) implementation of the theories described here.

  6. Fundamental Studies in the Molecular Basis of Laser Induced Retinal Damage

    DTIC Science & Technology

    1988-01-01

    Cornell University, School of Applied & Engineering Physics, Ithaca, NY 14853. DOD distribution statement: approved for public release; distribution unlimited.

  7. Fundamental Studies in the Molecular Basis of Laser Induced Retinal Damage

    DTIC Science & Technology

    1988-01-01

    Cornell University, School of Applied & Engineering Physics, Ithaca, NY 14853. DOD distribution statement: approved for public release; distribution unlimited.

  8. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Section 1910.144 of the Labor Regulations (Occupational Safety and Health Standards) specifies the safety color code for marking physical hazards, including the basic color for the identification of fire protection equipment and apparatus.

  9. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; An Iterative Decoding Algorithm for Linear Block Codes Based on a Low-Weight Trellis Search

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement if not impossible. In this case, we may wish to trade error performance for the reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimal decoding algorithm for linear block codes. This decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate code-words, one at a time, for test; (2) a sufficient condition for testing a candidate code-word for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) code-word.

  10. Modeling coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ɛ^-(d^n - 1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
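
    The practical distinction between coherent and Pauli (stochastic) errors is how they accumulate over repeated cycles: N systematic rotations by ɛ compose to a single rotation by Nɛ, so the error probability grows roughly quadratically in N, while N independent stochastic flips of probability sin^2(ɛ/2) grow only linearly. The short sketch below illustrates that scaling for a single uncorrected qubit; it is a generic illustration, not the repetition-code analysis of the paper.

        # Accumulation of a coherent over-rotation vs. its Pauli-twirled (stochastic) approximation
        # on a single qubit; illustrative of why coherent errors can outpace the Pauli model.
        import numpy as np

        eps = 0.01                                   # small systematic rotation angle per cycle (radians)
        p = np.sin(eps / 2.0) ** 2                   # equivalent per-cycle Pauli flip probability

        for N in (1, 10, 100, 1000):
            coherent = np.sin(N * eps / 2.0) ** 2    # N rotations compose to one rotation by N*eps
            stochastic = 0.5 * (1.0 - (1.0 - 2.0 * p) ** N)   # flip probability after N independent cycles
            print(f"N = {N:5d}   coherent error = {coherent:.3e}   Pauli-model error = {stochastic:.3e}")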

  11. A novel construction scheme of QC-LDPC codes based on the RU algorithm for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-03-01

    A novel lower-complexity construction scheme of quasi-cyclic low-density parity-check (QC-LDPC) codes for optical transmission systems is proposed based on the structure of the parity-check matrix for the Richardson-Urbanke (RU) algorithm. Furthermore, a novel irregular QC-LDPC(4288, 4020) code with a high code rate of 0.937 is constructed by this novel construction scheme. The simulation analyses show that the net coding gain (NCG) of the novel irregular QC-LDPC(4288, 4020) code is respectively 2.08 dB, 1.25 dB and 0.29 dB more than those of the classic RS(255, 239) code, the LDPC(32640, 30592) code and the irregular QC-LDPC(3843, 3603) code at a bit error rate (BER) of 10^-6. The irregular QC-LDPC(4288, 4020) code has lower encoding/decoding complexity compared with the LDPC(32640, 30592) code and the irregular QC-LDPC(3843, 3603) code. The proposed novel QC-LDPC(4288, 4020) code can be more suitable for the increasing development requirements of high-speed optical transmission systems.

  12. Video streaming with SHVC to HEVC transcoding

    NASA Astrophysics Data System (ADS)

    Gudumasu, Srinivas; He, Yuwen; Ye, Yan; Xiu, Xiaoyu

    2015-09-01

    This paper proposes an efficient Scalable High Efficiency Video Coding (SHVC) to High Efficiency Video Coding (HEVC) transcoder, which can reduce the transcoding complexity significantly and provide a desired trade-off between the transcoding complexity and the transcoded video quality. To reduce the transcoding complexity, some of the coding information, such as coding unit (CU) depth, prediction mode, merge mode, motion vector information, intra direction information and transform unit (TU) depth information, in the SHVC bitstream is mapped and transcoded to a single-layer HEVC bitstream. One major difficulty in transcoding arises when trying to reuse the motion information from the SHVC bitstream, since motion vectors referring to inter-layer reference (ILR) pictures cannot be reused directly in transcoding. Reusing motion information obtained from ILR pictures for those prediction units (PUs) will reduce the complexity of the SHVC transcoder greatly, but a significant reduction in the quality of the picture is observed. Pictures corresponding to the intra refresh pictures in the base layer (BL) will be coded as P pictures in the enhancement layer (EL) in the SHVC bitstream, and directly reusing the intra information from the BL for transcoding will not achieve good coding efficiency. To solve these problems, various transcoding technologies are proposed. The proposed technologies offer different trade-offs between transcoding speed and transcoding quality. They are implemented on the basis of the reference software SHM-6.0 and HM-14.0 for the two-layer spatial scalability configuration. Simulations show that the proposed SHVC software transcoder reduces the transcoding complexity by up to 98-99% using the low-complexity transcoding mode when compared with the cascaded re-encoding method. The transcoder performance at various bitrates with different transcoding modes is compared in terms of transcoding speed and transcoded video quality.

  13. Non-binary LDPC-coded modulation for high-speed optical metro networks with backpropagation

    NASA Astrophysics Data System (ADS)

    Arabaci, Murat; Djordjevic, Ivan B.; Saunders, Ross; Marcoccia, Roberto M.

    2010-01-01

    To simultaneously mitigate the linear and nonlinear channel impairments in high-speed optical communications, we propose the use of non-binary low-density-parity-check-coded modulation in combination with a coarse backpropagation method. By employing backpropagation, we reduce the memory in the channel and in return obtain significant reductions in the complexity of the channel equalizer, which grows exponentially with the channel memory. We then compensate for the remaining channel distortions using forward error correction based on non-binary LDPC codes. We propose a non-binary-LDPC-coded modulation scheme because, compared to a bit-interleaved binary-LDPC-coded modulation scheme employing turbo equalization, the proposed scheme lowers the computational complexity and latency of the overall system while providing impressively larger coding gains.

  14. A Statistician's View of Upcoming Grand Challenges

    NASA Astrophysics Data System (ADS)

    Meng, Xiao Li

    2010-01-01

    In this session we have seen some snapshots of the broad spectrum of challenges in this age of huge, complex, computer-intensive models, data, instruments, and questions. These challenges bridge astronomy at many wavelengths, basic physics, machine learning, and statistics. At one end of our spectrum, we think of 'compressing' the data with non-parametric methods. This raises the question of creating 'pseudo-replicas' of the data for uncertainty estimates. What would be involved in, e.g., bootstrap and related methods? Somewhere in the middle are the non-parametric methods for encapsulating the uncertainty information. At the far end, we find more model-based approaches, with the physics model embedded in the likelihood and analysis. The other distinctive problem is the 'black-box' problem, where one has a complicated, e.g. fundamental physics-based, computer code, or 'black box', and one needs to know how changing the parameters at input -- due to uncertainties of any kind -- will map to changes in the output. All of these connect to challenges in data complexity and computation speed. Dr. Meng will highlight ways to 'cut corners' with advanced computational techniques, such as Parallel Tempering and Equal Energy methods. There are also cautionary tales of running automated analysis on real data -- where "30 sigma" outliers due to data artifacts can be more common than the astrophysical event of interest.
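
    As a minimal illustration of the 'pseudo-replica' idea mentioned above, a nonparametric bootstrap attaches an uncertainty to a derived statistic by resampling the data; the data set and statistic here are arbitrary placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=2.0, scale=0.5, size=200)    # stand-in measurements

    def statistic(x):
        return np.median(x)                            # any derived quantity

    replicas = np.array([statistic(rng.choice(data, size=data.size, replace=True))
                         for _ in range(2000)])        # bootstrap pseudo-replicas
    print(f"median = {statistic(data):.3f} +/- {replicas.std():.3f}")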

  15. Preliminary SAGE Simulations of Volcanic Jets Into a Stratified Atmosphere

    NASA Astrophysics Data System (ADS)

    Peterson, A. H.; Wohletz, K. H.; Ogden, D. E.; Gisler, G. R.; Glatzmaier, G. A.

    2007-12-01

    The SAGE (SAIC Adaptive Grid Eulerian) code employs adaptive mesh refinement in solving the Eulerian equations of complex fluid flow, which is desirable for simulation of volcanic eruptions. The goal of modeling volcanic eruptions is to better develop a code's predictive capabilities in order to understand the dynamics that govern the overall behavior of real eruption columns. To achieve this goal, we focus on the dynamics of underexpanded jets, one of the fundamental physical processes important to explosive eruptions. Previous simulations of laboratory jets modeled in cylindrical coordinates were benchmarked against simulations in CFDLib (Los Alamos National Laboratory), which solves the full Navier-Stokes equations (including the viscous stress tensor), and showed close agreement, indicating that the adaptive mesh refinement used in SAGE may offset the need for explicit calculation of viscous dissipation. We compare gas density contours from these previous simulations, run with the same initial conditions in cylindrical and Cartesian geometries, to laboratory experiments to determine both the validity of the model and the robustness of the code. The SAGE results in both geometries are within several percent of the experiments for the position and density of the incident (intercepting) and reflected shocks, slip lines, shear layers, and Mach disk. To expand our study into a volcanic regime, we simulate large-scale jets in a stratified atmosphere to establish the code's ability to model a sustained jet into a stable atmosphere.

  16. High Fidelity Modeling of Turbulent Mixing and Chemical Kinetics Interactions in a Post-Detonation Flow Field

    NASA Astrophysics Data System (ADS)

    Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael

    2015-06-01

    Driven by the continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first-principles models and solving large systems of equations on highly resolved grids, the combined effects of finite-rate/multi-phase chemical processes (including thermal ignition), turbulent mixing and shock interactions are captured across the spectrum of relevant time scales and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in the dispersion, mixing, ignition and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establishing a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.

  17. A Systematic Approach for Obtaining Performance on Matrix-Like Operations

    NASA Astrophysics Data System (ADS)

    Veras, Richard Michael

    Scientific Computation plays a critical role in the scientific process because it allows us to ask complex queries and test predictions that would otherwise be infeasible to perform experimentally. Because of its power, Scientific Computing has helped drive advances in many fields, ranging from Engineering and Physics to Biology and Sociology to Economics and Drug Development, and even to Machine Learning and Artificial Intelligence. Common among these domains is the desire for timely computational results; thus, a considerable amount of human expert effort is spent on obtaining performance for these scientific codes. However, this is no easy task, because each of these domains presents its own unique set of challenges to software developers, such as domain-specific operations, structurally complex data and ever-growing datasets. Compounding these problems are the myriad constantly changing, complex and unique hardware platforms that an expert must target. Unfortunately, an expert is typically forced to reproduce their effort across multiple problem domains and hardware platforms. In this thesis, we demonstrate the automatic generation of expert-level high-performance scientific codes for Dense Linear Algebra (DLA), Structured Mesh (Stencil), Sparse Linear Algebra and Graph Analytics. In particular, this thesis seeks to address the issue of obtaining performance on many complex platforms for a certain class of matrix-like operations that span many scientific, engineering and social fields. We do this by automating a method used for obtaining high performance in DLA and extending it to the structured, sparse and scale-free domains. We argue that it is the use of the underlying structure found in the data from these domains that enables this process. Thus, obtaining performance for most operations does not occur in isolation from the data being operated on, but instead depends significantly on the structure of the data.

  18. Mesh-based Monte Carlo code for fluorescence modeling in complex tissues with irregular boundaries

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Chen, Leng-Chun; Lloyd, William; Kuo, Shiuhyang; Marcelo, Cynthia; Feinberg, Stephen E.; Mycek, Mary-Ann

    2011-07-01

    There is a growing need for the development of computational models that can account for complex tissue morphology in simulations of photon propagation. We describe the development and validation of a user-friendly, MATLAB-based Monte Carlo code that uses analytically-defined surface meshes to model heterogeneous tissue geometry. The code can use information from non-linear optical microscopy images to discriminate the fluorescence photons (from endogenous or exogenous fluorophores) detected from different layers of complex turbid media. We present a specific application of modeling a layered human tissue-engineered construct (Ex Vivo Produced Oral Mucosa Equivalent, EVPOME) designed for use in repair of oral tissue following surgery. Second-harmonic generation microscopic imaging of an EVPOME construct (oral keratinocytes atop a scaffold coated with human type IV collagen) was employed to determine an approximate analytical expression for the complex shape of the interface between the two layers. This expression can then be inserted into the code to correct the simulated fluorescence for the effect of the irregular tissue geometry.
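
    Although the code described above is MATLAB-based, the central idea of classifying photon positions against an analytically defined interface can be sketched in a few lines; the sinusoidal interface expression and optical coefficients below are placeholders, not values from the paper (Python is used here for consistency with the other sketches in this listing).

    import numpy as np

    def interface_z(x, y):
        """Hypothetical smooth expression for the layer boundary, z = f(x, y), in cm."""
        return 0.05 + 0.01 * np.sin(40.0 * x) * np.cos(40.0 * y)

    LAYERS = {
        "epithelium": {"mu_a": 0.5, "mu_s": 100.0},    # 1/cm, illustrative values only
        "scaffold":   {"mu_a": 0.2, "mu_s": 50.0},
    }

    def layer_of(pos):
        """Classify a photon position (x, y, z) against the analytic interface."""
        x, y, z = pos
        return "epithelium" if z < interface_z(x, y) else "scaffold"

    print(layer_of((0.0, 0.0, 0.02)), LAYERS[layer_of((0.0, 0.0, 0.02))])
    print(layer_of((0.0, 0.0, 0.10)), LAYERS[layer_of((0.0, 0.0, 0.10))])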

  19. Genomic analysis of organismal complexity in the multicellular green alga Volvox carteri

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prochnik, Simon E.; Umen, James; Nedelcu, Aurora

    2010-07-01

    Analysis of the Volvox carteri genome reveals that this green alga's increased organismal complexity and multicellularity are associated with modifications in protein families shared with its unicellular ancestor, and not with large-scale innovations in protein coding capacity. The multicellular green alga Volvox carteri and its morphologically diverse close relatives (the volvocine algae) are uniquely suited for investigating the evolution of multicellularity and development. We sequenced the 138 Mb genome of V. carteri and compared its approximately 14,500 predicted proteins to those of its unicellular relative, Chlamydomonas reinhardtii. Despite fundamental differences in organismal complexity and life history, the two species have similar protein-coding potentials, and few species-specific protein-coding gene predictions. Interestingly, volvocine algal-specific proteins are enriched in Volvox, including those associated with an expanded and highly compartmentalized extracellular matrix. Our analysis shows that increases in organismal complexity can be associated with modifications of lineage-specific proteins rather than large-scale invention of protein-coding capacity.

  20. Model annotation for synthetic biology: automating model to nucleotide sequence conversion

    PubMed Central

    Misirli, Goksel; Hallinan, Jennifer S.; Yu, Tommy; Lawson, James R.; Wimalaratne, Sarala M.; Cooling, Michael T.; Wipat, Anil

    2011-01-01

    Motivation: The need for the automated computational design of genetic circuits is becoming increasingly apparent with the advent of ever more complex and ambitious synthetic biology projects. Currently, most circuits are designed through the assembly of models of individual parts such as promoters, ribosome binding sites and coding sequences. These low level models are combined to produce a dynamic model of a larger device that exhibits a desired behaviour. The larger model then acts as a blueprint for physical implementation at the DNA level. However, the conversion of models of complex genetic circuits into DNA sequences is a non-trivial undertaking due to the complexity of mapping the model parts to their physical manifestation. Automating this process is further hampered by the lack of computationally tractable information in most models. Results: We describe a method for automatically generating DNA sequences from dynamic models implemented in CellML and Systems Biology Markup Language (SBML). We also identify the metadata needed to annotate models to facilitate automated conversion, and propose and demonstrate a method for the markup of these models using RDF. Our algorithm has been implemented in a software tool called MoSeC. Availability: The software is available from the authors' web site http://research.ncl.ac.uk/synthetic_biology/downloads.html. Contact: anil.wipat@ncl.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21296753
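
    The kind of machine-readable markup being described can be illustrated with a small RDF annotation attached to an SBML species, pointing the model element at a DNA part; the URN and part identifier below are hypothetical, and the snippet does not reproduce MoSeC's actual metadata schema.

    import xml.etree.ElementTree as ET

    annotation = """
    <species id="TetR" metaid="meta_TetR">
      <annotation>
        <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                 xmlns:bqbiol="http://biomodels.net/biology-qualifiers/">
          <rdf:Description rdf:about="#meta_TetR">
            <bqbiol:is>
              <rdf:Bag>
                <rdf:li rdf:resource="urn:example:parts:BBa_C0040"/>
              </rdf:Bag>
            </bqbiol:is>
          </rdf:Description>
        </rdf:RDF>
      </annotation>
    </species>
    """

    root = ET.fromstring(annotation.strip())
    RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
    part = root.find(f".//{RDF}li").get(f"{RDF}resource")
    print(root.get("id"), "maps to", part)   # the part URN is then resolved to a sequence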

  1. Physical Model for the Evolution of the Genetic Code

    NASA Astrophysics Data System (ADS)

    Yamashita, Tatsuro; Narikiyo, Osamu

    2011-12-01

    Using the shape space of codons and tRNAs, we give a physical description of genetic code evolution on the basis of the codon capture and ambiguous intermediate scenarios in a consistent manner. In the lowest-dimensional version of our description, a physical quantity, the codon level, is introduced. In terms of the codon levels, the two scenarios are classified into two different routes of the evolutionary process. In the case of the ambiguous intermediate scenario, we perform an evolutionary simulation implementing cost selection of amino acids and confirm a rapid transition of the code change. Such rapidity mitigates the discomfort of the non-unique translation of the code at the intermediate state, which is the weakness of the scenario. In the case of the codon capture scenario, survival against mutations under the mutational pressure minimizing GC content in genomes is simulated, and it is demonstrated that cells which experience only neutral mutations survive.

  2. CodeSlinger: a case study in domain-driven interactive tool design for biomedical coding scheme exploration and use.

    PubMed

    Flowers, Natalie L

    2010-01-01

    CodeSlinger is a desktop application that was developed to aid medical professionals in the intertranslation, exploration, and use of biomedical coding schemes. The application was designed to provide a highly intuitive, easy-to-use interface that simplifies a complex business problem: a set of time-consuming, laborious tasks that were regularly performed by a group of medical professionals involving manually searching coding books, searching the Internet, and checking documentation references. A workplace observation session with a target user revealed the details of the current process and a clear understanding of the business goals of the target user group. These goals drove the design of the application's interface, which centers on searches for medical conditions and displays the codes found in the application's database that represent those conditions. The interface also allows the exploration of complex conceptual relationships across multiple coding schemes.

  3. Navier-Stokes analysis of radial turbine rotor performance

    NASA Technical Reports Server (NTRS)

    Larosiliere, L. M.

    1993-01-01

    An analysis of flow through a radial turbine rotor using the three-dimensional, thin-layer Navier-Stokes code RVC3D is described. The rotor is a solid version of an air-cooled metallic radial turbine having thick trailing edges, shroud clearance, and scalloped-backface clearance. Results are presented at the nominal operating condition using both a zero-clearance model and a model simulating the effects of the shroud and scalloped-backface clearance flows. A comparison with the available test data is made and details of the internal flow physics are discussed, allowing a better understanding of the complex flow distribution within the rotor.

  4. PHOTOMETRIC ANALYSIS OF HS Aqr, EG Cep, VW LMi, AND DU Boo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Djurasevic, G.; Latkovic, O.; Bastuerk, Oe.

    2013-03-15

    We analyze new multicolor light curves for four close late-type binaries: HS Aqr, EG Cep, VW LMi, and DU Boo, in order to determine the orbital and physical parameters of the systems and estimate the distances. The analysis is done using the modeling code of G. Djurasevic and is based on up-to-date measurements of spectroscopic elements. All four systems have complex, asymmetric light curves that we model by including bright or dark spots on one or both components. Our findings indicate that HS Aqr and EG Cep are in semi-detached configurations, while VW LMi and DU Boo are in overcontact configurations.

  5. Combustion research for gas turbine engines

    NASA Technical Reports Server (NTRS)

    Mularz, E. J.; Claus, R. W.

    1985-01-01

    Research on combustion is being conducted at Lewis Research Center to provide improved analytical models of the complex flow and chemical reaction processes which occur in the combustor of gas turbine engines and other aeropropulsion systems. The objective of the research is to obtain a better understanding of the various physical processes that occur in the gas turbine combustor in order to develop models and numerical codes which can accurately describe these processes. Activities include in-house research projects, university grants, and industry contracts, and are classified under the subject areas of advanced numerics, fuel sprays, fluid mixing, and radiation-chemistry. Results are highlighted from several projects.

  6. A reduced complexity highly power/bandwidth efficient coded FQPSK system with iterative decoding

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Divsalar, D.

    2001-01-01

    Based on a representation of FQPSK as a trellis-coded modulation, this paper investigates the potential improvement in power efficiency obtained from the application of simple outer codes to form a concatenated coding arrangement with iterative decoding.

  7. Low-complexity video encoding method for wireless image transmission in capsule endoscope.

    PubMed

    Takizawa, Kenichi; Hamaguchi, Kiyoshi

    2010-01-01

    This paper presents a low-complexity video encoding method applicable to wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which information available at the transmitter is exploited as side information at the receiver. Therefore, complex processes in video encoding, such as estimation of the motion vector, are moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process reduces to decimating the channel-coded original data. We provide a performance evaluation for a low-density parity-check (LDPC) coding method in the AWGN channel.

  8. f1: a code to compute Appell's F1 hypergeometric function

    NASA Astrophysics Data System (ADS)

    Colavecchia, F. D.; Gasaneo, G.

    2004-02-01

    In this work we present the FORTRAN code to compute the hypergeometric function F1(α, β1, β2, γ, x, y) of Appell. The program can compute the F1 function for real values of the variables {x, y} and complex values of the parameters {α, β1, β2, γ}. The code uses different strategies to calculate the function according to the ideas outlined in [F.D. Colavecchia et al., Comput. Phys. Comm. 138 (1) (2001) 29]. Program summary. Title of the program: f1. Catalogue identifier: ADSJ. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSJ. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Computers: PC compatibles, SGI Origin2*. Operating systems under which the program has been tested: Linux, IRIX. Programming language used: Fortran 90. Memory required to execute with typical data: 4 kbytes. No. of bits in a word: 32. No. of bytes in distributed program, including test data, etc.: 52 325. Distribution format: tar gzip file. External subprograms used: Numerical Recipes hypgeo [W.H. Press et al., Numerical Recipes in Fortran 77, Cambridge Univ. Press, 1996] or the chyp routine of R.C. Forrey [J. Comput. Phys. 137 (1997) 79], and rkf45 [L.F. Shampine and H.H. Watts, Rep. SAND76-0585, 1976]. Keywords: numerical methods, special functions, hypergeometric functions, Appell functions, Gauss function. Nature of the physical problem: Computing the Appell F1 function is relevant in atomic collisions and elementary particle physics. It is usually the result of multidimensional integrals involving Coulomb continuum states. Method of solution: The F1 function has a convergent-series definition for |x| < 1 and |y| < 1, and several analytic continuations for other regions of the variable space. The code tests the values of the variables and selects one of the preceding cases. In the convergence region the program uses the series definition near the origin of coordinates, and a numerical integration of the third-order differential parametric equation for the F1 function. It also detects several special cases according to the values of the parameters. Restrictions on the complexity of the problem: The code is restricted to real values of the variables {x, y}. Also, there are some parameter domains that are not covered; these usually imply differences between integer parameters that lead to negative integer arguments of Gamma functions. Typical running time: Depends basically on the variables. The computation of Table 4 of [F.D. Colavecchia et al., Comput. Phys. Comm. 138 (1) (2001) 29] (64 functions) requires approximately 0.33 s on an Athlon 900 MHz processor.
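
    For illustration, the convergent double-series definition mentioned above (valid for |x| < 1 and |y| < 1) can be evaluated directly. The following Python sketch is an independent illustration, not a port of the published FORTRAN code, and the test values are arbitrary.

    from math import factorial

    def poch(q, k):
        """Pochhammer symbol (q)_k = q (q + 1) ... (q + k - 1)."""
        out = 1.0
        for i in range(k):
            out *= q + i
        return out

    def appell_f1(a, b1, b2, c, x, y, terms=60):
        """Double series sum_{m,n} (a)_{m+n} (b1)_m (b2)_n / ((c)_{m+n} m! n!) x^m y^n."""
        assert abs(x) < 1 and abs(y) < 1, "series definition only converges here"
        s = 0.0
        for m in range(terms):
            for n in range(terms):
                s += (poch(a, m + n) * poch(b1, m) * poch(b2, n)
                      / (poch(c, m + n) * factorial(m) * factorial(n))) * x**m * y**n
        return s

    # Sanity check: F1(a; b1, b2; c; x, 0) reduces to the Gauss function 2F1(a, b1; c; x).
    print(appell_f1(0.5, 1.0, 1.0, 2.0, 0.3, 0.0))
    print(appell_f1(0.5, 1.0, 1.0, 2.0, 0.3, 0.2))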

  9. Metadata Management on the SCEC PetaSHA Project: Helping Users Describe, Discover, Understand, and Use Simulation Data in a Large-Scale Scientific Collaboration

    NASA Astrophysics Data System (ADS)

    Okaya, D.; Deelman, E.; Maechling, P.; Wong-Barnum, M.; Jordan, T. H.; Meyers, D.

    2007-12-01

    Large scientific collaborations, such as the SCEC Petascale Cyberfacility for Physics-based Seismic Hazard Analysis (PetaSHA) Project, involve interactions between many scientists who exchange ideas and research results. These groups must organize, manage, and make accessible their community materials of observational data, derivative (research) results, computational products, and community software. The integration of scientific workflows as a paradigm to solve complex computations provides advantages of efficiency, reliability, repeatability, choices, and ease of use. The underlying resource needed for a scientific workflow to function and create discoverable and exchangeable products is the construction, tracking, and preservation of metadata. In the scientific workflow environment there is a two-tier structure of metadata. Workflow-level metadata and provenance describe operational steps, identity of resources, execution status, and product locations and names. Domain-level metadata essentially define the scientific meaning of data, codes and products. To a large degree the metadata at these two levels are separate. However, between these two levels is a subset of metadata produced at one level but is needed by the other. This crossover metadata suggests that some commonality in metadata handling is needed. SCEC researchers are collaborating with computer scientists at SDSC, the USC Information Sciences Institute, and Carnegie Mellon Univ. in order to perform earthquake science using high-performance computational resources. A primary objective of the "PetaSHA" collaboration is to perform physics-based estimations of strong ground motion associated with real and hypothetical earthquakes located within Southern California. Construction of 3D earth models, earthquake representations, and numerical simulation of seismic waves are key components of these estimations. Scientific workflows are used to orchestrate the sequences of scientific tasks and to access distributed computational facilities such as the NSF TeraGrid. Different types of metadata are produced and captured within the scientific workflows. One workflow within PetaSHA ("Earthworks") performs a linear sequence of tasks with workflow and seismological metadata preserved. Downstream scientific codes ingest these metadata produced by upstream codes. The seismological metadata uses attribute-value pairing in plain text; an identified need is to use more advanced handling methods. Another workflow system within PetaSHA ("Cybershake") involves several complex workflows in order to perform statistical analysis of ground shaking due to thousands of hypothetical but plausible earthquakes. Metadata management has been challenging due to its construction around a number of legacy scientific codes. We describe difficulties arising in the scientific workflow due to the lack of this metadata and suggest corrective steps, which in some cases include the cultural shift of domain science programmers coding for metadata.
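
    As a small illustration of the plain-text attribute-value metadata style mentioned above, and of lifting it into a structured form that downstream codes or catalogs can ingest (the attribute names are invented for the example):

    import json

    record_lines = [
        "event_id = scec_0001",
        "grid_spacing_m = 200.0",
        "min_vs_m_per_s = 500.0",
        "source_model = hypothetical_fault_A",
    ]

    def parse_attribute_value(lines):
        """Turn 'attribute = value' lines into a dictionary."""
        meta = {}
        for line in lines:
            if "=" in line:
                key, value = (s.strip() for s in line.split("=", 1))
                meta[key] = value
        return meta

    print(json.dumps(parse_attribute_value(record_lines), indent=2))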

  10. Electrostatic plasma simulation by Particle-In-Cell method using ANACONDA package

    NASA Astrophysics Data System (ADS)

    Blandón, J. S.; Grisales, J. P.; Riascos, H.

    2017-06-01

    Electrostatic plasma is the most representative and basic case in the plasma physics field. One of its main characteristics is its ideal behavior, since it is assumed to be in a thermal equilibrium state. Through this assumption, it is possible to study various complex phenomena such as plasma oscillations, waves, instabilities and damping. Likewise, computational simulation of this specific plasma is the first step towards analyzing the physical mechanisms of plasmas that are not in an equilibrium state, and hence are not ideal. The Particle-In-Cell (PIC) method is widely used for these cases because of its precision. This work presents a PIC method implementation for simulating electrostatic plasma in Python, using ANACONDA packages. The code has been validated by comparison with previous theoretical results for three specific phenomena in cold plasmas: oscillations, the two-stream instability (TSI) and Landau damping (LD). Finally, parameters and results are discussed.
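
    A minimal 1D electrostatic PIC loop of the kind being described can be sketched in plain NumPy; the grid size, particle count and perturbation amplitude below are illustrative choices in normalized units (plasma frequency of 1), not parameters from the paper.

    import numpy as np

    L, Ng, Np, dt, steps = 2 * np.pi, 64, 10000, 0.05, 400
    dx = L / Ng
    qp = -L / Np                              # electron macro-particle charge (n0 = 1)

    x = np.linspace(0.0, L, Np, endpoint=False)
    x = (x + 0.01 * np.cos(x)) % L            # small sinusoidal displacement, cold start
    v = np.zeros(Np)

    k = 2 * np.pi * np.fft.fftfreq(Ng, d=dx)
    k[0] = 1.0                                # placeholder; the mean field is zeroed below

    def field(x):
        """Deposit charge (CIC), solve Poisson with an FFT, return E on the grid."""
        g = x / dx
        i = np.floor(g).astype(int) % Ng
        f = g - np.floor(g)
        rho = (np.bincount(i, weights=qp * (1 - f) / dx, minlength=Ng)
               + np.bincount((i + 1) % Ng, weights=qp * f / dx, minlength=Ng)
               + 1.0)                         # fixed neutralizing ion background
        E_k = -1j * np.fft.fft(rho) / k
        E_k[0] = 0.0                          # no mean electric field in a periodic box
        return np.real(np.fft.ifft(E_k)), i, f

    for _ in range(steps):
        E, i, f = field(x)
        Ep = E[i] * (1 - f) + E[(i + 1) % Ng] * f   # gather the field at the particles
        v += -1.0 * Ep * dt                   # charge/mass = -1 for electrons
        x = (x + v * dt) % L

    print("field energy after ~3 plasma periods:", 0.5 * np.sum(E**2) * dx)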

  11. Bayesian component separation: The Planck experience

    NASA Astrophysics Data System (ADS)

    Wehus, Ingunn Kathrine; Eriksen, Hans Kristian

    2018-05-01

    Bayesian component separation techniques have played a central role in the data reduction process of Planck. The most important strength of this approach is its global nature, in which a parametric and physical model is fitted to the data. Such physical modeling allows the user to constrain very general data models, and jointly probe cosmological, astrophysical and instrumental parameters. This approach also supports statistically robust goodness-of-fit tests in terms of data-minus-model residual maps, which are essential for identifying residual systematic effects in the data. The main challenges are high code complexity and computational cost. Whether or not these costs are justified for a given experiment depends on its final uncertainty budget. We therefore predict that the importance of Bayesian component separation techniques is likely to increase with time for intensity mapping experiments, similar to what has happened in the CMB field, as observational techniques mature, and their overall sensitivity improves.

  12. Lanthanide/Actinide Opacities

    NASA Astrophysics Data System (ADS)

    Hungerford, Aimee; Fontes, Christopher J.

    2018-06-01

    Gravitational wave observations benefit from accompanying electromagnetic signals in order to accurately determine the sky positions of the sources. The ejecta of neutron star mergers are expected to produce such electromagnetic transients, called macronovae (e.g. the recent and unprecedented observation of GW170817). Characteristics of the ejecta include large velocity gradients and the presence of heavy r-process elements, which pose significant challenges to the accurate calculation of radiative opacities and radiation transport. Opacities include a dense forest of bound-bound features arising from near-neutral lanthanide and actinide elements. Here we present an overview of current theoretical opacity determinations that are used by neutron star merger light curve modelers. We will touch on atomic physics and plasma modeling codes that are used to generate these opacities, as well as the limited body of laboratory experiments that may serve as points of validation for these complex atomic physics calculations.

  13. Assessment of chemistry models for compressible reacting flows

    NASA Astrophysics Data System (ADS)

    Lapointe, Simon; Blanquart, Guillaume

    2014-11-01

    Recent technological advances in propulsion and power devices and renewed interest in the development of next generation supersonic and hypersonic vehicles have increased the need for detailed understanding of turbulence-combustion interactions in compressible reacting flows. In numerical simulations of such flows, accurate modeling of the fuel chemistry is a critical component of capturing the relevant physics. Various chemical models are currently being used in reacting flow simulations. However, the differences between these models and their impacts on the fluid dynamics in the context of compressible flows are not well understood. In the present work, a numerical code is developed to solve the fully coupled compressible conservation equations for reacting flows. The finite volume code is based on the theoretical and numerical framework developed by Oefelein (Prog. Aero. Sci. 42 (2006) 2-37) and employs an all-Mach-number formulation with dual time-stepping and preconditioning. The numerical approach is tested on turbulent premixed flames at high Karlovitz numbers. Different chemical models of varying complexity and computational cost are used and their effects are compared.

  14. Modeling Solar Wind Flow with the Multi-Scale Fluid-Kinetic Simulation Suite

    DOE PAGES

    Pogorelov, N.V.; Borovikov, S. N.; Bedford, M. C.; ...

    2013-04-01

    Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) is a package of numerical codes capable of performing adaptive mesh refinement simulations of complex plasma flows in the presence of discontinuities and charge exchange between ions and neutral atoms. The flow of the ionized component is described with the ideal MHD equations, while the transport of atoms is governed either by the Boltzmann equation or multiple Euler gas dynamics equations. We have enhanced the code with additional physical treatments for the transport of turbulence and acceleration of pickup ions in the interplanetary space and at the termination shock. In this article, we present the results of our numerical simulation of the solar wind (SW) interaction with the local interstellar medium (LISM) in different time-dependent and stationary formulations. Numerical results are compared with the Ulysses, Voyager, and OMNI observations. Finally, the SW boundary conditions are derived from in-situ spacecraft measurements and remote observations.

  15. Physical-layer network coding in coherent optical OFDM systems.

    PubMed

    Guan, Xun; Chan, Chun-Kit

    2015-04-20

    We present the first experimental demonstration and characterization of the application of optical physical-layer network coding in coherent optical OFDM systems. It combines two optical OFDM frames to share the same link so as to enhance system throughput, while individual OFDM frames can be recovered with digital signal processing at the destined node.

  16. Complex Organic Parents during Star-Forming Infall

    NASA Astrophysics Data System (ADS)

    Drozdovskaya, Maria; Walsh, Catherine; Visser, Ruud; Harsono, Daniel; van Dishoeck, Ewine

    2013-07-01

    Stars are born upon the gravitational infall of clumps in molecular clouds. Complex organic compounds have been observed to accompany star formation and are also believed to be the simplest ingredients of life. Therefore, understanding complex organics under star-forming conditions is fundamentally interesting. This work models the formation and distribution of several potential parent species for complex organic compounds, such as formaldehyde (H2CO) and methanol (CH3OH), along trajectories of matter parcels as they undergo infall from the cold outer envelope towards the hot core region and eventually onto the disk. The code from Visser et al. (2009, 2011) serves as the basis for this research. The gas-phase chemistry network has now been expanded with grain-surface reactions to form CH3OH and, ultimately, larger organics such as methyl formate (HCOOCH3) and dimethyl ether (CH3OCH3). The intention behind this work is to obtain information on complex organic parents in the star formation scenario by means of a physically and chemically robust model. The availability of complex organic compounds will vary depending on where the parent species are abundant, such as in the pre-stellar stage, the hot core, or only in the disk, and where they are available for a sufficient amount of time for the complexity enhancement. Such model-based conclusions can then be used to explain the observational data on complex organic compounds.

  17. Deep Drawing Simulations With Different Polycrystalline Models

    NASA Astrophysics Data System (ADS)

    Duchêne, Laurent; de Montleau, Pierre; Bouvier, Salima; Habraken, Anne Marie

    2004-06-01

    The goal of this research is to study the anisotropic material behavior during forming processes, represented by both complex yield loci and kinematic-isotropic hardening models. The first part of this paper describes the main concepts of the `Stress-strain interpolation' model that has been implemented in the non-linear finite element code Lagamine. This model consists of a local description of the yield locus based on the texture of the material through the full-constraints Taylor model. The texture evolution due to plastic deformation is computed throughout the FEM simulations. This `local yield locus' approach was initially linked to the classical isotropic Swift hardening law. Recently, a more complex hardening model was implemented: the physically-based microstructural model of Teodosiu. It takes into account intergranular heterogeneity due to the evolution of dislocation structures, which affects isotropic and kinematic hardening. The influence of the hardening model is compared to the influence of the texture evolution by means of deep drawing simulations.

  18. Proposed Reference Spectral Irradiance Standards to Improve Photovoltaic Concentrating System Design and Performance Evaluation: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, D. R.; Emery, K. E.; Gueymard, C.

    2002-05-01

    This conference paper describes how the American Society for Testing and Materials (ASTM), International Electrotechnical Commission (IEC), and International Standards Organization (ISO) standard solar terrestrial spectra (ASTM G-159, IEC-904-3, ISO 9845-1) provide standard spectra for photovoltaic performance applications. Modern terrestrial spectral radiation models and knowledge of atmospheric physics are applied to develop suggested revisions that update the reference spectra. We use a moderately complex radiative transfer model (SMARTS2) to produce the revised spectra. SMARTS2 has been validated against the complex MODTRAN radiative transfer code and against spectral measurements. The model is proposed as an adjunct standard to reproduce the reference spectra. The proposed spectra represent typical clear-sky spectral conditions associated with sites representing reasonable photovoltaic energy production and weathering and durability climates. The proposed spectra are under consideration by ASTM.

  19. Trauma complexity and child abuse: A qualitative study of attachment narratives in adult refugees with PTSD.

    PubMed

    Riber, Karin

    2017-01-01

    The present study aimed to identify trauma types over the life course among adult refugees and to explore their accounts of childhood maltreatment. A sample of 43 Arabic-speaking refugees with posttraumatic stress disorder (PTSD) attending a treatment context in Denmark were interviewed. Using a "Trauma Coding Manual" developed for this study, trauma types were identified in interview transcripts. In both men and women with Iraqi and Palestinian-Lebanese backgrounds, high levels of trauma complexity and high rates of childhood maltreatment were found (63%, n = 27). A number of concepts and categories emerged in the domains childhood physical abuse (CPA), childhood emotional abuse (CEA), and neglect. Participants articulated wide personal impacts of child abuse in emotional, relational, and behavioral domains in their adult lives. These narratives contribute valuable clinical information for refugee trauma treatment providers.

  20. Professional Ethics in Teaching: Towards the Development of a Code of Practice.

    ERIC Educational Resources Information Center

    Campbell, Elizabeth

    2000-01-01

    Provides a theoretical discussion about the process of creating a professional code of ethics for educators. Discusses six key issues and questions, introducing the development of a code of professional ethics and the complexities the code should address. Includes references. (CMK)

  1. DYNECHARM++: a toolkit to simulate coherent interactions of high-energy charged particles in complex structures

    NASA Astrophysics Data System (ADS)

    Bagli, Enrico; Guidi, Vincenzo

    2013-08-01

    A toolkit for the simulation of coherent interactions between high-energy charged particles and complex crystal structures, called DYNECHARM++, has been developed. The code is written in C++, taking advantage of object-oriented programming. It is capable of evaluating the electrical characteristics of complex atomic structures and of simulating and tracking particle trajectories within them. A calculation method for the electrical characteristics based on their expansion in Fourier series has been adopted. Two different approaches to simulating the interaction have been adopted, relying on the full integration of particle trajectories under the continuum potential approximation and on the definition of cross-sections of coherent processes. Finally, the code has been shown to reproduce experimental results and to simulate the interaction of charged particles with complex structures.
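
    The two ingredients named above, a periodic planar continuum potential written as a truncated Fourier series and integration of the transverse trajectory equation d²x/dz² = -(1/pv) dU/dx, can be sketched generically; the Fourier coefficients, interplanar spacing and beam energy below are placeholder values, not those used by DYNECHARM++.

    import numpy as np

    d_p = 1.92e-10                    # interplanar spacing in m (placeholder value)
    U_coeff = [10.0, 4.0, 1.0]        # eV, toy harmonics of the planar potential
    pv = 400e9                        # particle pv ~ 400 GeV, expressed in eV

    def dU_dx(x):
        """Derivative of U(x) = sum_n c_n cos(2 pi n x / d_p)."""
        return sum(-c * (2 * np.pi * n / d_p) * np.sin(2 * np.pi * n * x / d_p)
                   for n, c in enumerate(U_coeff, start=1))

    # Integrate a channeled trajectory over ~20 micrometres of crystal.
    z_step, n_steps = 1e-8, 2000
    x, xp = 0.2 * d_p, 0.0            # initial transverse offset and angle
    traj = []
    for _ in range(n_steps):
        xp += -(dU_dx(x) / pv) * z_step
        x += xp * z_step
        traj.append(x)
    print(f"max transverse excursion: {max(np.abs(traj)) / d_p:.2f} interplanar spacings")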

  2. Exploiting the cannibalistic traits of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Collins, O.

    1993-01-01

    In Reed-Solomon codes and all other maximum distance separable codes, there is an intrinsic relationship between the size of the symbols in a codeword and the length of the codeword. Increasing the number of symbols in a codeword to improve the efficiency of the coding system thus requires using a larger set of symbols. However, long Reed-Solomon codes are difficult to implement, and many communications or storage systems cannot easily accommodate an increased symbol size, e.g., M-ary frequency shift keying (FSK) and photon-counting pulse-position modulation demand a fixed symbol size. A technique for sharing redundancy among many different Reed-Solomon codewords to achieve the efficiency attainable in long Reed-Solomon codes without increasing the symbol size is described. Techniques both for calculating the performance of these new codes and for determining their encoder and decoder complexities are presented. These complexities are usually found to be substantially lower than those of conventional Reed-Solomon codes of similar performance.

  3. Processing module operating methods, processing modules, and communications systems

    DOEpatents

    McCown, Steven Harvey; Derr, Kurt W.; Moore, Troy

    2014-09-09

    A processing module operating method includes using a processing module physically connected to a wireless communications device, requesting that the wireless communications device retrieve encrypted code from a web site and receiving the encrypted code from the wireless communications device. The wireless communications device is unable to decrypt the encrypted code. The method further includes using the processing module, decrypting the encrypted code, executing the decrypted code, and preventing the wireless communications device from accessing the decrypted code. Another processing module operating method includes using a processing module physically connected to a host device, executing an application within the processing module, allowing the application to exchange user interaction data communicated using a user interface of the host device with the host device, and allowing the application to use the host device as a communications device for exchanging information with a remote device distinct from the host device.

  4. High-fidelity plasma codes for burn physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooley, James; Graziani, Frank; Marinak, Marty

    Accurate predictions of equation of state (EOS), ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes, are a relatively recent computational tool that augments both experimental data and theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes and their future development, and the potential impact they play in improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.

  5. Selection of a computer code for Hanford low-level waste engineered-system performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGrail, B.P.; Mahoney, L.A.

    Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington, will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest-ranked computer code was found to be the ARES-CT code, developed at PNL for the US Department of Energy for evaluation of land disposal sites.

  6. DNA as a Binary Code: How the Physical Structure of Nucleotide Bases Carries Information

    ERIC Educational Resources Information Center

    McCallister, Gary

    2005-01-01

    The DNA triplet code also functions as a binary code. Because double-ring compounds cannot bind to double-ring compounds in the DNA code, the sequence of bases classified simply as purines or pyrimidines can encode for smaller groups of possible amino acids. This is an intuitive approach to teaching the DNA code. (Contains 6 figures.)
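
    The purine/pyrimidine reading of codons described above is easy to demonstrate (A and G are purines, C and T are pyrimidines); the short sketch below is written for this summary, not taken from the cited article.

    PURINES = {"A", "G"}

    def binary_pattern(codon):
        """Map each base to 1 (purine: A or G) or 0 (pyrimidine: C or T)."""
        return "".join("1" if base in PURINES else "0" for base in codon.upper())

    for codon in ("ATG", "GAA", "TTC"):
        print(codon, "->", binary_pattern(codon))   # ATG -> 101, GAA -> 111, TTC -> 000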

  7. The efficiency of geophysical adjoint codes generated by automatic differentiation tools

    NASA Astrophysics Data System (ADS)

    Vlasenko, A. V.; Köhl, A.; Stammer, D.

    2016-02-01

    The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the continuous use of AD tools for solving geophysical problems on modern computer architectures.
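
    The operator-overloading flavour of algorithmic differentiation mentioned above can be illustrated with dual numbers, which propagate a value together with its derivative through ordinary arithmetic; real AD tools (OpenAD, Tapenade, NAGWare, TAF) are far more general and operate on Fortran source, so this toy Python sketch only shows the principle.

    import math

    class Dual:
        """Value/derivative pair propagated by operator overloading (forward mode)."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def _lift(self, o):
            return o if isinstance(o, Dual) else Dual(float(o))
        def __add__(self, o):
            o = self._lift(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__
        def __mul__(self, o):
            o = self._lift(o)
            return Dual(self.val * o.val, self.der * o.val + self.val * o.der)  # product rule
        __rmul__ = __mul__
        def sin(self):
            return Dual(math.sin(self.val), math.cos(self.val) * self.der)

    def model(x):
        """A smooth stand-in for a parameter-to-output map of a numerical model."""
        return (3 * x * x + x.sin()) * x

    ad = model(Dual(1.3, 1.0)).der          # seed derivative dx/dx = 1
    def model_plain(v): return (3 * v * v + math.sin(v)) * v
    fd = (model_plain(1.3 + 1e-6) - model_plain(1.3)) / 1e-6
    print(f"forward-mode AD: {ad:.6f}   finite difference: {fd:.6f}")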

  8. Scientific Discovery through Advanced Computing in Plasma Science

    NASA Astrophysics Data System (ADS)

    Tang, William

    2005-03-01

    Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research during the 21st Century. For example, the Department of Energy's ``Scientific Discovery through Advanced Computing'' (SciDAC) Program was motivated in large measure by the fact that formidable scientific challenges in its research portfolio could best be addressed by utilizing the combination of the rapid advances in super-computing technology together with the emergence of effective new algorithms and computational methodologies. The imperative is to translate such progress into corresponding increases in the performance of the scientific codes used to model complex physical systems such as those encountered in high temperature plasma research. If properly validated against experimental measurements and analytic benchmarks, these codes can provide reliable predictive capability for the behavior of a broad range of complex natural and engineered systems. This talk reviews recent progress and future directions for advanced simulations with some illustrative examples taken from the plasma science applications area. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by the combination of access to powerful new computational resources together with innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning a huge range in time and space scales. In particular, the plasma science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of plasma turbulence in magnetically-confined high temperature plasmas. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to the computational science area.

  9. The strategic management of organizational knowledge exchange related to hospital quality measurement and reporting.

    PubMed

    Rangachari, Pavani

    2008-01-01

    CONTEXT/PURPOSE: With the growing momentum toward hospital quality measurement and reporting by public and private health care payers, hospitals face increasing pressures to improve their medical record documentation and administrative data coding accuracy. This study explores the relationship between the organizational knowledge-sharing structure related to quality and hospital coding accuracy for quality measurement. Simultaneously, this study seeks to identify other leadership/management characteristics associated with coding for quality measurement. Drawing upon complexity theory, the literature on "professional complex systems" has put forth various strategies for managing change and turnaround in professional organizations. In so doing, it has emphasized the importance of knowledge creation and organizational learning through interdisciplinary networks. This study integrates complexity, network structure, and "subgoals" theories to develop a framework for knowledge-sharing network effectiveness in professional complex systems. This framework is used to design an exploratory and comparative research study. The sample consists of 4 hospitals, 2 showing "good coding" accuracy for quality measurement and 2 showing "poor coding" accuracy. Interviews and surveys are conducted with administrators and staff in the quality, medical staff, and coding subgroups in each facility. Findings of this study indicate that good coding performance is systematically associated with a knowledge-sharing network structure rich in brokerage and hierarchy (with leaders connecting different professional subgroups to each other and to the external environment), rather than in density (where everyone is directly connected to everyone else). It also implies that for the hospital organization to adapt to the changing environment of quality transparency, senior leaders must undertake proactive and unceasing efforts to coordinate knowledge exchange across physician and coding subgroups and connect these subgroups with the changing external environment.

  10. Distributed Coding/Decoding Complexity in Video Sensor Networks

    PubMed Central

    Cordeiro, Paulo J.; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972

  11. Distributed coding/decoding complexity in video sensor networks.

    PubMed

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality.

  12. Development of a new lattice physics code robin for PWR application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Chen, G.

    2013-07-01

    This paper presents a description of the methodologies and preliminary verification results of a new lattice physics code, ROBIN, being developed for PWR application at Shanghai NuStar Nuclear Power Technology Co., Ltd. The methods used in ROBIN to fulfill the various tasks of lattice physics analysis integrate historical methods with methods that have emerged very recently. The code adopts not only established methods such as equivalence theory for resonance treatment and the method of characteristics for neutron transport calculation, as applied in many of today's production-level LWR lattice codes, but also very useful new methods such as the enhanced neutron current method for Dancoff correction in large and complicated geometry and the log-linear rate constant power depletion method for Gd-bearing fuel. A small sample of verification results is provided to illustrate the type of accuracy achievable using ROBIN. It is demonstrated that ROBIN is capable of satisfying most of the needs for PWR lattice analysis and has the potential to become a production-quality code in the future. (authors)

  13. New technologies for advanced three-dimensional optimum shape design in aeronautics

    NASA Astrophysics Data System (ADS)

    Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno

    1999-05-01

    The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. In order to obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes into a shape optimization loop: most previous works in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimization methods are gradient-based, the more complex the flow solver, the more difficult it is to compute the sensitivity code. However, emerging technologies are making such an ambitious project, of including a state-of-the-art flow analysis code in an optimization loop, feasible. Among those technologies, there are three important issues that this paper wishes to address: shape parametrization, automated differentiation and parallel computing. Shape parametrization allows faster optimization by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be rather fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimization software to run on increasingly large geometries.

  14. Object Based Numerical Zooming Between the NPSS Version 1 and a 1-Dimensional Meanline High Pressure Compressor Design Analysis Code

    NASA Technical Reports Server (NTRS)

    Follen, G.; Naiman, C.; auBuchon, M.

    2000-01-01

    Within NASA's High Performance Computing and Communication (HPCC) program, NASA Glenn Research Center is developing an environment for the analysis/design of propulsion systems for aircraft and space vehicles called the Numerical Propulsion System Simulation (NPSS). The NPSS focuses on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer, along with the concept of numerical zooming between 0-dimensional and 1-, 2-, and 3-dimensional component engine codes. The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Current "state-of-the-art" engine simulations are 0-dimensional in that there is no axial, radial or circumferential resolution within a given component (e.g. a compressor or turbine has no internal station designations). In these 0-dimensional cycle simulations the individual component performance characteristics typically come from a table look-up (map) with adjustments for off-design effects such as variable geometry, Reynolds effects, and clearances. Zooming one or more of the engine components to a higher order, physics-based analysis means a higher order code is executed and the results from this analysis are used to adjust the 0-dimensional component performance characteristics within the system simulation. By drawing on the results from more predictive, physics-based higher-order analysis codes, "cycle" simulations are refined to closely model and predict the complex physical processes inherent to engines. As part of the overall development of the NPSS, NASA and industry began the process of defining and implementing an object class structure that enables Numerical Zooming between NPSS Version 1 (0-dimensional) and higher-order 1-, 2- and 3-dimensional analysis codes. NPSS Version 1 preserves the historical cycle engineering practices but also extends these classical practices into the area of numerical zooming for use within a company's design system. What follows here is a description of successfully zooming 1-dimensional (row-by-row) high-pressure compressor results back to an NPSS 0-dimensional engine simulation and a discussion of the results illustrated using an advanced data visualization tool. This type of high fidelity system-level analysis, made possible by the zooming capability of the NPSS, will greatly improve the fidelity of the engine system simulation and enable the engine system to be "pre-validated" prior to commitment to engine hardware.
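
    As a rough illustration of the zooming idea described above, the sketch below shows one simple way that results from a higher-order component analysis could be folded back into a 0-dimensional cycle simulation as multiplicative map adjustments. All function and variable names are illustrative assumptions, not the NPSS object classes or API.

        # Illustrative sketch of "numerical zooming": scale a 0-D component map so that it
        # reproduces the pressure ratio and efficiency returned by a 1-D meanline analysis
        # at the matching operating point. Names are hypothetical, not the NPSS API.
        def zoom_scalars(map_pr, map_eff, meanline_pr, meanline_eff):
            """Multiplicative adjustments derived from the higher-order analysis."""
            return meanline_pr / map_pr, meanline_eff / map_eff

        def adjusted_map_point(map_lookup, corrected_speed, corrected_flow, scalars):
            """0-D table look-up followed by the zooming correction."""
            pr, eff = map_lookup(corrected_speed, corrected_flow)
            s_pr, s_eff = scalars
            return pr * s_pr, eff * s_eff

        # Toy usage: the 0-D map predicts (PR, eta) = (12.0, 0.86) at this condition,
        # while the meanline (row-by-row) analysis returns (11.4, 0.84).
        toy_map = lambda speed, flow: (12.0, 0.86)
        scalars = zoom_scalars(12.0, 0.86, 11.4, 0.84)
        print(adjusted_map_point(toy_map, 0.95, 1.0, scalars))   # -> (11.4, 0.84)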

  15. Light element opacities of astrophysical interest from ATOMIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colgan, J.; Kilcrease, D. P.; Magee, N. H. Jr.

    We present new calculations of local-thermodynamic-equilibrium (LTE) light element opacities from the Los Alamos ATOMIC code for systems of astrophysical interest. ATOMIC is a multi-purpose code that can generate LTE or non-LTE quantities of interest at various levels of approximation. Our calculations, which include fine-structure detail, represent a systematic improvement over previous Los Alamos opacity calculations using the LEDCOP legacy code. The ATOMIC code uses ab-initio atomic structure data computed from the CATS code, which is based on Cowan's atomic structure codes, and photoionization cross section data computed from the Los Alamos ionization code GIPPER. ATOMIC also incorporates a new equation-of-state (EOS) model based on the chemical picture. ATOMIC incorporates some physics packages from LEDCOP and also includes additional physical processes, such as improved free-free cross sections and additional scattering mechanisms. Our new calculations are made for elements of astrophysical interest and for a wide range of temperatures and densities.

  16. WDEC: A Code for Modeling White Dwarf Structure and Pulsations

    NASA Astrophysics Data System (ADS)

    Bischoff-Kim, Agnès; Montgomery, Michael H.

    2018-05-01

    The White Dwarf Evolution Code (WDEC), written in Fortran, makes models of white dwarf stars. It is fast, versatile, and includes the latest physics. The code evolves hot (∼100,000 K) input models down to a chosen effective temperature by relaxing the models to be solutions of the equations of stellar structure. The code can also be used to obtain g-mode oscillation modes for the models. WDEC has a long history going back to the late 1960s. Over the years, it has been updated and re-packaged for modern computer architectures and has specifically been used in computationally intensive asteroseismic fitting. Generations of white dwarf astronomers and dozens of publications have made use of the WDEC, although the last true instrument paper is the original one, published in 1975. This paper discusses the history of the code, necessary to understand why it works the way it does, details the physics and features in the code today, and points the reader to where to find the code and a user guide.

  17. Multi-stage decoding for multi-level block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1991-01-01

    In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
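
    The staged decoding trade-off is easiest to see in a toy two-level example. The sketch below is illustrative only and does not reproduce the codes analyzed in the paper: a two-level code on 4-ASK in which the first partition bit is protected by a repetition code and decoded first from subset metrics, after which the second level is decoded conditioned on that decision using the Wagner rule for a single-parity-check code.

        # Toy soft-decision multi-stage decoder for a two-level code over 4-ASK {-3,-1,+1,+3}.
        # Level 0 (first partition bit) uses a length-4 repetition code; level 1 uses a
        # length-4 single-parity-check code decoded by the Wagner rule. Illustrative only.
        SYMBOL = {(b1, b0): 2 * (2 * b1 + b0) - 3 for b1 in (0, 1) for b0 in (0, 1)}

        def stage1_repetition(received):
            # Metric for a candidate level-0 bit marginalizes over the undecided level-1 bit.
            cost = {c: sum(min((r - SYMBOL[(b1, c)]) ** 2 for b1 in (0, 1)) for r in received)
                    for c in (0, 1)}
            return min(cost, key=cost.get)

        def stage2_parity(received, b0):
            # Conditioned on the stage-1 decision, decode the single-parity-check code.
            d0 = [(r - SYMBOL[(0, b0)]) ** 2 for r in received]
            d1 = [(r - SYMBOL[(1, b0)]) ** 2 for r in received]
            bits = [0 if a <= b else 1 for a, b in zip(d0, d1)]
            if sum(bits) % 2:                   # parity violated: flip the least reliable bit
                weakest = min(range(len(bits)), key=lambda i: abs(d0[i] - d1[i]))
                bits[weakest] ^= 1
            return bits

        received = [-2.7, -1.2, 0.8, 3.1]       # noisy 4-ASK observations
        b0 = stage1_repetition(received)        # stage 1: decode the strongly protected level
        b1 = stage2_parity(received, b0)        # stage 2: decode the weaker level given stage 1
        print(b0, b1)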

  18. Model for intensity calculation in electron guns

    NASA Astrophysics Data System (ADS)

    Doyen, O.; De Conto, J. M.; Garnier, J. P.; Lefort, M.; Richard, N.

    2007-04-01

    Calculating the current in an electron gun structure is one of the main problems in understanding electron gun physics. Various simulation codes exist but often show significant discrepancies with experiment, and those differences cannot be reduced because of the lack of physical information in these codes. We present a simple physical three-dimensional model, valid for all kinds of gun geometries. This model achieves better precision than the other simulation codes and models we have encountered and allows a real understanding of electron gun physics. It is based only on the calculation of the Laplace electric field at the cathode, the classical Child-Langmuir current density, and a geometrical correction to this law. Finally, the intensity-versus-voltage characteristic curve can be precisely described with only a few physical parameters. Indeed, we have shown that the electron gun current generation is governed mainly by the shape of the electric field at the cathode without beam and by the gap distance of an equivalent infinite planar diode.
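
    Because the model leans on the classical Child-Langmuir law, a short numerical sketch of that baseline is given below. The geometric correction and the equivalent planar-diode gap of the paper's model are represented only by a hypothetical effective gap d_eff and emitting area; these values are assumptions for illustration.

        import math

        EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
        E_CHARGE = 1.602176634e-19   # elementary charge, C
        M_E = 9.1093837015e-31       # electron mass, kg

        def child_langmuir_density(voltage, gap):
            """Space-charge-limited current density for an ideal infinite planar diode,
            J = (4*eps0/9) * sqrt(2e/m) * V**1.5 / d**2, in A/m^2."""
            return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) * voltage ** 1.5 / gap ** 2

        # Hypothetical use: replace the true cathode-anode spacing by an equivalent planar
        # gap d_eff (standing in for the paper's geometrical correction) and multiply by an
        # assumed emitting area to estimate the gun current.
        d_eff = 5.0e-3               # m, assumed equivalent gap
        area = 1.0e-4                # m^2, assumed emitting area
        print(child_langmuir_density(10e3, d_eff) * area)   # current in A at 10 kV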

  19. Kranc: a Mathematica package to generate numerical codes for tensorial evolution equations

    NASA Astrophysics Data System (ADS)

    Husa, Sascha; Hinder, Ian; Lechner, Christiane

    2006-06-01

    We present a suite of Mathematica-based computer-algebra packages, termed "Kranc", which comprise a toolbox to convert certain (tensorial) systems of partial differential evolution equations to parallelized C or Fortran code for solving initial boundary value problems. Kranc can be used as a "rapid prototyping" system for physicists or mathematicians handling very complicated systems of partial differential equations, but through integration into the Cactus computational toolkit we can also produce efficient parallelized production codes. Our work is motivated by the field of numerical relativity, where Kranc is used as a research tool by the authors. In this paper we describe the design and implementation of both the Mathematica packages and the resulting code, we discuss some example applications, and provide results on the performance of an example numerical code for the Einstein equations.
    Program summary
    Title of program: Kranc
    Catalogue identifier: ADXS_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXS_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Distribution format: tar.gz
    Computer for which the program is designed and others on which it has been tested: general computers which run Mathematica (for code generation) and Cactus (for numerical simulations), tested under Linux
    Programming language used: Mathematica, C, Fortran 90
    Memory required to execute with typical data: depends on the number of variables and the grid size; the included ADM example requires 4308 KB
    Has the code been vectorized or parallelized: the code is parallelized based on the Cactus framework
    Number of bytes in distributed program, including test data, etc.: 1 578 142
    Number of lines in distributed program, including test data, etc.: 11 711
    Nature of physical problem: solution of partial differential equations in three space dimensions, formulated as an initial value problem. In particular, the program is geared towards handling very complex tensorial equations as they appear, e.g., in numerical relativity. The worked-out examples comprise the Klein-Gordon equations, the Maxwell equations, and the ADM formulation of the Einstein equations.
    Method of solution: finite differencing and method-of-lines time integration; the numerical code is generated through a high-level Mathematica interface.
    Restrictions on the complexity of the program: typical numerical relativity applications will contain up to several dozen evolution variables and thousands of source terms; Cactus applications have shown scaling up to several thousand processors and grid sizes exceeding 500^3.
    Typical running time: depends on the number of variables and the grid size; the included ADM example takes approximately 100 seconds on a 1600 MHz Intel Pentium M processor.
    Unusual features of the program: based on Mathematica and Cactus.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, A.; Barnard, J.J.; Briggs, R.J.

    The Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL), a collaboration of LBNL, LLNL, and PPPL, has achieved 60-fold pulse compression of ion beams on the Neutralized Drift Compression eXperiment (NDCX) at LBNL. In NDCX, a ramped voltage pulse from an induction cell imparts a velocity "tilt" to the beam; the beam's tail then catches up with its head in a plasma environment that provides neutralization. The HIFS-VNL's mission is to carry out studies of Warm Dense Matter (WDM) physics using ion beams as the energy source; an emerging thrust is basic target physics for heavy ion-driven Inertial Fusion Energy (IFE). These goals require an improved platform, labeled NDCX-II. Development of NDCX-II at modest cost was recently enabled by the availability of induction cells and associated hardware from the decommissioned Advanced Test Accelerator (ATA) facility at LLNL. Our initial physics design concept accelerates a ~30 nC pulse of Li+ ions to ~3 MeV, then compresses it to ~1 ns while focusing it onto a mm-scale spot. It uses the ATA cells themselves (with waveforms shaped by passive circuits) to impart the final velocity tilt; smart pulsers provide small corrections. The ATA accelerated electrons; acceleration of non-relativistic ions involves more complex beam dynamics both transversely and longitudinally. We are using analysis, an interactive one-dimensional kinetic simulation model, and multidimensional Warp-code simulations to develop the NDCX-II accelerator section. Both LSP and Warp codes are being applied to the beam dynamics in the neutralized drift and final focus regions, and the plasma injection process. The status of this effort is described.

  1. Simulation of profile evolution from ramp-up to ramp-down and optimization of tokamak plasma termination with the RAPTOR code

    NASA Astrophysics Data System (ADS)

    Teplukhina, A. A.; Sauter, O.; Felici, F.; Merle, A.; Kim, D.; the TCV Team; the ASDEX Upgrade Team; the EUROfusion MST1 Team

    2017-12-01

    The present work demonstrates the capabilities of the transport code RAPTOR as a fast and reliable simulator of plasma profiles for the entire plasma discharge, i.e. from ramp-up to ramp-down. This code focuses, at this stage, on the simulation of electron temperature and poloidal flux profiles using prescribed equilibrium and some kinetic profiles. In this work we extend the RAPTOR transport model to include a time-varying plasma equilibrium geometry and verify the changes via comparison with ASTRA code simulations. In addition, a new ad hoc transport model based on constant gradients and suitable for simulations of L-H and H-L mode transitions has been incorporated into the RAPTOR code and validated with rapid simulations of the time evolution of the safety factor and the electron temperature over the entire AUG and TCV discharges. An optimization procedure for the plasma termination phase has also been developed during this work. We define the goal of the optimization as ramping down the plasma current as fast as possible while avoiding any disruptions caused by reaching physical or technical limits. Our numerical study of this problem shows that a fast decrease of plasma elongation during current ramp-down can help in reducing plasma internal inductance. An early transition from H- to L-mode allows us to reduce the drop in poloidal beta, which is also important for plasma MHD stability and control. This work shows how these complex nonlinear interactions can be optimized automatically using relevant cost functions and constraints. Preliminary experimental results for TCV are demonstrated.

  2. 40 CFR 51.50 - What definitions apply to this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude data for point sources. The six codes and their definitions are: (1) Coordinate Data Source Code: The... physical piece of or a closely related set of equipment. The EPA's reporting format for a given inventory...

  3. Long Non-Coding RNAs (lncRNAs) of Sea Cucumber: Large-Scale Prediction, Expression Profiling, Non-Coding Network Construction, and lncRNA-microRNA-Gene Interaction Analysis of lncRNAs in Apostichopus japonicus and Holothuria glaberrima During LPS Challenge and Radial Organ Complex Regeneration.

    PubMed

    Mu, Chuang; Wang, Ruijia; Li, Tianqi; Li, Yuqiang; Tian, Meilin; Jiao, Wenqian; Huang, Xiaoting; Zhang, Lingling; Hu, Xiaoli; Wang, Shi; Bao, Zhenmin

    2016-08-01

    Long non-coding RNA (lncRNA) structurally resembles mRNA but cannot be translated into protein. Although the systematic identification and characterization of lncRNAs have been increasingly reported in model species, information concerning non-model species is still lacking. Here, we report the first systematic identification and characterization of lncRNAs in two sea cucumber species: (1) Apostichopus japonicus during lipopolysaccharide (LPS) challenge and in healthy tissues and (2) Holothuria glaberrima during radial organ complex regeneration, using RNA-seq datasets and bioinformatics analysis. We identified A. japonicus and H. glaberrima lncRNAs that were differentially expressed during LPS challenge and radial organ complex regeneration, respectively. Notably, the predicted lncRNA-microRNA-gene trinities revealed that, in addition to targeting protein-coding transcripts, miRNAs might also target lncRNAs, thereby participating in a potential novel layer of regulatory interactions among non-coding RNA classes in echinoderms. Furthermore, the constructed coding-non-coding network implied the potential involvement of lncRNA-gene interactions during the regulation of several important genes (e.g., Toll-like receptor 1 [TLR1] and transglutaminase-1 [TGM1]) in response to LPS challenge and radial organ complex regeneration in sea cucumbers. Overall, this pioneering systematic identification, annotation, and characterization of lncRNAs in echinoderms paves the way for similar studies and future genetic, genomic, and evolutionary research in non-model species.

  4. How patients and clinicians make meaning of physical suffering in mental health evaluations.

    PubMed

    Carson, Nicholas J; Katz, Arlene M; Alegría, Margarita

    2016-10-01

    Clinicians in community mental health settings frequently evaluate individuals suffering from physical health problems. How patients make meaning of such "comorbidity" can affect mental health in ways that may be influenced by cultural expectations and by the responses of clinicians, with implications for delivering culturally sensitive care. A sample of 30 adult mental health intakes exemplifying physical illness assessment was identified from a larger study of patient-provider communication. The recordings of patient-provider interactions were coded using an information checklist containing 21 physical illness items. Intakes were analyzed for themes of meaning making by patients and responses by clinicians. Post-diagnostic interviews with these patients and clinicians were analyzed in similar fashion. Clinicians facilitated disclosures of physical suffering to varying degrees and formulated them in the context of the culture of mental health services. Patients discussed their perceptions of what was at stake in their experience of physical illness: existential loss, embodiment, and limits on the capacity to work and on their sense of agency. The experiences of physical illness, mental health difficulties, and social stressors were described as mutually reinforcing. In mental health intakes, patients attributed meaning to the negative effects of physical health problems in relation to mental health functioning and social stressors. Decreased capacity to work was a particularly salient concern. The complexity of these patient-provider interactions may best be captured by a sociosomatic formulation that addresses the meaning of physical and mental illness in relation to social stressors. © The Author(s) 2016.

  5. Development of numerical methods for overset grids with applications for the integrated Space Shuttle vehicle

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    1995-01-01

    Algorithms and computer code developments were performed for the overset grid approach to solving computational fluid dynamics problems. The techniques developed are applicable to compressible Navier-Stokes flow for any general complex configurations. The computer codes developed were tested on different complex configurations with the Space Shuttle launch vehicle configuration as the primary test bed. General, efficient and user-friendly codes were produced for grid generation, flow solution and force and moment computation.

  6. A new theory of development: the generation of complexity in ontogenesis.

    PubMed

    Barbieri, Marcello

    2016-03-13

    Today there is a very wide consensus on the idea that embryonic development is the result of a genetic programme and of epigenetic processes. Many models have been proposed in this theoretical framework to account for the various aspects of development, and virtually all of them have one thing in common: they do not acknowledge the presence of organic codes (codes between organic molecules) in ontogenesis. Here it is argued instead that embryonic development is a convergent increase in complexity that necessarily requires organic codes and organic memories, and a few examples of such codes are described. This is the code theory of development, a theory that was originally inspired by an algorithm that is capable of reconstructing structures from incomplete information, an algorithm that here is briefly summarized because it makes it intuitively appealing how a convergent increase in complexity can be achieved. The main thesis of the new theory is that the presence of organic codes in ontogenesis is not only a theoretical necessity but, first and foremost, an idea that can be tested and that has already been found to be in agreement with the evidence. © 2016 The Author(s).

  7. Theoretical atomic physics code development I: CATS: Cowan Atomic Structure Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdallah, J. Jr.; Clark, R.E.H.; Cowan, R.D.

    An adaptation of R.D. Cowan's Atomic Structure program, CATS, has been developed as part of the Theoretical Atomic Physics (TAPS) code development effort at Los Alamos. CATS has been designed to be easy to run and to produce data files that can interface with other programs easily. The CATS produced data files currently include wave functions, energy levels, oscillator strengths, plane-wave-Born electron-ion collision strengths, photoionization cross sections, and a variety of other quantities. This paper describes the use of CATS. 10 refs.

  8. Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model

    NASA Astrophysics Data System (ADS)

    O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.

    2015-12-01

    Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.
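
    A minimal sketch of the flag-and-refine idea behind adaptive mesh refinement follows. It is one-dimensional and illustrative only (the model described above uses block-structured refinement in both space and time): cells whose neighbouring averages jump by more than a threshold are split, so resolution is concentrated on fronts such as a cyclone core or an atmospheric river.

        import numpy as np

        def refine_1d(x_edges, u, threshold):
            """One flag-and-refine pass on a 1-D mesh.

            x_edges : cell edge coordinates, length n+1
            u       : cell-average values, length n
            Cells adjacent to a large jump in u are split in two."""
            jumps = np.abs(np.diff(u))                 # jump between neighbouring cell averages
            flags = np.zeros(len(u), dtype=bool)
            flags[:-1] |= jumps > threshold
            flags[1:] |= jumps > threshold
            new_edges = []
            for i, flagged in enumerate(flags):
                new_edges.append(x_edges[i])
                if flagged:                            # insert a new edge at the cell midpoint
                    new_edges.append(0.5 * (x_edges[i] + x_edges[i + 1]))
            new_edges.append(x_edges[-1])
            return np.array(new_edges)

        # Toy usage: a step in the solution triggers refinement only near the front.
        x = np.linspace(0.0, 1.0, 11)
        u = np.where(np.arange(10) < 5, 0.0, 1.0)
        print(refine_1d(x, u, threshold=0.5))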

  9. Multi-level trellis coded modulation and multi-stage decoding

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu

    1990-01-01

    Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.

  10. Precision cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Fendt, William Ashton, Jr.

    2009-09-01

    Experimental efforts of the last few decades have brought a golden age to mankind's endeavor to understand the physical properties of the Universe throughout its history. Recent measurements of the cosmic microwave background (CMB) provide strong confirmation of the standard big bang paradigm, as well as introducing new mysteries unexplained by current physical models. In the following decades, even more ambitious scientific endeavours will begin to shed light on the new physics by looking at the detailed structure of the Universe at both very early and recent times. Modern data have allowed us to begin to test inflationary models of the early Universe, and the near future will bring higher precision data and much stronger tests. Cracking the codes hidden in these cosmological observables is a difficult and computationally intensive problem. The challenges will continue to increase as future experiments bring larger and more precise data sets. Because of the complexity of the problem, we are forced to use approximate techniques and make simplifying assumptions to ease the computational workload. While this has been reasonably sufficient until now, hints of the limitations of our techniques have begun to come to light. For example, the likelihood approximation used for analysis of CMB data from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite was shown to have shortfalls, leading to pre-emptive conclusions drawn about current cosmological theories. It can also be shown that an approximate method used by all current analysis codes to describe the recombination history of the Universe will not be sufficiently accurate for future experiments. With a new CMB satellite scheduled for launch in the coming months, it is vital that we develop techniques to improve the analysis of cosmological data. This work develops a novel technique that both avoids the use of approximate computational codes and allows the application of new, more precise analysis methods. These techniques will help in the understanding of new physics contained in current and future data sets as well as benefit the research efforts of the cosmology community. Our idea is to shift the computationally intensive pieces of the parameter estimation framework to a parallel training step. We then provide a machine learning code that uses this training set to learn the relationship between the underlying cosmological parameters and the function we wish to compute. This code is very accurate and simple to evaluate. It can provide substantial speed-ups of parameter estimation codes. For some applications this provides the convenience of obtaining results faster, while in other cases this allows the use of codes that would be impossible to apply in the brute-force setting. In this thesis we provide several examples where our method allows more accurate computation of functions important for data analysis than is currently possible. As the techniques developed in this work are very general, there is no doubt a wide array of applications both inside and outside of cosmology. We have already seen this interest as other scientists have presented ideas for using our algorithm to improve their computational work, indicating its importance as modern experiments push forward. In fact, our algorithm will play an important role in the parameter analysis of Planck, the next generation CMB space mission.
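
    The train-then-emulate idea described in the abstract can be sketched in a few lines. The expensive_observable below is a toy stand-in for a slow Boltzmann or recombination code, and a Gaussian-process regressor is merely one plausible choice of learner; neither is the specific algorithm used in the thesis.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def expensive_observable(theta):
            # Toy stand-in for a slow cosmology code mapping parameters -> observable.
            omega_m, h = theta
            return np.sin(10.0 * omega_m) * h + omega_m ** 2

        # Offline, embarrassingly parallel training step.
        rng = np.random.default_rng(0)
        train_theta = rng.uniform([0.1, 0.5], [0.5, 0.9], size=(200, 2))
        train_y = np.array([expensive_observable(t) for t in train_theta])

        emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), normalize_y=True)
        emulator.fit(train_theta, train_y)

        # Inside the parameter-estimation loop, the cheap emulator replaces the expensive call.
        theta_trial = np.array([[0.3, 0.7]])
        print(emulator.predict(theta_trial), expensive_observable(theta_trial[0]))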

  11. Patient Self-Defined Goals: Essentials of Person-Centered Care for Serious Illness.

    PubMed

    Schellinger, Sandra Ellen; Anderson, Eric Worden; Frazer, Monica Schmitz; Cain, Cindy Lynn

    2018-01-01

    This research, a descriptive qualitative analysis of self-defined serious illness goals, expands the knowledge of what goals are important beyond the physical, making existing disease-specific guidelines more holistic. Integration of goals of care discussions and documentation is standard for quality palliative care but not consistently executed in general and specialty practice. Over 14 months, lay health-care workers (care guides) provided monthly supportive visits for 160 patients with advanced heart failure, cancer, and dementia expected to die in 2 to 3 years. Care guides explored what was most important to patients and documented their self-defined goals on a medical record flow sheet. Using definitions of an expanded set of whole-person domains adapted from the National Consensus Project (NCP) Clinical Practice Guidelines for Quality Palliative Care, 999 goals and their associated plans were deductively coded and examined. Four themes were identified: medical, nonmedical, multiple, and global. Forty percent of goals were coded into the medical domain; 40% were coded to nonmedical domains: social (9%), ethical (7%), family (6%), financial/legal (5%), psychological (5%), housing (3%), legacy/bereavement (3%), spiritual (1%), and end-of-life care (1%). Sixteen percent of the goals were complex and reflected a mix of medical and nonmedical domains ("multiple" goals). The remaining goals (4%) were too global to attribute to an NCP domain. Self-defined serious illness goals express experiences beyond physical health and extend into all aspects of the whole person. It is feasible to elicit and record serious illness goals. This approach to goals can support meaningful person-centered care, decision-making, and planning that accords with individual preferences in late life.

  12. DarkBit: a GAMBIT module for computing dark matter observables and likelihoods

    NASA Astrophysics Data System (ADS)

    Bringmann, Torsten; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Kahlhoefer, Felix; Kvellestad, Anders; Putze, Antje; Savage, Christopher; Scott, Pat; Weniger, Christoph; White, Martin; Wild, Sebastian

    2017-12-01

    We introduce DarkBit, an advanced software code for computing dark matter constraints on various extensions to the Standard Model of particle physics, comprising both new native code and interfaces to external packages. This release includes a dedicated signal yield calculator for gamma-ray observations, which significantly extends current tools by implementing a cascade-decay Monte Carlo, as well as a dedicated likelihood calculator for current and future experiments ( gamLike). This provides a general solution for studying complex particle physics models that predict dark matter annihilation to a multitude of final states. We also supply a direct detection package that models a large range of direct detection experiments ( DDCalc), and that provides the corresponding likelihoods for arbitrary combinations of spin-independent and spin-dependent scattering processes. Finally, we provide custom relic density routines along with interfaces to DarkSUSY, micrOMEGAs, and the neutrino telescope likelihood package nulike. DarkBit is written in the framework of the Global And Modular Beyond the Standard Model Inference Tool ( GAMBIT), providing seamless integration into a comprehensive statistical fitting framework that allows users to explore new models with both particle and astrophysics constraints, and a consistent treatment of systematic uncertainties. In this paper we describe its main functionality, provide a guide to getting started quickly, and show illustrative examples for results obtained with DarkBit (both as a stand-alone tool and as a GAMBIT module). This includes a quantitative comparison between two of the main dark matter codes ( DarkSUSY and micrOMEGAs), and application of DarkBit 's advanced direct and indirect detection routines to a simple effective dark matter model.

  13. Comparative investigation of N donor ligand-lanthanide complexes from the metal and ligand point of view

    NASA Astrophysics Data System (ADS)

    Prüßmann, T.; Denecke, M. A.; Geist, A.; Rothe, J.; Lindqvist-Reis, P.; Löble, M.; Breher, F.; Batchelor, D. R.; Apostolidis, C.; Walter, O.; Caliebe, W.; Kvashnina, K.; Jorissen, K.; Kas, J. J.; Rehr, J. J.; Vitova, T.

    2013-04-01

    N-donor ligands such as the n-Pr-BTP (2,6-bis(5,6-dipropyl-1,2,4-triazin-3-yl)pyridine) studied here preferentially bind An(III) over Ln(III) in liquid-liquid separation of trivalent actinides from spent nuclear fuel. The chemical and physical processes responsible for this selectivity are not yet well understood. We present systematic comparative near-edge X-ray absorption structure (XANES) spectroscopy investigations at the Gd L3 edge of [Gd(BTP)3](NO3)3, [Gd(BTP)3](OTf)3, Gd(NO3)3, Gd(OTf)3 and at the N K edge of [Gd(BTP)3](NO3)3, Gd(NO3)3 complexes. The pre-edge absorption resonance in Gd L3 edge high-energy resolution X-ray absorption near edge structure spectra (HR-XANES) is explained as arising from 2p3/2 → 4f/5d electronic transitions by calculations with the FEFF9.5 code. Experimental evidence is found for higher electronic density on Gd in [Gd(BTP)3](NO3)3 and [Gd(BTP)3](OTf)3 compared to Gd in Gd(NO3)3 and Gd(OTf)3, and on N in [Gd(BTP)3](NO3)3 compared to n-Pr-BTP. The origin of the pre-edge structure in the N K edge XANES is explained by density functional theory (DFT) with the ORCA code. Results at the N K edge suggest a change in ligand orbital occupancies and mixing upon complexation, but further work is necessary to interpret the observed spectral variations.

  14. Defining the diverse spectrum of inversions, complex structural variation, and chromothripsis in the morbid human genome.

    PubMed

    Collins, Ryan L; Brand, Harrison; Redin, Claire E; Hanscom, Carrie; Antolik, Caroline; Stone, Matthew R; Glessner, Joseph T; Mason, Tamara; Pregno, Giulia; Dorrani, Naghmeh; Mandrile, Giorgia; Giachino, Daniela; Perrin, Danielle; Walsh, Cole; Cipicchio, Michelle; Costello, Maura; Stortchevoi, Alexei; An, Joon-Yong; Currall, Benjamin B; Seabra, Catarina M; Ragavendran, Ashok; Margolin, Lauren; Martinez-Agosto, Julian A; Lucente, Diane; Levy, Brynn; Sanders, Stephan J; Wapner, Ronald J; Quintero-Rivera, Fabiola; Kloosterman, Wigard; Talkowski, Michael E

    2017-03-06

    Structural variation (SV) influences genome organization and contributes to human disease. However, the complete mutational spectrum of SV has not been routinely captured in disease association studies. We sequenced 689 participants with autism spectrum disorder (ASD) and other developmental abnormalities to construct a genome-wide map of large SV. Using long-insert jumping libraries at 105X mean physical coverage and linked-read whole-genome sequencing from 10X Genomics, we document seven major SV classes at ~5 kb SV resolution. Our results encompass 11,735 distinct large SV sites, 38.1% of which are novel and 16.8% of which are balanced or complex. We characterize 16 recurrent subclasses of complex SV (cxSV), revealing that: (1) cxSV are larger and rarer than canonical SV; (2) each genome harbors 14 large cxSV on average; (3) 84.4% of large cxSVs involve inversion; and (4) most large cxSV (93.8%) have not been delineated in previous studies. Rare SVs are more likely to disrupt coding and regulatory non-coding loci, particularly when truncating constrained and disease-associated genes. We also identify multiple cases of catastrophic chromosomal rearrangements known as chromoanagenesis, including somatic chromoanasynthesis, and extreme balanced germline chromothripsis events involving up to 65 breakpoints and 60.6 Mb across four chromosomes, further defining rare categories of extreme cxSV. These data provide a foundational map of large SV in the morbid human genome and demonstrate a previously underappreciated abundance and diversity of cxSV that should be considered in genomic studies of human disease.

  15. Final report on LDRD project : coupling strategies for multi-physics applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopkins, Matthew Morgan; Moffat, Harry K.; Carnes, Brian

    Many current and future modeling applications at Sandia, including ASC milestones, will critically depend on the simultaneous solution of vastly different physical phenomena. Issues due to code coupling are often not addressed, understood, or even recognized. The objectives of the LDRD have been both theoretical analysis and code development. We show that we have provided a fundamental analysis of coupling, i.e., when strong coupling vs. a successive substitution strategy is needed. We have enabled the implementation of tighter coupling strategies through additions to the NOX and Sierra code suites to make coupling strategies available now. We have leveraged existing functionality to do this. Specifically, we have built into NOX the capability to handle fully coupled simulations from multiple codes, and we have also built into NOX the capability to handle Jacobian-Free Newton-Krylov simulations that link multiple applications. We show how this capability may be accessed from within the Sierra Framework as well as from outside of Sierra. The critical impact from this LDRD is that we have shown how and have delivered strategies for enabling strong Newton-based coupling while respecting the modularity of existing codes. This will facilitate the use of these codes in a coupled manner to solve multi-physics applications.
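
    To make the distinction between successive substitution and strong, Newton-based coupling concrete, here is a toy two-field example. The physics is fabricated and the residuals are placeholders; the solver call (SciPy's Jacobian-free Newton-Krylov routine) merely mirrors the kind of capability described above and is not the NOX/Sierra interface.

        import numpy as np
        from scipy.optimize import newton_krylov

        # Toy coupled "two-physics" system: temperature T feeds a chemistry source, and the
        # chemistry feeds back into the heat balance. Entirely fabricated for illustration.
        def residual(x):
            T, c = x
            r_thermal = T - 300.0 - 50.0 * c               # heat balance depends on c
            r_chemistry = c - 2.0 * np.exp(-1000.0 / T)    # reaction extent depends on T
            return np.array([r_thermal, r_chemistry])

        # Loose coupling: successive substitution, one field at a time.
        T, c = 300.0, 0.0
        for _ in range(100):
            T = 300.0 + 50.0 * c
            c = 2.0 * np.exp(-1000.0 / T)

        # Strong coupling: solve both residuals simultaneously, Jacobian-free.
        x_jfnk = newton_krylov(residual, np.array([300.0, 0.0]), f_tol=1e-10)
        print((T, c), x_jfnk)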

  16. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are needed for repository modeling are severely lacking. In addition, most of the existing reactive transport codes were developed for non-radioactive contaminants, and they need to be adapted to account for radionuclide decay and in-growth. Access to the source codes is generally limited. Because the problems of interest for the Waste IPSC are likely to result in relatively large computational models, a compact memory-usage footprint and a fast/robust solution procedure will be needed. A robust massively parallel processing (MPP) capability will also be required to provide reasonable turnaround times on the analyses that will be performed with the code. A performance assessment (PA) calculation for a waste disposal system generally requires a large number (hundreds to thousands) of model simulations to quantify the effect of model parameter uncertainties on the predicted repository performance. A set of codes for a PA calculation must be sufficiently robust and fast in terms of code execution. A PA system as a whole must be able to provide multiple alternative models for a specific set of physical/chemical processes, so that the users can choose various levels of modeling complexity based on their modeling needs. This requires PA codes, preferably, to be highly modularized. Most of the existing codes have difficulties meeting these requirements.
Based on the gap analysis results, we have made the following recommendations for the code selection and code development for the NEAMS waste IPSC: (1) build fully coupled high-fidelity THCMBR codes using the existing SIERRA codes (e.g., ARIA and ADAGIO) and platform, (2) use DAKOTA to build an enhanced performance assessment system (EPAS), and build a modular code architecture and key code modules for performance assessments. The key chemical calculation modules will be built by expanding the existing CANTERA capabilities as well as by extracting useful components from other existing codes.

  17. Simulation of Shear Alfvén Waves in LAPD using the BOUT++ code

    NASA Astrophysics Data System (ADS)

    Wei, Di; Friedman, B.; Carter, T. A.; Umansky, M. V.

    2011-10-01

    The linear and nonlinear physics of shear Alfvén waves is investigated using the 3D Braginskii fluid code BOUT++. The code has been verified against analytical calculations for the dispersion of kinetic and inertial Alfvén waves. Various mechanisms for forcing Alfvén waves in the code are explored, including introducing localized current sources similar to physical antennas used in experiments. Using this foundation, the code is used to model nonlinear interactions among shear Alfvén waves in a cylindrical magnetized plasma, such as that found in the Large Plasma Device (LAPD) at UCLA. In the future this investigation will allow for examination of the nonlinear interactions between shear Alfvén waves in both laboratory and space plasmas in order to compare to predictions of MHD turbulence.

  18. Recoding Numerics to Geometrics for Complex Discrimination Tasks; A Feasibility Study of Coding Strategy.

    ERIC Educational Resources Information Center

    Simpkins, John D.

    Processing complex multivariate information effectively when relational properties of information sub-groups are ambiguous is difficult for man and man-machine systems. However, the information processing task is made easier through code study, cybernetic planning, and accurate display mechanisms. An exploratory laboratory study designed for the…

  19. The Long Non-coding RNA HOTTIP Enhances Pancreatic Cancer Cell Proliferation, Survival and Migration

    EPA Science Inventory

    ABSTRACTHOTTIP is a long non-coding RNA (lncRNA) transcribed from the 5' tip of the HOXA locus and is associated with the polycomb repressor complex 2 (PRC2) and WD repeat containing protein 5 (WDR5)/mixed lineage leukemia 1 (MLL1) chromatin modifying complexes. HOTTIP is expres...

  20. H.264 Layered Coded Video over Wireless Networks: Channel Coding and Modulation Constraints

    NASA Astrophysics Data System (ADS)

    Ghandi, M. M.; Barmada, B.; Jones, E. V.; Ghanbari, M.

    2006-12-01

    This paper considers the prioritised transmission of H.264 layered coded video over wireless channels. For appropriate protection of video data, methods such as prioritised forward error correction coding (FEC) or hierarchical quadrature amplitude modulation (HQAM) can be employed, but each imposes system constraints. FEC provides good protection but at the price of a high overhead and complexity. HQAM is less complex and does not introduce any overhead, but permits only fixed data ratios between the priority layers. Such constraints are analysed and practical solutions are proposed for layered transmission of data-partitioned and SNR-scalable coded video where combinations of HQAM and FEC are used to exploit the advantages of both coding methods. Simulation results show that the flexibility of SNR scalability and absence of picture drift imply that SNR scalability as modelled is superior to data partitioning in such applications.
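
    A sketch of the hierarchical-QAM idea discussed above follows; it shows the mapping only, with illustrative parameter names rather than the paper's exact constellation parameters. The high-priority bits select the quadrant, the low-priority bits select the point inside it, and the spacing ratio controls how unequal the protection is.

        # Non-uniform (hierarchical) 16-QAM mapping sketch. b_hp are the two high-priority
        # bits (quadrant), b_lp the two low-priority bits (point inside the quadrant).
        # alpha = d1/d2 > 2 enlarges the quadrant spacing, protecting the HP layer more;
        # alpha = 2 recovers uniform 16-QAM. Parameter names are illustrative.
        def hqam16_map(b_hp, b_lp, d2=1.0, alpha=3.0):
            d1 = alpha * d2
            i = (1 - 2 * b_hp[0]) * d1 + (1 - 2 * b_lp[0]) * d2   # in-phase coordinate
            q = (1 - 2 * b_hp[1]) * d1 + (1 - 2 * b_lp[1]) * d2   # quadrature coordinate
            return complex(i, q)

        # The sixteen constellation points for alpha = 3:
        points = [hqam16_map((a, b), (c, d))
                  for a in (0, 1) for b in (0, 1) for c in (0, 1) for d in (0, 1)]
        print(points)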

  1. Multi-scale and multi-domain computational astrophysics.

    PubMed

    van Elteren, Arjen; Pelupessy, Inti; Zwart, Simon Portegies

    2014-08-06

    Astronomical phenomena are governed by processes on all spatial and temporal scales, ranging from days to the age of the Universe (13.8 Gyr) as well as from kilometre size up to the size of the Universe. This enormous range in scales is contrived, but as long as there is a physical connection between the smallest and largest scales it is important to be able to resolve them all, and for the study of many astronomical phenomena this governance is present. Although covering all these scales is a challenge for numerical modellers, the most challenging aspect is the equally broad and complex range in physics, and the way in which these processes propagate through all scales. In our recent effort to cover all scales and all relevant physical processes on these scales, we have designed the Astrophysics Multipurpose Software Environment (AMUSE). AMUSE is a Python-based framework with production quality community codes and provides a specialized environment to connect this plethora of solvers to a homogeneous problem-solving environment. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  2. Refining the accuracy of validated target identification through coding variant fine-mapping in type 2 diabetes.

    PubMed

    Mahajan, Anubha; Wessel, Jennifer; Willems, Sara M; Zhao, Wei; Robertson, Neil R; Chu, Audrey Y; Gan, Wei; Kitajima, Hidetoshi; Taliun, Daniel; Rayner, N William; Guo, Xiuqing; Lu, Yingchang; Li, Man; Jensen, Richard A; Hu, Yao; Huo, Shaofeng; Lohman, Kurt K; Zhang, Weihua; Cook, James P; Prins, Bram Peter; Flannick, Jason; Grarup, Niels; Trubetskoy, Vassily Vladimirovich; Kravic, Jasmina; Kim, Young Jin; Rybin, Denis V; Yaghootkar, Hanieh; Müller-Nurasyid, Martina; Meidtner, Karina; Li-Gao, Ruifang; Varga, Tibor V; Marten, Jonathan; Li, Jin; Smith, Albert Vernon; An, Ping; Ligthart, Symen; Gustafsson, Stefan; Malerba, Giovanni; Demirkan, Ayse; Tajes, Juan Fernandez; Steinthorsdottir, Valgerdur; Wuttke, Matthias; Lecoeur, Cécile; Preuss, Michael; Bielak, Lawrence F; Graff, Marielisa; Highland, Heather M; Justice, Anne E; Liu, Dajiang J; Marouli, Eirini; Peloso, Gina Marie; Warren, Helen R; Afaq, Saima; Afzal, Shoaib; Ahlqvist, Emma; Almgren, Peter; Amin, Najaf; Bang, Lia B; Bertoni, Alain G; Bombieri, Cristina; Bork-Jensen, Jette; Brandslund, Ivan; Brody, Jennifer A; Burtt, Noël P; Canouil, Mickaël; Chen, Yii-Der Ida; Cho, Yoon Shin; Christensen, Cramer; Eastwood, Sophie V; Eckardt, Kai-Uwe; Fischer, Krista; Gambaro, Giovanni; Giedraitis, Vilmantas; Grove, Megan L; de Haan, Hugoline G; Hackinger, Sophie; Hai, Yang; Han, Sohee; Tybjærg-Hansen, Anne; Hivert, Marie-France; Isomaa, Bo; Jäger, Susanne; Jørgensen, Marit E; Jørgensen, Torben; Käräjämäki, Annemari; Kim, Bong-Jo; Kim, Sung Soo; Koistinen, Heikki A; Kovacs, Peter; Kriebel, Jennifer; Kronenberg, Florian; Läll, Kristi; Lange, Leslie A; Lee, Jung-Jin; Lehne, Benjamin; Li, Huaixing; Lin, Keng-Hung; Linneberg, Allan; Liu, Ching-Ti; Liu, Jun; Loh, Marie; Mägi, Reedik; Mamakou, Vasiliki; McKean-Cowdin, Roberta; Nadkarni, Girish; Neville, Matt; Nielsen, Sune F; Ntalla, Ioanna; Peyser, Patricia A; Rathmann, Wolfgang; Rice, Kenneth; Rich, Stephen S; Rode, Line; Rolandsson, Olov; Schönherr, Sebastian; Selvin, Elizabeth; Small, Kerrin S; Stančáková, Alena; Surendran, Praveen; Taylor, Kent D; Teslovich, Tanya M; Thorand, Barbara; Thorleifsson, Gudmar; Tin, Adrienne; Tönjes, Anke; Varbo, Anette; Witte, Daniel R; Wood, Andrew R; Yajnik, Pranav; Yao, Jie; Yengo, Loïc; Young, Robin; Amouyel, Philippe; Boeing, Heiner; Boerwinkle, Eric; Bottinger, Erwin P; Chowdhury, Rajiv; Collins, Francis S; Dedoussis, George; Dehghan, Abbas; Deloukas, Panos; Ferrario, Marco M; Ferrières, Jean; Florez, Jose C; Frossard, Philippe; Gudnason, Vilmundur; Harris, Tamara B; Heckbert, Susan R; Howson, Joanna M M; Ingelsson, Martin; Kathiresan, Sekar; Kee, Frank; Kuusisto, Johanna; Langenberg, Claudia; Launer, Lenore J; Lindgren, Cecilia M; Männistö, Satu; Meitinger, Thomas; Melander, Olle; Mohlke, Karen L; Moitry, Marie; Morris, Andrew D; Murray, Alison D; de Mutsert, Renée; Orho-Melander, Marju; Owen, Katharine R; Perola, Markus; Peters, Annette; Province, Michael A; Rasheed, Asif; Ridker, Paul M; Rivadineira, Fernando; Rosendaal, Frits R; Rosengren, Anders H; Salomaa, Veikko; Sheu, Wayne H-H; Sladek, Rob; Smith, Blair H; Strauch, Konstantin; Uitterlinden, André G; Varma, Rohit; Willer, Cristen J; Blüher, Matthias; Butterworth, Adam S; Chambers, John Campbell; Chasman, Daniel I; Danesh, John; van Duijn, Cornelia; Dupuis, Josée; Franco, Oscar H; Franks, Paul W; Froguel, Philippe; Grallert, Harald; Groop, Leif; Han, Bok-Ghee; Hansen, Torben; Hattersley, Andrew T; Hayward, Caroline; Ingelsson, Erik; Kardia, Sharon L R; Karpe, Fredrik; Kooner, Jaspal Singh; Köttgen, Anna; 
Kuulasmaa, Kari; Laakso, Markku; Lin, Xu; Lind, Lars; Liu, Yongmei; Loos, Ruth J F; Marchini, Jonathan; Metspalu, Andres; Mook-Kanamori, Dennis; Nordestgaard, Børge G; Palmer, Colin N A; Pankow, James S; Pedersen, Oluf; Psaty, Bruce M; Rauramaa, Rainer; Sattar, Naveed; Schulze, Matthias B; Soranzo, Nicole; Spector, Timothy D; Stefansson, Kari; Stumvoll, Michael; Thorsteinsdottir, Unnur; Tuomi, Tiinamaija; Tuomilehto, Jaakko; Wareham, Nicholas J; Wilson, James G; Zeggini, Eleftheria; Scott, Robert A; Barroso, Inês; Frayling, Timothy M; Goodarzi, Mark O; Meigs, James B; Boehnke, Michael; Saleheen, Danish; Morris, Andrew P; Rotter, Jerome I; McCarthy, Mark I

    2018-04-01

    We aggregated coding variant data for 81,412 type 2 diabetes cases and 370,832 controls of diverse ancestry, identifying 40 coding variant association signals (P < 2.2 × 10 -7 ); of these, 16 map outside known risk-associated loci. We make two important observations. First, only five of these signals are driven by low-frequency variants: even for these, effect sizes are modest (odds ratio ≤1.29). Second, when we used large-scale genome-wide association data to fine-map the associated variants in their regional context, accounting for the global enrichment of complex trait associations in coding sequence, compelling evidence for coding variant causality was obtained for only 16 signals. At 13 others, the associated coding variants clearly represent 'false leads' with potential to generate erroneous mechanistic inference. Coding variant associations offer a direct route to biological insight for complex diseases and identification of validated therapeutic targets; however, appropriate mechanistic inference requires careful specification of their causal contribution to disease predisposition.

  3. Flexible climate modeling systems: Lessons from Snowball Earth, Titan and Mars

    NASA Astrophysics Data System (ADS)

    Pierrehumbert, R. T.

    2007-12-01

    Climate models are only useful to the extent that real understanding can be extracted from them. Most leading-edge problems in climate change, paleoclimate and planetary climate require a high degree of flexibility in terms of incorporating model physics -- for example in allowing methane or CO2 to be a condensible substance instead of water vapor. This puts a premium on model design that allows easy modification, and on physical parameterizations that are close to fundamentals with as little empirical ad-hoc formulation as possible. I will provide examples from two approaches to this problem we have been using at the University of Chicago. The first is the FOAM general circulation model, which is a clean single-executable Fortran-77/C code supported by auxiliary applications in Python and Java. The second is a new approach based on using Python as a shell for assembling compiled-code building blocks into full models. Applications to Snowball Earth, Titan and Mars, as well as pedagogical uses, will be discussed. One painful lesson we have learned is that Fortran-95 is a major impediment to portability and cross-language interoperability; in this light the trend toward Fortran-95 in major modelling groups is seen as a significant step backwards. In this talk, I will focus on modeling projects employing a full representation of atmospheric fluid dynamics, rather than "intermediate complexity" models in which the associated transports are parameterized.
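
    The "Python as a shell around compiled building blocks" approach can be illustrated with ctypes; the shared-library and function names below are hypothetical, not FOAM's or the Chicago framework's actual interfaces.

        import ctypes

        # Load a hypothetical compiled building block (e.g. a gray-gas radiation step
        # written in C or Fortran and built as libradiation.so). Names are illustrative.
        lib = ctypes.CDLL("./libradiation.so")
        lib.gray_gas_heating.argtypes = [ctypes.c_double, ctypes.c_double]
        lib.gray_gas_heating.restype = ctypes.c_double

        def heating_rate(temperature_K, optical_depth):
            """Thin Python wrapper: models are assembled from such wrapped components."""
            return lib.gray_gas_heating(temperature_K, optical_depth)

        # A driver script can then swap the condensible species or a whole physics component
        # by importing a different wrapper, without touching the compiled kernels.
        if __name__ == "__main__":
            print(heating_rate(250.0, 0.3))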

  4. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well-known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. The trellis structure of block codes, by contrast, received little attention for many years. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well-known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well-known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computation complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder.
Decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA) algorithm. Finally, the minimization of bit error probability in trellis-based MLD is discussed.
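
    As a concrete illustration of trellis-based MLD, the sketch below runs hard-decision Viterbi decoding of a short binary block code on its syndrome (Wolf) trellis, in which states are partial syndromes and codewords are the paths that start and end at the all-zero syndrome. This is a generic textbook construction written for this summary, not code from the monograph; the (7,4) Hamming parity-check matrix and the received word are only illustrative.

```python
# Hard-decision Viterbi decoding of a binary linear block code on its
# syndrome (Wolf) trellis.  States are partial syndromes; a path that
# starts and ends at the all-zero syndrome is a codeword.
def viterbi_block_decode(received, H):
    """received: list of 0/1 channel bits; H: parity-check matrix (list of rows)."""
    r, n = len(H), len(H[0])
    columns = [tuple(H[row][i] for row in range(r)) for i in range(n)]
    zero = (0,) * r
    # survivors maps state -> (path metric, decoded prefix)
    survivors = {zero: (0, [])}
    for i, col in enumerate(columns):
        nxt = {}
        for state, (metric, path) in survivors.items():
            for bit in (0, 1):
                new_state = tuple(s ^ (bit & c) for s, c in zip(state, col))
                new_metric = metric + (bit != received[i])   # Hamming branch metric
                if new_state not in nxt or new_metric < nxt[new_state][0]:
                    nxt[new_state] = (new_metric, path + [bit])
        survivors = nxt
    return survivors[zero]   # (distance, maximum-likelihood codeword)

# Example: (7,4) Hamming code; the single flipped bit is corrected.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(viterbi_block_decode([1, 1, 0, 0, 1, 1, 1], H))
```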

  5. Modern Teaching Methods in Physics with the Aid of Original Computer Codes and Graphical Representations

    ERIC Educational Resources Information Center

    Ivanov, Anisoara; Neacsu, Andrei

    2011-01-01

    This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and furthermore to…

  6. The measurement of boundary layers on a compressor blade in cascade. Volume 1: Experimental technique, analysis and results

    NASA Technical Reports Server (NTRS)

    Zierke, William C.; Deutsch, Steven

    1989-01-01

    Measurements were made of the boundary layers and wakes about a highly loaded, double-circular-arc compressor blade in cascade. These laser Doppler velocimetry measurements have yielded a very detailed and precise data base with which to test the application of viscous computational codes to turbomachinery. In order to test the computational codes at off-design conditions, the data were acquired at a chord Reynolds number of 500,000 and at three incidence angles. Moreover, these measurements have supplied some physical insight into these very complex flows. Although some natural transition is evident, laminar boundary layers usually detach and subsequently reattach as either fully or intermittently turbulent boundary layers. These transitional separation bubbles play an important role in the development of most of the boundary layers and wakes measured in this cascade and the modeling or computing of these bubbles should prove to be the key aspect in computing the entire cascade flow field. In addition, the nonequilibrium turbulent boundary layers on these highly loaded blades always have some region of separation near the trailing edge of the suction surface. These separated flows, as well as the subsequent near wakes, show no similarity and should prove to be a challenging test for the viscous computational codes.

  7. Codes in the codons: construction of a codon/amino acid periodic table and a study of the nature of specific nucleic acid-protein interactions.

    PubMed

    Benyo, B; Biro, J C; Benyo, Z

    2004-01-01

    The theory of "codon-amino acid coevolution" was first proposed by Woese in 1967. It suggests that there is a stereochemical matching - that is, affinity - between amino acids and certain of the base triplet sequences that code for those amino acids. We have constructed a common periodic table of codons and amino acids, where the nucleic acid table showed perfect axial symmetry for codons and the corresponding amino acid table also displayed periodicity regarding the biochemical properties (charge and hydrophobicity) of the 20 amino acids and the position of the stop signals. The table indicates that the middle (2nd) amino acid in the codon has a prominent role in determining some of the structural features of the amino acids. The possibility that physical contact between codons and amino acids might exist was tested on restriction enzymes. Many recognition site-like sequences were found in the coding sequences of these enzymes and as many as 73 examples of codon-amino acid co-location were observed in the 7 known 3D structures (December 2003) of endonuclease-nucleic acid complexes. These results indicate that the smallest possible units of specific nucleic acid-protein interaction are indeed the stereochemically compatible codons and amino acids.

  8. Genes uniquely expressed in human growth plate chondrocytes uncover a distinct regulatory network.

    PubMed

    Li, Bing; Balasubramanian, Karthika; Krakow, Deborah; Cohn, Daniel H

    2017-12-20

    Chondrogenesis is the earliest stage of skeletal development and is a highly dynamic process, integrating the activities and functions of transcription factors, cell signaling molecules and extracellular matrix proteins. The molecular mechanisms underlying chondrogenesis have been extensively studied and multiple key regulators of this process have been identified. However, a genome-wide overview of the gene regulatory network in chondrogenesis has not been achieved. In this study, employing RNA sequencing, we identified 332 protein coding genes and 34 long non-coding RNA (lncRNA) genes that are highly selectively expressed in human fetal growth plate chondrocytes. Among the protein coding genes, 32 genes were associated with 62 distinct human skeletal disorders and 153 genes were associated with skeletal defects in knockout mice, confirming their essential roles in skeletal formation. These gene products formed a comprehensive physical interaction network and participated in multiple cellular processes regulating skeletal development. The data also revealed 34 transcription factors and 11,334 distal enhancers that were uniquely active in chondrocytes, functioning as transcriptional regulators for the cartilage-selective genes. Our findings revealed a complex gene regulatory network controlling skeletal development whereby transcription factors, enhancers and lncRNAs participate in chondrogenesis by transcriptional regulation of key genes. Additionally, the cartilage-selective genes represent candidate genes for unsolved human skeletal disorders.

  9. Throughput Optimization Via Adaptive MIMO Communications

    DTIC Science & Technology

    2006-05-30

    End-to-end matlab packet simulation platform. * Low density parity check code (LDPCC). * Field trials with Silvus DSP MIMO testbed. * High mobility...incorporate advanced LDPC (low density parity check) codes . Realizing that the power of LDPC codes come at the price of decoder complexity, we also...Channel Coding Binary Convolution Code or LDPC Packet Length 0 - 216-1, bytes Coding Rate 1/2, 2/3, 3/4, 5/6 MIMO Channel Training Length 0 - 4, symbols

  10. Ab initio density-functional calculations in materials science: from quasicrystals over microporous catalysts to spintronics.

    PubMed

    Hafner, Jürgen

    2010-09-29

    During the last 20 years computer simulations based on a quantum-mechanical description of the interactions between electrons and atomic nuclei have had an increasingly important impact on materials science, not only in promoting a deeper understanding of the fundamental physical phenomena, but also in enabling the computer-assisted design of materials for future technologies. The backbone of atomic-scale computational materials science is density-functional theory (DFT), which allows us to cast the intractable complexity of electron-electron interactions into the form of an effective single-particle equation determined by the exchange-correlation functional. Progress in DFT-based calculations of the properties of materials and of simulations of processes in materials depends on: (1) the development of improved exchange-correlation functionals and advanced post-DFT methods and their implementation in highly efficient computer codes, (2) the development of methods allowing us to bridge the gaps in the temperature, pressure, time and length scales between the ab initio calculations and real-world experiments and (3) the extension of the functionality of these codes, permitting us to treat additional properties and new processes. In this paper we discuss the current status of techniques for performing quantum-based simulations on materials and present some illustrative examples of applications to complex quasiperiodic alloys, cluster-support interactions in microporous acid catalysts and magnetic nanostructures.

  11. FastChem: A computer program for efficient complex chemical equilibrium calculations in the neutral/ionized gas phase with applications to stellar and planetary atmospheres

    NASA Astrophysics Data System (ADS)

    Stock, Joachim W.; Kitzmann, Daniel; Patzer, A. Beate C.; Sedlmayr, Erwin

    2018-06-01

    For the calculation of complex neutral/ionized gas phase chemical equilibria, we present a versatile and efficient semi-analytical computer program called FastChem. The applied method is based on the solution of a system of coupled nonlinear (and linear) algebraic equations, namely the law of mass action and the element conservation equations including charge balance, in many variables. Specifically, the system of equations is decomposed into a set of coupled nonlinear equations in one variable each, which are solved analytically whenever feasible to reduce computation time. Notably, the electron density is determined by using the method of Nelder and Mead at low temperatures. The program is written in object-oriented C++, which makes it easy to couple the code with other programs, although a stand-alone version is provided. FastChem can be used in parallel or sequentially and is available under the GNU General Public License version 3 at https://github.com/exoclime/FastChem together with several sample applications. The code has been successfully validated against previous studies and its convergence behavior has been tested even for extreme physical parameter ranges down to 100 K and up to 1000 bar. FastChem converges stably and robustly in even the most demanding chemical situations, which have sometimes posed extreme challenges for previous algorithms.
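
    The one-variable decomposition described above can be illustrated with a toy hydrogen dissociation equilibrium: the law of mass action and element conservation collapse to a single quadratic that is solved analytically. This sketch is only an illustration of the general approach, not FastChem source code; the equilibrium constant and total density are made-up values.

```python
# Toy illustration: the H/H2 equilibrium reduces, via the law of mass action
# n_H2 = n_H**2 / K and hydrogen-nucleus conservation n_H + 2*n_H2 = n_tot,
# to a single quadratic in n_H solved analytically -- the kind of
# one-variable decomposition the paper describes.
import math

def hydrogen_equilibrium(n_tot, K):
    # 2*n_H**2 / K + n_H - n_tot = 0  ->  take the positive root
    a, b, c = 2.0 / K, 1.0, -n_tot
    n_H = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    n_H2 = n_H ** 2 / K
    return n_H, n_H2

n_H, n_H2 = hydrogen_equilibrium(n_tot=1.0e12, K=1.0e10)  # illustrative numbers
print(f"n_H = {n_H:.3e}, n_H2 = {n_H2:.3e}, conserved = {n_H + 2*n_H2:.3e}")
```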

  12. Physical models, cross sections, and numerical approximations used in MCNP and GEANT4 Monte Carlo codes for photon and electron absorbed fraction calculation.

    PubMed

    Yoriyaz, Hélio; Moralles, Maurício; Siqueira, Paulo de Tarso Dalledone; Guimarães, Carla da Costa; Cintra, Felipe Belonsi; dos Santos, Adimir

    2009-11-01

    Radiopharmaceutical applications in nuclear medicine require a detailed dosimetry estimate of the radiation energy delivered to the human tissues. Over the past years, several publications addressed the problem of internal dose estimate in volumes of several sizes considering photon and electron sources. Most of them used Monte Carlo radiation transport codes. Despite the widespread use of these codes due to the variety of resources and potentials they offer to carry out dose calculations, several aspects like physical models, cross sections, and numerical approximations used in the simulations still remain an object of study. Accurate dose estimates depend on the correct selection of a set of simulation options that should be carefully chosen. This article presents an analysis of several simulation options provided by two of the most used codes worldwide: MCNP and GEANT4. For this purpose, comparisons of absorbed fraction estimates obtained with different physical models, cross sections, and numerical approximations are presented for spheres of several sizes composed of five different biological tissues. Considerable discrepancies have been found in some cases not only between the different codes but also between different cross sections and algorithms in the same code. Maximum differences found between the two codes are 5.0% and 10%, respectively, for photons and electrons. Even for problems as simple as spheres and uniform radiation sources, the set of parameters chosen by any Monte Carlo code significantly affects the final results of a simulation, demonstrating the importance of the correct choice of parameters in the simulation.

  13. Quantized phase coding and connected region labeling for absolute phase retrieval.

    PubMed

    Chen, Xiangcheng; Wang, Yuwei; Wang, Yajun; Ma, Mengchao; Zeng, Chunnian

    2016-12-12

    This paper proposes an absolute phase retrieval method for complex object measurement based on quantized phase coding and connected region labeling. A specific code sequence is embedded into the quantized phase of three coded fringes. Connected regions of different codes are labeled and assigned 3-digit codes combining the current period and its neighbors. Wrapped phase spanning more than 36 periods can be restored with reference to the code sequence. Experimental results verify the capability of the proposed method to measure multiple isolated objects.

  14. Video coding for 3D-HEVC based on saliency information

    NASA Astrophysics Data System (ADS)

    Yu, Fang; An, Ping; Yang, Chao; You, Zhixiang; Shen, Liquan

    2016-11-01

    As an extension of High Efficiency Video Coding (HEVC), 3D-HEVC has been widely researched under the impetus of the new generation coding standard in recent years. Compared with H.264/AVC, its compression efficiency is doubled while keeping the same video quality. However, its higher encoding complexity and longer encoding time are not negligible. To reduce the computational complexity and guarantee the subjective quality of virtual views, this paper presents a novel video coding method for 3D-HEVC based on saliency information, which is an important part of the Human Visual System (HVS). First of all, the relationship between the current coding unit and its adjacent units is used to adjust the maximum depth of each largest coding unit (LCU) and determine the SKIP mode reasonably. Then, according to the saliency information of each frame image, the texture and its corresponding depth map are divided into three regions, that is, salient area, middle area and non-salient area. Afterwards, different quantization parameters are assigned to different regions to conduct low complexity coding. Finally, the compressed video generates new view point videos through the renderer tool. As shown in our experiments, the proposed method saves more bit rate than other approaches and achieves up to a 38% reduction in encoding time without subjective quality loss in compression or rendering.
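
    The region-dependent quantization step described above can be sketched as a simple mapping from a block's mean saliency to a quantization parameter; the thresholds and QP offsets below are illustrative placeholders, not the values used in the paper.

```python
# Minimal sketch of the region-dependent quantization idea: more salient
# blocks get a finer quantizer, non-salient blocks a coarser one.
# Thresholds and offsets are illustrative only.
def qp_for_block(mean_saliency, base_qp=32, t_high=0.6, t_low=0.3):
    if mean_saliency >= t_high:      # salient region: preserve quality
        return base_qp - 2
    elif mean_saliency >= t_low:     # middle region: keep the base quantizer
        return base_qp
    else:                            # non-salient region: save bits
        return base_qp + 4

print([qp_for_block(s) for s in (0.8, 0.45, 0.1)])   # -> [30, 32, 36]
```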

  15. CHARRON: Code for High Angular Resolution of Rotating Objects in Nature

    NASA Astrophysics Data System (ADS)

    Domiciano de Souza, A.; Zorec, J.; Vakili, F.

    2012-12-01

    Rotation is one of the fundamental physical parameters governing stellar physics and evolution. At the same time, spectrally resolved optical/IR long-baseline interferometry has proven to be an important observing tool to measure many physical effects linked to rotation, in particular stellar flattening, gravity darkening, and differential rotation. In order to interpret the high angular resolution observations from modern spectro-interferometers, such as VLTI/AMBER and VEGA/CHARA, we have developed an interferometry-oriented numerical model: CHARRON (Code for High Angular Resolution of Rotating Objects in Nature). We present here the characteristics of CHARRON, which is faster (≃ 10-30 s per model) and thus more adapted to model-fitting than the first version of the code presented by Domiciano de Souza et al. (2002).

  16. Properties of a certain stochastic dynamical system, channel polarization, and polar codes

    NASA Astrophysics Data System (ADS)

    Tanaka, Toshiyuki

    2010-06-01

    A new family of codes, called polar codes, has recently been proposed by Arikan. Polar codes are of theoretical importance because they are provably capacity achieving with low-complexity encoding and decoding. We first discuss basic properties of a certain stochastic dynamical system, on the basis of which properties of channel polarization and polar codes are reviewed, with emphasis on our recent results.
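
    For the binary erasure channel, channel polarization can be illustrated exactly: one step of Arikan's transform turns two copies of a BEC with erasure probability eps into a degraded BEC(2*eps - eps^2) and an upgraded BEC(eps^2), and recursing shows the synthesized channels drifting toward either perfect or useless. The sketch below is a standard textbook illustration, not code from the paper.

```python
# Channel polarization for the binary erasure channel: recurse the exact
# one-step erasure-probability map and watch the channels polarize.
def polarize(eps, levels):
    channels = [eps]
    for _ in range(levels):
        channels = [z for e in channels for z in (2*e - e*e, e*e)]
    return channels

chs = polarize(0.5, levels=4)             # 16 synthesized channels
good = sum(1 for e in chs if e < 0.1)     # nearly noiseless channels
bad = sum(1 for e in chs if e > 0.9)      # nearly useless channels
print(good, bad, sorted(chs)[:3])
```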

  17. Assessment and Mitigation of Radiation, EMP, Debris & Shrapnel Impacts at Megajoule-Class Laser Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eder, D C; Anderson, R W; Bailey, D S

    2009-10-05

    The generation of neutron/gamma radiation, electromagnetic pulses (EMP), debris and shrapnel at mega-Joule class laser facilities (NIF and LMJ) impacts experiments conducted at these facilities. The complex 3D numerical codes used to assess these impacts range from an established code that required minor modifications (MCNP - calculates neutron and gamma radiation levels in complex geometries), through a code that required significant modifications to treat new phenomena (EMSolve - calculates EMP from electrons escaping from laser targets), to a new code, ALE-AMR, that is being developed through a joint collaboration between LLNL, CEA, and UC (UCSD, UCLA, and LBL) for debris and shrapnel modelling.

  18. Comparison of Space Radiation Calculations from Deterministic and Monte Carlo Transport Codes

    NASA Technical Reports Server (NTRS)

    Adams, J. H.; Lin, Z. W.; Nasser, A. F.; Randeniya, S.; Tripathi, r. K.; Watts, J. W.; Yepes, P.

    2010-01-01

    The presentation outline includes motivation, radiation transport codes being considered, space radiation cases being considered, results for slab geometry, results for spherical geometry, and summary. The transport codes considered include HZETRN, UPROP, FLUKA, and GEANT4, applied to slab and spherical geometries for SPE and GCR cases.

  19. Multidimensional Trellis Coded Phase Modulation Using a Multilevel Concatenation Approach. Part 2; Codes for AWGN and Fading Channels

    NASA Technical Reports Server (NTRS)

    Rajpal, Sandeep; Rhee, DoJun; Lin, Shu

    1997-01-01

    In this paper, we will use the construction technique proposed in to construct multidimensional trellis coded modulation (TCM) codes for both the additive white Gaussian noise (AWGN) and the fading channels. Analytical performance bounds and simulation results show that these codes perform very well and achieve significant coding gains over uncoded reference modulation systems. In addition, the proposed technique can be used to construct codes which have a performance/decoding complexity advantage over the codes listed in the literature.

  20. Study of shock waves and related phenomena motivated by astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drake, R. P.; Keiter, P. A.; Kuranz, C. C.

    This study discusses the recent research in High-Energy-Density Physics at our Center. Our work in complex hydrodynamics is now focused on mode coupling in the Richtmyer-Meshkov process and on the supersonic Kelvin-Helmholtz instability. These processes are believed to occur in a wide range of astrophysical circumstances. In radiation hydrodynamics, we are studying radiative reverse shocks relevant to cataclysmic variable stars. Our work on magnetized flows seeks to produce magnetized jets and study their interactions. We build the targets for all these experiments, and simulate them using our CRASH code. We also conduct diagnostic research, focused primarily on imaging x-ray spectroscopy and its applications to scattering and fluorescence.

  1. cosmoabc: Likelihood-free inference for cosmology

    NASA Astrophysics Data System (ADS)

    Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.

    2015-05-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python Approximate Bayesian Computation (ABC) sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy clusters number counts without computing the likelihood function.
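
    The core ABC loop the abstract refers to can be sketched with a toy Gaussian model: draw a parameter from the prior, forward-simulate a catalog, and keep the draw only when a distance between observed and synthetic summary statistics falls below a tolerance. This is a plain rejection-ABC illustration, not the cosmoabc Population Monte Carlo implementation; the model, prior range, and tolerance are invented for the example.

```python
# Rejection ABC on a toy Gaussian-mean problem: prior draw -> forward
# simulation -> accept if the summary-statistic distance is small.
import random, statistics

observed = [random.gauss(3.0, 1.0) for _ in range(200)]   # pretend data, true mean 3
obs_summary = statistics.mean(observed)

def simulate(mu, n=200):
    return statistics.mean(random.gauss(mu, 1.0) for _ in range(n))

def abc_rejection(n_draws=20000, tol=0.05):
    posterior = []
    for _ in range(n_draws):
        mu = random.uniform(0.0, 6.0)                      # flat prior
        if abs(simulate(mu) - obs_summary) < tol:          # distance on summaries
            posterior.append(mu)
    return posterior

sample = abc_rejection()
print(len(sample), statistics.mean(sample))   # accepted draws approximate the posterior
```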

  2. Study of shock waves and related phenomena motivated by astrophysics

    DOE PAGES

    Drake, R. P.; Keiter, P. A.; Kuranz, C. C.; ...

    2016-04-01

    This study discusses the recent research in High-Energy-Density Physics at our Center. Our work in complex hydrodynamics is now focused on mode coupling in the Richtmyer-Meshkov process and on the supersonic Kelvin-Helmholtz instability. These processes are believed to occur in a wide range of astrophysical circumstances. In radiation hydrodynamics, we are studying radiative reverse shocks relevant to cataclysmic variable stars. Our work on magnetized flows seeks to produce magnetized jets and study their interactions. We build the targets for all these experiments, and simulate them using our CRASH code. We also conduct diagnostic research, focused primarily on imaging x-ray spectroscopy and its applications to scattering and fluorescence.

  3. Report from the Integrated Modeling Panel at the Workshop on the Science of Ignition on NIF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marinak, M; Lamb, D

    2012-07-03

    This section deals with multiphysics radiation hydrodynamics codes used to design and simulate targets in the ignition campaign. These topics encompass all the physical processes they model, and include consideration of any approximations necessary due to finite computer resources. The section focuses on what developments would have the highest impact on reducing uncertainties in modeling most relevant to experimental observations. It considers how the ICF codes should be employed in the ignition campaign. This includes a consideration of how the experiments can be best structured to test the physical models the codes employ.

  4. Integration of Environmental Issues in a Physics Course: 'Physics by Inquiry' High School Teachers' Integration Models and Challenges

    NASA Astrophysics Data System (ADS)

    Kimori, David Abiya

    As we approach the second quarter of the twenty-first century, one may predict that the environment will be among the dominant themes in the political and educational discourse. Over the past three decades, particular perspectives regarding the environment have begun to emerge: (i) realization by human beings that we not only live on earth and use its resources at an increasingly high rate but we also actually belong to the earth and the total ecology of all living systems, (ii) there are strong interactions among different components of the large and complex systems that make up our environment, and (iii) the rising human population and its impact on the environment is a great concern (Hughes & Mason, 2014). Studies have revealed that although students do not have a deep understanding of environmental issues and lack the environmental awareness and attitudes necessary for protecting the environment, they have great concern for the environment (Chapman & Sharma, 2001; Fien, Yencken, & Sykes, 2002). However, addressing environmental issues in the classroom and other disciplines has never been an easy job for teachers (Pennock & Bardwell, 1994; Edelson, 2007). Using multiple case studies, this study investigated how three purposefully selected physics teachers teaching a 'Physics by Inquiry' course integrated environmental topics and issues in their classrooms. In particular, this study looked at what integration models and practices the three physics teachers employed in integrating environmental topics and issues in their classrooms and what challenges the teachers faced while integrating environmental topics in their classrooms. Data collection methods included field notes taken from observations, teachers' interviews and a collection of artifacts and documents. The data were coded, analyzed and organized into codes and categories guided by Fogarty's (1991) models of curriculum integration and Ham and Sewing's (1988) four categories of barriers to environmental education. Findings of this study indicate that teachers acknowledge the importance of teaching environmental issues in their classrooms but continue to struggle with conceptual, educational, logistical and attitudinal barriers to successful integration of environmental topics in physics.

  5. Classroom learning and achievement: how the complexity of classroom interaction impacts students' learning

    NASA Astrophysics Data System (ADS)

    Podschuweit, Sören; Bernholt, Sascha; Brückmann, Maja

    2016-05-01

    Background: Complexity models have provided a suitable framework in various domains to assess students' educational achievement. Complexity is often used as the analytical focus when regarding learning outcomes, i.e. when analyzing written tests or problem-centered interviews. Numerous studies reveal negative correlations between the complexity of a task and the probability of a student solving it. Purpose: Thus far, few detailed investigations explore the importance of complexity in actual classroom lessons. Moreover, the few efforts made so far revealed inconsistencies. Hence, the present study sheds light on the influence the complexity of students' and teachers' class contributions has on students' learning outcomes. Sample: Videos of 10 German 8th grade physics courses covering three consecutive lessons on two topics each (electricity, mechanics) have been analyzed. The sample includes 10 teachers and 290 students. Design and methods: Students' and teachers' verbal contributions were coded manually according to their level of complexity. Additionally, pre-post testing of knowledge in electricity and mechanics was applied to assess the students' learning gain. ANOVA was used to characterize the influence of the complexity on the learning gain. Results: Results indicate that the mean level of complexity in classroom contributions explains a large portion of variance in post-test results at the class level. Despite this overarching trend, taking classroom activities into account as well reveals even more fine-grained patterns, leading to more specific relations between the complexity in the classroom and students' achievement. Conclusions: In conclusion, we argue for more reflected teaching approaches intended to gradually increase class complexity to foster students' level of competency.

  6. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another 'equivalent' sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A fully automatic workflow is developed in an open-source code [1] that includes rigid body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of Representative Elementary Volume size for arbitrary physics.
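
    The multilevel Monte Carlo idea referred to above can be sketched with a generic toy estimator: the quantity of interest is computed on a hierarchy of discretization levels, and the expensive fine levels only need a few samples because they estimate small corrections. The sketch below uses a one-dimensional random integrand and is not the pore-scale workflow of the paper; the level sample counts are illustrative.

```python
# Generic MLMC estimator for E[ integral_0^1 exp(Z*t) dt ], Z ~ N(0,1).
# Level l uses a midpoint rule on 2**l cells; each correction term uses
# the *same* random draw on the fine and coarse level (the coupling that
# keeps the correction variance, and hence the required samples, small).
import random, math

def midpoint_integral(z, level):
    n = 2 ** level
    h = 1.0 / n
    return h * sum(math.exp(z * (i + 0.5) * h) for i in range(n))

def mlmc_estimate(max_level=5, samples_per_level=(4000, 2000, 1000, 500, 250, 125)):
    total = 0.0
    for level in range(max_level + 1):
        n_samples = samples_per_level[level]
        acc = 0.0
        for _ in range(n_samples):
            z = random.gauss(0.0, 1.0)
            fine = midpoint_integral(z, level)
            coarse = midpoint_integral(z, level - 1) if level > 0 else 0.0
            acc += fine - coarse                 # level-l correction sample
        total += acc / n_samples
    return total

print(mlmc_estimate())   # telescoping sum of level means approximates the expectation
```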

  7. FERRET data analysis code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmittroth, F.

    1979-09-01

    A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is placed on the proper treatment of uncertainties and correlations and on providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples.
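
    Evaluation codes of this kind implement, at their core, a generalized least-squares update of prior parameters by measurements with known covariances. The sketch below shows only that standard update formula, as an assumed simplification; it is not taken from the FERRET documentation, and the numbers are invented.

```python
# Generalized least-squares update: combine a prior vector x0 with
# covariance P0 and measurements y = G x + error (covariance V) into an
# updated estimate and a reduced covariance.
import numpy as np

def gls_update(x0, P0, G, y, V):
    S = G @ P0 @ G.T + V                   # covariance of predicted measurements
    K = P0 @ G.T @ np.linalg.inv(S)        # gain
    x = x0 + K @ (y - G @ x0)              # updated parameters
    P = P0 - K @ G @ P0                    # updated (smaller) covariance
    return x, P

# Two correlated parameters, one direct measurement of their sum.
x0 = np.array([1.0, 2.0])
P0 = np.array([[0.04, 0.01], [0.01, 0.09]])
G = np.array([[1.0, 1.0]])
y = np.array([3.3])
V = np.array([[0.01]])
x, P = gls_update(x0, P0, G, y, V)
print(x, np.diag(P))
```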

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zieb, Kristofer James Ekhart; Hughes, Henry Grady III; Xu, X. George

    The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. This article discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the physics models' theories are included as well.

  9. Enhanced verification test suite for physics simulation codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, James R.; Brock, Jerry S.; Brandon, Scott T.

    2008-09-01

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations.

  10. micrOMEGAs 2.0: A program to calculate the relic density of dark matter in a generic model

    NASA Astrophysics Data System (ADS)

    Bélanger, G.; Boudjema, F.; Pukhov, A.; Semenov, A.

    2007-03-01

    micrOMEGAs 2.0 is a code which calculates the relic density of a stable massive particle in an arbitrary model. The underlying assumption is that there is a conservation law like R-parity in supersymmetry which guarantees the stability of the lightest odd particle. The new physics model must be incorporated in the notation of CalcHEP, a package for the automatic generation of squared matrix elements. Once this is done, all annihilation and coannihilation channels are included automatically in any model. Cross-sections at v=0, relevant for indirect detection of dark matter, are also computed automatically. The package includes three sample models: the minimal supersymmetric standard model (MSSM), the MSSM with complex phases and the NMSSM. Extension to other models, including non-supersymmetric models, is described.
    Program summary:
    Title of program: micrOMEGAs 2.0
    Catalogue identifier: ADQR_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADQR_v2_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computers for which the program is designed and others on which it has been tested: PC, Alpha, Mac, Sun
    Operating systems under which the program has been tested: UNIX (Linux, OSF1, SunOS, Darwin, Cygwin)
    Programming language used: C and Fortran
    Memory required to execute with typical data: 17 MB depending on the number of processes required
    No. of processors used: 1
    Has the code been vectorized or parallelized: no
    No. of lines in distributed program, including test data, etc.: 91 778
    No. of bytes in distributed program, including test data, etc.: 1 306 726
    Distribution format: tar.gz
    External routines/libraries used: no
    Catalogue identifier of previous version: ADQR_v1_3
    Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 577
    Does the new version supersede the previous version: yes
    Nature of physical problem: Calculation of the relic density of the lightest stable particle in a generic new model of particle physics.
    Method of solution: In numerically solving the evolution equation for the density of dark matter, relativistic formulae for the thermal average are used. All tree-level processes for annihilation and coannihilation of new particles in the model are included. The cross-sections for all processes are calculated exactly with CalcHEP after definition of a model file. Higher-order QCD corrections to Higgs couplings to quark pairs are included.
    Reasons for the new version: There are many models of new physics that propose a candidate for dark matter besides the much studied minimal supersymmetric standard model. This new version not only incorporates extensions of the MSSM, such as the MSSM with complex phases, or the NMSSM which contains an extra singlet superfield, but also gives the possibility for the user to incorporate easily a new model. For this the user only needs to redefine appropriately a new model file.
    Summary of revisions: Possibility to include in the package any particle physics model with a discrete symmetry that guarantees the stability of the cold dark matter candidate (LOP) and to compute the relic density of CDM. Compute automatically the cross-sections for annihilation of the LOP at small velocities into SM final states and provide the energy spectra for γ, e, p̄, ν final states. For the MSSM with input parameters defined at the GUT scale, the interface with any of the spectrum calculator codes reads an input file in the SUSY Les Houches Accord format (SLHA). Implementation of the MSSM with complex parameters (CPV-MSSM) with an interface to CPsuperH to calculate the spectrum. Routine to calculate the electric dipole moment of the electron in the CPV-MSSM. In the NMSSM, new interface compatible with NMHDECAY 2.1.
    Typical running time: 0.2 sec
    Unusual features of the program: Depending on the parameters of the model, the program generates additional new code, compiles it and loads it dynamically.
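
    The relic-density calculation that micrOMEGAs performs with exact CalcHEP cross sections can be caricatured by the standard single-species freeze-out equation for the yield Y = n/s. The sketch below integrates that equation with a backward-Euler step; it is a toy illustration under simplifying assumptions (constant effective annihilation strength lam, fixed equilibrium normalization), not the program's method.

```python
# Toy freeze-out:  dY/dx = -(lam/x**2) * (Y**2 - Yeq**2)  with  x = m/T.
# A backward-Euler step keeps the stiff early phase (Y tracking Yeq) stable.
import math

def yeq(x):
    # non-relativistic equilibrium yield; normalization absorbed into lam
    return 0.145 * x ** 1.5 * math.exp(-x)

def relic_yield(lam=1.0e7, x_start=1.0, x_end=100.0, steps=20000):
    h = (x_end - x_start) / steps
    x, Y = x_start, yeq(x_start)
    for _ in range(steps):
        x += h
        a = h * lam / x ** 2
        # backward Euler: a*Y_new**2 + Y_new - (Y + a*yeq(x)**2) = 0
        Y = (-1.0 + math.sqrt(1.0 + 4.0 * a * (Y + a * yeq(x) ** 2))) / (2.0 * a)
    return Y

print(relic_yield())   # frozen-out yield; a relic abundance would follow from m*Y
```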

  11. Application of Quantum Gauss-Jordan Elimination Code to Quantum Secret Sharing Code

    NASA Astrophysics Data System (ADS)

    Diep, Do Ngoc; Giang, Do Hoang; Phu, Phan Huy

    2017-12-01

    The QSS codes associated with an MSP code are based on finding an invertible matrix V, solving the system vA^T M_B(s a) = s. We propose a quantum Gauss-Jordan Elimination Procedure to produce such a pivotal matrix V by using the Grover search code. The complexity of solving is of square-root order in the cardinality of the unauthorized set, √(2^{|B|}).

  12. Application of Quantum Gauss-Jordan Elimination Code to Quantum Secret Sharing Code

    NASA Astrophysics Data System (ADS)

    Diep, Do Ngoc; Giang, Do Hoang; Phu, Phan Huy

    2018-03-01

    The QSS codes associated with an MSP code are based on finding an invertible matrix V, solving the system vA^T M_B(s a) = s. We propose a quantum Gauss-Jordan Elimination Procedure to produce such a pivotal matrix V by using the Grover search code. The complexity of solving is of square-root order in the cardinality of the unauthorized set, √(2^{|B|}).

  13. A Flexible and Non-intrusive Approach for Computing Complex Structural Coverage Metrics

    NASA Technical Reports Server (NTRS)

    Whalen, Michael W.; Person, Suzette J.; Rungta, Neha; Staats, Matt; Grijincu, Daniela

    2015-01-01

    Software analysis tools and techniques often leverage structural code coverage information to reason about the dynamic behavior of software. Existing techniques instrument the code with the required structural obligations and then monitor the execution of the compiled code to report coverage. Instrumentation-based approaches often incur considerable runtime overhead for complex structural coverage metrics such as Modified Condition/Decision Coverage (MC/DC). Code instrumentation, in general, has to be approached with great care to ensure it does not modify the behavior of the original code. Furthermore, instrumented code cannot be used in conjunction with other analyses that reason about the structure and semantics of the code under test. In this work, we introduce a non-intrusive preprocessing approach for computing structural coverage information. It uses a static partial evaluation of the decisions in the source code and a source-to-bytecode mapping to generate the information necessary to efficiently track structural coverage metrics during execution. Our technique is flexible; the results of the preprocessing can be used by a variety of coverage-driven software analysis tasks, including automated analyses that are not possible for instrumented code. Experimental results in the context of symbolic execution show the efficiency and flexibility of our non-intrusive approach for computing code coverage information.

  14. Coupled Physics Environment (CouPE) library - Design, Implementation, and Release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahadevan, Vijay S.

    Over several years, high fidelity, validated mono-physics solvers with proven scalability on peta-scale architectures have been developed independently. Based on a unified component-based architecture, these existing codes can be coupled with a unified mesh-data backplane and a flexible coupling-strategy-based driver suite to produce a viable tool for analysts. In this report, we present details on the design decisions and developments in CouPE, an acronym that stands for Coupled Physics Environment, which orchestrates a coupled physics solver through the interfaces exposed by the MOAB array-based unstructured mesh; both are part of the SIGMA (Scalable Interfaces for Geometry and Mesh-Based Applications) toolkit. The SIGMA toolkit contains libraries that enable scalable geometry and unstructured mesh creation and handling in a memory and computationally efficient implementation. The CouPE version being prepared for a full open-source release along with updated documentation will contain several useful examples that will enable users to start developing their applications natively using the native MOAB mesh and to couple their models to existing physics applications to analyze and solve real-world problems of interest. An integrated multi-physics simulation capability for the design and analysis of current and future nuclear reactor models is also being investigated as part of the NEAMS RPL, to tightly couple neutron transport, thermal-hydraulics and structural mechanics physics under the SHARP framework. This report summarizes the efforts that have been invested in CouPE to bring together several existing physics applications, namely PROTEUS (neutron transport code), Nek5000 (computational fluid dynamics code) and Diablo (structural mechanics code). The goal of the SHARP framework is to perform fully resolved coupled physics analysis of a reactor on heterogeneous geometry, in order to reduce the overall numerical uncertainty while leveraging available computational resources. The design of CouPE, along with the motivations that led to implementation choices, is also discussed. The first release of the library will be different from the current version of the code that integrates the components in SHARP, and an explanation of the need for forking the source base will also be provided. Enhancements in the functionality and improved user guides will be available as part of the release. CouPE v0.1 is scheduled for an open-source release in December 2014 along with SIGMA v1.1 components that provide support for language-agnostic mesh loading, traversal and query interfaces along with scalable solution transfer of fields between different physics codes. The coupling methodology and software interfaces of the library are presented, along with verification studies on two representative fast sodium-cooled reactor demonstration problems to prove the usability of the CouPE library.

  15. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    NASA Astrophysics Data System (ADS)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N^2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  16. Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas (GPS - TTBP) Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chame, Jacqueline

    2011-05-27

    The goal of this project is the development of the Gyrokinetic Toroidal Code (GTC) Framework and its applications to problems related to the physics of turbulence and turbulent transport in tokamaks. The project involves physics studies, code development, noise effect mitigation, supporting computer science efforts, diagnostics and advanced visualizations, verification and validation. Its main scientific themes are mesoscale dynamics and non-locality effects on transport, the physics of secondary structures such as zonal flows, and strongly coherent wave-particle interaction phenomena at magnetic precession resonances. Special emphasis is placed on the implications of these themes for rho-star and current scalings and for the turbulent transport of momentum. GTC-TTBP also explores applications to electron thermal transport, particle transport, ITB formation and cross-cuts such as edge-core coupling, interaction of energetic particles with turbulence and neoclassical tearing mode trigger dynamics. Code development focuses on major initiatives in the development of full-f formulations and the capacity to simulate flux-driven transport. In addition to the full-f formulation, the project includes the development of numerical collision models and methods for coarse graining in phase space. Verification is pursued by linear stability study comparisons with the FULL and HD7 codes and by benchmarking with the GKV, GYSELA and other gyrokinetic simulation codes. Validation of gyrokinetic models of ion and electron thermal transport is pursued by systematic stressing comparisons with fluctuation and transport data from the DIII-D and NSTX tokamaks. The physics and code development research programs are supported by complementary efforts in computer sciences, high performance computing, and data management.

  17. Using Intel Xeon Phi to accelerate the WRF TEMF planetary boundary layer scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen

    2014-05-01

    The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components such as dynamic solvers and physics schemes. Numerical models are used to resolve the large-scale flow. However, subgrid-scale parameterizations are for an estimation of small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation). Those have a significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on the Total Energy - Mass Flux (TEMF) approach that unifies turbulence and moist convection components produces better results than the other PBL schemes. For that reason, the TEMF scheme is chosen as the PBL scheme we optimized for the Intel Many Integrated Core (MIC) architecture, which ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations that were performed were quite generic in nature. Those optimizations included vectorization of the code to utilize vector units inside each CPU. Furthermore, memory access was improved by scalarizing some of the intermediate arrays. The results show that the optimization improved MIC performance by 14.8x. Furthermore, the optimizations increased CPU performance by 2.6x compared to the original multi-threaded code on a quad core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket, the optimized MIC code is 6.2x faster.

  18. MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particle or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.

  19. The location and recognition of anti-counterfeiting code image with complex background

    NASA Astrophysics Data System (ADS)

    Ni, Jing; Liu, Quan; Lou, Ping; Han, Ping

    2017-07-01

    The order of the cigarette market is a key issue in the tobacco business system. The anti-counterfeiting code, as a kind of effective anti-counterfeiting technology, can identify counterfeit goods and effectively maintain the normal order of the market and consumers' rights and interests. There are complex backgrounds, light interference and other problems in the anti-counterfeiting code images obtained by the tobacco recognizer. To solve these problems, the paper proposes a locating method based on the SUSAN operator, combined with a sliding window and line scanning. In order to reduce the interference of background and noise, we extract the red component of the image and convert the color image into a gray image. For confusing characters, recognition-result correction based on the template matching method has been adopted to improve the recognition rate. With this method, the anti-counterfeiting code can be located and recognized correctly in images with complex backgrounds. The experimental results show the effectiveness and feasibility of the approach.

  20. 48 CFR 52.204-7 - System for Award Management.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... for Award Management (JUL 2013) (a) Definitions. As used in this provision— Data Universal Numbering... information, including the DUNS number or the DUNS+4 number, the Contractor and Government Entity (CAGE) code... Zip Code. (iv) Company Mailing Address, City, State and Zip Code (if separate from physical). (v...

  1. 48 CFR 52.204-7 - System for Award Management.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... for Award Management (JUL 2013) (a) Definitions. As used in this provision— Data Universal Numbering... information, including the DUNS number or the DUNS+4 number, the Contractor and Government Entity (CAGE) code... Zip Code. (iv) Company Mailing Address, City, State and Zip Code (if separate from physical). (v...

  2. CRUNCH_PARALLEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumaker, Dana E.; Steefel, Carl I.

    The code CRUNCH_PARALLEL is a parallel version of the CRUNCH code. CRUNCH code version 2.0 was previously released by LLNL (UCRL-CODE-200063). CRUNCH is a general purpose reactive transport code developed by Carl Steefel and Yabusaki (Steefel and Yabusaki, 1996). The code handles non-isothermal transport and reaction in one, two, and three dimensions. The reaction algorithm is generic in form, handling an arbitrary number of aqueous and surface complexation reactions as well as mineral dissolution/precipitation. A standardized database is used containing thermodynamic and kinetic data. The code includes advective, dispersive, and diffusive transport.

  3. An efficient code for the simulation of nonhydrostatic stratified flow over obstacles

    NASA Technical Reports Server (NTRS)

    Pihos, G. G.; Wurtele, M. G.

    1981-01-01

    The physical model and computational procedure of the code are described in detail. The code is validated in tests against a variety of known analytical solutions from the literature and is also compared against actual mountain wave observations. The code will receive as initial input either mathematically idealized or discrete observational data. The form of the obstacle or mountain is arbitrary.

  4. HEPMath 1.4: A mathematica package for semi-automatic computations in high energy physics

    NASA Astrophysics Data System (ADS)

    Wiebusch, Martin

    2015-10-01

    This article introduces the Mathematica package HEPMath which provides a number of utilities and algorithms for High Energy Physics computations in Mathematica. Its functionality is similar to packages like FormCalc or FeynCalc, but it takes a more complete and extensible approach to implementing common High Energy Physics notations in the Mathematica language, in particular those related to tensors and index contractions. It also provides a more flexible method for the generation of numerical code which is based on new features for C code generation in Mathematica. In particular it can automatically generate Python extension modules which make the compiled functions callable from Python, thus eliminating the need to write any code in a low-level language like C or Fortran. It also contains seamless interfaces to LHAPDF, FeynArts, and LoopTools.

  5. A Conference on Spacecraft Charging Technology - 1978, held at U.S. Air Force Academy, Colorado Springs, Colorado, October 31 - November 2, 1978.

    DTIC Science & Technology

    1978-01-01

    complex, applications of the code. NASCAP CODE DESCRIPTION: The NASCAP code is a finite-element spacecraft-charging simulation that is written in FORTRAN... transport code POEM (ref. 1), is applicable to arbitrary dielectrics, source spectra, and current time histories. The code calculations are illustrated by...

  6. Steady-State Ion Beam Modeling with MICHELLE

    NASA Astrophysics Data System (ADS)

    Petillo, John

    2003-10-01

    There is a need to efficiently model ion beam physics for ion implantation, chemical vapor deposition, and ion thrusters. Common to all is the need for three-dimensional (3D) simulation of volumetric ion sources, ion acceleration, and optics, with the ability to model charge exchange of the ion beam with a background neutral gas. The two pieces of physics that stand out as significant are the modeling of the volumetric source and charge exchange. In the MICHELLE code, the method for modeling the plasma sheath in ion sources assumes that the electron distribution function is a Maxwellian function of electrostatic potential over electron temperature. Charge exchange is the process by which a neutral background gas atom exchanges its electron with a "fast" charged particle streaming through the gas. An efficient method for capturing this is essential, and the model presented is based on semi-empirical collision cross section functions. This appears to be the first steady-state 3D algorithm of its type to contain multiple generations of charge exchange, work with multiple species and multiple charge state beam/source particles simultaneously, take into account the self-consistent space charge effects, and track the subsequent fast neutral particles. The solution used by MICHELLE is to combine finite element analysis with particle-in-cell (PIC) methods. The basic physics model is based on the equilibrium steady-state application of the electrostatic particle-in-cell (PIC) approximation employing a conformal computational mesh. The foundation stems from the same basic model introduced in codes such as EGUN. Here, Poisson's equation is used to self-consistently include the effects of space charge on the fields, and the relativistic Lorentz equation is used to integrate the particle trajectories through those fields. The presentation will consider the complexity of modeling ion thrusters.
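
    Of the physics listed above, only the trajectory-integration step is easy to show compactly. The sketch below is a plain non-relativistic electrostatic leapfrog push through a fixed toy field; the Poisson solve and the self-consistent space-charge iteration that a gun code like MICHELLE performs are deliberately omitted, and all numbers are illustrative.

```python
# Electrostatic leapfrog push of a single ion through a prescribed field.
def push_particle(x, v, charge, mass, efield, dt, steps):
    """Leapfrog: stagger velocity by half a step, then alternate drift and kick."""
    v = v + 0.5 * dt * (charge / mass) * efield(x)      # half kick to stagger velocity
    trajectory = [x]
    for _ in range(steps):
        x = x + dt * v                                   # drift
        v = v + dt * (charge / mass) * efield(x)         # full kick
        trajectory.append(x)
    return trajectory

# Singly charged ion (~40 amu) accelerated through a uniform 1 kV/m field (SI units).
traj = push_particle(x=0.0, v=0.0, charge=1.602e-19, mass=6.6e-26,
                     efield=lambda x: 1.0e3, dt=1.0e-9, steps=1000)
print(traj[-1])   # position after 1 microsecond
```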

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, A; Barnard, J J; Briggs, R J

    The Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL), a collaboration of LBNL, LLNL, and PPPL, has achieved 60-fold pulse compression of ion beams on the Neutralized Drift Compression eXperiment (NDCX) at LBNL. In NDCX, a ramped voltage pulse from an induction cell imparts a velocity 'tilt' to the beam; the beam's tail then catches up with its head in a plasma environment that provides neutralization. The HIFS-VNL's mission is to carry out studies of warm dense matter (WDM) physics using ion beams as the energy source; an emerging thrust is basic target physics for heavy ion-driven inertial fusion energy (IFE). These goals require an improved platform, labeled NDCX-II. Development of NDCX-II at modest cost was recently enabled by the availability of induction cells and associated hardware from the decommissioned advanced test accelerator (ATA) facility at LLNL. Our initial physics design concept accelerates an approximately 30 nC pulse of Li+ ions to approximately 3 MeV, then compresses it to approximately 1 ns while focusing it onto a mm-scale spot. It uses the ATA cells themselves (with waveforms shaped by passive circuits) to impart the final velocity tilt; smart pulsers provide small corrections. The ATA accelerated electrons; acceleration of non-relativistic ions involves more complex beam dynamics both transversely and longitudinally. We are using an interactive one-dimensional kinetic simulation model and multidimensional Warp-code simulations to develop the NDCX-II accelerator section. Both LSP and Warp codes are being applied to the beam dynamics in the neutralized drift and final focus regions, and the plasma injection process. The status of this effort is described.

  8. Analysis of view synthesis prediction architectures in modern coding standards

    NASA Astrophysics Data System (ADS)

    Tian, Dong; Zou, Feng; Lee, Chris; Vetro, Anthony; Sun, Huifang

    2013-09-01

    Depth-based 3D formats are currently being developed as extensions to both the AVC and HEVC standards. The availability of depth information facilitates the generation of intermediate views for advanced 3D applications and displays, and also enables more efficient coding of the multiview input data through view synthesis prediction techniques. This paper outlines several approaches that have been explored to realize view synthesis prediction in modern video coding standards such as AVC and HEVC. The benefits and drawbacks of various architectures are analyzed in terms of performance, complexity, and other design considerations. It is concluded that block-based VSP prediction for multiview video signals provides attractive coding gains with complexity comparable to traditional motion/disparity compensation.

  9. Integrated modelling framework for short pulse high energy density physics experiments

    NASA Astrophysics Data System (ADS)

    Sircombe, N. J.; Hughes, S. J.; Ramsay, M. G.

    2016-03-01

    Modelling experimental campaigns on the Orion laser at AWE, and developing a viable point-design for fast ignition (FI), calls for a multi-scale approach; a complete description of the problem would require an extensive range of physics which cannot realistically be included in a single code. For modelling the laser-plasma interaction (LPI) we need a fine mesh which can capture the dispersion of electromagnetic waves, and a kinetic model for each plasma species. In the dense material of the bulk target, away from the LPI region, collisional physics dominates. The transport of hot particles generated by the action of the laser is dependent on their slowing and stopping in the dense material and their need to draw a return current. These effects will heat the target, which in turn influences transport. On longer timescales, the hydrodynamic response of the target will begin to play a role as the pressure generated by isochoric heating begins to take effect. Recent effort at AWE [1] has focussed on the development of an integrated code suite based on: the particle-in-cell code EPOCH, to model LPI; the Monte-Carlo electron transport code THOR, to model the onward transport of hot electrons; and the radiation hydrodynamics code CORVUS, to model the hydrodynamic response of the target. We outline the methodology adopted, explain the advantages of a robustly integrated code suite compared to a single-code approach, demonstrate the integrated code suite's application to modelling the heating of buried layers on Orion, and assess the potential of such experiments for the validation of modelling capability in advance of more ambitious HEDP experiments, as a step towards a predictive modelling capability for FI.

  10. Designing and maintaining an effective chargemaster.

    PubMed

    Abbey, D C

    2001-03-01

    The chargemaster is the central repository of charges and associated coding information used to develop claims. But this simple description belies the chargemaster's true complexity. The chargemaster's role in the coding process differs from department to department, and not all codes provided on a claim form are necessarily included in the chargemaster, as codes for complex services may need to be developed and reviewed by coding staff. In addition, with the rise of managed care, the chargemaster increasingly is being used to track utilization of supplies and services. To ensure that the chargemaster performs all of its functions effectively, hospitals should appoint a chargemaster coordinator, supported by a chargemaster review team, to oversee the design and maintenance of the chargemaster. Important design issues that should be considered include the principle of "form follows function," static versus dynamic coding, how modifiers should be treated, how charges should be developed, how to incorporate physician fee schedules into the chargemaster, the interface between the chargemaster and cost reports, and how to include statistical information for tracking utilization.

  11. PCR-free quantitative detection of genetically modified organism from raw materials. An electrochemiluminescence-based bio bar code method.

    PubMed

    Zhu, Debin; Tang, Yabing; Xing, Da; Chen, Wei R

    2008-05-15

    A bio bar code assay based on oligonucleotide-modified gold nanoparticles (Au-NPs) provides a PCR-free method for quantitative detection of nucleic acid targets. However, the current bio bar code assay requires lengthy experimental procedures including the preparation and release of bar code DNA probes from the target-nanoparticle complex and immobilization and hybridization of the probes for quantification. Herein, we report a novel PCR-free electrochemiluminescence (ECL)-based bio bar code assay for the quantitative detection of genetically modified organism (GMO) from raw materials. It consists of tris-(2,2'-bipyridyl) ruthenium (TBR)-labeled bar code DNA, nucleic acid hybridization using Au-NPs and biotin-labeled probes, and selective capture of the hybridization complex by streptavidin-coated paramagnetic beads. The detection of target DNA is realized by direct measurement of ECL emission of TBR. It can quantitatively detect target nucleic acids with high speed and sensitivity. This method can be used to quantitatively detect GMO fragments from real GMO products.

  12. Coding efficiency of AVS 2.0 for CBAC and CABAC engines

    NASA Astrophysics Data System (ADS)

    Cui, Jing; Choi, Youngkyu; Chae, Soo-Ik

    2015-12-01

    In this paper we compare the coding efficiency of two entropy-coding engines: the Context-based Binary Arithmetic Coding (CBAC)[2] engine of AVS 2.0[1] and the Context-Adaptive Binary Arithmetic Coder (CABAC)[3] of HEVC[4]. For a fair comparison, the CABAC is embedded in the AVS 2.0 reference code RD10.1, complementing our previous work[5], in which the CBAC was embedded in the HEVC. In the RD code, the rate-estimation table is employed only for RDOQ; to reduce the computational complexity of the video encoder, we modified the RD code so that the rate-estimation table is employed for all RDO decisions. Furthermore, we simplify the rate-estimation table by reducing the bit depth of its fractional part from 8 to 2. The simulation results show that the CABAC has a BD-rate loss of about 0.7% compared to the CBAC, suggesting that the CBAC is slightly more efficient than the CABAC in AVS 2.0.
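    A minimal sketch of the kind of fixed-point quantization described above, assuming a table of fractional bit costs; the function name and example values are illustrative and are not taken from the AVS 2.0 or HEVC reference software.

    ```python
    def quantize_rate_table(rates_in_bits, frac_bits=2):
        """Store fractional bit costs in fixed point with `frac_bits` fractional bits.
        Reducing frac_bits from 8 to 2 shrinks the table entries at a small
        cost in rate-estimation accuracy."""
        scale = 1 << frac_bits
        return [round(r * scale) / scale for r in rates_in_bits]

    # Example: the same costs at 8 and 2 fractional bits of precision
    costs = [0.731, 1.492, 2.057]
    print(quantize_rate_table(costs, frac_bits=8))  # [0.73046875, 1.4921875, 2.05859375]
    print(quantize_rate_table(costs, frac_bits=2))  # [0.75, 1.5, 2.0]
    ```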

  13. "ON ALGEBRAIC DECODING OF Q-ARY REED-MULLER AND PRODUCT REED-SOLOMON CODES"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SANTHI, NANDAKISHORE

    We consider a list decoding algorithm recently proposed by Pellikaan-Wu for q-ary Reed-Muller codes RM_q(ℓ, m, n) of length n ≤ q^m when ℓ ≤ q. A simple and easily accessible correctness proof is given which shows that this algorithm achieves a relative error-correction radius of τ ≤ 1 − √(ℓ·q^(m−1)/n). This is an improvement over the proof using the one-point Algebraic-Geometric decoding method given in. The described algorithm can be adapted to decode product Reed-Solomon codes. We then propose a new low-complexity recursive algebraic decoding algorithm for product Reed-Solomon codes and Reed-Muller codes. This algorithm achieves a relative error-correction radius of τ ≤ ∏_{i=1..m} (1 − √(k_i/q)). This algorithm is then proved to outperform the Pellikaan-Wu algorithm in both complexity and error-correction radius over a wide range of code rates.

  14. Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier-Stokes Heat Transfer Code

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.; Lee, Chi-Miag (Technical Monitor)

    2001-01-01

    For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this paper, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery for space launch vehicle propulsion systems.

  15. Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier-Stokes Heat Transfer Code

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.

    2002-01-01

    For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this presentation, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery.

  16. Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier Stokes Heat Transfer Code

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.

    2002-01-01

    For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this presentation, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery.

  17. Code Verification Results of an LLNL ASC Code on Some Tri-Lab Verification Test Suite Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, S R; Bihari, B L; Salari, K

    As scientific codes become more complex and involve larger numbers of developers and algorithms, chances for algorithmic implementation mistakes increase. In this environment, code verification becomes essential to building confidence in the code implementation. This paper will present first results of a new code verification effort within LLNL's B Division. In particular, we will show results of code verification of the LLNL ASC ARES code on the test problems: Su Olson non-equilibrium radiation diffusion, Sod shock tube, Sedov point blast modeled with shock hydrodynamics, and Noh implosion.

  18. Implicit SPH v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyungjoo; Parks, Michael L.; Perego, Mauro

    2016-11-09

    The ISPH code is developed to solve multi-physics, meso-scale flow problems using an implicit SPH method. In particular, the code provides solutions for incompressible, multiphase, and electro-kinetic flows.

  19. Large Eddy Simulation of Engineering Flows: A Bill Reynolds Legacy.

    NASA Astrophysics Data System (ADS)

    Moin, Parviz

    2004-11-01

    The term, Large eddy simulation, LES, was coined by Bill Reynolds, thirty years ago when he and his colleagues pioneered the introduction of LES in the engineering community. Bill's legacy in LES features his insistence on having a proper mathematical definition of the large scale field independent of the numerical method used, and his vision for using numerical simulation output as data for research in turbulence physics and modeling, just as one would think of using experimental data. However, as an engineer, Bill was predominantly interested in the predictive capability of computational fluid dynamics and in particular LES. In this talk I will present the state of the art in large eddy simulation of complex engineering flows. Most of this technology has been developed in the Department of Energy's ASCI Program at Stanford which was led by Bill in the last years of his distinguished career. At the core of this technology is a fully implicit non-dissipative LES code which uses unstructured grids with arbitrary elements. A hybrid Eulerian/Lagrangian approach is used for multi-phase flows, and chemical reactions are introduced through dynamic equations for mixture fraction and reaction progress variable in conjunction with flamelet tables. The predictive capability of LES is demonstrated in several validation studies in flows with complex physics and complex geometry including flow in the combustor of a modern aircraft engine. LES in such a complex application is only possible through efficient utilization of modern parallel super-computers which was recognized and emphasized by Bill from the beginning. The presentation will include a brief mention of computer science efforts for efficient implementation of LES.

  20. A Modular Environment for Geophysical Inversion and Run-time Autotuning using Heterogeneous Computing Systems

    NASA Astrophysics Data System (ADS)

    Myre, Joseph M.

    Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special- purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allows, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
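    A minimal sketch of the kind of run-time autotuning loop described above: time each candidate configuration of a kernel and keep the fastest. The kernel and configuration names are hypothetical and do not represent the CUSH framework's actual API.

    ```python
    import itertools
    import time

    def time_run(kernel, config, *args):
        """Time a single execution of `kernel` under `config`."""
        start = time.perf_counter()
        kernel(config, *args)
        return time.perf_counter() - start

    def autotune(kernel, configs, *args, trials=3):
        """Generic run-time autotuner: try each candidate configuration
        (e.g. block/work-group sizes) and return the fastest one."""
        best_config, best_time = None, float("inf")
        for config in configs:
            elapsed = min(time_run(kernel, config, *args) for _ in range(trials))
            if elapsed < best_time:
                best_config, best_time = config, elapsed
        return best_config, best_time

    # Example: sweep hypothetical 2D block sizes for a stencil-like kernel
    block_sizes = list(itertools.product([8, 16, 32], [8, 16, 32]))
    # best, t = autotune(my_stencil_kernel, block_sizes, grid)  # my_stencil_kernel is hypothetical
    ```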

  1. Feasibility of self-correcting quantum memory and thermal stability of topological order

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshida, Beni, E-mail: rouge@mit.edu

    2011-10-15

    Recently, it has become apparent that the thermal stability of topologically ordered systems at finite temperature, as discussed in condensed matter physics, can be studied by addressing the feasibility of self-correcting quantum memory, as discussed in quantum information science. Here, with this correspondence in mind, we propose a model of quantum codes that may cover a large class of physically realizable quantum memory. The model is supported by a certain class of gapped spin Hamiltonians, called stabilizer Hamiltonians, with translation symmetries and a small number of ground states that does not grow with the system size. We show that the model does not work as self-correcting quantum memory due to a certain topological constraint on geometric shapes of its logical operators. This quantum coding theoretical result implies that systems covered or approximated by the model cannot have thermally stable topological order, meaning that systems cannot be stable against both thermal fluctuations and local perturbations simultaneously in two and three spatial dimensions. - Highlights: > We define a class of physically realizable quantum codes. > We determine their coding and physical properties completely. > We establish the connection between topological order and self-correcting memory. > We find they do not work as self-correcting quantum memory. > We find they do not have thermally stable topological order.

  2. Creating Semantic Waves: Using Legitimation Code Theory as a Tool to Aid the Teaching of Chemistry

    ERIC Educational Resources Information Center

    Blackie, Margaret A. L.

    2014-01-01

    This is a conceptual paper aimed at chemistry educators. The purpose of this paper is to illustrate the use of the semantic code of Legitimation Code Theory in chemistry teaching. Chemistry is an abstract subject which many students struggle to grasp. Legitimation Code Theory provides a way of separating out abstraction from complexity both of…

  3. [ENT and head and neck surgery in the German DRG system 2007].

    PubMed

    Franz, D; Roeder, N; Hörmann, K; Alberty, J

    2007-07-01

    The German DRG system has been further developed into version 2007. For ENT and head and neck surgery, significant changes in the coding of diagnoses and medical operations as well as in the DRG structure have been made. New ICD codes for sleep apnoea and acquired tracheal stenosis have been implemented. Surgery on the acoustic meatus, removal of auricle hyaline cartilage for transplantation (e.g. rhinosurgery) and tonsillotomy have been coded in the 2007 version. In addition, the DRG structure has been improved. Case allocation of more than one significant operation has been established. The G-DRG system has gained in complexity. High demands are made on the coding of complex cases, whereas standard cases mostly require only one specific diagnosis and one specific OPS code. The quality of case allocation for ENT patients within the G-DRG system has been improved. Nevertheless, further adjustments of the G-DRG system are necessary.

  4. TOUGH-RBSN simulator for hydraulic fracture propagation within fractured media: Model validations against laboratory experiments

    NASA Astrophysics Data System (ADS)

    Kim, Kunhwi; Rutqvist, Jonny; Nakagawa, Seiji; Birkholzer, Jens

    2017-11-01

    This paper presents coupled hydro-mechanical modeling of hydraulic fracturing processes in complex fractured media using a discrete fracture network (DFN) approach. The individual physical processes in the fracture propagation are represented by separate program modules: the TOUGH2 code for multiphase flow and mass transport based on the finite volume approach; and the rigid-body-spring network (RBSN) model for mechanical and fracture-damage behavior, which are coupled with each other. Fractures are modeled as discrete features, of which the hydrological properties are evaluated from the fracture deformation and aperture change. The verification of the TOUGH-RBSN code is performed against a 2D analytical model for single hydraulic fracture propagation. Subsequently, modeling capabilities for hydraulic fracturing are demonstrated through simulations of laboratory experiments conducted on rock-analogue (soda-lime glass) samples containing a designed network of pre-existing fractures. Sensitivity analyses are also conducted by changing the modeling parameters, such as viscosity of injected fluid, strength of pre-existing fractures, and confining stress conditions. The hydraulic fracturing characteristics attributed to the modeling parameters are investigated through comparisons of the simulation results.

  5. TDRSS telecommunications system, PN code analysis

    NASA Technical Reports Server (NTRS)

    Dixon, R.; Gold, R.; Kaiser, F.

    1976-01-01

    The pseudo noise (PN) codes required to support the TDRSS telecommunications services are analyzed and the impact of alternate coding techniques on the user transponder equipment, the TDRSS equipment, and all factors that contribute to the acquisition and performance of these telecommunication services is assessed. Possible alternatives to the currently proposed hybrid FH/direct sequence acquisition procedures are considered and compared relative to acquisition time, implementation complexity, operational reliability, and cost. The hybrid FH/direct sequence technique is analyzed and rejected in favor of a recommended approach which minimizes acquisition time and user transponder complexity while maximizing probability of acquisition and overall link reliability.

  6. Flowgen: Flowchart-based documentation for C++ codes

    NASA Astrophysics Data System (ADS)

    Kosower, David A.; Lopez-Villarejo, J. J.

    2015-11-01

    We present the Flowgen tool, which generates flowcharts from annotated C++ source code. The tool generates a set of interconnected high-level UML activity diagrams, one for each function or method in the C++ sources. It provides a simple and visual overview of complex implementations of numerical algorithms. Flowgen is complementary to the widely-used Doxygen documentation tool. The ultimate aim is to render complex C++ computer codes accessible, and to enhance collaboration between programmers and algorithm or science specialists. We describe the tool and a proof-of-concept application to the VINCIA plug-in for simulating collisions at CERN's Large Hadron Collider.

  7. Inversion of Attributes and Full Waveforms of Ground Penetrating Radar Data Using PEST

    NASA Astrophysics Data System (ADS)

    Jazayeri, S.; Kruse, S.; Esmaeili, S.

    2015-12-01

    We seek to establish a method, based on freely available software, for inverting GPR signals for the underlying physical properties (electrical permittivity, magnetic permeability, target geometries). Such a procedure should be useful for classroom instruction and for analyzing surface GPR surveys over simple targets. We explore the applicability of the PEST parameter estimation software package for GPR inversion (www.pesthomepage.org). PEST is designed to invert data sets with large numbers of parameters, and offers a variety of inversion methods. Although primarily used in hydrogeology, the code has been applied to a wide variety of physical problems. The PEST code requires forward model input; the forward model of the GPR signal is computed with the GPRMax package (www.gprmax.com). The problem of extracting the physical characteristics of a subsurface anomaly from GPR data is highly nonlinear. For synthetic models of simple targets in homogeneous backgrounds, we find PEST's nonlinear Gauss-Marquardt-Levenberg algorithm is preferred. This method requires an initial model, for which the weighted differences between model-generated data and those of the "true" synthetic model (the objective function) are calculated. To do this, the Jacobian matrix and the derivatives of the observation data with respect to the model parameters are computed using a finite-difference method. Next, an iterative process of building new models by updating the initial values proceeds in order to minimize the objective function. Another measure of the goodness of the final accepted model is the correlation coefficient, which is calculated based on the method of Cooley and Naff. An accepted final model satisfies both of these conditions. Models to date show that physical properties of simple isolated targets against homogeneous backgrounds can be obtained from multiple traces from common-offset surface surveys. Ongoing work examines the inversion capabilities with more complex target geometries and heterogeneous soils.
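    A minimal sketch of the inversion loop described above, with a toy exponential forward model standing in for GPRMax and a fixed damping factor; PEST's actual implementation (adaptive Marquardt lambda, observation weights and groups) is considerably more elaborate.

    ```python
    import numpy as np

    def finite_difference_jacobian(forward, params, eps=1e-6):
        """Jacobian of the forward model with respect to the parameters,
        computed by one-sided finite differences as in the workflow above."""
        base = forward(params)
        jac = np.empty((base.size, params.size))
        for j in range(params.size):
            p = params.copy()
            p[j] += eps
            jac[:, j] = (forward(p) - base) / eps
        return jac

    def levenberg_marquardt(forward, observed, params, lam=1e-2, iters=20):
        """Minimal Levenberg-Marquardt sketch: iteratively update parameters to
        reduce the misfit between model output and observations."""
        for _ in range(iters):
            resid = observed - forward(params)
            jac = finite_difference_jacobian(forward, params)
            # Solve the damped normal equations (J^T J + lam I) dp = J^T r
            lhs = jac.T @ jac + lam * np.eye(params.size)
            step = np.linalg.solve(lhs, jac.T @ resid)
            params = params + step
        return params

    # Example with a toy "forward model" in place of a GPR simulation
    true = np.array([3.0, 0.5])
    forward = lambda p: p[0] * np.exp(-p[1] * np.linspace(0, 5, 50))
    est = levenberg_marquardt(forward, forward(true), np.array([1.0, 1.0]))
    print(est)  # should approach [3.0, 0.5]
    ```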

  8. The LIFE Cognition Study: design and baseline characteristics

    PubMed Central

    Sink, Kaycee M; Espeland, Mark A; Rushing, Julia; Castro, Cynthia M; Church, Timothy S; Cohen, Ronald; Gill, Thomas M; Henkin, Leora; Jennings, Janine M; Kerwin, Diana R; Manini, Todd M; Myers, Valerie; Pahor, Marco; Reid, Kieran F; Woolard, Nancy; Rapp, Stephen R; Williamson, Jeff D

    2014-01-01

    Observational studies have shown beneficial relationships between exercise and cognitive function. Some clinical trials have also demonstrated improvements in cognitive function in response to moderate–high intensity aerobic exercise; however, these have been limited by relatively small sample sizes and short durations. The Lifestyle Interventions and Independence for Elders (LIFE) Study is the largest and longest randomized controlled clinical trial of physical activity with cognitive outcomes, in older sedentary adults at increased risk for incident mobility disability. One LIFE Study objective is to evaluate the effects of a structured physical activity program on changes in cognitive function and incident all-cause mild cognitive impairment or dementia. Here, we present the design and baseline cognitive data. At baseline, participants completed the modified Mini Mental Status Examination, Hopkins Verbal Learning Test, Digit Symbol Coding, Modified Rey–Osterrieth Complex Figure, and a computerized battery, selected to be sensitive to changes in speed of processing and executive functioning. During follow up, participants completed the same battery, along with the Category Fluency for Animals, Boston Naming, and Trail Making tests. The description of the mild cognitive impairment/dementia adjudication process is presented here. Participants with worse baseline Short Physical Performance Battery scores (prespecified at ≤7) had significantly lower median cognitive test scores compared with those having scores of 8 or 9 with modified Mini Mental Status Examination score of 91 versus (vs) 93, Hopkins Verbal Learning Test delayed recall score of 7.4 vs 7.9, and Digit Symbol Coding score of 45 vs 48, respectively (all P<0.001). The LIFE Study will contribute important information on the effects of a structured physical activity program on cognitive outcomes in sedentary older adults at particular risk for mobility impairment. In addition to its importance in the area of prevention of cognitive decline, the LIFE Study will also likely serve as a model for exercise and other behavioral intervention trials in older adults. PMID:25210447

  9. Design of Soil Salinity Policies with Tinamit, a Flexible and Rapid Tool to Couple Stakeholder-Built System Dynamics Models with Physically-Based Models

    NASA Astrophysics Data System (ADS)

    Malard, J. J.; Baig, A. I.; Hassanzadeh, E.; Adamowski, J. F.; Tuy, H.; Melgar-Quiñonez, H.

    2016-12-01

    Model coupling is a crucial step to constructing many environmental models, as it allows for the integration of independently-built models representing different system sub-components to simulate the entire system. Model coupling has been of particular interest in combining socioeconomic System Dynamics (SD) models, whose visual interface facilitates their direct use by stakeholders, with more complex physically-based models of the environmental system. However, model coupling processes are often cumbersome and inflexible and require extensive programming knowledge, limiting their potential for continued use by stakeholders in policy design and analysis after the end of the project. Here, we present Tinamit, a flexible Python-based model-coupling software tool whose easy-to-use API and graphical user interface make the coupling of stakeholder-built SD models with physically-based models rapid, flexible and simple for users with limited to no coding knowledge. The flexibility of the system allows end users to modify the SD model as well as the linking variables between the two models themselves with no need for recoding. We use Tinamit to couple a stakeholder-built socioeconomic model of soil salinization in Pakistan with the physically-based soil salinity model SAHYSMOD. As climate extremes increase in the region, policies to slow or reverse soil salinity buildup are increasing in urgency and must take both socioeconomic and biophysical spheres into account. We use the Tinamit-coupled model to test the impact of integrated policy options (economic and regulatory incentives to farmers) on soil salinity in the region in the face of future climate change scenarios. Use of the Tinamit model allowed for rapid and flexible coupling of the two models, allowing the end user to continue making model structure and policy changes. In addition, the clear interface (in contrast to most model coupling code) makes the final coupled model easily accessible to stakeholders with limited technical background.
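    A minimal sketch of the kind of time-stepped exchange a model coupler performs, with stub stand-ins for the stakeholder-built System Dynamics model and the physical soil-salinity model; this is a generic illustration with made-up dynamics, not Tinamit's actual API or SAHYSMOD's equations.

    ```python
    class StubSDModel:
        """Stand-in for a stakeholder-built System Dynamics model."""
        def step(self, dt, salinity_in=None):
            salinity = 1.0 if salinity_in is None else salinity_in
            # made-up policy rule: irrigate less as salinity rises
            return {"irrigation": max(0.0, 1.0 - 0.2 * salinity)}

    class StubSalinityModel:
        """Stand-in for a physically-based soil-salinity model (e.g. SAHYSMOD)."""
        def __init__(self):
            self.salinity = 1.0
        def step(self, dt, irrigation):
            # made-up dynamics: more irrigation leaches salts, less lets them build up
            self.salinity += dt * (0.3 - 0.4 * irrigation)
            return {"salinity": self.salinity}

    def run_coupled(sd, phys, years):
        """Alternate the two models each year, passing linked variables back and forth."""
        salinity = None
        for _ in range(years):
            sd_out = sd.step(1, salinity)                  # advance the SD model one year
            phys_out = phys.step(1, sd_out["irrigation"])  # pass the policy to the physical model
            salinity = phys_out["salinity"]                # feed salinity back to the SD model
        return salinity

    print(run_coupled(StubSDModel(), StubSalinityModel(), years=10))
    ```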

  10. PlasmaPy: initial development of a Python package for plasma physics

    NASA Astrophysics Data System (ADS)

    Murphy, Nicholas; Leonard, Andrew J.; Stańczak, Dominik; Haggerty, Colby C.; Parashar, Tulasi N.; Huang, Yu-Min; PlasmaPy Community

    2017-10-01

    We report on initial development of PlasmaPy: an open source community-driven Python package for plasma physics. PlasmaPy seeks to provide core functionality that is needed for the formation of a fully open source Python ecosystem for plasma physics. PlasmaPy prioritizes code readability, consistency, and maintainability while using best practices for scientific computing such as version control, continuous integration testing, embedding documentation in code, and code review. We discuss our current and planned capabilities, including features presently under development. The development roadmap includes features such as fluid and particle simulation capabilities, a Grad-Shafranov solver, a dispersion relation solver, atomic data retrieval methods, and tools to analyze simulations and experiments. We describe several ways to contribute to PlasmaPy. PlasmaPy has a code of conduct and is being developed under a BSD license, with a version 0.1 release planned for 2018. The success of PlasmaPy depends on active community involvement, so anyone interested in contributing to this project should contact the authors. This work was partially supported by the U.S. Department of Energy.

  11. SPECT3D - A multi-dimensional collisional-radiative code for generating diagnostic signatures based on hydrodynamics and PIC simulation output

    NASA Astrophysics Data System (ADS)

    MacFarlane, J. J.; Golovkin, I. E.; Wang, P.; Woodruff, P. R.; Pereyra, N. A.

    2007-05-01

    SPECT3D is a multi-dimensional collisional-radiative code used to post-process the output from radiation-hydrodynamics (RH) and particle-in-cell (PIC) codes to generate diagnostic signatures (e.g. images, spectra) that can be compared directly with experimental measurements. This ability to post-process simulation code output plays a pivotal role in assessing the reliability of RH and PIC simulation codes and their physics models. SPECT3D has the capability to operate on plasmas in 1D, 2D, and 3D geometries. It computes a variety of diagnostic signatures that can be compared with experimental measurements, including: time-resolved and time-integrated spectra, space-resolved spectra and streaked spectra; filtered and monochromatic images; and X-ray diode signals. Simulated images and spectra can include the effects of backlighters, as well as the effects of instrumental broadening and time-gating. SPECT3D also includes a drilldown capability that shows where frequency-dependent radiation is emitted and absorbed as it propagates through the plasma towards the detector, thereby providing insights on where the radiation seen by a detector originates within the plasma. SPECT3D has the capability to model a variety of complex atomic and radiative processes that affect the radiation seen by imaging and spectral detectors in high energy density physics (HEDP) experiments. LTE (local thermodynamic equilibrium) or non-LTE atomic level populations can be computed for plasmas. Photoabsorption rates can be computed using either escape probability models or, for selected 1D and 2D geometries, multi-angle radiative transfer models. The effects of non-thermal (i.e. non-Maxwellian) electron distributions can also be included. To study the influence of energetic particles on spectra and images recorded in intense short-pulse laser experiments, the effects of both relativistic electrons and energetic proton beams can be simulated. SPECT3D is a user-friendly software package that runs on Windows, Linux, and Mac platforms. A parallel version of SPECT3D is supported for Linux clusters for large-scale calculations. We will discuss the major features of SPECT3D, and present example results from simulations and comparisons with experimental data.

  12. Multiphase Dynamics of Magma Oceans

    NASA Astrophysics Data System (ADS)

    Boukaré, Charles-Edouard; Ricard, Yanick; Parmentier, Edgar M.

    2017-04-01

    Since the earliest study of the Apollo lunar samples, the magma ocean hypothesis has received increasing consideration for explaining the early evolution of terrestrial planets. Giant impacts seem to be able to melt significantly large planets at the end of their accretion. The evolution of the resulting magma ocean would set the initial conditions (thermal and compositional structure) for subsequent long-term solid-state planet dynamics. However, magma ocean dynamics remains poorly understood. The major challenge lies in understanding interactions between the physical properties of materials (e.g., viscosity (at liquid or solid state), buoyancy) and the complex dynamics of an extremely vigorously convecting system. Such complexities might be neglected in cases where liquidus/adiabat interactions and density stratification lead to stable situations. However, interesting possibilities arise when exploring magma ocean dynamics in other regimes. In the case of the Earth, recent studies have shown that the liquidus might intersect the adiabat at mid-mantle depth and/or that solids might be buoyant at deep mantle conditions. These results require the consideration of more sophisticated scenarios. For instance, how does bottom-up crystallization look with buoyant crystals? To understand this complex dynamics, we develop a multiphase numerical code that can handle simultaneously phase change, the convection in each phase and in the slurry, as well as the compaction or decompaction of the two phases. Although our code can only run in a limited parameter range (Rayleigh number, viscosity contrast between phases, Prandtl number), it provides a rich dynamics that illustrates what could have happened. For a given liquidus/adiabat configuration and density contrast between melt and solid, we explore magma ocean scenarios by varying the relative timescales of three first-order processes: solid-liquid separation, thermo-chemical convective motions, and magma ocean cooling.

  13. Collisionless stellar hydrodynamics as an efficient alternative to N-body methods

    NASA Astrophysics Data System (ADS)

    Mitchell, Nigel L.; Vorobyov, Eduard I.; Hensler, Gerhard

    2013-01-01

    The dominant constituents of the Universe's matter are believed to be collisionless in nature and thus their modelling in any self-consistent simulation is extremely important. For simulations that deal only with dark matter or stellar systems, the conventional N-body technique is fast, memory efficient and relatively simple to implement. However when extending simulations to include the effects of gas physics, mesh codes are at a distinct disadvantage compared to Smooth Particle Hydrodynamics (SPH) codes. Whereas implementing the N-body approach into SPH codes is fairly trivial, the particle-mesh technique used in mesh codes to couple collisionless stars and dark matter to the gas on the mesh has a series of significant scientific and technical limitations. These include spurious entropy generation resulting from discreteness effects, poor load balancing and increased communication overhead which spoil the excellent scaling in massively parallel grid codes. In this paper we propose the use of the collisionless Boltzmann moment equations as a means to model the collisionless material as a fluid on the mesh, implementing it into the massively parallel FLASH Adaptive Mesh Refinement (AMR) code. This approach which we term `collisionless stellar hydrodynamics' enables us to do away with the particle-mesh approach and since the parallelization scheme is identical to that used for the hydrodynamics, it preserves the excellent scaling of the FLASH code already demonstrated on peta-flop machines. We find that the classic hydrodynamic equations and the Boltzmann moment equations can be reconciled under specific conditions, allowing us to generate analytic solutions for collisionless systems using conventional test problems. We confirm the validity of our approach using a suite of demanding test problems, including the use of a modified Sod shock test. By deriving the relevant eigenvalues and eigenvectors of the Boltzmann moment equations, we are able to use high order accurate characteristic tracing methods with Riemann solvers to generate numerical solutions which show excellent agreement with our analytic solutions. We conclude by demonstrating the ability of our code to model complex phenomena by simulating the evolution of a two-armed spiral galaxy whose properties agree with those predicted by the swing amplification theory.

  14. Heuristic rules embedded genetic algorithm for in-core fuel management optimization

    NASA Astrophysics Data System (ADS)

    Alim, Fatih

    The objective of this study was to develop a unique methodology and a practical tool for designing the loading pattern (LP) and burnable poison (BP) pattern for a given Pressurized Water Reactor (PWR) core. Because of the large number of possible combinations for the fuel assembly (FA) loading in the core, the design of the core configuration is a complex optimization problem. It requires finding an optimal FA arrangement and BP placement in order to achieve maximum cycle length while satisfying the safety constraints. Genetic Algorithms (GA) have already been used to solve this problem for LP optimization for both PWR and Boiling Water Reactor (BWR) cores. The GA, which is a stochastic method, works with a group of solutions and uses random variables to make decisions. Based on the theory of evolution, the GA involves natural selection and reproduction of the individuals in the population for the next generation. The GA works by creating an initial population, evaluating it, and then improving the population by using the evolutionary operators. To solve this optimization problem, a LP optimization package, the GARCO (Genetic Algorithm Reactor Code Optimization) code, is developed in the framework of this thesis. This code is applicable to all types of PWR cores having different geometries and structures with an unlimited number of FA types in the inventory. To reach this goal, an innovative GA is developed by modifying the classical representation of the genotype. To obtain the best result in a shorter time, not only the representation is changed but also the algorithm is changed to use in-core fuel management heuristic rules. The improved GA code was tested to demonstrate and verify the advantages of the new enhancements. The developed methodology is explained in this thesis and preliminary results are shown for the VVER-1000 reactor hexagonal-geometry core and the TMI-1 PWR. The core physics code used for the VVER in this research is Moby-Dick, which was developed to analyze the VVER by SKODA Inc. The SIMULATE-3 code, which is an advanced two-group nodal code, is used to analyze the TMI-1.
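    A minimal sketch of the GA loop the abstract describes (create an initial population, evaluate it, then improve it with selection, crossover, and mutation), using a toy bit-string fitness in place of the core-physics evaluation; the operator names and parameters are illustrative, not GARCO's.

    ```python
    import random

    def genetic_algorithm(fitness, random_individual, mutate, crossover,
                          pop_size=50, generations=100, elite=2):
        """Minimal GA loop: initialize, evaluate, and improve a population.
        The problem-specific encoding and operators are passed in."""
        population = [random_individual() for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=fitness, reverse=True)
            next_gen = ranked[:elite]                               # keep the best individuals
            while len(next_gen) < pop_size:
                p1, p2 = random.sample(ranked[: pop_size // 2], 2)  # select among the fitter half
                next_gen.append(mutate(crossover(p1, p2)))
            population = next_gen
        return max(population, key=fitness)

    # Toy example: maximize the number of ones in a bit string
    # (a real LP/BP optimization would encode assembly positions and poison placements)
    n = 20
    best = genetic_algorithm(
        fitness=sum,
        random_individual=lambda: [random.randint(0, 1) for _ in range(n)],
        mutate=lambda ind: [b ^ (random.random() < 0.05) for b in ind],
        crossover=lambda a, b: [random.choice(pair) for pair in zip(a, b)],
    )
    print(best, sum(best))
    ```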

  15. Nuclear Energy Knowledge and Validation Center (NEKVaC) Needs Workshop Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gougar, Hans

    2015-02-01

    The Department of Energy (DOE) has made significant progress developing simulation tools to predict the behavior of nuclear systems with greater accuracy and of increasing our capability to predict the behavior of these systems outside of the standard range of applications. These analytical tools require a more complex array of validation tests to accurately simulate the physics and multiple length and time scales. Results from modern simulations will allow experiment designers to narrow the range of conditions needed to bound system behavior and to optimize the deployment of instrumentation to limit the breadth and cost of the campaign. Modern validation, verification and uncertainty quantification (VVUQ) techniques enable analysts to extract information from experiments in a systematic manner and provide the users with a quantified uncertainty estimate. Unfortunately, the capability to perform experiments that would enable taking full advantage of the formalisms of these modern codes has progressed relatively little (with some notable exceptions in fuels and thermal-hydraulics); the majority of the experimental data available today is the "historic" data accumulated over the last decades of nuclear systems R&D. A validated code-model is a tool for users. An unvalidated code-model is useful for code developers to gain understanding, publish research results, attract funding, etc. As nuclear analysis codes have become more sophisticated, so have the measurement and validation methods and the challenges that confront them. A successful yet cost-effective validation effort requires expertise possessed only by a few, resources possessed only by the well-capitalized (or a willing collective), and a clear, well-defined objective (validating a code that is developed to satisfy the need(s) of an actual user). To that end, the Idaho National Laboratory established the Nuclear Energy Knowledge and Validation Center to address the challenges of modern code validation and to manage the knowledge from past, current, and future experimental campaigns. By pulling together the best minds involved in code development, experiment design, and validation to establish and disseminate best practices and new techniques, the Nuclear Energy Knowledge and Validation Center (NEKVaC or the ‘Center’) will be a resource for industry, DOE Programs, and academia validation efforts.

  16. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1977-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  17. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1976-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  18. Developing an eBook-Integrated High-Fidelity Mobile App Prototype for Promoting Child Motor Skills and Taxonomically Assessing Children's Emotional Responses Using Face and Sound Topology.

    PubMed

    Brown, William; Liu, Connie; John, Rita Marie; Ford, Phoebe

    2014-01-01

    Developing gross and fine motor skills and expressing complex emotion is critical for child development. We introduce "StorySense", an eBook-integrated mobile app prototype that can sense face and sound topologies and identify movement and expression to promote children's motor skills and emotional development. Currently, most interactive eBooks on mobile devices only leverage "low-motor" interaction (i.e. tapping or swiping). Our app senses a greater breadth of motion (e.g. clapping, snapping, and face tracking), and dynamically alters the storyline according to physical responses in ways that encourage the performance of predetermined motor skills ideal for a child's gross and fine motor development. In addition, our app can capture changes in facial topology, which can later be mapped using the Facial Action Coding System (FACS) for later interpretation of emotion. StorySense expands the human computer interaction vocabulary for mobile devices. Potential clinical applications include child development, physical therapy, and autism.

  19. Constraining heat-transport models by comparison to experimental data in a NIF hohlraum

    NASA Astrophysics Data System (ADS)

    Farmer, W. A.; Jones, O. S.; Barrios Garcia, M. A.; Koning, J. M.; Kerbel, G. D.; Strozzi, D. J.; Hinkel, D. E.; Moody, J. D.; Suter, L. J.; Liedahl, D. A.; Moore, A. S.; Landen, O. L.

    2017-10-01

    The accurate simulation of hohlraum plasma conditions is important for predicting the partition of energy and the symmetry of the x-ray field within a hohlraum. Electron heat transport within the hohlraum plasma is difficult to model due to the complex interaction of kinetic plasma effects, magnetic fields, laser-plasma interactions, and microturbulence. Here, we report simulation results using the radiation-hydrodynamic code, HYDRA, utilizing various physics packages (e.g., nonlocal Schurtz model, MHD, flux limiters) and compare to data from hohlraum plasma experiments which contain a Mn-Co tracer dot. In these experiments, the dot is placed in various positions in the hohlraum in order to assess the spatial variation of plasma conditions. Simulated data is compared to a variety of experimental diagnostics. Conclusions are given concerning how the experimental data does and does not constrain the physics models examined. This work was supported by the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  20. Secure communications using nonlinear silicon photonic keys.

    PubMed

    Grubel, Brian C; Bosworth, Bryan T; Kossey, Michael R; Cooper, A Brinton; Foster, Mark A; Foster, Amy C

    2018-02-19

    We present a secure communication system constructed using pairs of nonlinear photonic physical unclonable functions (PUFs) that harness physical chaos in integrated silicon micro-cavities. Compared to a large, electronically stored one-time pad, our method provisions large amounts of information within the intrinsically complex nanostructure of the micro-cavities. By probing a micro-cavity with a rapid sequence of spectrally-encoded ultrafast optical pulses and measuring the lightwave responses, we experimentally demonstrate the ability to extract 2.4 Gb of key material from a single micro-cavity device. Subsequently, in a secure communication experiment with pairs of devices, we achieve bit error rates below 10^-5 at code rates of up to 0.1. The PUFs' responses are never transmitted over the channel or stored in digital memory, thus enhancing the security of the system. Additionally, the micro-cavity PUFs are extremely small, inexpensive, robust, and fully compatible with telecommunications infrastructure, components, and electronic fabrication. This approach can serve one-time pad or public key exchange applications where high security is required.
