Science.gov

Sample records for accurate multiphysics thermo-fluid

  1. Multiphysics Thermal-Fluid Analysis of a Non-Nuclear Tester for Hot-Hydrogen Materials Development

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Foote, John; Litchford, Ron

    2006-01-01

    The objective of this effort is to analyze the thermal field of a non-nuclear tester, as a first step towards developing efficient and accurate multiphysics, thermo-fluid computational methodology to predict environments for hypothetical solid-core, nuclear thermal engine thrust chamber design and analysis. The computational methodology is based on a multidimensional, finite-volume, turbulent, chemically reacting, radiating, unstructured-grid, and pressure-based formulation. The multiphysics invoked in this study include hydrogen dissociation kinetics and thermodynamics, turbulent flow, convective, radiative and conjugate heat transfers.

  2. Computational thermo-fluid analysis of a disk brake

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; Kuraishi, Takashi; Tabata, Shinichiro; Takagi, Hirokazu

    2016-06-01

    We present computational thermo-fluid analysis of a disk brake, including thermo-fluid analysis of the flow around the brake and heat conduction analysis of the disk. The computational challenges include proper representation of the small-scale thermo-fluid behavior, high-resolution representation of the thermo-fluid boundary layers near the spinning solid surfaces, and bringing the heat transfer coefficient (HTC) calculated in the thermo-fluid analysis of the flow to the heat conduction analysis of the spinning disk. The disk brake model used in the analysis closely represents the actual configuration, and this adds to the computational challenges. The components of the method we have developed for computational analysis of this class of problems include the Space-Time Variational Multiscale method for coupled incompressible flow and thermal transport, the ST Slip Interface method for high-resolution representation of the thermo-fluid boundary layers near spinning solid surfaces, and a set of projection methods for different parts of the disk to bring the HTC calculated in the thermo-fluid analysis to the heat conduction analysis. With the HTC coming from the thermo-fluid analysis of the flow around the brake, we carry out the heat conduction analysis of the disk, from the start of the braking until the disk spinning stops, demonstrating how the method developed works in computational analysis of this complex and challenging problem.
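
    The heat transfer coefficient mentioned above is the quantity that carries the thermo-fluid solution into the heat conduction problem. For reference, a conventional definition is sketched below; the abstract does not state which reference temperature the authors use, so T_ref here is a generic film or free-stream temperature.

```latex
% Conventional heat transfer coefficient relating the wall heat flux q_w to a
% temperature difference (T_ref is an assumed generic reference temperature).
h = \frac{q_w}{T_w - T_{\mathrm{ref}}}
```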

  3. Multiphysics Nuclear Thermal Rocket Thrust Chamber Analysis

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    2005-01-01

    The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for hypothetical thrust chamber design and analysis. The current task scope is to perform multidimensional, multiphysics analysis of thrust performance and heat transfer for a hypothetical solid-core, nuclear thermal engine including thrust chamber and nozzle. The multiphysics aspects of the model include: real fluid dynamics, chemical reactivity, turbulent flow, and conjugate heat transfer. The model will be designed to identify thermal, fluid, and hydrogen environments in all flow paths and materials. This model would then be used to perform non-nuclear reproduction of the flow element failures demonstrated in the Rover/NERVA testing, investigate performance of specific configurations, and assess potential issues and enhancements. A two-pronged approach will be employed in this effort: a detailed analysis of a multi-channel flow element, and global modeling of the entire thrust chamber assembly with a porosity modeling technique. It is expected that the detailed analysis of a single flow element would provide detailed fluid, thermal, and hydrogen environments for stress analysis, while the global thrust chamber assembly analysis would promote understanding of the effects of hydrogen dissociation and heat transfer on thrust performance. These modeling activities will be validated as much as possible by testing performed by other related efforts.

  4. Standardization of Thermo-Fluid Modeling in Modelica.Fluid

    SciTech Connect

    Franke, Rudiger; Casella, Francesco; Sielemann, Michael; Proelss, Katrin; Otter, Martin; Wetter, Michael

    2009-09-01

    This article discusses the Modelica.Fluid library that has been included in the Modelica Standard Library 3.1. Modelica.Fluid provides interfaces and basic components for the device-oriented modeling of one-dimensional thermo-fluid flow in networks containing vessels, pipes, fluid machines, valves and fittings. A unique feature of Modelica.Fluid is that the component equations and the media models, as well as pressure loss and heat transfer correlations, are decoupled from each other. All components are implemented such that they can be used for media from the Modelica.Media library. This means that an incompressible or compressible medium, or a single- or multiple-substance medium with one or more phases, can be used with one and the same component model, as long as the underlying modeling assumptions hold. Furthermore, trace substances are supported. Modeling assumptions can be configured globally in an outer System object. This covers in particular the initialization, uni- or bi-directional flow, and dynamic or steady-state formulation of the mass, energy, and momentum balances. All assumptions can be locally refined for every component. While Modelica.Fluid contains a reasonable set of component models, the goal of the library is not to provide a comprehensive set of models, but rather to provide interfaces and best practices for the treatment of issues such as connector design and implementation of energy, mass and momentum balances. Applications from various domains are presented.
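
    Modelica.Fluid itself is written in Modelica, so the snippet below is only a minimal Python sketch of the decoupling idea described above: a component model (a lumped pipe segment with a quadratic pressure-loss correlation) written once and reused with any medium object that supplies the required property functions. All class and parameter names are illustrative and are not part of Modelica.Fluid or Modelica.Media.

```python
# Minimal sketch (not Modelica code): component equations kept independent of the medium model.

class IdealGas:
    """Illustrative medium model: supplies property functions only."""
    R = 287.0  # specific gas constant for air, J/(kg K)

    def density(self, p, T):
        return p / (self.R * T)


class IncompressibleLiquid:
    """A second medium; the component below works with either one unchanged."""
    def __init__(self, rho):
        self.rho = rho

    def density(self, p, T):
        return self.rho


class PipeSegment:
    """Component equation: quadratic pressure-loss correlation, medium-agnostic."""
    def __init__(self, medium, zeta=2.0, area=1e-3):
        self.medium, self.zeta, self.area = medium, zeta, area

    def mass_flow(self, p_in, p_out, T):
        rho = self.medium.density(0.5 * (p_in + p_out), T)
        dp = p_in - p_out
        # m_dot = A * sqrt(2 * rho * |dp| / zeta), sign-corrected for reverse flow
        return (1.0 if dp >= 0 else -1.0) * self.area * (2.0 * rho * abs(dp) / self.zeta) ** 0.5


# The same component model is reused with two different media:
for medium in (IdealGas(), IncompressibleLiquid(rho=1000.0)):
    pipe = PipeSegment(medium)
    print(type(medium).__name__, pipe.mass_flow(1.2e5, 1.0e5, 300.0))
```

    This separation is what lets a single component model serve compressible and incompressible media alike, provided its own modeling assumptions remain valid.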

  5. Multiphysics Thermal-Fluid Design Analysis of a Non-Nuclear Tester for Hot-Hydrogen Materials and Component Development

    SciTech Connect

    Wang, T.-S.; Foote, John; Litchford, Ron

    2006-01-20

    The objective of this effort is to perform design analyses for a non-nuclear hot-hydrogen materials tester, as a first step towards developing efficient and accurate multiphysics, thermo-fluid computational methodology to predict environments for hypothetical solid-core, nuclear thermal engine thrust chamber design and analysis. The computational methodology is based on a multidimensional, finite-volume, turbulent, chemically reacting, thermally radiating, unstructured-grid, and pressure-based formulation. The multiphysics invoked in this study include hydrogen dissociation kinetics and thermodynamics, turbulent flow, convective, and thermal radiative heat transfers. The goals of the design analyses are to maintain maximum hot-hydrogen jet impingement energy and to minimize chamber wall heating. The results of analyses on three test fixture configurations and the rationale for final selection are presented. The interrogation of physics revealed that reactions of hydrogen dissociation and recombination are highly correlated with local temperature and are necessary for accurate prediction of the hot-hydrogen jet temperature.

  6. Multiphysics Thermal-Fluid Design Analysis of a Non-Nuclear Tester for Hot-Hydrogen Materials and Component Development

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Foote, John; Litchford, Ron

    2006-01-01

    The objective of this effort is to perform design analyses for a non-nuclear hot-hydrogen materials tester, as a first step towards developing efficient and accurate multiphysics, thermo-fluid computational methodology to predict environments for hypothetical solid-core, nuclear thermal engine thrust chamber design and analysis. The computational methodology is based on a multidimensional, finite-volume, turbulent, chemically reacting, thermally radiating, unstructured-grid, and pressure-based formulation. The multiphysics invoked in this study include hydrogen dissociation kinetics and thermodynamics, turbulent flow, convective, and thermal radiative heat transfers. The goals of the design analyses are to maintain maximum hot-hydrogen jet impingement energy and to minimize chamber wall heating. The results of analyses on three test fixture configurations and the rationale for final selection are presented. The interrogation of physics revealed that reactions of hydrogen dissociation and recombination are highly correlated with local temperature and are necessary for accurate prediction of the hot-hydrogen jet temperature.

  7. Multiphysics Thrust Chamber Modeling for Nuclear Thermal Propulsion

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Cheng, Gary; Chen, Yen-Sen

    2006-01-01

    The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for a solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation. A two-pronged approach is employed in this effort: A detailed thermo-fluid analysis on a multi-channel flow element for mid-section corrosion investigation; and a global modeling of the thrust chamber to understand the effect of heat transfer on thrust performance. Preliminary results on both aspects are presented.

  8. PREFACE: 32nd UIT (Italian Union of Thermo-fluid-dynamics) Heat Transfer Conference

    NASA Astrophysics Data System (ADS)

    2014-11-01

    The annual Conference of the "Unione Italiana di Termofluidodinamica" (UIT) aims to promote cooperation in the field of heat transfer and thermal sciences by bringing together scientists and engineers working in related areas. The 32nd UIT Conference was held in Pisa, from the 23rd to the 25th of June, 2014 in the buildings of the School of Engineering, just a few months after the celebration of the 100th anniversary of the first Institution of the School of Engineering at the University of Pisa. The response was very good, with more than 100 participants and 80 high-quality contributions from 208 authors on seven different heat transfer related topics: Heat transfer and efficiency in energy systems, environmental technologies, and buildings (25 papers); Micro and nano scale thermo-fluid dynamics (9 papers); Multi-phase fluid dynamics, heat transfer and interface phenomena (14 papers); Computational fluid dynamics and heat transfer (10 papers); Heat transfer in nuclear plants (8 papers); Natural, forced and mixed convection (10 papers); and Conduction and radiation (4 papers). To encourage the debate, the Conference Program scheduled 16 oral sessions (44 papers), three ample poster sessions (36 papers) and four invited lectures given by experts in the various fields, both from industry and from academia. Keynote Lectures were given by Dr. Roberto Parri (ENEL, Italy), Prof. Peter Stephan (TU Darmstadt, Germany), Prof. Bruno Panella (Politecnico di Torino), and Prof. Sara Rainieri (Università di Parma). This special volume collects a selection of the scientific contributions discussed during this conference. A total of 46 contributions, two keynote lectures and 44 papers from both oral and poster sessions, have been selected for publication in this special issue, after a second accurate revision process. These works give a good overview of the state of the art of Italian research in the field of heat transfer related topics at the time. The editors of the

  9. Generalized Fluid System Simulation Program (GFSSP) Version 6 - General Purpose Thermo-Fluid Network Analysis Software

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Leclair, Andre; Moore, Ric; Schallhorn, Paul

    2011-01-01

    GFSSP stands for Generalized Fluid System Simulation Program. It is a general-purpose computer program to compute pressure, temperature, and flow distribution in a flow network. GFSSP calculates pressure, temperature, and concentrations at nodes, and calculates flow rates through branches. It was primarily developed to analyze internal flow in a turbopump and transient flow in a propulsion system. GFSSP development started in 1994 with the objective of providing a generalized and easy-to-use flow analysis tool for thermo-fluid systems.
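
    The abstract does not spell out GFSSP's equations, so the following is only a minimal sketch under typical flow-network assumptions: unknown pressures at internal nodes, branch flow rates from a simple resistance law, and mass conservation enforced at every node with a standard nonlinear solver. The conductances, boundary pressures, and resistance law are invented for illustration and are not GFSSP inputs.

```python
# Illustrative thermo-fluid network solve (not GFSSP itself): two internal nodes between
# fixed-pressure boundaries, three branches in series, mass conserved at each node.
import numpy as np
from scipy.optimize import fsolve

P_IN, P_OUT = 5.0e5, 1.0e5              # boundary pressures [Pa], illustrative values
K = np.array([1.0e-6, 2.0e-6, 1.5e-6])  # branch conductances: mdot = K * sqrt(|dp|)

def branch_flow(k, p_up, p_dn):
    dp = p_up - p_dn
    return np.sign(dp) * k * np.sqrt(abs(dp))

def node_residuals(p):
    p1, p2 = p
    m1 = branch_flow(K[0], P_IN, p1)
    m2 = branch_flow(K[1], p1, p2)
    m3 = branch_flow(K[2], p2, P_OUT)
    # Net mass flow into each internal node must vanish.
    return [m1 - m2, m2 - m3]

p_nodes = fsolve(node_residuals, x0=[4.0e5, 2.0e5])
print("internal node pressures [Pa]:", p_nodes)
```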

  10. CFD Multiphysics Tool

    NASA Technical Reports Server (NTRS)

    Perrell, Eric R.

    2005-01-01

    The recent bold initiatives to expand the human presence in space require innovative approaches to the design of propulsion systems whose underlying technology is not yet mature. The space propulsion community has identified a number of candidate concepts. A short list includes solar sails, high-energy-density chemical propellants, electric and electromagnetic accelerators, solar-thermal and nuclear-thermal expanders. For each of these, the underlying physics are relatively well understood. One could easily cite authoritative texts addressing both the governing equations and practical solution methods for, e.g., electromagnetic fields, heat transfer, radiation, thermophysics, structural dynamics, particulate kinematics, nuclear energy, power conversion, and fluid dynamics. One could also easily cite scholarly works in which complete equation sets for any one of these physical processes have been accurately solved relative to complex engineered systems. The Advanced Concepts and Analysis Office (ACAO), Space Transportation Directorate, NASA Marshall Space Flight Center, has recently released the first alpha version of a set of computer utilities for performing the applicable physical analyses relative to candidate deep-space propulsion systems such as those listed above. PARSEC, Preliminary Analysis of Revolutionary in-Space Engineering Concepts, enables rapid iterative calculations using several physics tools developed in-house. A complete cycle of the entire tool set takes about twenty minutes. PARSEC is a level-zero/level-one design tool. For PARSEC's proof-of-concept, and preliminary design decision-making, assumptions that significantly simplify the governing equation sets are necessary. To proceed to level-two, one wishes to retain modeling of the underlying physics as close as practical to known applicable first principles. This report describes results of collaboration between ACAO, and Embry-Riddle Aeronautical University (ERAU), to begin building a set of

  11. Multiphysics simulations for LWR analysis

    SciTech Connect

    Hamilton, S.; Clarno, K.; Berrill, M.; Evans, T.; Davidson, G.; Lefebvre, R.; Sampath, R.; Hansel, J.; Ragusa, J.; Josey, C.

    2013-07-01

    Accurate prediction of the neutron and temperature distributions within an operating nuclear reactor requires the solution of multiple coupled physics equations. In a light water reactor (LWR), there is a very strong coupling between the power distribution (described by the radiation transport equation) and the temperature and density distributions (described by a thermal diffusion equation in combination with a fluid flow model). This study aims to begin to quantify the impact of such feedback mechanisms as well as identify numerical difficulties associated with such multiphysics problems. A description of the multiphysics model and current solution strategy within the Exnihilo code package for coupling between 3-D radiation transport and 3-D heat transfer is given. Numerical results detailing the effects of varying the nature of the coupling and the impact of mesh refinement for a representative 3x3 pressurized water reactor (PWR) 'mini-assembly' are presented. (authors)
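
    A common way to realize the power-temperature feedback described above is a Picard (fixed-point) iteration between the two solvers. The toy sketch below uses scalar stand-ins for the transport and heat-transfer solves purely to show the iteration structure; it is not Exnihilo's algorithm, and the feedback coefficients are invented.

```python
# Toy Picard iteration between a "power" solve and a "temperature" solve.
# Only the structure is meaningful; the coefficients are made up.

def solve_power(T):
    # Stand-in for the radiation transport solve: power falls as fuel
    # temperature rises (Doppler-like negative feedback), normalized units.
    return 1.0 / (1.0 + 2.0e-4 * (T - 600.0))

def solve_temperature(q):
    # Stand-in for the thermal/fluid solve: temperature rises with local power.
    return 600.0 + 300.0 * q

T, relax = 600.0, 0.5
for it in range(50):
    q = solve_power(T)
    T_new = solve_temperature(q)
    if abs(T_new - T) < 1e-6:
        break
    T = (1.0 - relax) * T + relax * T_new  # under-relaxation for robustness

print(f"stopped after {it + 1} iterations: q = {q:.4f}, T = {T_new:.2f} K")
```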

  12. PREFACE: 33rd UIT (Italian Union of Thermo-fluid dynamics) Heat Transfer Conference

    NASA Astrophysics Data System (ADS)

    Paoletti, Domenica; Ambrosini, Dario; Sfarra, Stefano

    2015-11-01

    The 33rd UIT (Italian Union of Thermo-Fluid Dynamics) Heat Transfer Conference was organized by the Dept. of Industrial and Information Engineering and Economics, University of L'Aquila (Italy) and was held at the Engineering Campus of Monteluco di Roio, L'Aquila, June 22-24, 2015. The annual UIT conference, which has grown over time, came back to L'Aquila after 21 years. The scope of the conference covers a range of major topics in theoretical, numerical and experimental heat transfer and related areas, ranging from energy efficiency to nuclear plants. This year, there was an emphasis on IR thermography, which is growing in importance both in scientific research and industrial applications. 2015 is also the International Year of Light. The Organizing Committee honored this event by introducing a new section, Technical Seminars, which in this edition was mainly devoted to optical flow visualization (also the subject of three different national workshops organized in L'Aquila by UIT in 2003, 2005 and 2008). The conference was held in the recently repaired Engineering buildings, six years after the 2009 earthquake and 50 years after the beginning of the Engineering courses in L'Aquila. Despite some logistical difficulties, 92 papers were submitted by about 270 authors, on eight different topics: heat transfer and efficiency in energy systems, environmental technologies and buildings (32 papers); micro and nano scale thermo-fluid dynamics (5 papers); multi-phase fluid dynamics, heat transfer and interface phenomena (16 papers); computational fluid dynamics and heat transfer (15 papers); heat transfer in nuclear plants (6 papers); natural, forced and mixed convection (6 papers); IR thermography (4 papers); conduction and radiation (3 papers). The conference program scheduled plenary, oral and poster sessions. The three invited plenary Keynote Lectures were given by Prof. Antonio Barletta (University of Bologna, Italy), Prof. Jean-Christophe Batsale (Arts et Metiers

  13. Effects of finiteness on the thermo-fluid-dynamics of natural convection above horizontal plates

    NASA Astrophysics Data System (ADS)

    Guha, Abhijit; Sengupta, Sayantan

    2016-06-01

    A rigorous and systematic computational and theoretical study, the first of its kind, for the laminar natural convective flow above rectangular horizontal surfaces of various aspect ratios ϕ (from 1 to ∞) is presented. Two-dimensional computational fluid dynamic (CFD) simulations (for ϕ → ∞) and three-dimensional CFD simulations (for 1 ≤ ϕ < ∞) are performed to establish and elucidate the role of finiteness of the horizontal planform on the thermo-fluid-dynamics of natural convection. Great care is taken here to ensure grid independence and domain independence of the presented solutions. The results of the CFD simulations are compared with experimental data and similarity theory to understand how the existing simplified results fit, in the appropriate limiting cases, with the complex three-dimensional solutions revealed here. The present computational study establishes the region of a high-aspect-ratio planform over which the results of the similarity theory are approximately valid, the extent of this region depending on the Grashof number. There is, however, a region near the edge of the plate and another region near the centre of the plate (where a plume forms) in which the similarity theory results do not apply. The sizes of these non-compliance zones decrease as the Grashof number is increased. The present study also shows that the similarity velocity profile is not strictly obtained at any location over the plate because of the entrainment effect of the central plume. The 3-D CFD simulations of the present paper are coordinated to clearly reveal the separate and combined effects of three important aspects of finiteness: the presence of leading edges, the presence of planform centre, and the presence of physical corners in the planform. It is realised that the finiteness due to the presence of physical corners in the planform arises only for a finite value of ϕ in the case of 3-D CFD simulations (and not in 2-D CFD simulations or similarity theory
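
    For reference, the Grashof number that controls the extent of the similarity region is the standard buoyancy parameter shown below; the abstract does not state which characteristic length the authors adopt, so L here is a generic planform length scale.

```latex
% Standard Grashof number for natural convection above a heated horizontal plate
% (L is an assumed generic characteristic length of the planform).
Gr_L = \frac{g \, \beta \, (T_w - T_\infty) \, L^{3}}{\nu^{2}}
```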

  14. Multiphysics Analysis of a Solid-Core Nuclear Thermal Engine Thrust Chamber

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Canabal, Francisco; Cheng, Gary; Chen, Yen-Sen

    2006-01-01

    The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for a hypothetical solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics methodology. Formulations for heat transfer in solids and porous media were implemented and anchored. A two-pronged approach was employed in this effort: A detailed thermo-fluid analysis on a multi-channel flow element for mid-section corrosion investigation; and a global modeling of the thrust chamber to understand the effect of hydrogen dissociation and recombination on heat transfer and thrust performance. The formulations and preliminary results on both aspects are presented.

  15. Multiphysics Application Coupling Toolkit

    SciTech Connect

    Campbell, Michael T.

    2013-12-02

    This particular consortium implementation of the software integration infrastructure will, in large part, refactor portions of the Rocstar multiphysics infrastructure. Development of this infrastructure originated at the University of Illinois DOE ASCI Center for Simulation of Advanced Rockets (CSAR) to support the center's massively parallel multiphysics simulation application, Rocstar, and has continued at IllinoisRocstar, a small company formed near the end of the University-based program. IllinoisRocstar is now licensing these new developments as free, open source, in hopes to help improve their own and others' access to infrastructure which can be readily utilized in developing coupled or composite software systems; with particular attention to more rapid production and utilization of multiphysics applications in the HPC environment. There are two major pieces to the consortium implementation, the Application Component Toolkit (ACT), and the Multiphysics Application Coupling Toolkit (MPACT). The current development focus is the ACT, which is (will be) the substrate for MPACT. The ACT itself is built up from the components described in the technical approach. In particular, the ACT has the following major components: 1. The Component Object Manager (COM): The COM package provides encapsulation of user applications, and their data. COM also provides the inter-component function call mechanism. 2. The System Integration Manager (SIM): The SIM package provides constructs and mechanisms for orchestrating composite systems of multiply integrated pieces.

  16. Multiphysics Application Coupling Toolkit

    2013-12-02

    This particular consortium implementation of the software integration infrastructure will, in large part, refactor portions of the Rocstar multiphysics infrastructure. Development of this infrastructure originated at the University of Illinois DOE ASCI Center for Simulation of Advanced Rockets (CSAR) to support the center's massively parallel multiphysics simulation application, Rocstar, and has continued at IllinoisRocstar, a small company formed near the end of the University-based program. IllinoisRocstar is now licensing these new developments as free, open source, in hopes to help improve their own and others' access to infrastructure which can be readily utilized in developing coupled or composite software systems; with particular attention to more rapid production and utilization of multiphysics applications in the HPC environment. There are two major pieces to the consortium implementation, the Application Component Toolkit (ACT), and the Multiphysics Application Coupling Toolkit (MPACT). The current development focus is the ACT, which is (will be) the substrate for MPACT. The ACT itself is built up from the components described in the technical approach. In particular, the ACT has the following major components: 1. The Component Object Manager (COM): The COM package provides encapsulation of user applications, and their data. COM also provides the inter-component function call mechanism. 2. The System Integration Manager (SIM): The SIM package provides constructs and mechanisms for orchestrating composite systems of multiply integrated pieces.

  17. Unstructured Finite Volume Computational Thermo-Fluid Dynamic Method for Multi-Disciplinary Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul

    1998-01-01

    This paper describes a finite volume computational thermo-fluid dynamics method to solve the Navier-Stokes equations in conjunction with the energy equation and a thermodynamic equation of state in an unstructured coordinate system. The system of equations has been solved by a simultaneous Newton-Raphson method and compared with several benchmark solutions. Excellent agreement has been obtained in each case, and the method has been found to be significantly faster than conventional Computational Fluid Dynamics (CFD) methods; it therefore has the potential for implementation in multi-disciplinary analysis and design optimization of fluid and thermal systems. The paper also describes an algorithm for design optimization based on the Newton-Raphson method which has recently been tested in a turbomachinery application.
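
    As a generic illustration of the simultaneous Newton-Raphson strategy named above (not the paper's discretization or equations), the sketch below applies Newton's method with a finite-difference Jacobian to a small coupled nonlinear system; the three residuals are invented algebraic stand-ins, not physical models.

```python
# Generic simultaneous Newton-Raphson on a small coupled nonlinear system,
# with a finite-difference Jacobian. Purely illustrative of the solution strategy.
import numpy as np

def residuals(x):
    u, p, T = x
    return np.array([
        u**2 + 0.1 * p - 1.0,     # stand-in coupling of "velocity" and "pressure"
        p - 2.0 * u + 0.05 * T,   # stand-in coupling of "pressure" and "temperature"
        T - 300.0 - 10.0 * u**2,  # stand-in "energy" relation
    ])

def jacobian(f, x, eps=1e-7):
    fx = f(x)
    J = np.zeros((len(x), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - fx) / eps
    return J

x = np.array([1.0, 1.0, 300.0])
for it in range(20):
    r = residuals(x)
    if np.linalg.norm(r) < 1e-10:
        break
    x = x - np.linalg.solve(jacobian(residuals, x), r)

print(f"converged in {it} Newton iterations: x = {x}")
```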

  18. Mingus Discontinuous Multiphysics

    2014-05-13

    Mingus provides hybrid coupled local/non-local mechanics analysis capabilities that extend several traditional methods to applications with inherent discontinuities. Its primary features include adaptations of solid mechanics, fluid dynamics and digital image correlation that naturally accommodate disjointed data or irregular solution fields by assimilating a variety of discretizations (such as control volume finite elements, peridynamics and meshless control point clouds). The goal of this software is to provide an analysis framework for multiphysics engineering problems with an integrated image correlation capability that can be used for experimental validation and model

  19. Mingus Discontinuous Multiphysics

    SciTech Connect

    Pat Notz, Dan Turner

    2014-05-13

    Mingus provides hybrid coupled local/non-local mechanics analysis capabilities that extend several traditional methods to applications with inherent discontinuities. Its primary features include adaptations of solid mechanics, fluid dynamics and digital image correlation that naturally accommodate disjointed data or irregular solution fields by assimilating a variety of discretizations (such as control volume finite elements, peridynamics and meshless control point clouds). The goal of this software is to provide an analysis framework for multiphysics engineering problems with an integrated image correlation capability that can be used for experimental validation and model

  20. Thermo-fluid dynamics and corrosion analysis of a self cooled lead lithium blanket for the HiPER reactor

    NASA Astrophysics Data System (ADS)

    Juárez, R.; Zanzi, C.; Hernández, J.; Sanz, J.

    2015-09-01

    The HiPER reactor is the HiPER project phase devoted to power production. To reach a preliminary reactor design, tritium breeding schemes need to be adapted to the HiPER project technology selection: direct drive ignition, 150 MJ/shot × 10 Hz of power released through fusion reactions, and the dry first wall scheme. In this paper we address the main challenge of the HiPER EUROFER-based self cooled lead lithium blanket, which is related to the corrosive behavior of Pb-15.7Li in contact with EUROFER. We evaluate the cooling and corrosion behavior of the so-called separated first wall blanket (SFWB) configuration by performing thermo-fluid dynamics simulations using a large eddy simulation approach. Despite the expected improvement over the integrated first wall blanket, we still find an unsatisfactory cooling performance, expressed as a low outlet Pb-15.7Li temperature and excessively high corrosion rates derived from locally high Pb-15.7Li temperature and velocity, which can mainly be attributed to the geometry of the channels. Nevertheless, the analysis allowed us to devise future modifications of the SFWB to overcome the limitations found with the present design.

  1. PREFACE: 31st UIT (Italian Union of Thermo-fluid-dynamics) Heat Transfer Conference 2013

    NASA Astrophysics Data System (ADS)

    Vitali, Luigi; Niro, Alfonso; Colombo, Luigi; Sotgia, Giorgio

    2014-04-01

    The annual Conference of the "Unione Italiana di Termofluidodinamica" (UIT) aims at promoting cooperation in the field of heat transfer and thermal sciences, by bringing together scientists and engineers working in related areas. The 31st UIT Conference was held in Moltrasio (Como), Italy, 25-27 June, 2013 at the Grand Hotel Imperiale. The response has been enthusiastic, with more than 70 quality contributions from 224 authors on heat transfer related topics: natural, forced and mixed convection, conduction, radiation, multi-phase fluid dynamics and interface phenomena, computational fluid dynamics, micro- and nano-scales, efficiency in energy systems, environmental technologies and buildings. To encourage the debate, the Conference Program scheduled ample poster sessions and invited lectures from the best experts in the field along with a few of the most talented researchers. Keynote Lectures were given by Professor Roberto Mauri (University of Pisa), Professor Lounés Tadrist (Polytech Marseille) and Professor Maurizio Quadrio (Politecnico di Milano). This special volume collects a selection of the scientific contributions discussed during this conference; these works give a good overview of the state-of-the-art Italian research in the field of heat transfer related topics. I would like to sincerely thank the authors for presenting their works at the conference and in this special issue. I would also like to extend my thanks to the Scientific Committee and the authors for their accurate review process of each paper for this special issue. Special thanks go to the organizing committee and to our sponsors. As a professor of Politecnico di Milano, let me say I am very proud to have been the chair of this conference in the 150th anniversary year of my university. Professor Alfonso Niro. Details of organizers, sponsors and committees, as well as further information, are available in the PDF

  2. Multiphysics Simulations: Challenges and Opportunities

    SciTech Connect

    Keyes, David; McInnes, Lois C.; Woodward, Carol; Gropp, William; Myra, Eric; Pernice, Michael; Bell, John; Brown, Jed; Clo, Alain; Connors, Jeffrey; Constantinescu, Emil; Estep, Don; Evans, Kate; Farhat, Charbel; Hakim, Ammar; Hammond, Glenn E.; Hansen, Glen; Hill, Judith; Isaac, Tobin; Jiao, Xiangmin; Jordan, Kirk; Kaushik, Dinesh; Kaxiras, Efthimios; Koniges, Alice; Lee, Ki Hwan; Lott, Aaron; Lu, Qiming; Magerlein, John; Maxwell, Reed M.; McCourt, Michael; Mehl, Miriam; Pawlowski, Roger; Randles, Amanda; Reynolds, Daniel; Riviere, Beatrice; Rude, Ulrich; Scheibe, Timothy D.; Shadid, John; Sheehan, Brendan; Shephard, Mark; Siegel, Andrew; Smith, Barry; Tang, Xianzhu; Wilson, Cian; Wohlmuth, Barbara

    2013-02-12

    We consider multiphysics applications from algorithmic and architectural perspectives, where ‘‘algorithmic’’ includes both mathematical analysis and computational complexity, and ‘‘architectural’’ includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities.
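
    The "common algebraic coupling paradigm" referred to above can be written abstractly as two residual systems that share unknowns; the generic notation below is for illustration and is not necessarily the paper's, but it is the form in which fully coupled Newton solves and partitioned (operator-split) iterations are usually compared.

```latex
% Generic two-physics algebraic coupling: u_1, u_2 are the discrete unknowns of
% each physics component and F_1, F_2 their residuals.
F_1(u_1, u_2) = 0, \qquad F_2(u_1, u_2) = 0,
\qquad
J =
\begin{pmatrix}
  \partial F_1 / \partial u_1 & \partial F_1 / \partial u_2 \\
  \partial F_2 / \partial u_1 & \partial F_2 / \partial u_2
\end{pmatrix}
% A fully coupled Newton step solves J \delta u = -F, whereas a Gauss-Seidel
% (operator-split) iteration alternates solves of F_1 and F_2 with lagged coupling terms.
```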

  3. Multiphysics Simulations: Challenges and Opportunities

    SciTech Connect

    Keyes, David E; McInnes, Lois; Woodward, Carol; Evans, Katherine J; Hill, Judith C

    2013-01-01

    We consider multiphysics applications from algorithmic and architectural perspectives, where algorithmic includes both mathematical analysis and computational complexity and architectural includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities. We also initiate a modest suite of test problems encompassing features present in many applications.

  4. Multiphysics simulations: challenges and opportunities.

    SciTech Connect

    Keyes, D.; McInnes, L. C.; Woodward, C.; Gropp, W.; Myra, E.; Pernice, M.

    2012-11-29

    This report is an outcome of the workshop Multiphysics Simulations: Challenges and Opportunities, sponsored by the Institute of Computing in Science (ICiS). Additional information about the workshop, including relevant reading and presentations on multiphysics issues in applications, algorithms, and software, is available via https://sites.google.com/site/icismultiphysics2011/. We consider multiphysics applications from algorithmic and architectural perspectives, where 'algorithmic' includes both mathematical analysis and computational complexity and 'architectural' includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities. We also initiate a modest suite of test problems encompassing features present in many applications.

  5. Multiphysics analysis of liquid metal annular linear induction pumps: A project overview

    DOE PAGES Beta

    Maidana, Carlos Omar; Nieminen, Juha E.

    2016-03-14

    Liquid metal-cooled fission reactors are both moderated and cooled by a liquid metal solution. These reactors are typically very compact and they can be used in regular electric power production, for naval and space propulsion systems or in fission surface power systems for planetary exploration. The coupling between the electromagnetic and thermo-fluid mechanical phenomena observed in liquid metal thermo-magnetic systems for nuclear and space applications gives rise to complex engineering magnetohydrodynamics and numerical problems. It is known that electromagnetic pumps have a number of advantages over rotating mechanisms: absence of moving parts, low noise and vibration level, simplicity of flow rate regulation, easy maintenance and so on. However, while developing annular linear induction pumps, we are faced with a significant problem of magnetohydrodynamic instability arising in the device. The complex flow behavior in this type of device includes a time-varying Lorentz force and pressure pulsation due to the time-varying electromagnetic fields and the induced convective currents that originate from the liquid metal flow, leading to instability problems along the device geometry. The determination of the geometry and electrical configuration of liquid metal thermo-magnetic devices gives rise to a complex inverse magnetohydrodynamic field problem where techniques for global optimization should be used, magnetohydrodynamic instabilities understood (or quantified) and multiphysics models developed and analyzed. Lastly, we present a project overview as well as a few computational models developed to study liquid metal annular linear induction pumps using first principles, and a few results of our multi-physics analysis.

  6. A Posteriori Analysis of Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    SciTech Connect

    Donald Estep; Michael Holst; Simon Tavener

    2010-02-08

    This project was concerned with the accurate computational error estimation for numerical solutions of multiphysics, multiscale systems that couple different physical processes acting across a large range of scales relevant to the interests of the DOE. Multiscale, multiphysics models are characterized by intimate interactions between different physics across a wide range of scales. This poses significant computational challenges addressed by the proposal, including: (1) Accurate and efficient computation; (2) Complex stability; and (3) Linking different physics. The research in this project focused on Multiscale Operator Decomposition methods for solving multiphysics problems. The general approach is to decompose a multiphysics problem into components involving simpler physics over a relatively limited range of scales, and then to seek the solution of the entire system through some sort of iterative procedure involving solutions of the individual components. MOD is a very widely used technique for solving multiphysics, multiscale problems; it is heavily used throughout the DOE computational landscape. This project made a major advance in the analysis of the solution of multiscale, multiphysics problems.

  7. Biofouling and microbial corrosion problem in the thermo-fluid heat exchanger and cooling water system of a nuclear test reactor.

    PubMed

    Rao, T S; Kora, Aruna Jyothi; Chandramohan, P; Panigrahi, B S; Narasimhan, S V

    2009-10-01

    This article discusses aspects of biofouling and corrosion in the thermo-fluid heat exchanger (TFHX) and in the cooling water system of a nuclear test reactor. During inspection, it was observed that >90% of the TFHX tube bundle was clogged with thick fouling deposits. Both X-ray diffraction and Mossbauer analyses of the fouling deposit demonstrated iron corrosion products. The exterior of the tubercle showed the presence of a calcium and magnesium carbonate mixture along with iron oxides. Raman spectroscopy analysis confirmed the presence of calcium carbonate scale in the calcite phase. The interior of the tubercle contained significant iron sulphide, magnetite and iron-oxy-hydroxide. A microbiological assay showed a considerable population of iron oxidizing bacteria and sulphate reducing bacteria (10^5 to 10^6 cfu g^-1 of deposit). As the temperature of the TFHX is in the range of 45-50 degrees C, the microbiota isolated/assayed from the fouling deposit are designated as thermo-tolerant bacteria. The mean corrosion rate of the CS coupons exposed online was approximately 2.0 mpy and the microbial counts of various corrosion causing bacteria were in the range 10^3 to 10^5 cfu ml^-1 in the cooling water and 10^6 to 10^8 cfu ml^-1 in the biofilm. PMID:20183117

  8. Multiphysics Integrated Coupling Environment (MICE) User Manual

    SciTech Connect

    Varija Agarwal; Donna Post Guillen

    2013-08-01

    The complex, multi-part nature of waste glass melters used in nuclear waste vitrification poses significant modeling challenges. The focus of this project has been to couple a 1D MATLAB model of the cold cap region within a melter with a 3D STAR-CCM+ model of the melter itself. The Multiphysics Integrated Coupling Environment (MICE) has been developed to create a cohesive simulation of a waste glass melter that accurately represents the cold cap. The one-dimensional mathematical model of the cold cap uses material properties, axial heat, and mass fluxes to obtain a temperature profile for the cold cap, the region where feed-to-glass conversion occurs. The results from Matlab are used to update simulation data in the three-dimensional STAR-CCM+ model so that the cold cap is appropriately incorporated into the 3D simulation. The two processes are linked through ModelCenter integration software using time steps that are specified for each process. Data is to be exchanged circularly between the two models, as the inputs and outputs of each model depend on the other.

  9. Multiphysics Object Oriented Simulation Environment

    2014-02-12

    The Multiphysics Object Oriented Simulation Environment (MOOSE) software library developed at Idaho National Laboratory is a tool. MOOSE, like other tools, doesn’t actually complete a task. Instead, MOOSE seeks to reduce the effort required to create engineering simulation applications. MOOSE itself is a software library: a blank canvas upon which you write equations and then MOOSE can help you solve them. MOOSE is comparable to a spreadsheet application. A spreadsheet, by itself, doesn’t do anything. Only once equations are entered into it will a spreadsheet application compute anything. Such is the same for MOOSE. An engineer or scientist can utilize the equation solvers within MOOSE to solve equations related to their area of study. For instance, a geomechanical scientist can input equations related to water flow in underground reservoirs and MOOSE can solve those equations to give the scientist an idea of how water could move over time. An engineer might input equations related to the forces in steel beams in order to understand the load bearing capacity of a bridge. Because MOOSE is a blank canvas it can be useful in many scientific and engineering pursuits.

  10. Multiphysics Object Oriented Simulation Environment

    SciTech Connect

    2014-02-12

    The Multiphysics Object Oriented Simulation Environment (MOOSE) software library developed at Idaho National Laboratory is a tool. MOOSE, like other tools, doesn’t actually complete a task. Instead, MOOSE seeks to reduce the effort required to create engineering simulation applications. MOOSE itself is a software library: a blank canvas upon which you write equations and then MOOSE can help you solve them. MOOSE is comparable to a spreadsheet application. A spreadsheet, by itself, doesn’t do anything. Only once equations are entered into it will a spreadsheet application compute anything. Such is the same for MOOSE. An engineer or scientist can utilize the equation solvers within MOOSE to solve equations related to their area of study. For instance, a geomechanical scientist can input equations related to water flow in underground reservoirs and MOOSE can solve those equations to give the scientist an idea of how water could move over time. An engineer might input equations related to the forces in steel beams in order to understand the load bearing capacity of a bridge. Because MOOSE is a blank canvas it can be useful in many scientific and engineering pursuits.

  11. Multiphysics Applications of ACE3P

    SciTech Connect

    K.H. Lee, C. Ko, Z. Li, C.-K. Ng, L. Xiao, G. Cheng, H. Wang

    2012-07-01

    The TEM3P module of ACE3P, a parallel finite-element electromagnetic code suite from SLAC, focuses on the multiphysics simulation capabilities, including thermal and mechanical analysis for accelerator applications. In this paper, thermal analysis of coupler feedthroughs to superconducting rf (SRF) cavities will be presented. For the realistic simulation, an internal boundary condition is implemented to capture RF heating effects on the surface shared by a dielectric and a conductor. The multiphysics simulation with TEM3P matched the measurement within 0.4%.

  12. Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models

    SciTech Connect

    Cetiner, Mustafa Sacit; none,; Flanagan, George F.; Poore III, Willis P.; Muhlheim, Michael David

    2014-07-30

    An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C+, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.

  13. Multiphysics simulation of corona discharge induced ionic wind

    NASA Astrophysics Data System (ADS)

    Cagnoni, Davide; Agostini, Francesco; Christen, Thomas; Parolini, Nicola; Stevanović, Ivica; de Falco, Carlo

    2013-12-01

    Ionic wind devices or electrostatic fluid accelerators are becoming of increasing interest as tools for thermal management, in particular for semiconductor devices. In this work, we present a numerical model for predicting the performance of such devices; its main benefit is the ability to accurately predict the amount of charge injected from the corona electrode. Our multiphysics numerical model consists of a highly nonlinear, strongly coupled set of partial differential equations including the Navier-Stokes equations for fluid flow, Poisson's equation for electrostatic potential, charge continuity, and heat transfer equations. To solve this system we employ a staggered solution algorithm that generalizes Gummel's algorithm for charge transport in semiconductors. Predictions of our simulations are verified and validated by comparison with experimental measurements of integral physical quantities, which are shown to closely match.
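
    The staggered algorithm that generalizes Gummel's method, as described above, cycles through the individual field solves with the other fields frozen rather than attacking the fully coupled system at once. The skeleton below shows only that control flow, with scalar placeholder "solvers" standing in for Poisson, charge transport, flow, and heat transfer; it is not the authors' code, and every coefficient and tolerance is invented.

```python
# Skeleton of a staggered (Gummel-type) multiphysics loop: each field is solved in turn
# with the others frozen, and the sweep repeats until the updates are small.
# The "solvers" are scalar placeholders, purely to show the control flow.

def solve_potential(rho_q):          # placeholder for Poisson's equation
    return 1.0 + 0.3 * rho_q

def solve_charge(phi, velocity):     # placeholder for charge continuity/transport
    return 0.5 * phi / (1.0 + 0.2 * velocity)

def solve_flow(phi, rho_q):          # placeholder for Navier-Stokes with Coulomb forcing
    return 0.1 * phi * rho_q

def solve_temperature(velocity):     # placeholder for the heat transfer equation
    return 300.0 + 50.0 / (1.0 + velocity)

phi, rho_q, vel, T = 1.0, 0.0, 0.0, 300.0
for sweep in range(100):
    phi_new = solve_potential(rho_q)
    rho_new = solve_charge(phi_new, vel)
    vel_new = solve_flow(phi_new, rho_new)
    T_new = solve_temperature(vel_new)
    change = max(abs(phi_new - phi), abs(rho_new - rho_q),
                 abs(vel_new - vel), abs(T_new - T))
    phi, rho_q, vel, T = phi_new, rho_new, vel_new, T_new
    if change < 1e-10:
        break

print(f"staggered sweeps: {sweep + 1}; final T = {T:.2f} K")
```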

  14. Multiphysics simulation of corona discharge induced ionic wind

    SciTech Connect

    Cagnoni, Davide; Agostini, Francesco; Christen, Thomas; Parolini, Nicola; Stevanović, Ivica; Falco, Carlo de

    2013-12-21

    Ionic wind devices or electrostatic fluid accelerators are becoming of increasing interest as tools for thermal management, in particular for semiconductor devices. In this work, we present a numerical model for predicting the performance of such devices; its main benefit is the ability to accurately predict the amount of charge injected from the corona electrode. Our multiphysics numerical model consists of a highly nonlinear, strongly coupled set of partial differential equations including the Navier-Stokes equations for fluid flow, Poisson's equation for electrostatic potential, charge continuity, and heat transfer equations. To solve this system we employ a staggered solution algorithm that generalizes Gummel's algorithm for charge transport in semiconductors. Predictions of our simulations are verified and validated by comparison with experimental measurements of integral physical quantities, which are shown to closely match.

  15. Structure-coupled multiphysics imaging in geophysical sciences

    NASA Astrophysics Data System (ADS)

    Gallardo, Luis A.; Meju, Max A.

    2011-03-01

    Multiphysics imaging or data inversion is of growing importance in many branches of science and engineering. In geophysical sciences, there is a need for combining information from multiple images acquired using different imaging devices and/or modalities because of the potential for accurate predictions. The major challenges are how to combine disparate data from unrelated physical phenomena, taking into account the different spatial scales of the measurement devices, model complexities, and how to quantify the associated uncertainties. This review paper summarizes the role played by the structural gradients-based approach for coupling fundamentally different physical fields in (mainly) geophysical inversion, develops further understanding of this approach to guide newcomers to the field, and defines the main challenges and directions for future research that may be useful in other fields of science and engineering.

  16. Massive hybrid parallelism for fully implicit multiphysics

    SciTech Connect

    Gaston, D. R.; Permann, C. J.; Andrs, D.; Peterson, J. W.

    2013-07-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in substantial hours of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them to both take advantage of current HPC architectures, and efficiently prepare for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and the internal object-oriented hybrid parallel design are given. Representative massively parallel results from several applications areas are presented, and a brief discussion of future areas of research for the framework are provided. (authors)

  17. MASSIVE HYBRID PARALLELISM FOR FULLY IMPLICIT MULTIPHYSICS

    SciTech Connect

    Cody J. Permann; David Andrs; John W. Peterson; Derek R. Gaston

    2013-05-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in substantial hours of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them to both take advantage of current HPC architectures, and efficiently prepare for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and the internal object-oriented hybrid parallel design are given. Representative massively parallel results from several applications areas are presented, and a brief discussion of future areas of research for the framework are provided.

  18. Thermo-fluid dynamic design study of single and double-inflow radial and single-stage axial steam turbines for open-cycle thermal energy conversion net power-producing experiment facility in Hawaii

    SciTech Connect

    Schlbeiri, T. (Dept. of Mechanical Engineering)

    1990-03-01

    The results of the study of the optimum thermo-fluid dynamic design concept are presented for turbine units operating within the open-cycle ocean thermal energy conversion (OC-OTEC) systems. The concept is applied to the first OC-OTEC net power producing experiment (NPPE) facility to be installed at Hawaii's natural energy laboratory. Detailed efficiency and performance calculations were performed for the radial turbine design concept with single and double-inflow arrangements. To complete the study, the calculation results for a single-stage axial steam turbine design are also presented. In contrast to the axial flow design with a relatively low unit efficiency, higher efficiency was achieved for single-inflow turbines. Highest efficiency was calculated for a double-inflow radial design, which opens new perspectives for energy generation from OC-OTEC systems.

  19. Invisible Sensors: Simultaneous Sensing and Camouflaging in Multiphysical Fields.

    PubMed

    Yang, Tianzhi; Bai, Xue; Gao, Dongliang; Wu, Linzhi; Li, Baowen; Thong, John T L; Qiu, Cheng-Wei

    2015-12-16

    The first multiphysical invisible sensor is theoretically and experimentally presented. An ultrathin, homogeneous, and isotropic shell is designed to simultaneously manipulate heat flux and DC current and eliminate the multiphysical perturbation, while maintaining the receiving and transmitting properties of the sensor. PMID:26501206

  20. Multiphysics Code Demonstrated for Propulsion Applications

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Melis, Matthew E.

    1998-01-01

    The utility of multidisciplinary analysis tools for aeropropulsion applications is being investigated at the NASA Lewis Research Center. The goal of this project is to apply Spectrum, a multiphysics code developed by Centric Engineering Systems, Inc., to simulate multidisciplinary effects in turbomachinery components. Many engineering problems today involve detailed computer analyses to predict the thermal, aerodynamic, and structural response of a mechanical system as it undergoes service loading. Analysis of aerospace structures generally requires attention in all three disciplinary areas to adequately predict component service behavior, and in many cases, the results from one discipline substantially affect the outcome of the other two. There are numerous computer codes currently available in the engineering community to perform such analyses in each of these disciplines. Many of these codes are developed and used in-house by a given organization, and many are commercially available. However, few, if any, of these codes are designed specifically for multidisciplinary analyses. The Spectrum code has been developed for performing fully coupled fluid, thermal, and structural analyses on a mechanical system with a single simulation that accounts for all simultaneous interactions, thus eliminating the requirement for running a large number of sequential, separate, disciplinary analyses. The Spectrum code has a true multiphysics analysis capability, which improves analysis efficiency as well as accuracy. Centric Engineering, Inc., working with a team of Lewis and AlliedSignal Engines engineers, has been evaluating Spectrum for a variety of propulsion applications including disk quenching, drum cavity flow, aeromechanical simulations, and a centrifugal compressor flow simulation.

  1. High-Fidelity Simulations of Multiphysics Systems

    NASA Astrophysics Data System (ADS)

    Ham, Frank

    2014-11-01

    A pacing theme in the high-fidelity simulations of multi-physics flows is the continual push towards constitutive models that reflect the underlying physics more closely than ever before. At the same time, to impact the design and understanding of real fluidic devices, these models must ultimately be developed in the setting of a highly flexible computational infrastructure capable of both massive parallelism and geometric flexibility. This theme is illustrated using two multi-physics simulations that provide new insight into the behavior of complex fluidic devices. In the first, a novel unstructured Volume-of-Fluid (VoF) method is applied to simulate the liquid fuel atomization processes in a complex high shear nozzle typical of realistic gas turbine injectors. The simulation makes aggressive use of directional grid adaptation to support the local resolution of critical instability mechanisms associated with the atomization process. In a companion example, the prediction of flow field and noise in a subsonic jet is linked critically to modeling and resolution of the nozzle boundary layers.

  2. Multidimensional multiphysics simulation of TRISO particle fuel

    NASA Astrophysics Data System (ADS)

    Hales, J. D.; Williamson, R. L.; Novascone, S. R.; Perez, D. M.; Spencer, B. W.; Pastore, G.

    2013-11-01

    Multidimensional multiphysics analysis of TRISO-coated particle fuel using the BISON finite element nuclear fuels code is described. The governing equations and material models applicable to particle fuel and implemented in BISON are outlined. Code verification based on a recent IAEA benchmarking exercise is described, and excellent comparisons are reported. Multiple TRISO-coated particles of increasing geometric complexity are considered. The code's ability to use the same algorithms and models to solve problems of varying dimensionality from 1D through 3D is demonstrated. The code provides rapid solutions of 1D spherically symmetric and 2D axially symmetric models, and its scalable parallel processing capability allows for solutions of large, complex 3D models. Additionally, the flexibility to easily include new physical and material models and straightforward ability to couple to lower length scale simulations makes BISON a powerful tool for simulation of coated-particle fuel. Future code development activities and potential applications are identified.

  3. Multidimensional Multiphysics Simulation of TRISO Particle Fuel

    SciTech Connect

    J. D. Hales; R. L. Williamson; S. R. Novascone; D. M. Perez; B. W. Spencer; G. Pastore

    2013-11-01

    Multidimensional multiphysics analysis of TRISO-coated particle fuel using the BISON finite element-based nuclear fuels code is described. The governing equations and material models applicable to particle fuel and implemented in BISON are outlined. Code verification based on a recent IAEA benchmarking exercise is described, and excellent comparisons are reported. Multiple TRISO-coated particles of increasing geometric complexity are considered. It is shown that the code's ability to perform large-scale parallel computations permits application to complex 3D phenomena, while very efficient solutions for either 1D spherically symmetric or 2D axisymmetric geometries are straightforward. Additionally, the flexibility to easily include new physical and material models and the uncomplicated ability to couple to lower length scale simulations makes BISON a powerful tool for simulation of coated-particle fuel. Future code development activities and potential applications are identified.

  4. Multi-physical Simulation of Laser Welding

    NASA Astrophysics Data System (ADS)

    Vázquez, Rodrigo Gómez; Koch, Holger M.; Otto, Andreas

    Laser welding is a highly demanded technology for manufacturing of body parts in the automotive industry. Application of powerful multi-physical simulation models permits detailed investigation of the laser process, avoiding intricate experimental setups and procedures. Features like the degree of power coupling, keyhole evolution, or currents inside the melt pool can be analyzed easily. The implementation of complex physical phenomena, like multi-reflection absorption, provides insight into process characteristics under selectable conditions and yields essential information concerning the driving mechanisms. The implementation of additional physical models, e.g., for diffusion, discloses new potential for investigating welding of dissimilar materials. In this paper we present a computational study of laser welding for different conditions. Applied to a real case, model predictions show good agreement with experimental results. Initial tests including species diffusion during welding of dissimilar materials are also presented.

  5. Scalable Adaptive Multilevel Solvers for Multiphysics Problems

    SciTech Connect

    Xu, Jinchao

    2014-12-01

    In this project, we investigated adaptive, parallel, and multilevel methods for numerical modeling of various real-world applications, including magnetohydrodynamics (MHD), complex fluids, electromagnetism, the Navier-Stokes equations, and reservoir simulation. First, we designed improved mathematical models and numerical discretizations for viscoelastic fluids and MHD. Second, we derived new a posteriori error estimators and extended the applicability of adaptivity to various problems. Third, we developed multilevel solvers for scalar partial differential equations (PDEs) as well as coupled systems of PDEs, especially on unstructured grids. Moreover, we integrated the study of adaptive and multilevel methods, making significant advances in adaptive multilevel methods for multi-physics problems.

  6. Multiphysics and Multiscale Analysis for Chemotherapeutic Drug

    PubMed Central

    Zhang, Linan; Kim, Sung Youb; Kim, Dongchoul

    2015-01-01

    This paper presents a three-dimensional dynamic model for chemotherapy design based on a multiphysics and multiscale approach. The model incorporates cancer cells, matrix degrading enzymes (MDEs) secreted by cancer cells, the degrading extracellular matrix (ECM), and a chemotherapeutic drug. Multiple mechanisms related to each component possible in chemotherapy are systematically integrated for high reliability of the computational analysis of chemotherapy. Moreover, the fidelity of the estimated efficacy of chemotherapy is enhanced by atomic information associated with the diffusion characteristics of the chemotherapeutic drug, which is obtained from atomic simulations. With the developed model, the invasion process of cancer cells under chemotherapy treatment is quantitatively investigated. The performed simulations suggest a substantial potential of the presented model as a reliable design technology for chemotherapy treatment. PMID:26491672

  7. COMSOL MULTIPHYSICS MODEL FOR DWPF CANISTER FILLING

    SciTech Connect

    Kesterson, M.

    2011-03-31

    The purpose of this work was to develop a model that can be used to predict temperatures of the glass in the Defense Waste Processing Facility (DWPF) canisters during filling and cooldown. Past attempts to model these processes resulted in large (>200 K) differences between predicted and experimentally measured temperatures. This work was therefore intended to also generate a model capable of reproducing the experimentally measured trends of the glass/canister temperature during filling and subsequent cooldown of DWPF canisters. To accomplish this, a simplified model was created using the finite element modeling software COMSOL Multiphysics, which accepts user-defined constants or expressions to describe material properties. The model results were compared to existing experimental data for validation. A COMSOL Multiphysics model was developed to predict temperatures of the glass within DWPF canisters during filling and cooldown. The model simulations and experimental data were in good agreement. The largest temperature deviations were approximately 40 C, at the 87 inch thermocouple location at 3000 minutes and during the initial cooldown at the 51 inch location at approximately 600 minutes. Additionally, the model described in this report predicts the general trends in temperatures during filling and cooling observed experimentally. However, the model was developed using parameters designed to fit a single set of experimental data; therefore, Q-loss is not currently a function of pour rate and pour temperature. Future work utilizing the existing model should include modifying the Q-loss term to vary with flow rate and pour temperature. Further enhancements could include replacing the Q-loss term with a user-defined convection condition so that the Navier-Stokes equations do not need to be solved in order to capture convective heat transfer.

  8. Multiscale multiphysics and multidomain models—Flexibility and rigidity

    PubMed Central

    Xia, Kelin; Opron, Kristopher; Wei, Guo-Wei

    2013-01-01

    The emerging complexity of large macromolecules has led to challenges in their full scale theoretical description and computer simulation. Multiscale multiphysics and multidomain models have been introduced to reduce the number of degrees of freedom while maintaining modeling accuracy and achieving computational efficiency. A total energy functional is constructed to put energies for polar and nonpolar solvation, chemical potential, fluid flow, molecular mechanics, and elastic dynamics on an equal footing. The variational principle is utilized to derive coupled governing equations for the above mentioned multiphysical descriptions. Among these governing equations is the Poisson-Boltzmann equation which describes continuum electrostatics with atomic charges. The present work introduces the theory of continuum elasticity with atomic rigidity (CEWAR). The essence of CEWAR is to formulate the shear modulus as a continuous function of atomic rigidity. As a result, the dynamics complexity of a macromolecular system is separated from its static complexity so that the more time-consuming dynamics is handled with continuum elasticity theory, while the less time-consuming static analysis is pursued with atomic approaches. We propose a simple method, flexibility-rigidity index (FRI), to analyze macromolecular flexibility and rigidity in atomic detail. The construction of FRI relies on the fundamental assumption that protein functions, such as flexibility, rigidity, and energy, are entirely determined by the structure of the protein and its environment, although the structure is in turn determined by all the interactions. As such, the FRI measures the topological connectivity of protein atoms or residues and characterizes the geometric compactness of the protein structure. As a consequence, the FRI does not resort to the interaction Hamiltonian and bypasses matrix diagonalization, which underpins most other flexibility analysis methods. FRI's computational complexity is of

  9. Multiscale multiphysics and multidomain models—Flexibility and rigidity

    SciTech Connect

    Xia, Kelin; Opron, Kristopher; Wei, Guo-Wei

    2013-11-21

    The emerging complexity of large macromolecules has led to challenges in their full scale theoretical description and computer simulation. Multiscale multiphysics and multidomain models have been introduced to reduce the number of degrees of freedom while maintaining modeling accuracy and achieving computational efficiency. A total energy functional is constructed to put energies for polar and nonpolar solvation, chemical potential, fluid flow, molecular mechanics, and elastic dynamics on an equal footing. The variational principle is utilized to derive coupled governing equations for the above mentioned multiphysical descriptions. Among these governing equations is the Poisson-Boltzmann equation which describes continuum electrostatics with atomic charges. The present work introduces the theory of continuum elasticity with atomic rigidity (CEWAR). The essence of CEWAR is to formulate the shear modulus as a continuous function of atomic rigidity. As a result, the dynamics complexity of a macromolecular system is separated from its static complexity so that the more time-consuming dynamics is handled with continuum elasticity theory, while the less time-consuming static analysis is pursued with atomic approaches. We propose a simple method, flexibility-rigidity index (FRI), to analyze macromolecular flexibility and rigidity in atomic detail. The construction of FRI relies on the fundamental assumption that protein functions, such as flexibility, rigidity, and energy, are entirely determined by the structure of the protein and its environment, although the structure is in turn determined by all the interactions. As such, the FRI measures the topological connectivity of protein atoms or residues and characterizes the geometric compactness of the protein structure. As a consequence, the FRI does not resort to the interaction Hamiltonian and bypasses matrix diagonalization, which underpins most other flexibility analysis methods. FRI's computational complexity is of O
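
    The flexibility-rigidity index (FRI) described in the two records above is, in essence, a kernel-weighted measure of how tightly each atom is packed by its neighbors, with flexibility taken as the reciprocal of rigidity; no interaction Hamiltonian or matrix diagonalization is involved. A minimal sketch of that idea follows; the Gaussian kernel, the parameter eta, and the function names are illustrative assumptions, not the authors' exact formulation.

    ```python
    # Minimal sketch of a distance-based flexibility-rigidity index in the spirit
    # of the FRI described above: rigidity at each atom is a kernel-weighted sum
    # of its geometric proximity to all other atoms, and flexibility is its
    # reciprocal, so no Hamiltonian or matrix diagonalization is needed.
    # The exponential kernel and the scale parameter eta are assumptions made
    # for illustration only.
    import numpy as np

    def fri_indices(coords, eta=3.0):
        """coords: (N, 3) array of atomic positions (e.g., C-alpha atoms)."""
        diff = coords[:, None, :] - coords[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        kernel = np.exp(-(dist / eta) ** 2)   # pairwise correlation weights
        np.fill_diagonal(kernel, 0.0)         # exclude self-correlation
        rigidity = kernel.sum(axis=1)         # geometric compactness per atom
        flexibility = 1.0 / rigidity          # weakly packed atoms are flexible
        return rigidity, flexibility

    # Toy example: atoms on a line; the end points come out most flexible.
    coords = np.array([[0.0, 0, 0], [3.0, 0, 0], [6.0, 0, 0], [9.0, 0, 0]])
    rig, flex = fri_indices(coords)
    print(flex)
    ```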

  10. The Role of Multiphysics Simulation in Multidisciplinary Analysis

    NASA Technical Reports Server (NTRS)

    Rifai, Steven M.; Ferencz, Robert M.; Wang, Wen-Ping; Spyropoulos, Evangelos T.; Lawrence, Charles; Melis, Matthew E.

    1998-01-01

    This article describes the applications of the Spectrum(TM) Solver in Multidisciplinary Analysis (MDA). Spectrum, a multiphysics simulation software based on the finite element method, addresses compressible and incompressible fluid flow, structural, and thermal modeling as well as the interaction between these disciplines. Multiphysics simulation is based on a single computational framework for the modeling of multiple interacting physical phenomena. Interaction constraints are enforced in a fully coupled manner using the augmented-Lagrangian method. Within the multiphysics framework, the finite element treatment of fluids is based on the Galerkin-Least-Squares (GLS) method with discontinuity capturing operators. The arbitrary Lagrangian-Eulerian method is utilized to account for deformable fluid domains. The finite element treatment of solids and structures is based on the Hu-Washizu variational principle. The multiphysics architecture lends itself naturally to high-performance parallel computing. Aeroelastic, propulsion, thermal management, and manufacturing applications are presented.

  11. Parallel Multiphysics Algorithms and Software for Computational Nuclear Engineering

    SciTech Connect

    D. Gaston; G. Hansen; S. Kadioglu; D. A. Knoll; C. Newman; H. Park; C. Permann; W. Taitano

    2009-08-01

    There is a growing trend in nuclear reactor simulation to consider multiphysics problems. This can be seen in reactor analysis, where analysts are interested in coupled flow, heat transfer, and neutronics, and in fuel performance simulation, where analysts are interested in thermomechanics with contact coupled to species transport and chemistry. These more ambitious simulations usually motivate some level of parallel computing. Many of the coupling efforts to date utilize simple 'code coupling' or first-order operator splitting, often referred to as loose coupling. While these approaches can produce answers, they usually leave questions of accuracy and stability unanswered. Additionally, the different physics often reside on separate grids which are coupled via simple interpolation, again leaving open questions of stability and accuracy. Utilizing state-of-the-art mathematics and software development techniques, we are deploying next generation tools for nuclear engineering applications. The Jacobian-free Newton-Krylov (JFNK) method combined with physics-based preconditioning provides the underlying mathematical structure for our tools. JFNK is understood to be a modern multiphysics algorithm, but we are also utilizing its unique properties as a scale bridging algorithm. To facilitate rapid development of multiphysics applications we have developed the Multiphysics Object-Oriented Simulation Environment (MOOSE). Examples from two MOOSE-based applications will be presented: PRONGHORN, our multiphysics gas cooled reactor simulation tool, and BISON, our multiphysics, multiscale fuel performance simulation tool.

  12. Parallel multiphysics algorithms and software for computational nuclear engineering

    NASA Astrophysics Data System (ADS)

    Gaston, D.; Hansen, G.; Kadioglu, S.; Knoll, D. A.; Newman, C.; Park, H.; Permann, C.; Taitano, W.

    2009-07-01

    There is a growing trend in nuclear reactor simulation to consider multiphysics problems. This can be seen in reactor analysis, where analysts are interested in coupled flow, heat transfer, and neutronics, and in fuel performance simulation, where analysts are interested in thermomechanics with contact coupled to species transport and chemistry. These more ambitious simulations usually motivate some level of parallel computing. Many of the coupling efforts to date utilize simple code coupling or first-order operator splitting, often referred to as loose coupling. While these approaches can produce answers, they usually leave questions of accuracy and stability unanswered. Additionally, the different physics often reside on separate grids which are coupled via simple interpolation, again leaving open questions of stability and accuracy. Utilizing state-of-the-art mathematics and software development techniques, we are deploying next generation tools for nuclear engineering applications. The Jacobian-free Newton-Krylov (JFNK) method combined with physics-based preconditioning provides the underlying mathematical structure for our tools. JFNK is understood to be a modern multiphysics algorithm, but we are also utilizing its unique properties as a scale bridging algorithm. To facilitate rapid development of multiphysics applications we have developed the Multiphysics Object-Oriented Simulation Environment (MOOSE). Examples from two MOOSE-based applications will be presented: PRONGHORN, our multiphysics gas cooled reactor simulation tool, and BISON, our multiphysics, multiscale fuel performance simulation tool.
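
    The two preceding records both center on the Jacobian-free Newton-Krylov (JFNK) idea: Newton's method in which each Jacobian-vector product is approximated by a finite difference of the coupled residual, so the full Jacobian is never formed. A minimal sketch of that kernel follows; the toy residual, tolerances, and function names are illustrative assumptions standing in for the physics-based preconditioned solvers in MOOSE-based codes, not their actual implementation.

    ```python
    # Minimal Jacobian-free Newton-Krylov (JFNK) sketch: the Jacobian-vector
    # product J(u) v is approximated by a finite difference of the residual,
    # and the linearized step is solved with a Krylov method (GMRES here).
    # The example residual is a toy two-equation nonlinear system.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def jfnk_solve(residual, u0, tol=1e-8, max_newton=20, eps=1e-7):
        u = u0.astype(float).copy()
        for _ in range(max_newton):
            r = residual(u)
            if np.linalg.norm(r) < tol:
                break
            # Matrix-free Jacobian action via a finite difference of the residual.
            def jv(v):
                return (residual(u + eps * v) - r) / eps
            J = LinearOperator((u.size, u.size), matvec=jv, dtype=float)
            du, _ = gmres(J, -r)   # inner Krylov solve of the Newton update
            u += du
        return u

    # Toy coupled "two-physics" residual with solution (a, b) = (1, 1).
    def residual(u):
        a, b = u
        return np.array([a**3 + b - 2.0, a - b**2])

    print(jfnk_solve(residual, np.array([1.5, 0.5])))
    ```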

  13. Multiphysics Simulation of Active Hypersonic Lip Cooling

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.; Wang, Wen-Ping

    1999-01-01

    This article describes the application of the Multidisciplinary Analysis (MDA) solver, Spectrum, in analyzing a hydrogen-cooled hypersonic cowl leading-edge structure. Spectrum, a multiphysics simulation code based on the finite element method, addresses compressible and incompressible fluid flow, structural, and thermal modeling, as well as the interactions between these disciplines. Fluid-solid-thermal interactions in a hydrogen impingement-cooled leading edge are predicted using Spectrum. Two- and semi-three-dimensional models are considered for a leading-edge impingement cooling concept under either a specified external heat flux or aerothermodynamic heating from a Mach 5 external flow interaction. The solution accuracy is demonstrated through a mesh refinement analysis. With active cooling, the leading edge surface temperature is drastically reduced from 1807 K under adiabatic conditions to 418 K. The internal coolant temperature profile exhibits a sharp gradient near the channel/solid interface. Results from two different cooling channel configurations are also presented to illustrate the different behavior of alternative active cooling schemes.

  14. A novel phenomenological multi-physics model of Li-ion battery cells

    NASA Astrophysics Data System (ADS)

    Oh, Ki-Yong; Samad, Nassim A.; Kim, Youngki; Siegel, Jason B.; Stefanopoulou, Anna G.; Epureanu, Bogdan I.

    2016-09-01

    A novel phenomenological multi-physics model of lithium-ion battery cells is developed for control and state estimation purposes. The model can capture electrical, thermal, and mechanical behaviors of battery cells under constrained conditions, e.g., battery pack conditions. Specifically, the proposed model predicts the core and surface temperatures and the reaction force induced by the volume change of battery cells because of electrochemically- and thermally-induced swelling. Moreover, the model incorporates the influence of changes in preload and ambient temperature on the force, reflecting the severe environmental conditions electrified vehicles face. Intensive experimental validation demonstrates that the proposed multi-physics model accurately predicts the surface temperature and reaction force over a wide operational range of preload and ambient temperature. This high fidelity model can be useful for more accurate and robust state of charge estimation considering the complex dynamic behaviors of the battery cell. Furthermore, the inherent simplicity of the mechanical measurements offers distinct advantages for improving existing power and thermal management strategies for battery management.

  15. PRELIMINARY COUPLING OF THE MONTE CARLO CODE OPENMC AND THE MULTIPHYSICS OBJECT-ORIENTED SIMULATION ENVIRONMENT (MOOSE) FOR ANALYZING DOPPLER FEEDBACK IN MONTE CARLO SIMULATIONS

    SciTech Connect

    Matthew Ellis; Derek Gaston; Benoit Forget; Kord Smith

    2011-07-01

    In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to the unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that, for a simplified model, the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
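
    The functional expansion tally (FET) transfer mentioned above amounts to tallying a field as coefficients of orthogonal basis functions rather than as cell averages, so the Monte Carlo result can later be evaluated at arbitrary points of an unstructured finite element mesh. A minimal sketch of that idea follows, using a Legendre expansion of an axial power shape; the fuel height, reference shape, expansion order, and mesh points are illustrative assumptions, not values from the paper.

    ```python
    # Sketch of the functional-expansion-tally idea: a power shape is carried as
    # Legendre coefficients on a normalized coordinate, then evaluated at
    # whatever points the receiving finite element mesh needs.
    import numpy as np
    from numpy.polynomial import legendre

    # Step 1 (Monte Carlo side): expansion coefficients of
    #   p(z) ~ sum_n c_n P_n(xi),  xi = 2 (z - z0) / (z1 - z0) - 1.
    # Real tallies would accumulate these as scores; here they are fitted from a
    # hypothetical reference shape purely for illustration.
    z0, z1 = 0.0, 366.0                                   # assumed fuel height [cm]
    z_tally = np.linspace(z0, z1, 200)
    xi = 2.0 * (z_tally - z0) / (z1 - z0) - 1.0
    reference_power = 1.0 + 0.3 * np.cos(np.pi * xi / 2)  # stand-in axial shape
    coeffs = legendre.legfit(xi, reference_power, deg=6)

    # Step 2 (finite element side): evaluate the continuous expansion at the
    # nodal/quadrature points of an unstructured mesh, wherever they fall.
    z_mesh = np.array([10.0, 57.3, 181.1, 290.4, 355.9])  # arbitrary mesh points
    xi_mesh = 2.0 * (z_mesh - z0) / (z1 - z0) - 1.0
    power_on_mesh = legendre.legval(xi_mesh, coeffs)
    print(power_on_mesh)
    ```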

  16. Optimization of coupled multiphysics methodology for safety analysis of pebble bed modular reactor

    NASA Astrophysics Data System (ADS)

    Mkhabela, Peter Tshepo

    The research conducted within the framework of this PhD thesis is devoted to the high-fidelity multi-physics (based on neutronics/thermal-hydraulics coupling) analysis of the Pebble Bed Modular Reactor (PBMR), which is a High Temperature Reactor (HTR). The Next Generation Nuclear Plant (NGNP) will be an HTR design. The core design and safety analysis methods are considerably less developed and mature for HTR analysis than those currently used for Light Water Reactors (LWRs). Compared to LWRs, HTR transient analysis is more demanding since it requires proper treatment of both slower and much longer transients (on time scales of hours and days) and fast and short transients (on time scales of minutes and seconds). There is limited operational and experimental data available for HTRs for validation of coupled multi-physics methodologies. This PhD work developed and verified reliable high fidelity coupled multi-physics models, subsequently implemented in robust, efficient, and accurate computational tools, to analyse the neutronics and thermal-hydraulic behaviour for design optimization and safety evaluation of the PBMR concept. The study contributed to greater accuracy of neutronics calculations by including the feedback from the thermal-hydraulics-driven temperature calculation and the various multi-physics effects that can influence it. Feedback due to the influence of leakage was taken into account by developing and implementing improved buckling feedback models. Modifications were made in the calculation procedure to ensure that the xenon depletion models were accurate for proper interpolation from cross section tables. To achieve this, the NEM/THERMIX coupled code system was developed to be efficient and stable over the duration of transient calculations that last several tens of hours. Another achievement of the PhD thesis was the development and demonstration of full-physics, three-dimensional safety analysis

  17. COMPUTATIONAL CHALLENGES IN BUILDING MULTI-SCALE AND MULTI-PHYSICS MODELS OF CARDIAC ELECTRO-MECHANICS

    PubMed Central

    Plank, G; Prassl, AJ; Augustin, C

    2014-01-01

    Despite the evident multiphysics nature of the heart (it is an electrically controlled mechanical pump), most modeling studies have considered electrophysiology and mechanics in isolation. In no small part, this is due to the formidable modeling challenges involved in building strongly coupled, anatomically accurate, and biophysically detailed multi-scale multi-physics models of cardiac electro-mechanics. Among the main challenges are the selection of model components and their adjustment to achieve integration into a consistent organ-scale model; dealing with technical difficulties such as the exchange of data between electrophysiological and mechanical models, particularly when using different spatio-temporal grids for discretization; and, finally, the implementation of advanced numerical techniques to deal with the substantial computational burden. In this study we report on progress made in developing a novel modeling framework suited to tackle these challenges. PMID:24043050

  18. Tightly Coupled Multiphysics Algorithm for Pebble Bed Reactors

    SciTech Connect

    HyeongKae Park; Dana Knoll; Derek Gaston; Richard Martineau

    2010-10-01

    We have developed a tightly coupled multiphysics simulation tool for the pebble-bed reactor (PBR) concept, a type of Very High-Temperature gas-cooled Reactor (VHTR). The simulation tool, PRONGHORN, takes advantage of the Multiphysics Object-Oriented Simulation Environment library, and is capable of solving multidimensional thermal-fluid and neutronics problems implicitly with a Newton-based approach. Expensive Jacobian matrix formation is alleviated via the Jacobian-free Newton-Krylov method, and physics-based preconditioning is applied to minimize Krylov iterations. Motivation for the work is provided via analysis and numerical experiments on simpler multiphysics reactor models. We then provide detail of the physical models and numerical methods in PRONGHORN. Finally, PRONGHORN's algorithmic capability is demonstrated on a number of PBR test cases.

  19. FEM and Multiphysics Applications at NASA/GSFC

    NASA Technical Reports Server (NTRS)

    Loughlin, James

    2004-01-01

    FEM software available to the Mechanical Systems Analysis and Simulation Branch at Goddard Space Flight Center (GSFC) includes: 1) MSC/Nastran; 2) Abaqus; 3) Ansys/Multiphysics; 4) COSMOS/M; 5) 'home-grown' programs; 6) pre/post processors such as Patran and FEMAP. This viewgraph presentation provides additional information on MSC/Nastran and Ansys/Multiphysics, and includes screen shots of analyzed equipment, including the Wilkinson Microwave Anisotropy Probe, a micro-mirror, a MEMS tunable filter, and a micro-shutter array. The presentation also includes information on the verification of results.

  20. Final Report: Quantifying Prediction Fidelity in Multiscale Multiphysics Simulations

    SciTech Connect

    Long, Kevin

    2014-09-30

    We have developed algorithms and software in support of uncertainty quantification in nonlinear multiphysics simulations. This work includes high-level, high-performance software for large-scale, matrix-free linear algebra and a new algorithm for fast computation of transcendental functions of stochastic variables.

  1. A theory manual for multi-physics code coupling in LIME.

    SciTech Connect

    Belcourt, Noel; Bartlett, Roscoe Ainsworth; Pawlowski, Roger Patrick; Schmidt, Rodney Cannon; Hooper, Russell Warren

    2011-03-01

    The Lightweight Integrating Multi-physics Environment (LIME) is a software package for creating multi-physics simulation codes. Its primary application space is when computer codes are currently available to solve different parts of a multi-physics problem and now need to be coupled with other such codes. In this report we define a common domain language for discussing multi-physics coupling and describe the basic theory associated with the multi-physics coupling algorithms that are to be supported in LIME. We provide an assessment of coupling techniques for both steady-state and time-dependent coupled systems. Example couplings are also demonstrated.

  2. Solid Rocket Motor Combustion Instability Modeling in COMSOL Multiphysics

    NASA Technical Reports Server (NTRS)

    Fischbach, Sean R.

    2015-01-01

    Combustion instability modeling of Solid Rocket Motors (SRM) remains a topic of active research. Many rockets display violent fluctuations in pressure, velocity, and temperature originating from the complex interactions between the combustion process, acoustics, and steady-state gas dynamics. Recent advances in defining the energy transport of disturbances within steady flow-fields have been applied by combustion stability modelers to improve the analysis framework [1, 2, 3]. Employing this more accurate global energy balance requires a higher fidelity model of the SRM flow-field and acoustic mode shapes. The current industry standard analysis tool utilizes a one-dimensional analysis of the time-dependent fluid dynamics along with a quasi-three-dimensional propellant grain regression model to determine the SRM ballistics. The code then couples with another application that calculates the eigenvalues of the one-dimensional homogeneous wave equation. The mean flow parameters and acoustic normal modes are coupled to evaluate the stability theory developed and popularized by Culick [4, 5]. The assumption of a linear, non-dissipative wave in a quiescent fluid remains valid while acoustic amplitudes are small and local gas velocities stay below Mach 0.2. The current study employs the COMSOL Multiphysics finite element framework to model the steady flow-field parameters and acoustic normal modes of a generic SRM. The study requires one-way coupling of the CFD High Mach Number Flow (HMNF) and mathematics modules. The HMNF module evaluates the gas flow inside of an SRM using St. Robert's law to model the solid propellant burn rate, no slip boundary conditions, and the hybrid outflow condition. Results from the HMNF model are verified by comparing the pertinent ballistics parameters with the industry standard code outputs (i.e. pressure drop, thrust, etc.). These results are then used by the coefficient form of the mathematics module to determine the complex eigenvalues of the
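
    St. Robert's law, cited above as the burn rate model in the HMNF module, relates the propellant regression rate to chamber pressure through a power law, r = a P^n, and the product of propellant density and regression rate then sets the mass flux injected at the burning surface. The short sketch below illustrates this; the coefficient, exponent, and propellant density are illustrative values, not those of the generic SRM in the paper.

    ```python
    # Minimal sketch of the Saint Robert's (Vieille's) burn-rate law, r = a * P**n,
    # and the resulting surface mass injection used as a boundary condition.
    # The coefficient a, exponent n, and propellant density are assumed values
    # chosen only to give a plausible order of magnitude.
    def burn_rate(p_chamber, a=3.8e-5, n=0.35):
        """Regression rate [m/s] for chamber pressure p_chamber [Pa]."""
        return a * p_chamber**n

    def surface_mass_flux(p_chamber, rho_propellant=1800.0):
        """Mass flux [kg/(m^2 s)] injected normal to the burning surface."""
        return rho_propellant * burn_rate(p_chamber)

    p = 6.0e6   # 6 MPa chamber pressure
    print(burn_rate(p), surface_mass_flux(p))   # roughly 9 mm/s and 16 kg/(m^2 s)
    ```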

  3. Evaluation of HFIR LEU Fuel Using the COMSOL Multiphysics Platform

    SciTech Connect

    Primm, Trent; Ruggles, Arthur; Freels, James D

    2009-03-01

    A finite element computational approach to simulation of the High Flux Isotope Reactor (HFIR) core thermal-fluid behavior is developed. These models were developed to facilitate design of a low enriched core for the HFIR, which will have different axial and radial flux profiles from the current HEU core and thus will require fuel and poison load optimization. This report outlines a stepwise implementation of this modeling approach using the commercial finite element code COMSOL, with initial assessment of fuel, poison, and clad conduction modeling capability, followed by assessment of the mating of the fuel conduction models to a one-dimensional fluid model typical of legacy simulation techniques for the HFIR core. The model is then extended to fully couple 2-dimensional conduction in the fuel to a 2-dimensional thermo-fluid model of the coolant for an HFIR core cooling sub-channel, with additional assessment of simulation outcomes. Finally, 3-dimensional simulations of a fuel plate and cooling channel are presented.

  4. Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis

    SciTech Connect

    Wilson, Paul; Evans, Thomas; Tautges, Tim

    2012-12-24

    This project will accumulate high-precision fluxes throughout the reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in the high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate the adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well

  5. Numerical Simulations of Single Flow Element in a Nuclear Thermal Thrust Chamber

    NASA Technical Reports Server (NTRS)

    Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See

    2007-01-01

    The objective of this effort is to develop an efficient and accurate computational methodology to predict both detailed and global thermo-fluid environments of a single flow element in a hypothetical solid-core nuclear thermal thrust chamber assembly. Several numerical and multi-physics thermo-fluid models, such as chemical reactions, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver. The numerical simulations of a single flow element provide a detailed thermo-fluid environment for thermal stress estimation and insight for possible occurrence of mid-section corrosion. In addition, detailed conjugate heat transfer simulations were employed to develop the porosity models for efficient pressure drop and thermal load calculations.

  6. Unsteady Cascade Aerodynamic Response Using a Multiphysics Simulation Code

    NASA Technical Reports Server (NTRS)

    Lawrence, C.; Reddy, T. S. R.; Spyropoulos, E.

    2000-01-01

    The multiphysics code Spectrum(TM) is applied to calculate the unsteady aerodynamic pressures of an oscillating cascade of airfoils representing a blade row of a turbomachinery component. Multiphysics simulation is based on a single computational framework for the modeling of multiple interacting physical phenomena, in the present case between fluids and structures. Interaction constraints are enforced in a fully coupled manner using the augmented-Lagrangian method. The arbitrary Lagrangian-Eulerian method is utilized to account for deformable fluid domains resulting from blade motions. Unsteady pressures are calculated for a cascade designated as the tenth standard, undergoing plunging and pitching oscillations. The predicted unsteady pressures are compared with those obtained from an unsteady Euler code referred to in the literature. The Spectrum(TM) code predictions showed good correlation for the cases considered.

  7. Recent developments in multiphysics computational models of physiological flows

    NASA Astrophysics Data System (ADS)

    Eldredge, Jeff D.; Mittal, Rajat

    2016-04-01

    A mini-symposium on computational modeling of fluid-structure interactions and other multiphysics in physiological flows was held at the 11th World Congress on Computational Mechanics in July 2014 in Barcelona, Spain. This special issue of Theoretical and Computational Fluid Dynamics contains papers from among the participants of the mini-symposium. The present paper provides an overview of the mini-symposium and the special issue.

  8. COMSOL MULTIPHYSICS MODEL FOR DWPF CANISTER FILLING, REVISION 1

    SciTech Connect

    Kesterson, M.

    2011-09-08

    This revision is an extension of the COMSOL Multiphysics model previously developed and documented to simulate the temperatures of the glass during pouring of a Defense Waste Processing Facility (DWPF) canister. In that report the COMSOL Multiphysics model used a lumped heat loss term derived from experimental thermocouple data based on a nominal pour rate of 228 lbs./hr. As such, the model developed using the lumped heat loss term had limited application without additional experimental data. Therefore, the COMSOL Multiphysics model was modified to simulate glass pouring and the subsequent heat input, which replaced the heat loss term in the initial model. This new model allowed for changes in flow geometry based on pour rate as well as the ability to increase and decrease flow, and to stop and restart flow, to simulate varying process conditions. A revised COMSOL Multiphysics model was developed to predict temperatures of the glass within DWPF canisters during filling and cooldown. The model simulations and experimental data were in good agreement. The largest temperature deviations were approximately 40 C, at the 87 inch thermocouple location at 3000 minutes and during the initial cooldown at the 51 inch location at approximately 600 minutes. Additionally, the model described in this report predicts the general temperature trends during filling and cooling as observed experimentally. The revised model incorporates a heat flow region corresponding to the glass pouring down the centerline of the canister. The geometry of this region is dependent on the flow rate of the glass and can therefore be used to examine temperature variations for various pour rates. The equations used for this model were developed by comparing simulation output to experimental data from a single pour rate. Use of the model will predict temperature profiles for other pour rates, but the accuracy of those simulations is unknown because only a single flow rate comparison was available.

  9. A MULTIDIMENSIONAL AND MULTIPHYSICS APPROACH TO NUCLEAR FUEL BEHAVIOR SIMULATION

    SciTech Connect

    R. L. Williamson; J. D. Hales; S. R. Novascone; M. R. Tonks; D. R. Gaston; C. J. Permann; D. Andrs; R. C. Martineau

    2012-04-01

    Important aspects of fuel rod behavior, for example pellet-clad mechanical interaction (PCMI), fuel fracture, oxide formation, non-axisymmetric cooling, and response to fuel manufacturing defects, are inherently multidimensional in addition to being complicated multiphysics problems. Many current modeling tools are strictly 2D axisymmetric or even 1.5D. This paper outlines the capabilities of a new fuel modeling tool able to analyze either 2D axisymmetric or fully 3D models. These capabilities include temperature-dependent thermal conductivity of fuel; swelling and densification; fuel creep; pellet fracture; fission gas release; cladding creep; irradiation growth; and gap mechanics (contact and gap heat transfer). The need for multiphysics, multidimensional modeling is then demonstrated through a discussion of results for a set of example problems. The first, a 10-pellet rodlet, demonstrates the viability of the solution method employed. This example highlights the effect of our smeared cracking model and also shows the multidimensional nature of discrete fuel pellet modeling. The second example relies on the multidimensional, multiphysics approach to analyze a missing pellet surface problem. As a final example, we show a lower-length-scale simulation coupled to a continuum-scale simulation.

  10. Coupling Schemes for Multiphysics Reactor Simulation

    SciTech Connect

    Vijay Mahadeven; Jean Ragusa

    2007-11-01

    This report documents the progress of the student Vijay S. Mahadevan from the Nuclear Engineering Department of Texas A&M University over the summer of 2007 during his visit to the INL. The purpose of his visit was to investigate the physics-based preconditioned Jacobian-free Newton-Krylov method applied to physics relevant to nuclear reactor simulation. To this end he studied two test problems that represented reaction-diffusion and advection-reaction. These two test problems will provide the basis for future work in which neutron diffusion, nonlinear heat conduction, and a two-phase flow model will be tightly coupled to provide an accurate model of a BWR core.

  11. Mathematical and algorithmic issues in multiphysics coupling.

    SciTech Connect

    Gai, Xiuli; Stone, Charles Michael; Wheeler, Mary Fanett

    2004-06-01

    The modeling of fluid/structure interaction is of growing importance in both energy and environmental applications. Because of the inherent complexity, these problems must be simulated on parallel machines in order to achieve high resolution. The purpose of this research was to investigate techniques for coupling flow and geomechanics in porous media that are suitable for parallel computation. In particular, our main objective was to develop an iterative technique which can be as accurate as a fully coupled model but which allows for robust and efficient coupling of existing complex models (software). A parallel linear elastic module was developed and coupled to a three-phase, three-component black oil model in IPARS (Integrated Parallel Accurate Reservoir Simulator). An iterative de-coupling technique was introduced at each time step. The resulting nonlinear iteration involved solving for displacements and flow sequentially. Rock compressibility was used in the flow model to account for the effect of deformation on the pore volume. Convergence was achieved when the mass balance for each component satisfied a given tolerance. This approach was validated by comparison with a fully coupled approach implemented in the British Petroleum/Amoco ACRES simulator. Another objective of this work was to develop an efficient parallel solver for the elasticity equations. A preconditioned conjugate gradient solver was implemented to solve the algebraic system arising from tensor product linear Galerkin approximations for the displacements. Three preconditioners were developed: LSOR (line successive over-relaxation), block Jacobi, and agglomeration multi-grid. The latter approach involved coarsening the 3D system to 2D and using LSOR as a smoother, followed by geometric multi-grid with SOR (successive over-relaxation) as a smoother. Preliminary tests on a 64-node Beowulf cluster at CSM indicate that the agglomeration multi-grid approach is robust and efficient.
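
    The elasticity solve described above is a preconditioned conjugate gradient (PCG) iteration in which the preconditioner (LSOR, block Jacobi, or agglomeration multigrid) approximates the inverse of the stiffness matrix. The sketch below shows the PCG skeleton with the simplest diagonal (Jacobi) preconditioner standing in for those options; the toy matrix and tolerances are illustrative assumptions, not the IPARS implementation.

    ```python
    # Minimal preconditioned conjugate gradient (PCG) sketch for a symmetric
    # positive-definite system A x = b, with a diagonal (Jacobi) preconditioner
    # used as the simplest stand-in for LSOR / block Jacobi / multigrid.
    import numpy as np

    def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv(r)                 # apply the preconditioner
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Toy SPD system resembling a 1D stiffness matrix.
    n = 50
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    diag = np.diag(A)
    x = pcg(A, b, lambda r: r / diag)   # Jacobi preconditioner: M = diag(A)
    print(np.linalg.norm(A @ x - b))
    ```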

  12. Lithium-Ion Battery Safety Study Using Multi-Physics Internal Short-Circuit Model (Presentation)

    SciTech Connect

    Kim, G-.H.; Smith, K.; Pesaran, A.

    2009-06-01

    This presentation outlines NREL's multi-physics simulation study to characterize an internal short by linking and integrating electrochemical cell, electro-thermal, and abuse reaction kinetics models.

  13. Solid Rocket Motor Combustion Instability Modeling in COMSOL Multiphysics

    NASA Technical Reports Server (NTRS)

    Fischbach, S. R.

    2015-01-01

    Combustion instability modeling of Solid Rocket Motors (SRM) remains a topic of active research. Many rockets display violent fluctuations in pressure, velocity, and temperature originating from the complex interactions between the combustion process, acoustics, and steady-state gas dynamics. Recent advances in defining the energy transport of disturbances within steady flow-fields have been applied by combustion stability modelers to improve the analysis framework. Employing this more accurate global energy balance requires a higher fidelity model of the SRM flow-field and acoustic mode shapes. The current industry standard analysis tool utilizes a one-dimensional analysis of the time-dependent fluid dynamics along with a quasi-three-dimensional propellant grain regression model to determine the SRM ballistics. The code then couples with another application that calculates the eigenvalues of the one-dimensional homogeneous wave equation. The mean flow parameters and acoustic normal modes are coupled to evaluate the stability theory developed and popularized by Culick. The assumption of a linear, non-dissipative wave in a quiescent fluid remains valid while acoustic amplitudes are small and local gas velocities stay below Mach 0.2. The current study employs the COMSOL Multiphysics finite element framework to model the steady flow-field parameters and acoustic normal modes of a generic SRM. This work builds upon previous efforts to verify the use of the acoustic velocity potential equation (AVPE) laid out by Campos. The acoustic velocity potential ψ, describing the acoustic wave motion in the presence of an inhomogeneous steady high-speed flow, is defined by ∇²ψ − (λ/c)²ψ − M·[M·∇(∇ψ)] − 2(λM/c + M·∇M)·∇ψ − 2λψ[M·∇(1/c)] = 0, with M the Mach vector, c the speed of sound, and λ the complex eigenvalue. The study requires one way coupling of the CFD High Mach Number Flow (HMNF
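
    For reference, the reconstructed AVPE can be written in display form as below; setting the Mach vector to zero shows how it reduces to the familiar Helmholtz-type equation for a quiescent medium, which is the regime in which the legacy one-dimensional stability analysis mentioned above remains valid. The reconstruction of the garbled operators in the record is a best-effort reading, not a verified transcription of the paper.

    ```latex
    \nabla^{2}\psi-\left(\frac{\lambda}{c}\right)^{2}\psi
    -\mathbf{M}\cdot\left[\mathbf{M}\cdot\nabla\left(\nabla\psi\right)\right]
    -2\left(\frac{\lambda\mathbf{M}}{c}+\mathbf{M}\cdot\nabla\mathbf{M}\right)\cdot\nabla\psi
    -2\lambda\psi\left[\mathbf{M}\cdot\nabla\!\left(\frac{1}{c}\right)\right]=0,
    \qquad
    \mathbf{M}\rightarrow 0:\quad \nabla^{2}\psi-\left(\frac{\lambda}{c}\right)^{2}\psi=0.
    ```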

  14. Multiphysics numerical models of resurgent calderas ground deformation: The 1982-2010 Campi Flegrei (Southern Italy) case studies

    NASA Astrophysics Data System (ADS)

    Tizzani, Pietro

    2013-04-01

    Ground deformation signals in caldera regions are the expression of near-surface and/or deep-seated physical processes. As in most geophysical analyses, the interpretation of the deformation data is usually performed by setting up inverse problems, which often use Monte Carlo optimization techniques such as Simulated Annealing and Genetic Algorithms, in order to constrain the nature of the causative sources at depth. Usually, these methods explore the problem's solution space by iterating forward analytical models, which consider simplified geometries and homogeneous linear elastic material properties. However, several recent studies have shown that oversimplified forward models may lead to misinterpretations of the retrieved source parameters. To overcome these limitations we consider the Finite Element (FE) method as a powerful numerical tool that allows implementing models with complex geometries, material heterogeneities, as well as time dependent physical processes. For this reason, FE models are a suitable candidate to fill the gap between the accuracy achieved in the observation of ground deformation in volcanic areas and the models used for its interpretation. In this context, we investigate the driving forces responsible for the long-term ground deformation of the Campi Flegrei (CF) caldera, Southern Italy, during the 1982-2010 time interval. To this purpose, we propose a new multiphysics numerical model that takes into account both the mechanical heterogeneities of the crust and the thermal conditions of the geothermal system beneath the volcano. We perform a numerical Chain Rule Optimization Procedure (CROP) in a FEM environment, which considers different physical contexts linked along a common evolution line: starting from the thermal properties and mechanical heterogeneities of the upper crust, we develop a 3D time dependent thermo-fluid dynamic model of the CF caldera. More specifically, by carrying out two subsequent optimization procedures based on

  15. Advanced multiphysics coupling for LWR fuel performance analysis

    SciTech Connect

    Hales, J. D.; Tonks, M. R.; Gleicher, F. N.; Spencer, B. W.; Novascone, S. R.; Williamson, R. L.; Pastore, G.; Perez, D. M.

    2015-10-01

    Even the most basic nuclear fuel analysis is a multiphysics undertaking, as a credible simulation must consider at a minimum coupled heat conduction and mechanical deformation. The need for more realistic fuel modeling under a variety of conditions invariably leads to a desire to include coupling between a more complete set of the physical phenomena influencing fuel behavior, including neutronics, thermal hydraulics, and mechanisms occurring at lower length scales. This paper covers current efforts toward coupled multiphysics LWR fuel modeling in three main areas. The first area covered in this paper concerns thermomechanical coupling. The interaction of these two physics, particularly related to the feedback effect associated with heat transfer and mechanical contact at the fuel/clad gap, provides numerous computational challenges. An outline is provided of an effective approach used to manage the nonlinearities associated with an evolving gap in BISON, a nuclear fuel performance application. A second type of multiphysics coupling described here is that of coupling neutronics with thermomechanical LWR fuel performance. DeCART, a high-fidelity core analysis program based on the method of characteristics, has been coupled to BISON. DeCART provides sub-pin level resolution of the multigroup neutron flux, with resonance treatment, during a depletion or a fast transient simulation. Two-way coupling between these codes was achieved by mapping fission rate density and fast neutron flux fields from DeCART to BISON and the temperature field from BISON to DeCART while employing a Picard iterative algorithm. Finally, the need for multiscale coupling is considered. Fission gas production and evolution significantly impact fuel performance by causing swelling, a reduction in the thermal conductivity, and fission gas release. The mechanisms involved occur at the atomistic and grain scale and are therefore not the domain of a fuel performance code. However, it is possible to use
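
    The two-way neutronics/fuel-performance coupling described above follows a Picard (fixed-point) pattern: one code maps a temperature field to a power field, the other maps power back to temperature, and the exchange is repeated until the transferred fields stop changing. A minimal sketch of that loop follows; the stand-in solver functions, relaxation factor, and numerical values are illustrative assumptions and do not represent the actual DeCART/BISON interfaces.

    ```python
    # Minimal Picard (fixed-point) coupling loop between two single-physics
    # "solvers": a neutronics stand-in that returns power given temperature
    # (Doppler-like feedback) and a thermal stand-in that returns temperature
    # given power. All models and numbers are illustrative only.
    import numpy as np

    def neutronics_solve(temperature):
        """Stand-in transport solve: power decreases as fuel temperature rises."""
        return 200.0 / (1.0 + 1.0e-3 * (temperature - 600.0))

    def fuel_performance_solve(power):
        """Stand-in thermal solve: temperature rises with local power."""
        return 600.0 + 2.0 * power

    def picard_coupling(n_cells=10, tol=1e-6, max_iters=50, relax=0.7):
        temperature = np.full(n_cells, 600.0)             # initial guess
        for k in range(max_iters):
            power = neutronics_solve(temperature)          # field: code 1 -> code 2
            new_temperature = fuel_performance_solve(power)  # field mapped back
            change = np.max(np.abs(new_temperature - temperature))
            # Under-relaxation often stabilizes Picard iterations between codes.
            temperature = (1 - relax) * temperature + relax * new_temperature
            if change < tol:
                return temperature, k + 1
        return temperature, max_iters

    T, iters = picard_coupling()
    print(iters, T[0])
    ```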

  16. Multi-Physics Analysis of the Fermilab Booster RF Cavity

    SciTech Connect

    Awida, M.; Reid, J.; Yakovlev, V.; Lebedev, V.; Khabiboulline, T.; Champion, M.; /Fermilab

    2012-05-14

    After about 40 years of operation the RF accelerating cavities in the Fermilab Booster need an upgrade to improve their reliability and to increase the repetition rate in order to support a future experimental program. An increase in the repetition rate from 7 to 15 Hz entails increasing the power dissipation in the RF cavities, their ferrite loaded tuners, and HOM dampers. The increased duty factor requires careful modelling of the RF heating effects in the cavity. A multi-physics analysis investigating both the RF and thermal properties of the Booster cavity under various operating conditions is presented in this paper.

  17. Advanced multiphysics coupling for LWR fuel performance analysis

    DOE PAGES

    Hales, J. D.; Tonks, M. R.; Gleicher, F. N.; Spencer, B. W.; Novascone, S. R.; Williamson, R. L.; Pastore, G.; Perez, D. M.

    2015-10-01

    Even the most basic nuclear fuel analysis is a multiphysics undertaking, as a credible simulation must consider at a minimum coupled heat conduction and mechanical deformation. The need for more realistic fuel modeling under a variety of conditions invariably leads to a desire to include coupling between a more complete set of the physical phenomena influencing fuel behavior, including neutronics, thermal hydraulics, and mechanisms occurring at lower length scales. This paper covers current efforts toward coupled multiphysics LWR fuel modeling in three main areas. The first area covered in this paper concerns thermomechanical coupling. The interaction of these two physics, particularly related to the feedback effect associated with heat transfer and mechanical contact at the fuel/clad gap, provides numerous computational challenges. An outline is provided of an effective approach used to manage the nonlinearities associated with an evolving gap in BISON, a nuclear fuel performance application. A second type of multiphysics coupling described here is that of coupling neutronics with thermomechanical LWR fuel performance. DeCART, a high-fidelity core analysis program based on the method of characteristics, has been coupled to BISON. DeCART provides sub-pin level resolution of the multigroup neutron flux, with resonance treatment, during a depletion or a fast transient simulation. Two-way coupling between these codes was achieved by mapping fission rate density and fast neutron flux fields from DeCART to BISON and the temperature field from BISON to DeCART while employing a Picard iterative algorithm. Finally, the need for multiscale coupling is considered. Fission gas production and evolution significantly impact fuel performance by causing swelling, a reduction in the thermal conductivity, and fission gas release. The mechanisms involved occur at the atomistic and grain scale and are therefore not the domain of a fuel performance code. However, it is

  18. High-Fidelity Space-Time Adaptive Multiphysics Simulations in Nuclear Engineering

    SciTech Connect

    Solin, Pavel; Ragusa, Jean

    2014-03-09

    We delivered a series of fundamentally new computational technologies that have the potential to significantly advance the state-of-the-art of computer simulations of transient multiphysics nuclear reactor processes. These methods were implemented in the form of a C++ library, and applied to a number of multiphysics coupled problems relevant to nuclear reactor simulations.

  19. A multiphysics and multiscale software environment for modeling astrophysical systems

    NASA Astrophysics Data System (ADS)

    Portegies Zwart, Simon; McMillan, Steve; Harfst, Stefan; Groen, Derek; Fujii, Michiko; Nualláin, Breanndán Ó.; Glebbeek, Evert; Heggie, Douglas; Lombardi, James; Hut, Piet; Angelou, Vangelis; Banerjee, Sambaran; Belkus, Houria; Fragos, Tassos; Fregeau, John; Gaburov, Evghenii; Izzard, Rob; Jurić, Mario; Justham, Stephen; Sottoriva, Andrea; Teuben, Peter; van Bever, Joris; Yaron, Ofer; Zemp, Marcel

    2009-05-01

    We present MUSE, a software framework for combining existing computational tools for different astrophysical domains into a single multiphysics, multiscale application. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for studying generalized stellar systems. We have now reached a "Noah's Ark" milestone, with (at least) two available numerical solvers for each domain. MUSE can treat multiscale and multiphysics systems in which the time- and size-scales are well separated, like simulating the evolution of planetary systems, small stellar associations, dense stellar clusters, galaxies and galactic nuclei. In this paper we describe three examples calculated using MUSE: the merger of two galaxies, the merger of two evolving stars, and a hybrid N-body simulation. In addition, we demonstrate an implementation of MUSE on a distributed computer which may also include special-purpose hardware, such as GRAPEs or GPUs, to accelerate computations. The current MUSE code base is publicly available as open source at http://muse.li.

  20. Multi-physics computational grains (MPCGs) for direct numerical simulation (DNS) of piezoelectric composite/porous materials and structures

    NASA Astrophysics Data System (ADS)

    Bishay, Peter L.; Dong, Leiting; Atluri, Satya N.

    2014-11-01

    Conceptually simple and computationally most efficient polygonal computational grains with voids/inclusions are proposed for the direct numerical simulation of the micromechanics of piezoelectric composite/porous materials with non-symmetrical arrangement of voids/inclusions. These are named "Multi-Physics Computational Grains" (MPCGs) because each "mathematical grain" is geometrically similar to the irregular shapes of the physical grains of the material in the micro-scale. So each MPCG element represents a grain of the matrix of the composite and can include a pore or an inclusion. MPCG is based on assuming independent displacements and electric-potentials in each cell. The trial solutions in each MPCG do not need to satisfy the governing differential equations, however, they are still complete, and can efficiently model concentration of electric and mechanical fields. MPCG can be used to model any generally anisotropic material as well as nonlinear problems. The essential idea can also be easily applied to accurately solve other multi-physical problems, such as complex thermal-electro-magnetic-mechanical materials modeling. Several examples are presented to show the capabilities of the proposed MPCGs and their accuracy.

  1. Fracture Characterization through Multi-Physics Joint Inversion

    NASA Astrophysics Data System (ADS)

    Finsterle, S.; Edmiston, J. K.; Zhang, Y.

    2014-12-01

    Natural and man-made fractures tend to significantly impact the behavior of a subsurface system - with both desirable and undesirable consequences. Thus, the description, characterization, and prediction of fractured systems require careful conceptualization and a defensible modeling approach that is tailored to the objectives of a specific application. We review some of these approaches and the related data needs, and discuss the use of multi-physics joint inversion techniques to identify and characterize the relevant features of the fracture system. In particular, we demonstrate the potential use of a non-isothermal, multiphase flow simulator coupled to a thermo-poro-elastic model for the calculation of observable deformations during injection-production operations. This model is integrated into a joint inversion framework for the estimation of geometrical, hydrogeological, rock-mechanical, thermal, and statistical parameters representing the fractured porous medium.

  2. Multiscale Multiphysics Developments for Accident Tolerant Fuel Concepts

    SciTech Connect

    Gamble, K. A.; Hales, J. D.; Yu, J.; Zhang, Y.; Bai, X.; Andersson, D.; Patra, A.; Wen, W.; Tome, C.; Baskes, M.; Martinez, E.; Stanek, C. R.; Miao, Y.; Ye, B.; Hofman, G. L.; Yacout, A. M.; Liu, W.

    2015-09-01

    U3Si2 and iron-chromium-aluminum (Fe-Cr-Al) alloys are two of many proposed accident-tolerant fuel concepts for the fuel and cladding, respectively. The behavior of these materials under normal operating and accident reactor conditions is not well known. As part of the Department of Energy’s Accident Tolerant Fuel High Impact Problem program significant work has been conducted to investigate the U3Si2 and FeCrAl behavior under reactor conditions. This report presents the multiscale and multiphysics effort completed in fiscal year 2015. The report is split into four major categories including Density Functional Theory Developments, Molecular Dynamics Developments, Mesoscale Developments, and Engineering Scale Developments. The work shown here is a compilation of a collaborative effort between Idaho National Laboratory, Los Alamos National Laboratory, Argonne National Laboratory and Anatech Corp.

  3. Solid Oxide Fuel Cell - Multi-Physics and GUI

    SciTech Connect

    2013-10-10

    SOFC-MP is a simulation tool developed at PNNL to evaluate the tightly coupled multi-physical phenomena in SOFCs. The purpose of the tool is to allow SOFC manufacturers to numerically test changes in planar stack design to meet DOE technical targets. The SOFC-MP 2D module is designed for computational efficiency to enable rapid engineering evaluations for operation of tall symmetric stacks. It can quickly compute distributions for the current density, voltage, temperature, and species composition in tall stacks with co-flow or counter-flow orientations. The 3D module computes distributions in the entire 3D domain and handles all planar configurations: co-flow, counter-flow, and cross-flow. The detailed data from 3D simulation can be used as input for structural analysis. The SOFC-MP GUI integrates both 2D and 3D modules, and it provides user-friendly pre-processing and post-processing capabilities.

  4. Actuating the deformable mirror: a multiphysics design approach

    NASA Astrophysics Data System (ADS)

    Del Vecchio, Ciro; Biasi, Roberto; Gallieni, Daniele; Riccardi, Armando; Spairani, Roberto

    2008-07-01

    The crucial component of an Adaptive Optics unit is the actuation system of the deformable mirror. One possible implementation comprehends a linear force motor and a capacitive sensor providing the feedback measure signal. Due to the extreme accuracy required by the optics, a proper design of the actuator is essential in order to fulfill the specifications. In the device, mechanics, electrostatics, electromagnetism and thermal effects are mutually related, and they have to be properly considered in the design phase. This paper analyzes such a multiphysics behavior of the actuation system, providing an inter-disciplinary approach able to define the optimized device: a capacitive sensor measuring the displacements at the nanometer accuracy and a closed loop linear motor delivering the requested force with the lowest possible power dissipation, in order to minimize the degrading of the optical waves propagation.

  5. Multiphysics modeling of the steel continuous casting process

    NASA Astrophysics Data System (ADS)

    Hibbeler, Lance C.

    This work develops a macroscale, multiphysics model of the continuous casting of steel. The complete model accounts for the turbulent flow and nonuniform distribution of superheat in the molten steel, the elastic-viscoplastic thermal shrinkage of the solidifying shell, the heat transfer through the shell-mold interface with variable gap size, and the thermal distortion of the mold. These models are coupled together with carefully constructed boundary conditions with the aid of reduced-order models into a single tool to investigate behavior in the mold region, for practical applications such as predicting ideal tapers for a beam-blank mold. The thermal and mechanical behaviors of the mold are explored as part of the overall modeling effort, for funnel molds and for beam-blank molds. These models include high geometric detail and reveal temperature variations on the mold-shell interface that may be responsible for cracks in the shell. Specifically, the funnel mold has a column of mold bolts in the middle of the inside-curve region of the funnel that disturbs the uniformity of the hot face temperatures, which combined with the bending effect of the mold on the shell, can lead to longitudinal facial cracks. The shoulder region of the beam-blank mold shows a local hot spot that can be reduced with additional cooling in this region. The distorted shape of the funnel mold narrow face is validated with recent inclinometer measurements from an operating caster. The calculated hot face temperatures and distorted shapes of the mold are transferred into the multiphysics model of the solidifying shell. The boundary conditions for the first iteration of the multiphysics model come from reduced-order models of the process; one such model is derived in this work for mold heat transfer. The reduced-order model relies on the physics of the solution to the one-dimensional heat-conduction equation to maintain the relationships between inputs and outputs of the model. The geometric
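
    The flavor of such a one-dimensional reduced-order treatment of mold heat transfer can be sketched with a steady conduction-plus-convection estimate of the hot-face temperature; all property and operating values below are illustrative assumptions, not data from the thesis.

      # Steady 1D heat path: shell -> mold hot face -> conduction through copper
      # -> convection into the cooling water.
      q = 2.0e6         # heat flux entering the mold hot face, W/m^2 (assumed)
      k_cu = 350.0      # copper conductivity, W/(m K)
      L = 0.025         # hot face to cooling channel distance, m (assumed)
      h_w = 30000.0     # water-side convective coefficient, W/(m^2 K) (assumed)
      T_water = 35.0    # cooling-water temperature, C (assumed)

      T_cold = T_water + q / h_w        # cold-face temperature from convection
      T_hot = T_cold + q * L / k_cu     # hot-face temperature from conduction
      print(f"cold face ~ {T_cold:.0f} C, hot face ~ {T_hot:.0f} C")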

  6. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    SciTech Connect

    Tong, C H; Graziani, F R

    2007-02-02

    Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics application, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
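
    As a minimal illustration of the variance-based viewpoint behind such a study, the sketch below estimates first-order (Sobol-type) sensitivity indices for a toy model with independent uniform inputs using a pick-freeze Monte Carlo estimator; it is a generic stand-in and is unrelated to the PSUADE implementation mentioned above.

      import numpy as np

      def model(x):
          # Toy response: strongly driven by x0, weakly by x1, independent of x2.
          return 4.0 * x[:, 0] + 0.5 * x[:, 1] ** 2

      def first_order_sobol(model, dim, n=20000, seed=0):
          rng = np.random.default_rng(seed)
          A = rng.uniform(size=(n, dim))
          B = rng.uniform(size=(n, dim))
          fA, fB = model(A), model(B)
          var_y = np.var(np.concatenate([fA, fB]))
          indices = []
          for i in range(dim):
              ABi = A.copy()
              ABi[:, i] = B[:, i]                 # "pick-freeze": swap one column
              fABi = model(ABi)
              # Estimator of Var(E[Y | X_i]) / Var(Y)
              indices.append(np.mean(fB * (fABi - fA)) / var_y)
          return indices

      print(first_order_sobol(model, dim=3))      # roughly [0.98, 0.02, 0.0]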

  7. Reliability-based design optimization of multiphysics, aerospace systems

    NASA Astrophysics Data System (ADS)

    Allen, Matthew R.

    Aerospace systems are inherently plagued by uncertainties in their design, fabrication, and operation. Safety factors and expensive testing at the prototype level traditionally account for these uncertainties. Reliability-based design optimization (RBDO) can drastically decrease life-cycle development costs by accounting for the stochastic nature of the system response in the design process. The reduction in cost is amplified for conceptually new designs, for which no accepted safety factors currently exist. Aerospace systems often operate in environments dominated by multiphysics phenomena, such as the fluid-structure interaction of aeroelastic wings or the electrostatic-mechanical interaction of sensors and actuators. The analysis of such phenomena is generally complex and computationally expensive, and therefore is usually simplified or approximated in the design process. However, this leads to significant epistemic uncertainties in modeling, which may dominate the uncertainties for which the reliability analysis was intended. Therefore, the goal of this thesis is to present a RBDO framework that utilizes high-fidelity simulation techniques to minimize the modeling error for multiphysics phenomena. A key component of the framework is an extended reduced order modeling (EROM) technique that can analyze various states in the design or uncertainty parameter space at a reduced computational cost, while retaining characteristics of high-fidelity methods. The computational framework is verified and applied to the RBDO of aeroelastic systems and electrostatically driven sensors and actuators, utilizing steady-state analysis and design criteria. The framework is also applied to the design of electrostatic devices with transient criteria, which requires the use of the EROM technique to overcome the computational burden of multiple transient analyses.

  8. Developing Discontinuous Galerkin Methods for Solving Multiphysics Problems in General Relativity

    NASA Astrophysics Data System (ADS)

    Kidder, Lawrence; Field, Scott; Teukolsky, Saul; Foucart, Francois; SXS Collaboration

    2016-03-01

    Multi-messenger observations of the merger of black hole-neutron star and neutron star-neutron star binaries, and of supernova explosions will probe fundamental physics inaccessible to terrestrial experiments. Modeling these systems requires a relativistic treatment of hydrodynamics, including magnetic fields, as well as neutrino transport and nuclear reactions. The accuracy, efficiency, and robustness of current codes that treat all of these problems are not sufficient to keep up with the observational needs. We are building a new numerical code that uses the Discontinuous Galerkin method with a task-based parallelization strategy, a promising combination that will allow multiphysics applications to be treated both accurately and efficiently on petascale and exascale machines. The code will scale to more than 100,000 cores for efficient exploration of the parameter space of potential sources and allowed physics, and the high-fidelity predictions needed to realize the promise of multi-messenger astronomy. I will discuss the current status of the development of this new code.

  9. Assessment of PCMI Simulation Using the Multidimensional Multiphysics BISON Fuel Performance Code

    SciTech Connect

    Stephen R. Novascone; Jason D. Hales; Benjamin W. Spencer; Richard L. Williamson

    2012-09-01

    irradiation level, while the power at the top of the rod is at about 20% of the base irradiation power level. 2D BISON simulations of the Bump Test GE7 were run using both discrete and smeared pellet geometry. Comparisons between these calculations and experimental measurements are presented for clad diameter and elongation after the base irradiation and clad profile along the length of the test section after the bump test. Preliminary comparisons between calculations and measurements are favorable, supporting the use of BISON as an accurate multiphysics fuel simulation tool.

  10. An introduction to LIME 1.0 and its use in coupling codes for multiphysics simulations.

    SciTech Connect

    Belcourt, Noel; Pawlowski, Roger Patrick; Schmidt, Rodney Cannon; Hooper, Russell Warren

    2011-11-01

    LIME is a small software package for creating multiphysics simulation codes. The name was formed as an acronym denoting 'Lightweight Integrating Multiphysics Environment for coupling codes.' LIME is intended to be especially useful when separate computer codes (which may be written in any standard computer language) already exist to solve different parts of a multiphysics problem. LIME provides the key high-level software (written in C++), a well defined approach (with example templates), and interface requirements to enable the assembly of multiple physics codes into a single coupled-multiphysics simulation code. In this report we introduce important software design characteristics of LIME, describe key components of a typical multiphysics application that might be created using LIME, and provide basic examples of its use - including the customized software that must be written by a user. We also describe the types of modifications that may be needed to individual physics codes in order for them to be incorporated into a LIME-based multiphysics application.

  11. Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials

    NASA Technical Reports Server (NTRS)

    Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar

    2015-01-01

    The objective of this proposed project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials and develop experimental methods to provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, useable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate. Examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These types of additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometries of the final structure, assessing the impact of process parameters and predicting optimized conditions with numerical modeling as an effective prediction tool is necessary. The targets for the processing are multiple and at different spatial scales, and the physical phenomena associated occur in multiphysics and multiscale. In this project, the research work has been developed to model AAM processes in a multiscale and multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes. A sequentially coupled, thermomechanical, finite element model was developed and validated experimentally. The results showed the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with mushy zone, incompressible free surface flow, solute redistribution, and surface tension. Because of excessive computing time needed, a parallel computing approach was also tested. In addition

  12. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized with high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via the subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL
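
    The record is cut off above, but the subspace-construction idea it describes can be illustrated with a short snapshot-based sketch: a discrete Karhunen-Loeve (proper orthogonal decomposition) basis obtained from the SVD of synthetic solution snapshots. The data and tolerances below are illustrative assumptions only.

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 200)
      # Synthetic "high-dimensional" snapshots driven by two smooth modes plus noise.
      snapshots = np.column_stack([
          a * np.sin(np.pi * x) + b * np.sin(2.0 * np.pi * x)
          + 0.01 * rng.normal(size=x.size)
          for a, b in rng.uniform(-1.0, 1.0, size=(30, 2))
      ])

      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      energy = np.cumsum(s**2) / np.sum(s**2)
      rank = int(np.searchsorted(energy, 0.999) + 1)   # modes holding 99.9% energy
      basis = U[:, :rank]                              # reduced subspace
      coeffs = basis.T @ snapshots[:, 0]               # project one snapshot
      err = np.linalg.norm(basis @ coeffs - snapshots[:, 0])
      print(f"reduced dimension = {rank}, reconstruction error = {err:.2e}")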

  13. Multiscale Multiphysics and Multidomain Models I: Basic Theory

    PubMed Central

    Wei, Guo-Wei

    2013-01-01

    This work extends our earlier two-domain formulation of a differential geometry based multiscale paradigm into a multidomain theory, which endows us the ability to simultaneously accommodate multiphysical descriptions of aqueous chemical, physical and biological systems, such as fuel cells, solar cells, nanofluidics, ion channels, viruses, RNA polymerases, molecular motors and large macromolecular complexes. The essential idea is to make use of the differential geometry theory of surfaces as a natural means to geometrically separate the macroscopic domain of solvent from the microscopic domain of solute, and dynamically couple continuum and discrete descriptions. Our main strategy is to construct energy functionals to put on an equal footing of multiphysics, including polar (i.e., electrostatic) solvation, nonpolar solvation, chemical potential, quantum mechanics, fluid mechanics, molecular mechanics, coarse grained dynamics and elastic dynamics. The variational principle is applied to the energy functionals to derive desirable governing equations, such as multidomain Laplace-Beltrami (LB) equations for macromolecular morphologies, multidomain Poisson-Boltzmann (PB) equation or Poisson equation for electrostatic potential, generalized Nernst-Planck (NP) equations for the dynamics of charged solvent species, generalized Navier-Stokes (NS) equation for fluid dynamics, generalized Newton's equations for molecular dynamics (MD) or coarse-grained dynamics and equation of motion for elastic dynamics. Unlike the classical PB equation, our PB equation is an integral-differential equation due to solvent-solute interactions. To illustrate the proposed formalism, we have explicitly constructed three models, a multidomain solvation model, a multidomain charge transport model and a multidomain chemo-electro-fluid-MD-elastic model. Each solute domain is equipped with distinct surface tension, pressure, dielectric function, and charge density distribution. In addition to long

  14. A Multiphysics and Multiscale Software Environment for Modeling Astrophysical Systems

    NASA Astrophysics Data System (ADS)

    Portegies Zwart, Simon; McMillan, Steve; O'Nualláin, Breanndán; Heggie, Douglas; Lombardi, James; Hut, Piet; Banerjee, Sambaran; Belkus, Houria; Fragos, Tassos; Fregeau, John; Fuji, Michiko; Gaburov, Evghenii; Glebbeek, Evert; Groen, Derek; Harfst, Stefan; Izzard, Rob; Jurić, Mario; Justham, Stephen; Teuben, Peter; van Bever, Joris; Yaron, Ofer; Zemp, Marcel

    We present MUSE, a software framework for tying together existing computational tools for different astrophysical domains into a single multiphysics, multiscale workload. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly-coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for a generalized stellar systems workload. MUSE has now reached a "Noah's Ark" milestone, with two available numerical solvers for each domain. MUSE can treat small stellar associations, galaxies and everything in between, including planetary systems, dense stellar clusters and galactic nuclei. Here we demonstrate an example calculated with MUSE: the merger of two galaxies. In addition, we demonstrate MUSE running on a distributed computer. The current MUSE code base is publicly available as open source at http://muse.li.

  15. Multiphysics methods development for high temperature gas reactor analysis

    NASA Astrophysics Data System (ADS)

    Seker, Volkan

    Multiphysics computational methods were developed to perform design and safety analysis of the next generation Pebble Bed High Temperature Gas Cooled Reactors. A suite of code modules was developed to solve the coupled thermal-hydraulics and neutronics field equations. The thermal-hydraulics module is based on the three dimensional solution of the mass, momentum and energy equations in cylindrical coordinates within the framework of the porous media method. The neutronics module is a part of the PARCS (Purdue Advanced Reactor Core Simulator) code and provides a fine mesh finite difference solution of the neutron diffusion equation in three dimensional cylindrical coordinates. Coupling of the two modules was performed by mapping the solution variables from one module to the other. Mapping is performed automatically in the code system by the use of a common material mesh in both modules. The standalone validation of the thermal-hydraulics module was performed with several cases of the SANA experiment and the standalone thermal-hydraulics exercise of the PBMR-400 benchmark problem. The standalone neutronics module was validated by performing the relevant exercises of the PBMR-268 and PBMR-400 benchmark problems. Additionally, the validation of the coupled code system was performed by analyzing several steady state and transient cases of the OECD/NEA PBMR-400 benchmark problem.

  16. Multiphysics/Multiscale Coupling of Microturbulence and MHD Equilibria

    NASA Astrophysics Data System (ADS)

    Lee, W. W.; Startsev, E. A.; Hudson, S. R.; Wang, W. X.; Ethier, S.

    2015-11-01

    We propose to investigate the multiphysics and multiscale coupling between a time-dependent gyrokinetic ``microscopic'' code for studying gyroradius-scale turbulence, associated with global ion-acoustic and shear-Alfven waves, and a ``macroscopic'' code for computing large-scale global equilibria based on the time-independent MHD equations, in order to identify a family of self-consistent global MHD equilibria that can minimize the electrostatic potentials responsible for turbulent transport by passing global parameters between the two codes. The codes involved are 1) the electromagnetic version of the GTS code for studying microturbulence, and 2) the SPEC code for calculating three-dimensional MHD equilibria with or without chaotic fields. This concept is based on a newly found correlation between the gyrokinetic evolution and the MHD equilibrium when the electrostatic potential vanishes. The proposed work involves the scales ranging from the electron skin depth to the machine size, and includes the physics of both gyrokinetics and MHD. This work is supported by US DoE # DE-AC02-09CH11466.

  17. Solid Oxide Fuel Cell - Multi-Physics and GUI

    2013-10-10

    SOFC-MP is a simulation tool developed at PNNL to evaluate the tightly coupled multi-physical phenomena in SOFCs. The purpose of the tool is to allow SOFC manufacturers to numerically test changes in planar stack design to meet DOE technical targets. The SOFC-MP 2D module is designed for computational efficiency to enable rapid engineering evaluations for operation of tall symmetric stacks. It can quickly compute distributions for the current density, voltage, temperature, and species composition in tall stacks with co-flow or counter-flow orientations. The 3D module computes distributions in the entire 3D domain and handles all planar configurations: co-flow, counter-flow, and cross-flow. The detailed data from 3D simulation can be used as input for structural analysis. The SOFC-MP GUI integrates both 2D and 3D modules, and it provides user-friendly pre-processing and post-processing capabilities.

  18. A General Framework for Multiphysics Modeling Based on Numerical Averaging

    NASA Astrophysics Data System (ADS)

    Lunati, I.; Tomin, P.

    2014-12-01

    In the last years, multiphysics (hybrid) modeling has attracted increasing attention as a tool to bridge the gap between pore-scale processes and a continuum description at the meter-scale (laboratory scale). This approach is particularly appealing for complex nonlinear processes, such as multiphase flow, reactive transport, density-driven instabilities, and geomechanical coupling. We present a general framework that can be applied to all these classes of problems. The method is based on ideas from the Multiscale Finite-Volume method (MsFV), which has been originally developed for Darcy-scale application. Recently, we have reformulated MsFV starting with a local-global splitting, which allows us to retain the original degree of coupling for the local problems and to use spatiotemporal adaptive strategies. The new framework is based on the simple idea that different characteristic temporal scales are inherited from different spatial scales, and the global and the local problems are solved with different temporal resolutions. The global (coarse-scale) problem is constructed based on a numerical volume-averaging paradigm and a continuum (Darcy-scale) description is obtained by introducing additional simplifications (e.g., by assuming that pressure is the only independent variable at the coarse scale, we recover an extended Darcy's law). We demonstrate that it is possible to adaptively and dynamically couple the Darcy-scale and the pore-scale descriptions of multiphase flow in a single conceptual and computational framework. Pore-scale problems are solved only in the active front region where fluid distribution changes with time. In the rest of the domain, only a coarse description is employed. This framework can be applied to other important problems such as reactive transport and crack propagation. As it is based on a numerical upscaling paradigm, our method can be used to explore the limits of validity of macroscopic models and to illuminate the meaning of

  19. Modelling transport phenomena in a multi-physics context

    NASA Astrophysics Data System (ADS)

    Marra, Francesco

    2015-01-01

    Innovative heating research on cooking, pasteurization/sterilization, defrosting, thawing and drying often focuses on areas which include the assessment of processing time, evaluation of heating uniformity, studying the impact on quality attributes of the final product as well as considering the energy efficiency of these heating processes. During the last twenty years, so-called electro-heating processes (radio-frequency - RF, microwaves - MW and ohmic - OH) gained a wide interest in industrial food processing, and many applications using the above mentioned technologies have been developed with the aim of reducing processing time, improving process efficiency and, in many cases, the heating uniformity. In the area of innovative heating, electro-heating accounts for a considerable portion of both the scientific literature and commercial applications, which can be subdivided into either direct electro-heating (as in the case of OH heating), where electrical current is applied directly to the food, or indirect electro-heating (e.g. MW and RF heating), where the electrical energy is first converted to electromagnetic radiation which subsequently generates heat within a product. New software packages, which ease the solution of PDE-based mathematical models, and new computers, with larger RAM and more efficient CPUs, have allowed growing interest in modelling transport phenomena in systems and processes - such as those encountered in food processing - that can be complex in terms of geometry, composition and boundary conditions, but also - as in the case of electro-heating assisted applications - in terms of interaction with other physical phenomena such as displacement of electric or magnetic fields. This paper describes approaches used in modelling transport phenomena in a multi-physics context such as RF, MW and OH assisted heating.

  20. Modelling transport phenomena in a multi-physics context

    SciTech Connect

    Marra, Francesco

    2015-01-22

    Innovative heating research on cooking, pasteurization/sterilization, defrosting, thawing and drying often focuses on areas which include the assessment of processing time, evaluation of heating uniformity, studying the impact on quality attributes of the final product as well as considering the energy efficiency of these heating processes. During the last twenty years, so-called electro-heating processes (radio-frequency - RF, microwaves - MW and ohmic - OH) gained a wide interest in industrial food processing, and many applications using the above mentioned technologies have been developed with the aim of reducing processing time, improving process efficiency and, in many cases, the heating uniformity. In the area of innovative heating, electro-heating accounts for a considerable portion of both the scientific literature and commercial applications, which can be subdivided into either direct electro-heating (as in the case of OH heating), where electrical current is applied directly to the food, or indirect electro-heating (e.g. MW and RF heating), where the electrical energy is first converted to electromagnetic radiation which subsequently generates heat within a product. New software packages, which ease the solution of PDE-based mathematical models, and new computers, with larger RAM and more efficient CPUs, have allowed growing interest in modelling transport phenomena in systems and processes - such as those encountered in food processing - that can be complex in terms of geometry, composition and boundary conditions, but also - as in the case of electro-heating assisted applications - in terms of interaction with other physical phenomena such as displacement of electric or magnetic fields. This paper describes approaches used in modelling transport phenomena in a multi-physics context such as RF, MW and OH assisted heating.

  1. Parallel Algorithms and Software for Nuclear, Energy, and Environmental Applications. Part II: Multiphysics Software

    SciTech Connect

    Derek Gaston; Luanjing Guo; Glen Hansen; Hai Huang; Richard Johnson; Dana Knoll; Chris Newman; Hyeong Kae Park; Robert Podgorney; Michael Tonks; Richard Williamson

    2012-09-01

    This paper is the second part of a two part sequence on multiphysics algorithms and software. The first [1] focused on the algorithms; this part treats the multiphysics software framework and applications based on it. Tight coupling is typically designed into the analysis application at inception, as such an application is strongly tied to a composite nonlinear solver that arrives at the final solution by treating all equations simultaneously. The application must also take care to minimize both time and space error between the physics, particularly if more than one mesh representation is needed in the solution process. This paper presents an application framework that was specifically designed to support tightly coupled multiphysics analysis. The Multiphysics Object Oriented Simulation Environment (MOOSE) is based on the Jacobian-free Newton-Krylov (JFNK) method combined with physics-based preconditioning to provide the underlying mathematical structure for applications. The report concludes with the presentation of a host of nuclear, energy, and environmental applications that demonstrate the efficacy of the approach and the utility of a well-designed multiphysics framework.
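
    The Jacobian-free Newton-Krylov idea at the heart of that framework can be illustrated with a toy coupled residual solved by SciPy's newton_krylov, which never assembles an explicit Jacobian; the residual below is a made-up two-field system and has nothing to do with the MOOSE API itself.

      import numpy as np
      from scipy.optimize import newton_krylov

      def residual(u):
          # Toy "two-physics" residual: a temperature-like field T coupled to a
          # reaction-like field c through source terms.
          n = u.size // 2
          T, c = u[:n], u[n:]
          rT = T - 1.0 - 0.1 * c        # "thermal" equations
          rc = c - 0.5 * T**2           # "reaction" equations
          return np.concatenate([rT, rc])

      u0 = np.zeros(20)                 # initial guess for both fields
      sol = newton_krylov(residual, u0, f_tol=1e-8)
      print("max residual:", np.abs(residual(sol)).max())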

  2. Verification of Multiphysics software: Space and time convergence studies for nonlinearly coupled applications

    SciTech Connect

    Jean C. Ragusa; Vijay Mahadevan; Vincent A. Mousseau

    2009-05-01

    High-fidelity modeling of nuclear reactors requires the solution of a nonlinear coupled multi-physics stiff problem with widely varying time and length scales that need to be resolved correctly. A numerical method that converges the implicit nonlinear terms to a small tolerance is often referred to as nonlinearly consistent (or tightly coupled). This nonlinear consistency is still lacking in the vast majority of coupling techniques today. We present a tightly coupled multiphysics framework that tackles this issue and present code-verification and convergence analyses in space and time for several models of nonlinear coupled physics.
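
    A minimal example of the kind of temporal convergence check used in such verification studies is shown below: integrate a problem with a known exact solution at several step sizes and recover the observed order of accuracy. The forward Euler scheme and test ODE are illustrative only.

      import math

      def forward_euler(f, y0, t_end, n_steps):
          dt, y, t = t_end / n_steps, y0, 0.0
          for _ in range(n_steps):
              y += dt * f(t, y)
              t += dt
          return y

      f = lambda t, y: -y                      # dy/dt = -y, exact solution exp(-t)
      exact = math.exp(-1.0)

      errors, steps = [], [40, 80, 160, 320]
      for n in steps:
          errors.append(abs(forward_euler(f, 1.0, 1.0, n) - exact))

      # Observed order p = log(e_coarse / e_fine) / log(refinement ratio)
      for e_coarse, e_fine in zip(errors, errors[1:]):
          print("observed order ~", math.log(e_coarse / e_fine) / math.log(2.0))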

  3. ACME algorithms for contact in a multiphysics environment API version 2.2.

    SciTech Connect

    Heinstein, Martin Wilhelm; Glass, Micheal W.; Gullerud, Arne S.; Brown, Kevin H.; Voth, Thomas Eugene; Jones, Reese E.

    2004-07-01

    An effort is underway at Sandia National Laboratories to develop a library of algorithms to search for potential interactions between surfaces represented by analytic and discretized topological entities. This effort is also developing algorithms to determine forces due to these interactions for transient dynamics applications. This document describes the Application Programming Interface (API) for the ACME (Algorithms for Contact in a Multiphysics Environment) library.

  4. Optimal Control of Thermo-Fluid Phenomena in Variable Domains

    NASA Astrophysics Data System (ADS)

    Volkov, Oleg; Protas, Bartosz

    2008-11-01

    This presentation concerns our continued research on adjoint-based optimization of viscous incompressible flows (the Navier-Stokes problem) coupled with heat conduction involving change of phase (the Stefan problem), and occurring in domains with variable boundaries. This problem is motivated by optimization of advanced welding techniques used in automotive manufacturing, where the goal is to determine an optimal heat input, so as to obtain a desired shape of the weld pool surface upon solidification. We argue that computation of sensitivities (gradients) in such free-boundary problems requires the use of the shape-differential calculus as a key ingredient. We also show that, with such tools available, the computational solution of the direct and inverse (optimization) problems can in fact be achieved in a similar manner and in a comparable computational time. Our presentation will address certain mathematical and computational aspects of the method. As an illustration we will consider the two-phase Stefan problem with contact point singularities where our approach allows us to obtain a thermodynamically consistent solution.

  5. Electronic properties of graphene: A multiphysics simulation approach

    NASA Astrophysics Data System (ADS)

    Sule, Nishant

    Graphene is a single atomic layer of hexagonally arranged carbon atoms. Since the experimental discovery of graphene in 2004, a wealth of research has been conducted on studying its electronic and optical properties, as well as on developing novel applications. To explain the typically observed electronic properties of graphene and to evaluate its potential in novel applications, it is vital to quantitatively examine the intrinsic limits and the influence of the dominant extrinsic factors on the electromagnetic response of this material. The two-dimensional nature of graphene makes it vulnerable to the influence of a host of extrinsic factors, such as the interface phonons from the supporting substrate and trapped charged impurities near the interface between graphene and the substrate. In this dissertation, the electronic transport properties of graphene are examined in detail using multiphysics numerical simulations. Specifically, the following three aspects are studied: electron-phonon scattering rates and the intrinsic mobility, the effect of clustered impurities on carrier transport, and substrate-dependent THz-frequency carrier transport. To calculate the electron-phonon scattering rates and predict the intrinsic mobility of graphene, the overlap between the electronic tight-binding Bloch wave functions (TB BWF), up to the third nearest neighbors, is used. Room-temperature carrier dynamics in suspended and supported graphene in the presence of different impurity distributions and densities is simulated using a numerical method that combines semiclassical carrier transport, using ensemble Monte-Carlo (EMC), with electrodynamics, using the finite-difference time-domain (FDTD) technique and molecular dynamics (MD). The electron-phonon scattering rates calculated using TB BWFs provide a better estimate of the "bare" acoustic and optical deformation potential constants (Dac = 12 eV, Dop = 5 x 10^9 eV cm^-1), while the intrinsic mobility calculated exceeds

  6. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    SciTech Connect

    Procassini, R.J.

    1997-12-31

    The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.

  7. Numerical Stability and Accuracy of Temporally Coupled Multi-Physics Modules in Wind-Turbine CAE Tools

    SciTech Connect

    Gasmi, A.; Sprague, M. A.; Jonkman, J. M.; Jones, W. B.

    2013-02-01

    In this paper we examine the stability and accuracy of numerical algorithms for coupling time-dependent multi-physics modules relevant to computer-aided engineering (CAE) of wind turbines. This work is motivated by an in-progress major revision of FAST, the National Renewable Energy Laboratory's (NREL's) premier aero-elastic CAE simulation tool. We employ two simple examples as test systems, while algorithm descriptions are kept general. Coupled-system governing equations are framed in monolithic and partitioned representations as differential-algebraic equations. Explicit and implicit loose partition coupling is examined. In explicit coupling, partitions are advanced in time from known information. In implicit coupling, there is dependence on other-partition data at the next time step; coupling is accomplished through a predictor-corrector (PC) approach. Numerical time integration of coupled ordinary-differential equations (ODEs) is accomplished with one of three, fourth-order fixed-time-increment methods: Runge-Kutta (RK), Adams-Bashforth (AB), and Adams-Bashforth-Moulton (ABM). Through numerical experiments it is shown that explicit coupling can be dramatically less stable and less accurate than simulations performed with the monolithic system. However, PC implicit coupling restored stability and fourth-order accuracy for ABM; only second-order accuracy was achieved with RK integration. For systems without constraints, explicit time integration with AB and explicit loose coupling exhibited desired accuracy and stability.
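
    The gap between monolithic and explicitly partitioned (loosely coupled) integration can be seen even in a toy two-field oscillator, as in the sketch below; the problem and integrators are illustrative stand-ins for the wind-turbine modules discussed above.

      import numpy as np

      def monolithic_rk4(dt, n):
          # Treat both fields simultaneously: classical RK4 on y' = A y.
          y = np.array([1.0, 0.0])                   # [u, v], exact: [cos t, sin t]
          A = np.array([[0.0, -1.0], [1.0, 0.0]])
          for _ in range(n):
              k1 = A @ y
              k2 = A @ (y + 0.5 * dt * k1)
              k3 = A @ (y + 0.5 * dt * k2)
              k4 = A @ (y + dt * k3)
              y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
          return y

      def explicit_partitioned(dt, n):
          # Each partition advances using the other partition's lagged value.
          u, v = 1.0, 0.0
          for _ in range(n):
              u_old = u
              u = u + dt * (-v)
              v = v + dt * u_old
          return np.array([u, v])

      t_end, dt = 10.0, 0.01
      n = int(t_end / dt)
      exact = np.array([np.cos(t_end), np.sin(t_end)])
      print("monolithic error :", np.linalg.norm(monolithic_rk4(dt, n) - exact))
      print("partitioned error:", np.linalg.norm(explicit_partitioned(dt, n) - exact))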

  8. The atmospheric component of the Mediterranean Sea water budget in a WRF multi-physics ensemble and observations

    NASA Astrophysics Data System (ADS)

    Di Luca, Alejandro; Flaounas, Emmanouil; Drobinski, Philippe; Brossier, Cindy Lebeaupin

    2014-11-01

    The use of high resolution atmosphere-ocean coupled regional climate models to study possible future climate changes in the Mediterranean Sea requires an accurate simulation of the atmospheric component of the water budget (i.e., evaporation, precipitation and runoff). A specific configuration of the version 3.1 of the weather research and forecasting (WRF) regional climate model was shown to systematically overestimate the Mediterranean Sea water budget mainly due to an excess of evaporation (~1,450 mm yr-1) compared with observed estimations (~1,150 mm yr-1). In this article, a 70-member multi-physics ensemble is used to try to understand the relative importance of various sub-grid scale processes in the Mediterranean Sea water budget and to evaluate its representation by comparing simulated results with observed-based estimates. The physics ensemble was constructed by performing 70 1-year long simulations using version 3.3 of the WRF model by combining six cumulus, four surface/planetary boundary layer and three radiation schemes. Results show that evaporation variability across the multi-physics ensemble (˜10 % of the mean evaporation) is dominated by the choice of the surface layer scheme that explains more than ˜70 % of the total variance and that the overestimation of evaporation in WRF simulations is generally related with an overestimation of surface exchange coefficients due to too large values of the surface roughness parameter and/or the simulation of too unstable surface conditions. Although the influence of radiation schemes on evaporation variability is small (˜13 % of the total variance), radiation schemes strongly influence exchange coefficients and vertical humidity gradients near the surface due to modifications of temperature lapse rates. The precipitation variability across the physics ensemble (˜35 % of the mean precipitation) is dominated by the choice of both cumulus (˜55 % of the total variance) and planetary boundary layer (˜32 % of
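
    The variance attribution quoted above can be mimicked with a one-way, ANOVA-style decomposition over a synthetic ensemble, as sketched below; the numbers of schemes and members and the synthetic evaporation values are illustrative assumptions, not WRF output.

      import numpy as np

      rng = np.random.default_rng(2)
      schemes = np.repeat(np.arange(4), 15)           # 4 surface-layer schemes x 15 members
      offsets = np.array([0.0, 0.3, -0.2, 0.5])       # scheme-dependent effect (assumed)
      evap = 1.3 + offsets[schemes] + 0.1 * rng.normal(size=schemes.size)

      total_var = np.var(evap)
      group_means = np.array([evap[schemes == s].mean() for s in range(4)])
      explained_var = np.var(group_means[schemes])    # variance of the fitted group means
      print("fraction of ensemble variance explained:", explained_var / total_var)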

  9. Micromechanical modeling of the multiphysical behavior of smart materials using the variational asymptotic method

    NASA Astrophysics Data System (ADS)

    Tang, Tian; Yu, Wenbin

    2009-12-01

    A multiphysics micromechanics model is developed to predict the effective properties as well as the local fields of periodic smart materials responsive to fully coupled electric, magnetic, thermal and mechanical fields. This work is based on the framework of the variational asymptotic method for unit cell homogenization (VAMUCH), a recently developed micromechanics modeling scheme. To treat the general microstructure of smart materials, we implemented this model using the finite element technique. Several examples of smart materials are used to demonstrate the application of the proposed model for prediction of multiphysical behavior. A preliminary version of this paper was presented at the 2008 ASME Conference on Smart Materials, Adaptive Structures and Intelligent Systems, Ellicott City, MD, USA.

  10. Analysis of image formation in optical coherence elastography using a multiphysics approach

    PubMed Central

    Chin, Lixin; Curatolo, Andrea; Kennedy, Brendan F.; Doyle, Barry J.; Munro, Peter R. T.; McLaughlin, Robert A.; Sampson, David D.

    2014-01-01

    Image formation in optical coherence elastography (OCE) results from a combination of two processes: the mechanical deformation imparted to the sample and the detection of the resulting displacement using optical coherence tomography (OCT). We present a multiphysics model of these processes, validated by simulating strain elastograms acquired using phase-sensitive compression OCE, and demonstrating close correspondence with experimental results. Using the model, we present evidence that the approximation commonly used to infer sample displacement in phase-sensitive OCE is invalidated for smaller deformations than has been previously considered, significantly affecting the measurement precision, as quantified by the displacement sensitivity and the elastogram signal-to-noise ratio. We show how the precision of OCE is affected not only by OCT shot-noise, as is usually considered, but additionally by phase decorrelation due to the sample deformation. This multiphysics model provides a general framework that could be used to compare and contrast different OCE techniques. PMID:25401007
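
    The displacement approximation in question is, in its simplest form, the standard phase-to-displacement relation d = lambda0 * delta_phi / (4 * pi * n); the short sketch below evaluates it for illustrative parameter values, which are assumptions and not taken from the paper.

      import math

      lambda0 = 1300e-9     # source center wavelength, m (assumed)
      n_sample = 1.4        # sample refractive index (assumed)
      delta_phi = 0.8       # measured phase difference between scans, rad (assumed)

      d = lambda0 * delta_phi / (4.0 * math.pi * n_sample)
      print(f"inferred axial displacement ~ {d * 1e9:.1f} nm")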

  11. Verification of a Multiphysics Toolkit against the Magnetized Target Fusion Concept

    NASA Technical Reports Server (NTRS)

    Thomas, Scott; Perrell, Eric; Liron, Caroline; Chiroux, Robert; Cassibry, Jason; Adams, Robert B.

    2005-01-01

    In the spring of 2004 the Advanced Concepts team at MSFC embarked on an ambitious project to develop a suite of modeling routines that would interact with one another. The tools would each numerically model a portion of any advanced propulsion system. The tools were divided by physics categories, hence the name multiphysics toolset. Currently most of the anticipated modeling tools have been created and integrated. Results are given in this paper for both a quarter nozzle with chemically reacting flow and the interaction of two plasma jets representative of a Magnetized Target Fusion device. The results have not been calibrated against real data as of yet, but this paper demonstrates the current capability of the multiphysics tool and planned future enhancements

  12. COMSOL-based Multiphysics Simulations to Support HFIR's Conversion to LEU Fuel

    SciTech Connect

    Jain, Prashant K; Freels, James D; Cook, David Howard

    2011-01-01

    In this paper, development of at least one form of the COMSOL-based modeling framework for the HFIR is presented, key simulation steps are identified and several milestones achieved towards a coupled multi-physics capability are highlighted. The COMSOL-based multi-physics simulation capability is able to answer the need for predictive 3D simulations of the HFIR's involute plate and channels. Step-by-step development and analyses of the COMSOL models for the single and multi-channels will lead towards the desired full-core simulation capability for the HFIR. With very few experiments planned to support the conversion process, these 3D simulations will become the basis for the nuclear safety analysis of the HFIR's LEU fuel core.

  13. Multiphysics design optimization for aerospace applications: Case study on helicopter loading hanger

    NASA Astrophysics Data System (ADS)

    Xue, Hui; Khawaja, H.; Moatamedi, M.

    2014-12-01

    This paper presents the Multiphysics technique applied in the design optimization of a loading hanger for an aerial crane. In this study, design optimization is applied on the geometric modelling of a part being used in an aerial crane operation. A set of dimensional and loading requirements are provided. Various geometric models are built using SolidWorks® Computer Aided Design (CAD) Package. In addition, Finite Element Method (FEM) is applied to study these geometric models using ANSYS® Multiphysics package. Appropriate material is chosen based on the strength to weight ratio. Efforts are made to optimize the geometry to reduce the weight of the part. Based on the achieved results, conclusions are drawn.

  14. Statistical modeling support for calibration of a multiphysics model of subcooled boiling flows

    SciTech Connect

    Bui, A. V.; Dinh, N. T.; Nourgaliev, R. R.; Williams, B. J.

    2013-07-01

    Nuclear reactor system analyses rely on multiple complex models which describe the physics of reactor neutronics, thermal hydraulics, structural mechanics, coolant physico-chemistry, etc. Such coupled multiphysics models require extensive calibration and validation before they can be used in practical system safety study and/or design/technology optimization. This paper presents an application of statistical modeling and Bayesian inference in calibrating an example multiphysics model of subcooled boiling flows which is widely used in reactor thermal hydraulic analysis. The presence of complex coupling of physics in such a model together with the large number of model inputs, parameters and multidimensional outputs poses significant challenge to the model calibration method. However, the method proposed in this work is shown to be able to overcome these difficulties while allowing data (observation) uncertainty and model inadequacy to be taken into consideration. (authors)
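
    A stripped-down version of such a calibration is sketched below: a single closure parameter is inferred from synthetic noisy observations with a Gaussian likelihood, a uniform prior, and a random-walk Metropolis sampler. The model, data, and prior are illustrative assumptions, not the boiling-flow model of the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      x_obs = np.linspace(0.0, 1.0, 20)
      theta_true, sigma = 2.0, 0.05
      y_obs = theta_true * x_obs**2 + rng.normal(0.0, sigma, x_obs.size)   # synthetic data

      def log_post(theta):
          if not 0.0 < theta < 10.0:                  # uniform prior on (0, 10)
              return -np.inf
          resid = y_obs - theta * x_obs**2
          return -0.5 * np.sum((resid / sigma) ** 2)  # Gaussian log-likelihood

      samples, theta, lp = [], 1.0, log_post(1.0)
      for _ in range(5000):
          prop = theta + rng.normal(0.0, 0.1)         # random-walk proposal
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept/reject
              theta, lp = prop, lp_prop
          samples.append(theta)

      post = np.array(samples[1000:])                 # discard burn-in
      print(f"posterior mean = {post.mean():.3f}, std = {post.std():.3f}")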

  15. Applications of ANSYS/Multiphysics at NASA/Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Loughlin, Jim

    2007-01-01

    This viewgraph presentation reviews some of the uses of the ANSYS/Multiphysics system at the NASA Goddard Space Flight Center. The ANSYS system is used for MEMS structural analysis of the Micro-mirror Array for the James Webb Space Telescope (JWST), the Micro-shutter Array for JWST, the MEMS FP Tunable Filter, and the Astro-E2 Micro-calorimeter. Various views of these projects are shown in this presentation.

  16. Multiphysics processes in partially saturated fractured rock: Experiments and models from Yucca Mountain

    NASA Astrophysics Data System (ADS)

    Rutqvist, Jonny; Tsang, Chin-Fu

    2012-09-01

    The site investigations at Yucca Mountain, Nevada, have provided us with an outstanding data set, one that has significantly advanced our knowledge of multiphysics processes in partially saturated fractured geological media. Such advancement was made possible, foremost, by substantial investments in multiyear field experiments that enabled the study of thermally driven multiphysics and testing of numerical models at a large spatial scale. The development of coupled-process models within the project have resulted in a number of new, advanced multiphysics numerical models that are today applied over a wide range of geoscientific research and geoengineering applications. Using such models, the potential impact of thermal-hydrological-mechanical (THM) multiphysics processes over the long-term (e.g., 10,000 years) could be predicted and bounded with some degree of confidence. The fact that the rock mass at Yucca Mountain is intensively fractured enabled continuum models to be used, although discontinuum models were also applied and are better suited for analyzing some issues, especially those related to predictions of rockfall within open excavations. The work showed that in situ tests (rather than small-scale laboratory experiments alone) are essential for determining appropriate input parameters for multiphysics models of fractured rocks, especially related to parameters defining how permeability might evolve under changing stress and temperature. A significant laboratory test program at Yucca Mountain also made important contributions to the field of rock mechanics, showing a unique relation between porosity and mechanical properties, a time dependency of strength that is significant for long-term excavation stability, a decreasing rock strength with sample size using very large core experiments, and a strong temperature dependency of the thermal expansion coefficient for temperatures up to 200°C. The analysis of in situ heater experiments showed that fracture

  17. Grading More Accurately

    ERIC Educational Resources Information Center

    Rom, Mark Carl

    2011-01-01

    Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…

  18. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
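
    A compact sketch of the underlying idea, monotonicity-preserving piecewise cubic Hermite interpolation with limited slopes, is given below. It uses a simple harmonic-mean (Fritsch-Butland style) limiter rather than the higher-order median-based limiters of the paper, so it illustrates only the monotonicity constraint, not the improved algorithms themselves.

      import numpy as np

      def monotone_cubic(x, y, xq):
          x, y, xq = np.asarray(x, float), np.asarray(y, float), np.asarray(xq, float)
          h = np.diff(x)
          d = np.diff(y) / h                             # secant slopes
          m = np.zeros_like(y)                           # limited derivative estimates
          m[0], m[-1] = d[0], d[-1]                      # one-sided end slopes
          for i in range(1, len(x) - 1):
              if d[i - 1] * d[i] > 0.0:                  # same sign: harmonic mean
                  m[i] = 2.0 * d[i - 1] * d[i] / (d[i - 1] + d[i])
              # otherwise keep m[i] = 0 at local extrema, preserving monotonicity
          k = np.clip(np.searchsorted(x, xq) - 1, 0, len(h) - 1)
          t = (xq - x[k]) / h[k]
          h00 = 2 * t**3 - 3 * t**2 + 1                  # cubic Hermite basis
          h10 = t**3 - 2 * t**2 + t
          h01 = -2 * t**3 + 3 * t**2
          h11 = t**3 - t**2
          return h00 * y[k] + h10 * h[k] * m[k] + h01 * y[k + 1] + h11 * h[k] * m[k + 1]

      x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
      y = np.array([0.0, 0.0, 1.0, 1.0, 2.0])            # monotone, step-like data
      xq = np.linspace(0.0, 4.0, 9)
      print(monotone_cubic(x, y, xq))                    # stays within [0, 2], no overshoot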

  19. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
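
    The accuracy-order behavior described above can be checked numerically with standard central-difference stencils, as in the brief sketch below (second- and fourth-order first-derivative formulas); this is a generic illustration, not one of the paper's specific algorithms.

      import numpy as np

      def d1_second(f, x, h):
          return (f(x + h) - f(x - h)) / (2.0 * h)

      def d1_fourth(f, x, h):
          return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12.0 * h)

      f, x0, exact = np.sin, 1.0, np.cos(1.0)
      for h in (0.1, 0.05, 0.025):
          e2 = abs(d1_second(f, x0, h) - exact)
          e4 = abs(d1_fourth(f, x0, h) - exact)
          print(f"h = {h:<6} 2nd-order error = {e2:.2e}   4th-order error = {e4:.2e}")
      # Halving h cuts the error by ~4x for the 2nd-order and ~16x for the 4th-order stencil.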

  20. Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP

    SciTech Connect

    Downar, Thomas; Seker, Volkan

    2013-04-30

    Radioactive gaseous fission products are released out of the fuel element at a significantly higher rate when the fuel temperature exceeds 1600°C in high-temperature gas-cooled reactors (HTGRs). Therefore, it is of paramount importance to accurately predict the peak fuel temperature during all operational and design-basis accident conditions. The current methods used to predict the peak fuel temperature in HTGRs, such as the Next-Generation Nuclear Plant (NGNP), estimate the average fuel temperature in a computational mesh modeling hundreds of fuel pebbles or a fuel assembly in a pebble-bed reactor (PBR) or prismatic block type reactor (PMR), respectively. Experiments conducted in operating HTGRs indicate considerable uncertainty in the current methods and correlations used to predict actual temperatures. The objective of this project is to improve the accuracy in the prediction of local "hot" spots by developing multi-scale, multi-physics methods and implementing them within the framework of established codes used for NGNP analysis. The multi-scale approach which this project will implement begins with defining suitable scales for a physical and mathematical model and then deriving and applying the appropriate boundary conditions between scales. The macro scale is the largest, describing the entire reactor, whereas the meso scale models only a fuel block in a prismatic reactor or tens to hundreds of pebbles in a pebble bed reactor. The smallest scale is the micro scale--the level of a fuel kernel of a pebble in a PBR or a fuel compact in a PMR--which needs to be resolved in order to calculate the peak temperature in a fuel kernel.
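
    A hedged illustration of the macro-to-micro hand-off described above: the Python sketch below takes a hypothetical mesh-average temperature from a macro-scale model as the surface boundary condition for a micro-scale sub-model idealized as steady conduction in a uniformly heated sphere, and reports how far the local peak temperature exceeds the mesh-average value. All names and numbers are invented for illustration and do not come from the project.

      # Macro-to-micro hand-off sketch: the macro model supplies a mesh-average temperature,
      # which becomes the surface boundary condition of a heated-sphere sub-model.
      # Centre temperature of a uniformly heated sphere: T_s + q''' R^2 / (6 k).

      def sphere_peak_temperature(T_surface_C, q_vol_W_m3, radius_m, k_W_mK):
          return T_surface_C + q_vol_W_m3 * radius_m**2 / (6.0 * k_W_mK)

      T_macro = 1150.0   # mesh-average temperature from the macro model, deg C (hypothetical)
      q_vol   = 1.2e7    # volumetric heat generation in the sub-model, W/m^3 (hypothetical)
      radius  = 0.025    # sub-model sphere radius, m (hypothetical)
      k_eff   = 15.0     # effective thermal conductivity, W/(m K) (hypothetical)

      print(f"local peak temperature ~ "
            f"{sphere_peak_temperature(T_macro, q_vol, radius, k_eff):.1f} C "
            f"(mesh average {T_macro:.1f} C)")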

  1. Analysis of Material Sample Heated by Impinging Hot Hydrogen Jet in a Non-Nuclear Tester

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Foote, John; Litchford, Ron

    2006-01-01

    A computational conjugate heat transfer methodology was developed and anchored with data obtained from a hot-hydrogen jet heated, non-nuclear materials tester, as a first step towards developing an efficient and accurate multiphysics, thermo-fluid computational methodology to predict environments for a hypothetical solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on a multidimensional, finite-volume, turbulent, chemically reacting, thermally radiating, unstructured-grid, and pressure-based formulation. The multiphysics invoked in this study include hydrogen dissociation kinetics and thermodynamics, turbulent flow, and convective, thermal radiative, and conjugate heat transfer. Predicted hot-hydrogen jet and material surface temperatures were compared with measurements. Predicted solid temperatures were compared with those obtained with a standard heat transfer code. The interrogation of the physics revealed that reactions of hydrogen dissociation and recombination are highly correlated with local temperature and are necessary for accurate prediction of the hot-hydrogen jet temperature.

  2. Specification of the Advanced Burner Test Reactor Multi-Physics Coupling Demonstration Problem

    SciTech Connect

    Shemon, E. R.; Grudzinski, J. J.; Lee, C. H.; Thomas, J. W.; Yu, Y. Q.

    2015-12-21

    This document specifies the multi-physics nuclear reactor demonstration problem using the SHARP software package developed by NEAMS. The SHARP toolset simulates the key coupled physics phenomena inside a nuclear reactor. The PROTEUS neutronics code models the neutron transport within the system, the Nek5000 computational fluid dynamics code models the fluid flow and heat transfer, and the DIABLO structural mechanics code models structural and mechanical deformation. The three codes are coupled to the MOAB mesh framework which allows feedback from neutronics, fluid mechanics, and mechanical deformation in a compatible format.

  3. Progress on the Multiphysics Capabilities of the Parallel Electromagnetic ACE3P Simulation Suite

    SciTech Connect

    Kononenko, Oleksiy

    2015-03-26

    ACE3P is a 3D parallel simulation suite that is being developed at SLAC National Accelerator Laboratory. Effectively utilizing supercomputer resources, ACE3P has become a key tool for the coupled electromagnetic, thermal and mechanical research and design of particle accelerators. Based on the existing finite-element infrastructure, a massively parallel eigensolver is developed for modal analysis of mechanical structures. It complements a set of the multiphysics tools in ACE3P and, in particular, can be used for the comprehensive study of microphonics in accelerating cavities ensuring the operational reliability of a particle accelerator.

  4. Parallel adaptive Cartesian upwind methods for shock-driven multiphysics simulation

    SciTech Connect

    Deiterding, Ralf

    2011-01-01

    The multiphysics fluid-structure interaction simulation of shock-loaded thin-walled structures requires the dynamic coupling of a shock-capturing flow solver to a solid mechanics solver for large deformations. By combining a Cartesian embedded boundary approach with dynamic mesh adaptation a generic software framework for such flow solvers has been constructed that allows easy exchange of the specific hydrodynamic finite volume upwind scheme and coupling to various explicit finite element solid dynamics solvers. The paper gives an overview of the computational approach and presents first simulations that couple the software to the general purpose solid dynamics code DYNA3D.

  5. Thermal Analysis of SRF Cavity Couplers Using Parallel Multiphysics Tool TEM3P

    SciTech Connect

    Akcelik, V; Lee, L.-Q.; Li, Z.; Ng, C.-K.; Ko, K.; Cheng, G.; Rimmer, R.; Wang, H.; /Jefferson Lab

    2009-05-20

    SLAC has developed a multi-physics simulation code TEM3P for simulating integrated effects of electromagnetic, thermal and structural loads. TEM3P shares the same software infrastructure with SLAC's parallel finite element electromagnetic codes, thus enabling all physics simulations within a single framework. The finite-element approach allows high-fidelity, high-accuracy simulations and the parallel implementation facilitates large-scale computation with fast turnaround times. In this paper, TEM3P is used to analyze thermal loading at coupler end of the JLAB SRF cavity.

  7. Object-oriented design patterns for multiphysics modeling in Fortran 2003.

    SciTech Connect

    Adalsteinsson, Helgi; Rouson, Damian; Xia, Jim

    2008-04-01

    The objectives of this presentation are to: catalog object-oriented software design patterns for multiphysics modeling; demonstrate them in Fortran 2003 and C++; and compare the capabilities of the two languages. The conclusions are: the presented patterns integrate multiple abstractions, allowing much of the numerics and physics to be determined at compile-time or runtime; negligible lines of Fortran emulate the required C++ features; and C++ requires considerable effort (or considerable reliance on libraries to relieve that effort) to emulate the required Fortran 2003 features.

  8. Design and multiphysics analysis of a 176 MHz continuous-wave radio-frequency quadrupole

    NASA Astrophysics Data System (ADS)

    Kutsaev, S. V.; Mustapha, B.; Ostroumov, P. N.; Barcikowski, A.; Schrage, D.; Rodnizki, J.; Berkovits, D.

    2014-07-01

    We have developed a new design for a 176 MHz cw radio-frequency quadrupole (RFQ) for the SARAF upgrade project. At this frequency, the proposed design is a conventional four-vane structure. The main design goals are to provide the highest possible shunt impedance while limiting the required rf power to about 120 kW for reliable cw operation, and the length to about 4 meters. If built as designed, the proposed RFQ will be the first four-vane cw RFQ built as a single cavity (no resonant coupling required) that does not require π-mode stabilizing loops or dipole rods. For this, we rely on very detailed 3D simulations of all aspects of the structure and the level of machining precision achieved on the recently developed ATLAS upgrade RFQ. A full 3D model of the structure including vane modulation was developed. The design was optimized using electromagnetic and multiphysics simulations. Following the choice of the vane type and geometry, the vane undercuts were optimized to produce a flat field along the structure. The final design has good mode separation and should not need dipole rods if built as designed, but their effect was studied in the case of manufacturing errors. The tuners were also designed and optimized to tune the main mode without affecting the field flatness. Following the electromagnetic (EM) design optimization, a multiphysics engineering analysis of the structure was performed. The multiphysics analysis is a coupled electromagnetic, thermal and mechanical analysis. The cooling channels, including their paths and sizes, were optimized based on the limiting temperature and deformation requirements. The frequency sensitivity to the RFQ body and vane cooling water temperatures was carefully studied in order to use it for frequency fine-tuning. Finally, an inductive rf power coupler design based on the ATLAS RFQ coupler was developed and simulated. The EM design optimization was performed using CST Microwave Studio and the results were verified using…

  9. LDRD Final Report-New Directions for Algebraic Multigrid: Solutions for Large Scale Multiphysics Problems

    SciTech Connect

    Henson, V E

    2003-02-06

    The purpose of this research project was to investigate, design, and implement new algebraic multigrid (AMG) algorithms to enable the effective use of AMG in large-scale multiphysics simulation codes. These problems are extremely large; storage requirements and excessive run-time make direct solvers infeasible. The problems are highly ill-conditioned, so that existing iterative solvers either fail or converge very slowly. While existing AMG algorithms have been shown to be robust and stable for a large class of problems, there are certain problems of great interest to the Laboratory for which no effective algorithm existed prior to this research.

  10. Accurate measurement of time

    NASA Astrophysics Data System (ADS)

    Itano, Wayne M.; Ramsey, Norman F.

    1993-07-01

    The paper discusses current methods for accurate measurements of time by conventional atomic clocks, with particular attention given to the principles of operation of atomic-beam frequency standards, atomic hydrogen masers, and atomic fountains, and to the potential use of strings of trapped mercury ions as a time device more stable than conventional atomic clocks. The areas of application of the ultraprecise and ultrastable time-measuring devices that tax the capacity of modern atomic clocks include radio astronomy and tests of relativity. The paper also discusses practical applications of ultraprecise clocks, such as navigation of space vehicles and pinpointing the exact position of ships and other objects on Earth using GPS.

  11. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  12. Development of an Efficient CFD Model for Nuclear Thermal Thrust Chamber Assembly Design

    NASA Technical Reports Server (NTRS)

    Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See

    2007-01-01

    The objective of this effort is to develop an efficient and accurate computational methodology to predict both detailed thermo-fluid environments and global characteristics of the internal ballistics for a hypothetical solid-core nuclear thermal thrust chamber assembly (NTTCA). Several numerical and multi-physics thermo-fluid models, such as real fluid, chemically reacting, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver as the underlying computational methodology. The numerical simulations of the detailed thermo-fluid environment of a single flow element provide a mechanism to estimate the thermal stress and possible occurrence of mid-section corrosion of the solid core. In addition, the numerical results of the detailed simulation were employed to fine-tune the porosity model to mimic the pressure drop and thermal load of the coolant flow through a single flow element. The use of the tuned porosity model enables an efficient simulation of the entire NTTCA system and an evaluation of its performance during the design cycle.
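
    The tuning step described above can be pictured with the following hedged Python sketch: a Darcy-Forchheimer-style distributed-resistance (porosity) model is calibrated so that its pressure drop matches a target value taken from a detailed single-flow-element simulation. The chosen relation, the use of scipy's brentq root finder, and every number are illustrative assumptions, not the solver or data of the study.

      import numpy as np
      from scipy.optimize import brentq

      # Calibrate the inertial-resistance coefficient C2 of a Darcy-Forchheimer porosity
      # model so that the lumped pressure drop matches a detailed-CFD target (hypothetical).
      rho, mu   = 0.9, 1.6e-5     # hot hydrogen density [kg/m^3], viscosity [Pa s] (hypothetical)
      u, length = 60.0, 1.3       # superficial velocity [m/s], flow-element length [m] (hypothetical)
      K         = 1.0e-7          # permeability [m^2] (hypothetical)
      dp_target = 2.0e5           # pressure drop from the detailed simulation [Pa] (hypothetical)

      def dp_porous(c2):
          """Darcy-Forchheimer pressure drop for inertial-resistance coefficient c2 [1/m]."""
          return (mu / K * u + c2 * 0.5 * rho * u**2) * length

      c2_tuned = brentq(lambda c2: dp_porous(c2) - dp_target, 0.0, 1.0e4)
      print(f"tuned C2 = {c2_tuned:.2f} 1/m, model dp = {dp_porous(c2_tuned):.3e} Pa "
            f"(target {dp_target:.3e} Pa)")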

  13. Conductance Thin Film Model of Flexible Organic Thin Film Device using COMSOL Multiphysics

    NASA Astrophysics Data System (ADS)

    Carradero-Santiago, Carolyn; Vedrine-Pauléus, Josee

    We developed a virtual model to analyze the electrical conductivity of multilayered thin films placed above a conducting graphene layer and a flexible polyethylene terephthalate (PET) substrate. The organic layers are poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) as a hole-conducting layer, poly(3-hexylthiophene-2,5-diyl) (P3HT) as a p-type layer, and phenyl-C61-butyric acid methyl ester (PCBM) as an n-type layer, with aluminum as a top conductor. COMSOL Multiphysics was the software we used to develop the virtual model to analyze potential variations and conductivity through the thin-film layers. COMSOL Multiphysics software allows simulation and modeling of physical phenomena represented by differential equations, such as heat transfer, fluid flow, electromagnetism, and structural mechanics. In this work, using the AC/DC Electric Currents module, we defined the geometry of the model and the properties for each of the six layers: PET/graphene/PEDOT:PSS/P3HT/PCBM/aluminum. We analyzed the model with varying thicknesses of the graphene and active layers (P3HT/PCBM). This simulation allowed us to analyze the electrical conductivity and visualize the model with varying voltage potential, or bias, across the plates, which is useful for applications in solar cell devices.

  14. Advanced computations of multi-physics, multi-scale effects in beam dynamics

    SciTech Connect

    Amundson, J.F.; Macridin, A.; Spentzouris, P.; Stern, E.G.; /Fermilab

    2009-01-01

    Current state-of-the-art beam dynamics simulations include multiple physical effects and multiple physical length and/or time scales. We present recent developments in Synergia2, an accelerator modeling framework designed for multi-physics, multi-scale simulations. We summarize several recent results in multi-physics beam dynamics, including simulations of three Fermilab accelerators: the Tevatron, the Main Injector and the Debuncher. Early accelerator simulations focused on single-particle dynamics. To a first approximation, the forces on the particles in an accelerator beam are dominated by the external fields due to magnets, RF cavities, etc., so the single-particle dynamics are the leading physical effects. Detailed simulations of accelerators must include collective effects such as the space-charge repulsion of the beam particles, the effects of wake fields in the beam pipe walls, and beam-beam interactions in colliders. These simulations require the sort of massively parallel computers that have only become available in recent times. We give an overview of the accelerator framework Synergia2, which was designed to take advantage of the capabilities of modern computational resources and enable simulations of multiple physical effects. We also summarize some recent results utilizing Synergia2 and BeamBeam3d, a tool specialized for beam-beam simulations.

  15. Multiphysics modelling, quantum chemistry and risk analysis for corrosion inhibitor design and lifetime prediction.

    PubMed

    Taylor, C D; Chandra, A; Vera, J; Sridhar, N

    2015-01-01

    Organic corrosion inhibitors can provide an effective means to extend the life of equipment in aggressive environments, decrease the environmental, economic, health and safety risks associated with corrosion failures and enable the use of low cost steels in place of corrosion resistant alloys. To guide the construction of advanced models for the design and optimization of the chemical composition of organic inhibitors, and to develop predictive tools for inhibitor performance as a function of alloy and environment, a multiphysics model has been constructed following Staehle's principles of "domains and microprocesses". The multiphysics framework provides a way for science-based modelling of the various phenomena that impact inhibitor efficiency, including chemical thermodynamics and speciation, oil/water partitioning, effect of the inhibitor on multiphase flow, surface adsorption and self-assembled monolayer formation, and the effect of the inhibitor on cathodic and anodic reaction pathways. The fundamental tools required to solve the resulting modelling from a first-principles perspective are also described. Quantification of uncertainty is significant to the development of lifetime prediction models, due to their application for risk management. We therefore also discuss how uncertainty analysis can be coupled with the first-principles approach laid out in this paper. PMID:25912625

  16. The Integrated Plasma Simulator: A Flexible Python Framework for Coupled Multiphysics Simulation

    SciTech Connect

    Foley, Samantha S; Elwasif, Wael R; Bernholdt, David E

    2011-11-01

    High-fidelity coupled multiphysics simulations are an increasingly important aspect of computational science. In many domains, however, there has been very limited experience with simulations of this sort; therefore, research in coupled multiphysics often requires computational frameworks with significant flexibility to respond to the changing directions of the physics and mathematics. This paper presents the Integrated Plasma Simulator (IPS), a framework designed for loosely coupled simulations of fusion plasmas. The IPS provides users with a simple component architecture into which a wide range of existing plasma physics codes can be inserted as components. Simulations can take advantage of multiple levels of parallelism supported in the IPS, and can be controlled by a high-level "driver" component, or by other coordination mechanisms, such as an asynchronous event service. We describe the requirements and design of the framework, and how they were implemented in the Python language. We also illustrate the flexibility of the framework by providing examples of different types of simulations that utilize various features of the IPS.
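
    The component-plus-driver architecture described above can be reduced to a few lines of Python. The sketch below is a heavily simplified, illustrative stand-in (the component names, the shared state dictionary, and the placeholder physics are all invented); it is not the actual IPS API, but it shows the loose-coupling pattern of a driver stepping pluggable components that exchange data through a shared state.

      # Minimal sketch of a loosely coupled component/driver pattern; not the IPS API.
      class Component:
          def step(self, t, state):        # each physics code is wrapped behind this interface
              raise NotImplementedError

      class EquilibriumSolver(Component):
          def step(self, t, state):
              state["pressure"] = 1.0 + 0.1 * t                # placeholder physics

      class TransportSolver(Component):
          def step(self, t, state):
              state["temperature"] = 2.0 * state["pressure"]   # uses the other component's output

      class Driver:
          def __init__(self, components):
              self.components = components
          def run(self, t_end, dt):
              state, t = {}, 0.0
              while t < t_end:
                  for comp in self.components:                 # sequential "loose" coupling per step
                      comp.step(t, state)
                  t += dt
              return state

      print(Driver([EquilibriumSolver(), TransportSolver()]).run(t_end=1.0, dt=0.25))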

  17. Case studies on optimization problems in MATLAB and COMSOL multiphysics by means of the livelink

    NASA Astrophysics Data System (ADS)

    Ozana, Stepan; Pies, Martin; Docekal, Tomas

    2016-06-01

    LiveLink for COMSOL is a tool that integrates COMSOL Multiphysics with MATLAB to extend one's modeling with script programming in the MATLAB environment. It allows the user to utilize the full power of MATLAB and its toolboxes in preprocessing, model manipulation, and post-processing. First, the head script launches COMSOL with MATLAB, defines the initial values of all parameters, refers to the objective function J, and creates and runs the defined optimization task. Once the task is launched, the COMSOL model is called in the iteration loop (from the MATLAB environment through the API interface), changing the defined optimization parameters so that the objective function is minimized, using the fmincon function to find a minimum of a constrained nonlinear multivariable function. Once the minimum is found, it returns an exit flag, terminates the optimization, and returns the optimized values of the parameters. The cooperation with MATLAB via LiveLink combines a powerful computational environment with complex multiphysics simulations. The paper introduces the use of LiveLink for COMSOL for chosen case studies in the fields of technical cybernetics and bioengineering.
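
    The same simulation-in-the-loop pattern, transcribed to Python for consistency with the other sketches in this listing: scipy.optimize.minimize with the SLSQP method stands in for MATLAB's fmincon, and comsol_objective() is a placeholder for a call that would re-run the COMSOL model with the current parameter values and return the objective J. This is a hedged illustration of the loop, not the LiveLink API.

      import numpy as np
      from scipy.optimize import minimize

      def comsol_objective(p):
          """Placeholder for one COMSOL model evaluation at parameters p (hypothetical)."""
          x, y = p
          return (x - 0.3)**2 + 2.0 * (y - 1.2)**2   # stand-in for the simulated objective J

      result = minimize(
          comsol_objective,
          x0=np.array([0.0, 0.0]),                   # initial parameter values set by the head script
          method="SLSQP",
          bounds=[(-1.0, 1.0), (0.0, 2.0)],          # constrained multivariable minimization
      )

      print("exit status:", result.status, "| optimized parameters:", result.x)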

  18. Advanced Multiphysics Thermal-Hydraulics Models for the High Flux Isotope Reactor

    SciTech Connect

    Jain, Prashant K; Freels, James D

    2015-01-01

    Engineering design studies to determine the feasibility of converting the High Flux Isotope Reactor (HFIR) from using highly enriched uranium (HEU) to low-enriched uranium (LEU) fuel are ongoing at Oak Ridge National Laboratory (ORNL). This work is part of an effort sponsored by the US Department of Energy (DOE) Reactor Conversion Program. HFIR is a very high flux pressurized light-water-cooled and moderated flux-trap type research reactor. HFIR's current missions are to support neutron scattering experiments, isotope production, and materials irradiation, including neutron activation analysis. Advanced three-dimensional multiphysics models of HFIR fuel were developed in COMSOL software for safety basis (worst case) operating conditions. Several types of physics including multilayer heat conduction, conjugate heat transfer, turbulent flows (RANS model) and structural mechanics were combined and solved for HFIR's inner and outer fuel elements. Alternate design features of the new LEU fuel were evaluated using these multiphysics models. This work led to a new, preliminary reference LEU design that combines a permanent absorber in the lower unfueled region of all of the fuel plates, a burnable absorber in the inner element side plates, and a relocated and reshaped (but still radially contoured) fuel zone. Preliminary results of estimated thermal safety margins are presented. Fuel design studies and model enhancement continue.

  19. Modelling in conventional electroporation for model cell with organelles using COMSOL Multiphysics

    NASA Astrophysics Data System (ADS)

    Sulaeman, M. Y.; Widita, R.

    2016-03-01

    Conventional electroporation is the formation of pores in the cell membrane due to an external electric field applied to the cell. The purposes of creating pores in the cell using conventional electroporation are to increase the effectiveness of chemotherapy (electrochemotherapy) and to kill cancer tissue using irreversible electroporation. Modeling of the electroporation phenomenon on a model cell was carried out using the software COMSOL Multiphysics 4.3b with an applied external electric field of 1.1 kV/cm, to find the transmembrane voltage and pore density. It can be concluded from the results for the potential distribution and transmembrane voltage that pore formation occurs only in the cell membrane and does not penetrate into the interior of the model cell, so there is no pore formation in its organelles.

  20. Multi-Physics Demonstration Problem with the SHARP Reactor Simulation Toolkit

    SciTech Connect

    Merzari, E.; Shemon, E. R.; Yu, Y. Q.; Thomas, J. W.; Obabko, A.; Jain, Rajeev; Mahadevan, Vijay; Tautges, Timothy; Solberg, Jerome; Ferencz, Robert Mark; Whitesides, R.

    2015-12-21

    This report describes the use of SHARP to perform a first-of-a-kind analysis of the core radial expansion phenomenon in an SFR. This effort required significant advances in the framework used to drive the coupled simulations, manipulate the mesh in response to the deformation of the geometry, and generate the necessary modified mesh files. Furthermore, the model geometry is fairly complex, and consistent mesh generation for the three physics modules required significant effort. Fully integrated simulations of a 7-assembly mini-core test problem have been performed, and the results are presented here. Physics models of a full-core model of the Advanced Burner Test Reactor have also been developed for each of the three physics modules. Standalone results of each of the three physics modules for the ABTR are presented here, which provides a demonstration of the feasibility of the fully integrated simulation.

  1. Multiphysics Model of Palladium Hydride Isotope Exchange Accounting for Higher Dimensionality

    SciTech Connect

    Gharagozloo, Patricia E.; Eliassi, Mehdi; Bon, Bradley Luis

    2015-03-01

    This report summarizes computational model development and simulation results for a series of isotope exchange dynamics experiments including long and thin isothermal beds similar to the Foltz and Melius beds and a larger non-isothermal experiment on the NENG7 test bed. The multiphysics 2D axi-symmetric model simulates the temperature and pressure dependent exchange reaction kinetics, pressure and isotope dependent stoichiometry, heat generation from the reaction, reacting gas flow through porous media, and non-uniformities in the bed permeability. The new model is now able to replicate the curved reaction front and asymmetry of the exit gas mass fractions over time. The improved understanding of the exchange process and its dependence on the non-uniform bed properties and temperatures in these larger systems is critical to the future design of such systems.

  2. Mechanical behavior simulation of MEMS-based cantilever beam using COMSOL multiphysics

    SciTech Connect

    Acheli, A. Serhane, R.

    2015-03-30

    This paper presents studies of the mechanical behavior of a MEMS cantilever beam made of poly-silicon material, using the coupling of three application modes (plane strain, electrostatics and the moving mesh) of the COMSOL Multiphysics software. Cantilevers play a key role in Micro-Electro-Mechanical Systems (MEMS) devices (switches, resonators, etc.) working under potential shock. This is why they require actuation under predetermined conditions, such as electrostatic force or inertial force. In this paper, we present the mechanical behavior of a cantilever actuated by an electrostatic force. To simplify the calculations, the weight of the cantilever was not taken into account. Different parameters such as beam displacement, electrostatic force and stress over the beam have been calculated by the finite element method after defining the geometry, the material of the cantilever model (fixed at one end but free to move otherwise) and its operational space.

  3. Multiphysics Modeling of an Annular Linear Induction Pump With Applications to Space Nuclear Power Systems

    NASA Technical Reports Server (NTRS)

    Kilbane, J.; Polzin, K. A.

    2014-01-01

    An annular linear induction pump (ALIP) that could be used for circulating liquid-metal coolant in a fission surface power reactor system is modeled in the present work using the computational COMSOL Multiphysics package. The pump is modeled using a two-dimensional, axisymmetric geometry and solved under conditions similar to those used during experimental pump testing. Real, nonlinear, temperature-dependent material properties can be incorporated into the model for both the electrically-conducting working fluid in the pump (NaK-78) and structural components of the pump. The intricate three-phase coil configuration of the pump is implemented in the model to produce an axially-traveling magnetic wave that is qualitatively similar to the measured magnetic wave. The model qualitatively captures the expected feature of a peak in efficiency as a function of flow rate.

  4. Multiscale Multiphysics-Based Modeling and Analysis on the Tool Wear in Micro Drilling

    NASA Astrophysics Data System (ADS)

    Niu, Zhichao; Cheng, Kai

    2016-02-01

    In micro-cutting processes, process variables including cutting force, cutting temperature and drill-workpiece interfacing conditions (lubrication and interaction, etc.) significantly affect the tool wear in a dynamic, interactive, in-process manner. The resultant tool life and cutting performance directly affect the component surface roughness, material removal rate and form accuracy control, etc. In this paper, a multiscale multiphysics oriented approach to modeling and analysis is presented, focusing particularly on tooling performance in micro drilling processes. Process optimization is also taken into account, based on establishing the intrinsic relationship between process parameters and cutting performance. The modeling and analysis are evaluated and validated through well-designed machining trials, and further supported by metrology measurements and simulations. The paper is concluded with a further discussion on the potential and application of the approach for broad micro manufacturing purposes.

  5. The multiphysics analysis of the metallic bipolar plate by the electrochemical micro-machining fabrication process

    NASA Astrophysics Data System (ADS)

    Lee, Yu-Ming; Lee, Shuo-Jen; Lee, Chi-Yuan; Chang, Dar-Yuan

    In this study, the flow channels of a PEM fuel cell are fabricated by the EMM process. The parametric effects of the process are studied by both numerical simulation and experimental tests. For the numerical simulation, a multiphysics model consisting of electric field, convection, and diffusion phenomena is applied using COMSOL software. The COMSOL software is used to predict the parametric effects on channel fabrication accuracy, such as pulse rate, pulse duty cycle, inter-electrode gap and electrolyte inflow velocity. The proper experimental parameters and the relationship between the parameters and the distribution of metal removal are established from the simulated results. The experimental fabrication tests showed that a shorter pulse rate and a higher pulse current improved the fabrication accuracy, which is consistent with the numerical simulation results. The proposed simulation model could be employed as a predictive tool to provide optimal parameters for better machining accuracy and process stability of the EMM process.

  6. Multiphysics Simulations of Hot-Spot Initiation in Shocked Insensitive High-Explosive

    NASA Astrophysics Data System (ADS)

    Najjar, Fady; Howard, W. M.; Fried, L. E.

    2010-11-01

    Solid plastic-bonded high-explosive materials consist of crystals with micron-sized pores embedded. Under mechanical or thermal insults, these voids increase the ease of shock initiation by generating high-temperature regions during their collapse that might lead to ignition. Understanding the mechanisms of hot-spot initiation has significant research interest due to safety, reliability and development of new insensitive munitions. Multi-dimensional high-resolution meso-scale simulations are performed using the multiphysics software, ALE3D, to understand the hot-spot initiation. The Cheetah code is coupled to ALE3D, creating multi-dimensional sparse tables for the HE properties. The reaction rates were obtained from MD Quantum computations. Our current predictions showcase several interesting features regarding hot spot dynamics including the formation of a "secondary" jet. We will discuss the results obtained with hydro-thermo-chemical processes leading to ignition growth for various pore sizes and different shock pressures.

  7. Multi-physics nuclear reactor simulator for advanced nuclear engineering education

    SciTech Connect

    Yamamoto, A.

    2012-07-01

    A multi-physics nuclear reactor simulator, which is intended for advanced nuclear engineering education, is being introduced at Nagoya University. The simulator consists of a 'macroscopic' physics simulator and a 'microscopic' physics simulator. The former performs real-time simulation of a whole nuclear power plant. The latter is responsible for more detailed numerical simulations based on sophisticated and precise numerical models, while taking into account the plant conditions obtained in the macroscopic physics simulator. Steady-state and kinetics core analyses, fuel mechanical analysis, fluid dynamics analysis, and sub-channel analysis can be carried out in the microscopic physics simulator. Simulation calculations are carried out through a dedicated graphical user interface, and the simulation results, i.e., spatial and temporal behaviors of major plant parameters, are graphically shown. The simulator will provide a bridge between the 'theories' studied with textbooks and the 'physical behaviors' of actual nuclear power plants. (authors)

  8. Complimentary single technique and multi-physics modeling tools for NDE challenges

    NASA Astrophysics Data System (ADS)

    Le Lostec, Nechtan; Budyn, Nicolas; Sartre, Bernard; Glass, S. W.

    2014-02-01

    The challenges of modeling and simulation for Non Destructive Examination (NDE) research and development at AREVA NDE Solutions Technical Center (NETEC) are presented. In particular, the choice of a relevant software suite covering different applications and techniques and the process/scripting tools required for simulation and modeling are discussed. The software portfolio currently in use is then presented along with the limitations of the different software: CIVA for ultrasound (UT) methods, PZFlex for UT probes, Flux for eddy current (ET) probes and methods, plus Abaqus for multiphysics modeling. The finite element code, Abaqus is also considered as the future direction for many of our NDE modeling and simulation tasks. Some application examples are given on modeling of a piezoelectric acoustic phased array transducer and preliminary thermography configurations.

  9. Module-based Hybrid Uncertainty Quantification for Multi-physics Applications: Theory and Software

    SciTech Connect

    Tong, Charles; Chen, Xiao; Iaccarino, Gianluca; Mittal, Akshay

    2013-10-08

    In this project we proposed to develop an innovative uncertainty quantification methodology that captures the best of the two competing approaches in UQ, namely, intrusive and non-intrusive approaches. The idea is to develop the mathematics and the associated computational framework and algorithms to facilitate the use of intrusive or non-intrusive UQ methods in different modules of a multi-physics, multi-module simulation model in a way that physics code developers for the different modules are shielded (as much as possible) from the chores of accounting for the uncertainties introduced by the other modules. As a result of our research and development, we have produced a number of publications, conference presentations, and a software product.

  10. An approach for coupled-code multiphysics core simulations from a common input

    DOE PAGES

    Schmidt, Rodney; Belcourt, Kenneth; Hooper, Russell; Pawlowski, Roger P.; Clarno, Kevin T.; Simunovic, Srdjan; Slattery, Stuart R.; Turner, John A.; Palmtag, Scott

    2014-12-10

    This study describes an approach for coupled-code multiphysics reactor core simulations that is being developed by the Virtual Environment for Reactor Applications (VERA) project in the Consortium for Advanced Simulation of Light-Water Reactors (CASL). In this approach a user creates a single problem description, called the “VERAIn” common input file, to define and setup the desired coupled-code reactor core simulation. A preprocessing step accepts the VERAIn file and generates a set of fully consistent input files for the different physics codes being coupled. The problem is then solved using a single-executable coupled-code simulation tool applicable to the problem, which is built using VERA infrastructure software tools and the set of physics codes required for the problem of interest. The approach is demonstrated by performing an eigenvalue and power distribution calculation of a typical three-dimensional 17 × 17 assembly with thermal–hydraulic and fuel temperature feedback. All neutronics aspects of the problem (cross-section calculation, neutron transport, power release) are solved using the Insilico code suite and are fully coupled to a thermal–hydraulic analysis calculated by the Cobra-TF (CTF) code. The single-executable coupled-code (Insilico-CTF) simulation tool is created using several VERA tools, including LIME (Lightweight Integrating Multiphysics Environment for coupling codes), DTK (Data Transfer Kit), Trilinos, and TriBITS. Parallel calculations are performed on the Titan supercomputer at Oak Ridge National Laboratory using 1156 cores, and a synopsis of the solution results and code performance is presented. Finally, ongoing development of this approach is also briefly described.

  11. Multi-Physics Markov Chain Monte Carlo Methods for Subsurface Flows

    NASA Astrophysics Data System (ADS)

    Rigelo, J.; Ginting, V.; Rahunanthan, A.; Pereira, F.

    2014-12-01

    For CO2 sequestration in deep saline aquifers, contaminant transport in the subsurface, and oil or gas recovery, we often need to forecast flow patterns. Subsurface characterization is a critical and challenging step in flow forecasting. To characterize subsurface properties, we establish a statistical description of those properties conditioned on existing dynamic and static data. A Markov Chain Monte Carlo (MCMC) algorithm is used in a Bayesian statistical description to reconstruct the spatial distribution of rock permeability and porosity. The MCMC algorithm requires repeatedly solving a set of nonlinear partial differential equations describing displacement of fluids in porous media for different values of permeability and porosity. The time needed for the generation of a reliable MCMC chain using the algorithm can be too long to be practical for flow forecasting. In this work we develop fast and effective computational methods for generating MCMC chains in the Bayesian framework for subsurface characterization. Our strategy consists of constructing a family of computationally inexpensive preconditioners based on simpler physics as well as on surrogate models, such that the number of fine-grid simulations is drastically reduced in the generated MCMC chains. In particular, we introduce a huff-puff technique as a screening step in a three-stage multi-physics MCMC algorithm to reduce the number of expensive final-stage simulations. The huff-puff technique in the algorithm enables a better characterization of the subsurface near wells. We assess the quality of the proposed multi-physics MCMC methods by considering Monte Carlo simulations for forecasting oil production in an oil reservoir.
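
    The screening idea above is the multi-stage (delayed-acceptance) MCMC pattern, which the following Python sketch reduces to two stages: a cheap surrogate posterior filters proposals so that the expensive "fine-grid" posterior is evaluated only for promising candidates, while the second-stage correction keeps the chain targeting the fine posterior. Both posteriors below are synthetic placeholders; the sketch is a simplified stand-in for the paper's three-stage multi-physics algorithm, not its implementation.

      import numpy as np

      rng = np.random.default_rng(0)

      def log_post_fine(theta):      # expensive fine-grid posterior (placeholder)
          return -0.5 * np.sum((theta - 1.0) ** 2)

      def log_post_coarse(theta):    # cheap surrogate posterior (placeholder, slightly biased)
          return -0.5 * np.sum((theta - 0.9) ** 2) / 1.2

      theta = np.zeros(2)
      chain, fine_evals = [], 0
      for _ in range(5000):
          prop = theta + 0.5 * rng.standard_normal(2)
          # Stage 1: screen with the surrogate
          if np.log(rng.uniform()) < log_post_coarse(prop) - log_post_coarse(theta):
              # Stage 2: correct with the fine model so the chain targets the fine posterior
              fine_evals += 1
              log_alpha = (log_post_fine(prop) - log_post_fine(theta)
                           + log_post_coarse(theta) - log_post_coarse(prop))
              if np.log(rng.uniform()) < log_alpha:
                  theta = prop
          chain.append(theta)

      chain = np.array(chain)
      print("posterior mean:", chain.mean(axis=0), "| fine-model evaluations:", fine_evals)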

  13. Moose: An Open-Source Framework to Enable Rapid Development of Collaborative, Multi-Scale, Multi-Physics Simulation Tools

    NASA Astrophysics Data System (ADS)

    Slaughter, A. E.; Permann, C.; Peterson, J. W.; Gaston, D.; Andrs, D.; Miller, J.

    2014-12-01

    The Idaho National Laboratory (INL)-developed Multiphysics Object Oriented Simulation Environment (MOOSE; www.mooseframework.org), is an open-source, parallel computational framework for enabling the solution of complex, fully implicit multiphysics systems. MOOSE provides a set of computational tools that scientists and engineers can use to create sophisticated multiphysics simulations. Applications built using MOOSE have computed solutions for chemical reaction and transport equations, computational fluid dynamics, solid mechanics, heat conduction, mesoscale materials modeling, geomechanics, and others. To facilitate the coupling of diverse and highly-coupled physical systems, MOOSE employs the Jacobian-free Newton-Krylov (JFNK) method when solving the coupled nonlinear systems of equations arising in multiphysics applications. The MOOSE framework is written in C++, and leverages other high-quality, open-source scientific software packages such as LibMesh, Hypre, and PETSc. MOOSE uses a "hybrid parallel" model which combines both shared memory (thread-based) and distributed memory (MPI-based) parallelism to ensure efficient resource utilization on a wide range of computational hardware. MOOSE-based applications are inherently modular, which allows for simulation expansion (via coupling of additional physics modules) and the creation of multi-scale simulations. Any application developed with MOOSE supports running (in parallel) any other MOOSE-based application. Each application can be developed independently, yet easily communicate with other applications (e.g., conductivity in a slope-scale model could be a constant input, or a complete phase-field micro-structure simulation) without additional code being written. This method of development has proven effective at INL and expedites the development of sophisticated, sustainable, and collaborative simulation tools.
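
    The JFNK approach mentioned above can be tried in a few lines with SciPy: the coupled residual is supplied as a single function, and scipy.optimize.newton_krylov forms the needed Jacobian-vector products matrix-free by finite differences. The toy coupled reaction-diffusion pair below is an invented stand-in for real multiphysics and is in no way MOOSE code.

      import numpy as np
      from scipy.optimize import newton_krylov

      n = 32
      x = np.linspace(0.0, 1.0, n)
      h = x[1] - x[0]

      def residual(U):
          """Coupled residual for two fields u and v on one grid (toy problem)."""
          u, v = U[:n], U[n:]
          ru, rv = np.zeros(n), np.zeros(n)
          # interior: diffusion plus a nonlinear coupling term
          ru[1:-1] = (u[2:] - 2.0*u[1:-1] + u[:-2]) / h**2 - u[1:-1]*v[1:-1] + 1.0
          rv[1:-1] = (v[2:] - 2.0*v[1:-1] + v[:-2]) / h**2 + u[1:-1]*v[1:-1] - v[1:-1]
          # Dirichlet boundary conditions
          ru[0], ru[-1] = u[0] - 0.0, u[-1] - 1.0
          rv[0], rv[-1] = v[0] - 1.0, v[-1] - 0.0
          return np.concatenate([ru, rv])

      U0 = np.concatenate([x, 1.0 - x])            # linear initial guess satisfying the BCs
      U = newton_krylov(residual, U0, f_tol=1e-8)  # Jacobian-free Newton-Krylov solve
      print("max |residual| =", float(np.max(np.abs(residual(U)))))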

  14. Phase-field model simulation of ferroelectric/antiferroelectric materials microstructure evolution under multiphysics loading

    NASA Astrophysics Data System (ADS)

    Zhang, Jingyi

    Ferroelectric (FE) and closely related antiferroelectric (AFE) materials have unique electromechanical properties that promote various applications in the areas of capacitors, sensors, and generators (FE) and high-density energy storage (AFE). These smart materials with extensive applications have drawn wide interest in industry and science because of their reliability and tunable properties. However, reliability issues change their paradigms and require guidance from detailed mechanistic theory as the materials' applications are pushed toward better performance. A host of modeling work has been dedicated to studying the macro-structural behavior and microstructural evolution in FE and AFE materials under various conditions. This thesis is focused on direct observation of domain evolution under multiphysics loading for both FE and AFE materials. Landau-Devonshire time-dependent phase-field models were built for both materials and were simulated in the finite element software COMSOL. In the FE model, a dagger-shaped 90-degree switched domain was observed at a preexisting crack tip under pure mechanical loading. A polycrystal structure was tested under the same conditions, and the blocking effect of grain orientation differences and/or grain boundaries on the growth of the dagger-shaped switched domain was directly observed. The AFE ceramic model was developed using two-sublattice theory; this model was used to investigate the mechanism of the energy-efficiency increase with self-confined loading observed in experimental tests. Consistent results were found in simulation, and careful investigation of the calculation results confirmed that the origin of the energy density increase has three aspects: the self-confinement-induced inner compression field as the cause of the increase of the critical field, fringe leak as the source of elevated saturation polarization, and uneven defect distribution as the reason for critical-field shifting and phase-transition speed. Another important affecting aspect in polycrystalline materials is the…
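
    To show the structure of such a time-dependent phase-field (Landau-Devonshire) evolution without any of the thesis's actual coefficients, the Python sketch below relaxes a single, spatially uniform polarization order parameter P under an applied field E according to dP/dt = -L dF/dP with a double-well free energy F(P) = (a/2)P^2 + (b/4)P^4 - EP; a sufficiently strong field switches the domain. All coefficients are illustrative placeholders.

      # Time-dependent Ginzburg-Landau relaxation of a uniform polarization (illustrative).
      a, b, L_kin = -1.0, 1.0, 1.0     # a < 0 gives a double-well (ferroelectric-like) free energy
      dt, steps = 1e-3, 20000

      def dF_dP(P, E):
          return a * P + b * P**3 - E

      def relax(P0, E):
          P = P0
          for _ in range(steps):
              P -= dt * L_kin * dF_dP(P, E)   # explicit relaxation toward a free-energy minimum
          return P

      # Starting from a negative polarization, a strong enough field switches the domain.
      for E in (0.0, 0.2, 0.6):
          print(f"E = {E:4.1f} -> P_eq = {relax(-1.0, E):+.3f}")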

  15. Computation of Thermodynamic Equilibria Pertinent to Nuclear Materials in Multi-Physics Codes

    NASA Astrophysics Data System (ADS)

    Piro, Markus Hans Alexander

    Nuclear energy plays a vital role in supporting electrical needs and fulfilling commitments to reduce greenhouse gas emissions. Research is a continuing necessity to improve the predictive capabilities of fuel behaviour in order to reduce costs and to meet increasingly stringent safety requirements by the regulator. Moreover, a renewed interest in nuclear energy has given rise to a "nuclear renaissance" and the necessity to design the next generation of reactors. In support of this goal, significant research efforts have been dedicated to the advancement of numerical modelling and computational tools in simulating various physical and chemical phenomena associated with nuclear fuel behaviour. This undertaking in effect is collecting the experience and observations of a past generation of nuclear engineers and scientists in a meaningful way for future design purposes. There is an increasing desire to integrate thermodynamic computations directly into multi-physics nuclear fuel performance and safety codes. A new equilibrium thermodynamic solver is being developed with this matter as a primary objective. This solver is intended to provide thermodynamic material properties and boundary conditions for continuum transport calculations. There are several concerns with the use of existing commercial thermodynamic codes: computational performance; limited capabilities in handling large multi-component systems of interest to the nuclear industry; convenient incorporation into other codes with quality assurance considerations; and, licensing entanglements associated with code distribution. The development of this software in this research is aimed at addressing all of these concerns. The approach taken in this work exploits fundamental principles of equilibrium thermodynamics to simplify the numerical optimization equations. In brief, the chemical potentials of all species and phases in the system are constrained by estimates of the chemical potentials of the system
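
    The kind of equilibrium computation described above, namely minimizing the Gibbs energy subject to element balances, can be sketched in a few lines for an ideal-gas mixture with scipy's SLSQP optimizer. The dimensionless standard-state chemical potentials below are invented placeholders, and the sketch says nothing about the solver or optimization strategy actually developed in the thesis.

      import numpy as np
      from scipy.optimize import minimize

      species = ["H2", "O2", "H2O"]
      g0      = np.array([-20.0, -25.0, -45.0])    # mu0_i / RT (hypothetical values)
      A       = np.array([[2, 0, 2],               # H atoms per molecule of each species
                          [0, 2, 1]])              # O atoms per molecule
      b       = np.array([2.0, 1.0])               # element totals: 1 mol H2 + 0.5 mol O2 feed

      def gibbs(n):
          """Dimensionless Gibbs energy of an ideal-gas mixture at reference pressure."""
          n = np.maximum(n, 1e-12)                 # keep the logarithm defined
          return float(np.sum(n * (g0 + np.log(n / n.sum()))))

      res = minimize(
          gibbs,
          x0=np.array([0.5, 0.25, 0.5]),           # feasible starting composition
          method="SLSQP",
          bounds=[(1e-12, None)] * 3,
          constraints=[{"type": "eq", "fun": lambda n: A @ n - b}],  # element conservation
      )

      for s, n in zip(species, res.x):
          print(f"{s:>4s}: {n:.4f} mol")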

  16. Monte Carlo-based multiphysics coupling analysis of x-ray pulsar telescope

    NASA Astrophysics Data System (ADS)

    Li, Liansheng; Deng, Loulou; Mei, Zhiwu; Zuo, Fuchang; Zhou, Hao

    2015-10-01

    The X-ray pulsar telescope (XPT) is a complex optical payload, which involves optical, mechanical, electrical and thermal disciplines. Multiphysics coupling analysis (MCA) plays an important role in improving the in-orbit performance. However, conventional MCA methods encounter two serious problems in dealing with the XPT. One is that the energy and reflectivity information of the X-rays cannot be taken into consideration, which misrepresents the essence of the XPT. The other is that the coupling data cannot be transferred automatically among different disciplines, leading to computational inefficiency and high design cost. Therefore, a new MCA method for the XPT is proposed based on the Monte Carlo method and total reflection theory. The main idea, procedures and operational steps of the proposed method are addressed in detail. Firstly, the method takes both the energy and reflectivity information of the X-rays into consideration simultaneously, and the thermal-structural coupling equations and the multiphysics coupling analysis model are formulated based on the finite element method. The thermal-structural coupling analysis under different working conditions has then been implemented. Secondly, the mirror deformations are obtained using a construction geometry function, a polynomial function is adopted to fit the deformed mirror, and the fitting error is evaluated. Thirdly, the focusing performance of the XPT is evaluated by the RMS. Finally, a Wolter-I XPT is taken as an example to verify the proposed MCA method. The simulation results show that the thermal-structural coupling deformation is bigger than the others, and the variation law of the deformation effect on the focusing performance has been obtained. The focusing performances under thermal-structural, thermal, and structural deformations have degraded by 30.01%, 14.35% and 7.85%, respectively. The RMS values of the dispersion spot are 2.9143 mm, 2.2038 mm and 2.1311 mm. As a result, the validity of the proposed method is verified through…
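
    One small step from the workflow above, fitting the deformed mirror profile with a polynomial and reporting RMS values, can be illustrated with synthetic data as follows; the fabricated deformation below has nothing to do with the Wolter-I results quoted in the abstract.

      import numpy as np

      rng = np.random.default_rng(1)
      z = np.linspace(0.0, 300.0, 121)              # axial coordinate along the mirror [mm]
      # synthetic deformed-mirror profile [mm]: smooth trend plus small noise (fabricated)
      deformation = 1e-3 * (0.5 + 0.8*(z/300) - 1.2*(z/300)**2) + 5e-6 * rng.standard_normal(z.size)

      coeffs = np.polyfit(z, deformation, deg=4)    # polynomial fit of the deformed profile
      fit = np.polyval(coeffs, z)

      fit_error_rms   = np.sqrt(np.mean((deformation - fit) ** 2))
      deformation_rms = np.sqrt(np.mean(deformation ** 2))
      print(f"RMS deformation = {deformation_rms*1e3:.4f} um, "
            f"RMS fit error = {fit_error_rms*1e3:.4f} um")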

  17. Validation and Calibration of Nuclear Thermal Hydraulics Multiscale Multiphysics Models - Subcooled Flow Boiling Study

    SciTech Connect

    Anh Bui; Nam Dinh; Brian Williams

    2013-09-01

    In addition to the validation data plan, development of advanced techniques for calibration and validation of complex multiscale, multiphysics nuclear reactor simulation codes is a main objective of the CASL VUQ plan. Advanced modeling of LWR systems normally involves a range of physico-chemical models describing multiple interacting phenomena, such as thermal hydraulics, reactor physics, coolant chemistry, etc., which occur over a wide range of spatial and temporal scales. To a large extent, the accuracy of (and uncertainty in) overall model predictions is determined by the correctness of various sub-models, which are not conservation-law based, but empirically derived from measurement data. Such sub-models normally require extensive calibration before the models can be applied to the analysis of real reactor problems. This work demonstrates a case study of calibration of a common model of subcooled flow boiling, which is an important multiscale, multiphysics phenomenon in LWR thermal hydraulics. The calibration process is based on a new strategy of model-data integration, in which all sub-models are simultaneously analyzed and calibrated using multiple sets of data of different types. Specifically, both data on large-scale distributions of void fraction and fluid temperature and data on the small-scale physics of wall evaporation were used simultaneously in this work's calibration. In a departure from the traditional (or common-sense) practice of tuning/calibrating complex models, a modern calibration technique based on statistical modeling and Bayesian inference was employed, which allowed simultaneous calibration of multiple sub-models (and related parameters) using different datasets. Quality of data (relevancy, scalability, and uncertainty) could be taken into consideration in the calibration process. This work presents a step forward in the development and realization of the "CIPS Validation Data Plan" at the Consortium for Advanced Simulation of LWRs to enable…

  18. The Role of Data Transfer on the Selection of a Single vs Multiple Mesh Architecture for Tightly Coupled Multiphysics Applications

    SciTech Connect

    Richard W. Johnson; Glen A. Hansen; Christopher K Newman

    2011-07-01

    Data transfer from one distinct mesh to another may be necessary in any number of applications, including prolongation operations supporting multigrid solution methods, spatial adaptation, remeshing, and arbitrary Lagrangian-Eulerian (ALE) and multiphysics simulation. This data transfer process is also referred to as remapping, rezoning and interpolation. Intermesh data transfer has the potential to introduce error into a simulation; the magnitude and importance of which depends on the transfer scenario and the algorithm used to perform the transfer. For a transient analysis, data transfer may occur many times during a simulation, with possible error accumulation at each transfer. The present study develops selected scenarios that illustrate data transfer error and how it might impact an analysis. This study examines remapping error by using static analytical functions to compare various remapping schemes. It also investigates the significance and nature of data transfer error for a simple multiphysics system involving a transient coupled system of partial differential equations. It concludes that remapping error can be significant both for static functions and for coupled multiphysics systems. Aggregate error is shown to be a function of remapping scheme, mesh coarseness, nature of the remapped function and mesh disparity. In cases of extreme mesh disparity, this study shows that remapping can lead to excessive error and even to solution instability. Further, this work motivates that remapping error should be included in the estimation of numerical error, if data transfer is employed in a numerical simulation.
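
    The flavor of the remapping error examined above can be reproduced in one dimension: the Python sketch below samples an analytical function on a fine source mesh, transfers it to a disparate coarser mesh by linear interpolation, and compares against the exact values; repeating the transfer back and forth shows that the error introduced by a single remap is not recovered and can grow before saturating. This is a toy illustration, not one of the study's remapping schemes or meshes.

      import numpy as np

      f = lambda x: np.sin(2 * np.pi * x) * np.exp(-x)   # static analytical test function

      x_src = np.linspace(0.0, 1.0, 201)    # fine source mesh
      x_tgt = np.linspace(0.0, 1.0, 17)     # coarse, disparate target mesh
      u_src = f(x_src)

      u_tgt = np.interp(x_tgt, x_src, u_src)             # single fine -> coarse transfer
      err_single = np.max(np.abs(u_tgt - f(x_tgt)))

      u = u_src.copy()
      for _ in range(20):                                # repeated transfers (e.g., per time step)
          u = np.interp(x_tgt, x_src, u)                 # fine -> coarse
          u = np.interp(x_src, x_tgt, u)                 # coarse -> fine
      err_repeated = np.max(np.abs(u - f(x_src)))

      print(f"single-transfer max error: {err_single:.3e}")
      print(f"after 20 round trips:      {err_repeated:.3e}")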

  19. Multiscale Modeling of Nano-scale Phenomena: Towards a Multiphysics Simulation Capability for Design and Optimization of Sensor Systems

    SciTech Connect

    Becker, R; McElfresh, M; Lee, C; Balhorn, R; White, D

    2003-12-01

    In this white paper, a road map is presented to establish a multiphysics simulation capability for the design and optimization of sensor systems that incorporate nanomaterials and technologies. The Engineering Directorate's solid/fluid mechanics and electromagnetic computer codes will play an important role in both multiscale modeling and integration of required physics issues to achieve a baseline simulation capability. Molecular dynamics simulations, performed primarily in the BBRP, CMS and PAT directorates, will provide information for the construction of multiscale models. All of the theoretical developments will require closely coupled experimental work to develop material models and validate simulations. The plan is synergistic and complementary with the Laboratory's emerging core competency of multiscale modeling. The first application of the multiphysics computer code is the simulation of a "simple" biological system (protein recognition utilizing synthesized ligands) that has a broad range of applications including detection of biological threats, presymptomatic detection of illnesses, and drug therapy. While the overall goal is to establish a simulation capability, the near-term work is mainly focused on (1) multiscale modeling, i.e., the development of "continuum" representations of nanostructures based on information from molecular dynamics simulations, and (2) experiments for model development and validation. A list of LDRDER proposals and ongoing projects that could be coordinated to achieve these near-term objectives and demonstrate the feasibility and utility of a multiphysics simulation capability is given.

  20. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than in the cases treated previously, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.

  1. Using COMSOL Multiphysics Software to Analyze the Thin Film Resistance Model of a Conductor on PET

    NASA Astrophysics Data System (ADS)

    Carradero-Santiago, Carolyn; Merced-Sanabria, Milzaida; Vedrine-Pauléus, Josee

    2015-03-01

    In this research work, we will develop a virtual model to analyze the electrical conductivity of a thin film with three layers: one of graphene or a conducting metal film, polyethylene terephthalate (PET), and poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS). COMSOL Multiphysics will be the software used to develop the virtual model of the thin-film layers. COMSOL software allows simulation and modelling of physical phenomena represented by differential equations, such as those of heat transfer, fluid flow, electromagnetism and structural mechanics. In this work, we will define the geometry of the model; in this case we want three layers: PET, the conducting layer and PEDOT:PSS. We will then add the materials, assigning PET as the lower layer, the conductor as the middle layer and PEDOT:PSS as the upper layer. We will analyze the model with varying thickness of the top conducting layer. This simulation will allow us to analyze the electrical conductivity and visualize the model with varying voltage potential, or bias, across the plates.
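
    Before building such a finite-element model, the expected trend can be checked with a back-of-the-envelope estimate: treating the conducting film and the PEDOT:PSS layer as sheet resistors in parallel (with the PET substrate as an ideal insulator) gives the combined sheet resistance as a function of conductor thickness. The conductivities and thicknesses below are placeholder values for illustration, not measured data from this work.

        # sheet conductances of the conducting layers add in parallel; PET is treated as an insulator
        sigma_conductor = 6.0e7      # S/m, metal-like film conductivity (assumed)
        sigma_pedot = 1.0e3          # S/m, PEDOT:PSS conductivity (assumed)
        t_pedot = 100e-9             # m, PEDOT:PSS thickness (assumed)

        def sheet_resistance(t_conductor):
            """Combined sheet resistance (ohm per square) for a given conductor thickness."""
            g = sigma_conductor * t_conductor + sigma_pedot * t_pedot
            return 1.0 / g

        for t in (5e-9, 20e-9, 100e-9):              # sweep the conductor thickness
            print(f"t = {t * 1e9:5.1f} nm  ->  R_sheet = {sheet_resistance(t):10.4f} ohm/sq")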

  2. Coupling between a multi-physics workflow engine and an optimization framework

    NASA Astrophysics Data System (ADS)

    Di Gallo, L.; Reux, C.; Imbeaux, F.; Artaud, J.-F.; Owsiak, M.; Saoutic, B.; Aiello, G.; Bernardi, P.; Ciraolo, G.; Bucalossi, J.; Duchateau, J.-L.; Fausser, C.; Galassi, D.; Hertout, P.; Jaboulay, J.-C.; Li-Puma, A.; Zani, L.

    2016-03-01

    A generic coupling method between a multi-physics workflow engine and an optimization framework is presented in this paper. The coupling architecture has been developed in order to preserve the integrity of the two frameworks. The objective is to make it possible to replace a framework, a workflow or an optimizer by another one without changing the whole coupling procedure or modifying the main content of each framework. The coupling is achieved by using a socket-based communication library for exchanging data between the two frameworks. Among the algorithms provided by optimization frameworks, Genetic Algorithms (GAs) have demonstrated their efficiency on single- and multiple-criteria optimization. In addition to their robustness, GAs can handle invalid data which may appear during the optimization. Consequently, GAs work in the most general cases. A parallelized framework has been developed to reduce the time spent on optimizations and on the evaluation of large samples. A test has shown good scaling efficiency of this parallelized framework. This coupling method has been applied to the case of SYCOMORE (SYstem COde for MOdeling tokamak REactor), a system code developed in the form of a modular workflow for designing magnetic fusion reactors. The coupling of SYCOMORE with the optimization platform URANIE enables design optimization with respect to various figures of merit and constraints.
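
    A minimal sketch of the kind of socket-based data exchange described above: a stand-in "workflow engine" evaluates a figure of merit for each design vector that a stand-in "optimizer" sends over a local socket. The JSON message format, port number and dummy objective are assumptions for illustration only; this is not SYCOMORE or URANIE code.

        import json
        import socket
        import threading

        HOST, PORT = "127.0.0.1", 50007          # arbitrary local port
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind((HOST, PORT))
        server.listen()

        def workflow_engine(srv):
            # stand-in for the workflow engine: evaluate one design per connection
            for _ in range(3):
                conn, _ = srv.accept()
                with conn:
                    design = json.loads(conn.recv(4096).decode())
                    merit = sum((x - 1.0) ** 2 for x in design["x"])   # dummy figure of merit
                    conn.sendall(json.dumps({"merit": merit}).encode())

        threading.Thread(target=workflow_engine, args=(server,), daemon=True).start()

        # stand-in for the optimizer: request evaluations of candidate designs
        for candidate in ([0.0, 0.0], [0.5, 1.0], [1.0, 1.0]):
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
                client.connect((HOST, PORT))
                client.sendall(json.dumps({"x": candidate}).encode())
                print(candidate, "->", json.loads(client.recv(4096).decode())["merit"])
        server.close()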

  3. Multiphysics numerical modeling of the continuous flow microwave-assisted transesterification process.

    PubMed

    Muley, Pranjali D; Boldor, Dorin

    2012-01-01

    The use of advanced microwave technology for biodiesel production from vegetable oil is relatively new. Microwave dielectric heating increases the process efficiency and reduces reaction time. Microwave heating depends on various factors such as material properties (dielectric and thermo-physical), frequency of operation and system design. Although lab-scale results are promising, it is important to study these parameters and optimize the process before scaling up. A numerical modeling approach can be applied to predict heating and temperature profiles, including at larger scales. The process can be studied for optimization without actually performing the experiments, reducing the amount of experimental work required. A basic numerical model of continuous electromagnetic heating of biodiesel precursors was developed. A finite element model was built using COMSOL Multiphysics 4.2 software by coupling the electromagnetic problem with the fluid flow and heat transfer problem. Chemical reaction was not taken into account. Material dielectric properties were obtained experimentally, while the thermal properties were obtained from the literature (all properties were temperature dependent). The model was tested at two power levels, 4000 W and 4700 W, at a constant flow rate of 840 ml/min. The electric field, electromagnetic power density flow and temperature profiles were studied. The resulting temperature profiles were validated by comparison with temperatures measured at specific locations in the experiment. The results obtained were in good agreement with the experimental data. PMID:24432470
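
    The scaling behind such a model can be sanity-checked with the standard dielectric heating relation P = 2*pi*f*eps0*eps''*E_rms^2 and an energy balance on the flowing oil. In the sketch below, the loss factor, field strength, applicator volume and thermo-physical properties are illustrative assumptions rather than the measured values used in the paper; only the 840 ml/min flow rate is taken from the abstract.

        import math

        f = 2.45e9                  # Hz, typical magnetron frequency
        eps0 = 8.854e-12            # F/m, vacuum permittivity
        eps_loss = 0.35             # dielectric loss factor of the oil/methanol mix (assumed)
        E_rms = 1.5e4               # V/m, local rms electric field (assumed)
        rho, cp = 900.0, 2000.0     # kg/m^3 and J/(kg K), placeholder thermo-physical values
        flow = 840e-6 / 60.0        # m^3/s, the 840 ml/min flow rate quoted above
        volume = 2.0e-4             # m^3, heated applicator volume (assumed)

        p_vol = 2.0 * math.pi * f * eps0 * eps_loss * E_rms ** 2   # volumetric power deposition, W/m^3
        dT = p_vol * volume / (rho * cp * flow)                    # steady-state bulk temperature rise
        print(f"power density {p_vol / 1e6:.1f} MW/m^3, absorbed power {p_vol * volume:.0f} W, bulk rise {dT:.0f} K")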

  4. Multi-physics model of a thermo-magnetic energy harvester

    NASA Astrophysics Data System (ADS)

    Joshi, Keyur B.; Priya, Shashank

    2013-05-01

    Harvesting small thermal gradients effectively to generate electricity still remains a challenge. Ujihara et al (2007 Appl. Phys. Lett. 91 093508) have recently proposed a thermo-magnetic energy harvester that incorporates a combination of hard and soft magnets on a vibrating beam structure and two opposing heat transfer surfaces. This design has many advantages and could present an optimum solution to harvest energy in low temperature gradient conditions. In this paper, we describe a multi-physics numerical model for this harvester configuration that incorporates all the relevant parameters, including heat transfer, magnetic force, beam vibration, contact surface and piezoelectricity. The model was used to simulate the complete transient behavior of the system. Results are presented for the evolution of the magnetic force, changes in the internal temperature of the soft magnet (gadolinium (Gd)), thermal contact conductance, contact pressure and heat transfer over a complete cycle. Variation of the vibration frequency with contact stiffness and gap distance was also modeled. Limit cycle behavior and its bifurcations are illustrated as a function of device parameters. The model was extended to include a piezoelectric energy harvesting mechanism and, using a piezoelectric bimorph as spring material, a maximum power of 318 μW was predicted across a 100 kΩ external load.

  5. An Object-Oriented Finite Element Framework for Multiphysics Phase Field Simulations

    SciTech Connect

    Michael R Tonks; Derek R Gaston; Paul C Millett; David Andrs; Paul Talbot

    2012-01-01

    The phase field approach is a powerful and popular method for modeling microstructure evolution. In this work, advanced numerical tools are used to create a phase field framework that facilitates rapid model development. This framework, called MARMOT, is based on Idaho National Laboratory's finite element Multiphysics Object-Oriented Simulation Environment. In MARMOT, the system of phase field partial differential equations (PDEs) is solved simultaneously with PDEs describing additional physics, such as solid mechanics and heat conduction, using the Jacobian-Free Newton Krylov method. An object-oriented architecture is created by taking advantage of commonalities in phase field models to facilitate the development of new models with very little written code. In addition, MARMOT provides access to mesh and time step adaptivity, reducing the cost of performing simulations with large disparities in both spatial and temporal scales. In this work, phase separation simulations are used to show the numerical performance of MARMOT. Deformation-induced grain growth and void growth simulations are included to demonstrate the multiphysics capability.
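
    As a toy counterpart to the kind of phase field PDE that MARMOT solves implicitly with the Jacobian-Free Newton Krylov method, the sketch below advances a 1-D Allen-Cahn equation with a simple explicit scheme; the double-well free energy, mobility and gradient-energy coefficient are arbitrary illustrative choices, and none of this is MARMOT/MOOSE code.

        import numpy as np

        n, dx, dt = 200, 1.0, 0.05
        kappa, mobility = 2.0, 1.0                              # gradient-energy coefficient, mobility
        eta = np.where(np.arange(n) < n // 2, 1.0, 0.0)         # sharp initial interface

        def laplacian(u):
            return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2   # periodic boundaries

        for step in range(2000):
            dfdeta = 2.0 * eta * (1.0 - eta) * (1.0 - 2.0 * eta)        # derivative of the double well
            eta += -mobility * dt * (dfdeta - kappa * laplacian(eta))   # explicit Allen-Cahn update

        width = int(np.sum((eta > 0.1) & (eta < 0.9)))
        print("diffuse interface occupies about", width, "cells")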

  6. Final report on LDRD project : coupling strategies for multi-physics applications.

    SciTech Connect

    Hopkins, Matthew Morgan; Moffat, Harry K.; Carnes, Brian; Hooper, Russell Warren; Pawlowski, Roger P.

    2007-11-01

    Many current and future modeling applications at Sandia, including ASC milestones, will critically depend on the simultaneous solution of vastly different physical phenomena. Issues due to code coupling are often not addressed, understood, or even recognized. The objectives of this LDRD have encompassed both theory and code development. We show that we have provided a fundamental analysis of coupling, i.e., of when strong coupling is needed versus a successive substitution strategy. We have enabled the implementation of tighter coupling strategies through additions to the NOX and Sierra code suites to make coupling strategies available now. We have leveraged existing functionality to do this. Specifically, we have built into NOX the capability to handle fully coupled simulations from multiple codes, and we have also built into NOX the capability to handle Jacobian-Free Newton Krylov simulations that link multiple applications. We show how this capability may be accessed from within the Sierra Framework as well as from outside of Sierra. The critical impact of this LDRD is that we have shown how to enable strong Newton-based coupling, and have delivered strategies for doing so, while respecting the modularity of existing codes. This will facilitate the use of these codes in a coupled manner to solve multi-physics applications.
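
    The distinction between successive substitution and strong Newton-based coupling can be illustrated with a toy two-field problem. The sketch below is purely illustrative (it is not NOX or Sierra code) and uses an arbitrary pair of coupled scalar equations.

        import numpy as np

        # two coupled "physics" residuals:  r1 = x - cos(y) = 0,  r2 = y - 0.5*sin(x) = 0
        def residual(v):
            x, y = v
            return np.array([x - np.cos(y), y - 0.5 * np.sin(x)])

        def jacobian(v):
            x, y = v
            return np.array([[1.0, np.sin(y)],
                             [-0.5 * np.cos(x), 1.0]])

        # successive substitution: each "code" solves its own equation with the other field frozen
        x, y = 0.0, 0.0
        for picard_its in range(1, 100):
            x = np.cos(y)
            y = 0.5 * np.sin(x)
            if np.linalg.norm(residual([x, y])) < 1e-12:
                break

        # strong coupling: Newton iteration on the joint residual
        v = np.zeros(2)
        for newton_its in range(1, 100):
            v = v - np.linalg.solve(jacobian(v), residual(v))
            if np.linalg.norm(residual(v)) < 1e-12:
                break

        print(f"successive substitution: {picard_its} iterations, Newton: {newton_its} iterations")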

  7. Validation of a 3D multi-physics model for unidirectional silicon solidification

    NASA Astrophysics Data System (ADS)

    Simons, Philip; Lankhorst, Adriaan; Habraken, Andries; Faber, Anne-Jans; Tiuleanu, Dumitru; Pingel, Roger

    2012-02-01

    A model for transient movements of solidification fronts has been added to X-stream, an existing multi-physics simulation program for high temperature processes with flow and chemical reactions. The implementation uses an enthalpy formulation and works on fixed grids. First, we show the results of a 2D tin solidification benchmark case, which allows a comparison of X-stream to two other codes and to measurements. Second, a complete 3D solar silicon Heat Exchange Method (HEM) furnace, as built by PVA TePla, is modeled. Here, it was necessary to model the complete geometry including the quartz crucible, radiative heaters, bottom cooling, inert flushing gas, etc. For one specific recipe of the transient heater power steering, PVA TePla conducted dip-rod measurements of the silicon solidification front position as a function of time. This yields a validation of the model when applied to a real-life industrial crystallization process. The results indicate that melt convection does influence the energy distribution up to the start of crystallization at the crucible bottom. From that point on, however, the release of latent heat seems to dominate the solidification process, and convection in the melt does not significantly influence the transient front shape.
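
    The fixed-grid enthalpy formulation mentioned above can be illustrated on a 1-D Stefan-type problem: the enthalpy is advanced by conduction, and the temperature is recovered by deciding whether a cell is solid, mushy (isothermal at the melting point while latent heat is released) or liquid. The sketch below uses nondimensional placeholder properties, not the tin or silicon data, and is not the X-stream implementation.

        import numpy as np

        n, dx, dt = 100, 0.01, 2e-5
        k, rho, cp, lat, tm = 1.0, 1.0, 1.0, 1.0, 0.0          # conductivity, density, heat capacity, latent heat, melting temperature
        hs, hl = rho * cp * tm, rho * cp * tm + rho * lat      # enthalpy bounds of the mushy state

        temp = np.full(n, 0.5)                                 # start fully molten
        temp[0] = -1.0                                         # chilled wall drives the front
        enth = np.where(temp > tm, rho * cp * temp + rho * lat, rho * cp * temp)

        for step in range(10000):
            lap = (temp[2:] - 2.0 * temp[1:-1] + temp[:-2]) / dx**2
            enth[1:-1] += dt * k * lap                         # explicit enthalpy update on a fixed grid
            # recover temperature: solid, mushy (isothermal at tm) or liquid
            temp = np.where(enth <= hs, enth / (rho * cp),
                   np.where(enth >= hl, (enth - rho * lat) / (rho * cp), tm))
            temp[0] = -1.0                                     # re-impose the fixed-temperature boundary

        print("solidification front near cell", int(np.argmax(enth >= hl)))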

  8. A novel medical image data-based multi-physics simulation platform for computational life sciences

    PubMed Central

    Neufeld, Esra; Szczerba, Dominik; Chavannes, Nicolas; Kuster, Niels

    2013-01-01

    Simulating and modelling complex biological systems in computational life sciences requires specialized software tools that can perform medical image data-based modelling, jointly visualize the data and computational results, and handle large, complex, realistic and often noisy anatomical models. The required novel solvers must provide the power to model the physics, biology and physiology of living tissue within the full complexity of the human anatomy (e.g. neuronal activity, perfusion and ultrasound propagation). A multi-physics simulation platform satisfying these requirements has been developed for applications including device development and optimization, safety assessment, basic research, and treatment planning. This simulation platform consists of detailed, parametrized anatomical models, a segmentation and meshing tool, a wide range of solvers and optimizers, a framework for the rapid development of specialized and parallelized finite element method solvers, a visualization toolkit-based visualization engine, a Python scripting interface for customized applications, a coupling framework, and more. Core components are cross-platform compatible and use open formats. Several examples of applications are presented: hyperthermia cancer treatment planning, tumour growth modelling, evaluating the magneto-haemodynamic effect as a biomarker and physics-based morphing of anatomical models. PMID:24427518

  9. An RCM multi-physics ensemble over Europe: multi-variable evaluation to avoid error compensation

    NASA Astrophysics Data System (ADS)

    García-Díez, Markel; Fernández, Jesús; Vautard, Robert

    2015-12-01

    Regional Climate Models are widely used tools to add detail to the coarse resolution of global simulations. However, these are known to be affected by biases. Usually, published model evaluations use a reduced number of variables, frequently precipitation and temperature. Due to the complexity of the models, this may not be enough to assess their physical realism (e.g. to enable a fair comparison when weighting ensemble members). Furthermore, looking at only a few variables makes it difficult to trace model errors. Thus, in many previous studies, these biases are described but their underlying causes and mechanisms are often left unknown. In this work, the ability of a multi-physics ensemble to reproduce the observed climatologies of many variables over Europe is analysed. These are temperature, precipitation, cloud cover, radiative fluxes and total soil moisture content. It is found that, during winter, the model suffers a significant cold bias over snow-covered regions. This is shown to be related to a poor representation of the snow-atmosphere interaction, and is amplified by an albedo feedback. It is shown how two members of the ensemble are able to alleviate this bias, but only by generating an excessively large cloud cover. During summer, a large sensitivity to the cumulus parameterization is found, related to large differences in the cloud cover and shortwave radiation flux. Results also show that small errors in one variable are sometimes a result of error compensation, so the high dimensionality of the model evaluation problem cannot be disregarded.

  10. DAG Software Architectures for Multi-Scale Multi-Physics Problems at Petascale and Beyond

    NASA Astrophysics Data System (ADS)

    Berzins, Martin

    2015-03-01

    The challenge of computation at petascale and beyond is to make efficient calculations possible on hundreds of thousands of cores or on large numbers of GPUs or Intel Xeon Phis. An important methodology for achieving this is at present thought to be asynchronous task-based parallelism. The success of this approach will be demonstrated using the Uintah software framework for the solution of coupled fluid-structure interaction problems with chemical reactions. The layered approach of this software makes it possible for the user to specify the physical problems without writing parallel code, and for that specification to be translated into a parallel set of tasks. These tasks are executed by a runtime system asynchronously and sometimes out of order. The scalability and portability of this approach will be demonstrated using examples from large scale combustion problems, industrial detonations and multi-scale, multi-physics models. The challenges of scaling such calculations to the next generations of leadership class computers (with more than a hundred petaflops) will be discussed. Thanks to NSF, XSEDE, DOE NNSA, DOE NETL, DOE ALCC and DOE INCITE.
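
    The essence of asynchronous task-based execution, i.e. independent tasks running concurrently and possibly completing out of order while dependent tasks wait for their inputs, can be sketched in a few lines. The example below is a deliberately tiny stand-in using a generic thread pool, not the Uintah runtime, and the task names and durations are made up.

        from concurrent.futures import ThreadPoolExecutor
        import time

        def make_task(name, duration):
            def task(*inputs):
                time.sleep(duration)          # stand-in for a physics kernel
                print("finished", name, "with inputs", list(inputs))
                return name
            return task

        with ThreadPoolExecutor(max_workers=4) as pool:
            # independent leaf tasks: submitted at once, may finish out of order
            advect = pool.submit(make_task("advect", 0.2))
            react = pool.submit(make_task("react", 0.1))
            # dependent task: runs only once both of its input futures are done
            project = pool.submit(lambda: make_task("project", 0.05)(advect.result(), react.result()))
            project.result()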

  11. Quench-Induced Stresses in AA2618 Forgings for Impellers: A Multiphysics and Multiscale Problem

    NASA Astrophysics Data System (ADS)

    Chobaut, Nicolas; Saelzle, Peter; Michel, Gilles; Carron, Denis; Drezet, Jean-Marie

    2015-05-01

    In the fabrication of heat-treatable aluminum parts such as AA2618 compressor impellers for turbochargers, solutionizing and quenching are key steps to obtain the required mechanical characteristics. Fast quenching is necessary to avoid coarse precipitation as it reduces the mechanical properties obtained after heat treatment. However, fast quenching induces residual stresses that can cause unacceptable distortions during machining. Furthermore, the remaining residual stresses after final machining can lead to unfavorable stresses in service. Predicting and controlling internal stresses during the whole processing from heat treatment to final machining is therefore of particular interest to prevent negative impacts of residual stresses. This problem is multiphysics because processes such as heat transfer during quenching, precipitation phenomena, thermally induced deformations, and stress generation are interacting and need to be taken into account. The problem is also multiscale as precipitates of nanosize form during quenching at locations where the cooling rate is too low. This precipitation affects the local yield strength of the material and thus impacts the level of macroscale residual stresses. A thermomechanical model accounting for precipitation in a simple but realistic way is presented. Instead of modelling precipitation that occurs during quenching, the model parameters are identified using a limited number of tensile tests achieved after representative interrupted cooling paths in a Gleeble machine. The simulation results are compared with as-quenched residual stresses in a forging measured by neutron diffraction.

  12. A multiphysics phase field model on melting and kinetic superheating of aluminum nanolayer and nanoparticle

    NASA Astrophysics Data System (ADS)

    Hwang, Yong Seok

    It has been found during the last decade that nanoscale melting of metals has very distinctive features compared to its microscale counterpart. It has been observed that a highly non-equilibrium state can result in extreme superheating of the solid state, which cannot be explained well by thermodynamic theories based on equilibrium or nucleation. The endeavor to find the superheating limit and the mechanisms of melting and superheating becomes more complicated when various physical phenomena are involved at similar scales. The main goal of this research is to establish a multiphysics model and to reveal the mechanism of melting and kinetic superheating of a metal nanostructure at high heating rates. The model includes elastodynamics; fast heating of the metal, accounting for delayed heat transfer between the electron gas and lattice phonons and couplings among the physical phenomena; and phase transformation incorporating thermal fluctuation. The model successfully reproduces two independent experiments, and several novel nanoscale physical phenomena are discovered: for example, the depression of the melting temperature of an Al nanolayer under plane stress conditions, the threshold heating rate of 10¹¹ K/s for kinetic superheating, a large temperature drop in the 5 nm collision region of the two solid-melt interfaces, and a strong effect of geometry on kinetic superheating in an Al core-shell nanostructure at high heating rates.

  13. Experimentally validated multiphysics computational model of focusing and shock wave formation in an electromagnetic lithotripter.

    PubMed

    Fovargue, Daniel E; Mitran, Sorin; Smith, Nathan B; Sankin, Georgy N; Simmons, Walter N; Zhong, Pei

    2013-08-01

    A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model. PMID:23927200

  14. Experimentally validated multiphysics computational model of focusing and shock wave formation in an electromagnetic lithotripter

    PubMed Central

    Fovargue, Daniel E.; Mitran, Sorin; Smith, Nathan B.; Sankin, Georgy N.; Simmons, Walter N.; Zhong, Pei

    2013-01-01

    A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model. PMID:23927200

  15. Research on Structural Safety of the Stratospheric Airship Based on Multi-Physics Coupling Calculation

    NASA Astrophysics Data System (ADS)

    Ma, Z.; Hou, Z.; Zang, X.

    2015-09-01

    As a large-scale flexible inflatable structure with a huge inner lifting-gas volume of several hundred thousand cubic meters, the stratospheric airship owes much of its structural performance to the thermal characteristics of its inner gas. During the floating flight, the day-night variation of the combined thermal condition leads to fluctuation of the flow field inside the airship, which markedly affects the pressure acting on the skin and the structural safety of the stratospheric airship. Based on this multi-physics coupling mechanism, a numerical procedure for the structural safety analysis of stratospheric airships is developed that integrates the thermal model, CFD model, finite element code and structural strength criterion. Based on these computational models, the distributions of deformations and stresses in the skin are calculated over the day-night cycle. The effects of load conditions and structural configurations on the structural safety of stratospheric airships in the floating condition are evaluated. The numerical results can serve as a reference for the structural design of stratospheric airships.

  16. Data-driven prognosis: a multi-physics approach verified via balloon burst experiment

    PubMed Central

    Chandra, Abhijit; Kar, Oliva

    2015-01-01

    A multi-physics formulation for data-driven prognosis (DDP) is developed. Unlike traditional predictive strategies that require controlled offline measurements or ‘training’ for determination of constitutive parameters to derive the transitional statistics, the proposed DDP algorithm relies solely on in situ measurements. It uses a deterministic mechanics framework, but the stochastic nature of the solution arises naturally from the underlying assumptions regarding the order of the conservation potential as well as the number of dimensions involved. The proposed DDP scheme is capable of predicting onset of instabilities. Because the need for offline testing (or training) is obviated, it can be easily implemented for systems where such a priori testing is difficult or even impossible to conduct. The prognosis capability is demonstrated here via a balloon burst experiment where the instability is predicted using only online visual observations. The DDP scheme never failed to predict the incipient failure, and no false-positives were issued. The DDP algorithm is applicable to other types of datasets. Time horizons of DDP predictions can be adjusted by using memory over different time windows. Thus, a big dataset can be parsed in time to make a range of predictions over varying time horizons.

  17. Multiscale Multiphysics Caprock Seal Analysis: A Case Study of the Farnsworth Unit, Texas, USA

    NASA Astrophysics Data System (ADS)

    Heath, J. E.; Dewers, T. A.; Mozley, P.

    2015-12-01

    Caprock sealing behavior depends on coupled processes that operate over a variety of length and time scales. Capillary sealing behavior depends on nanoscale pore throats and interfacial fluid properties. Larger-scale sedimentary architecture, fractures, and faults may govern properties of potential "seal-bypass" systems. We present the multiscale multiphysics investigation of sealing integrity of the caprock system that overlies the Morrow Sandstone reservoir, Farnsworth Unit, Texas. The Morrow Sandstone is the target injection unit for an on-going combined enhanced oil recovery-CO2 storage project by the Southwest Regional Partnership on Carbon Sequestration (SWP). Methods include small-to-large scale measurement techniques, including: focused ion beam-scanning electron microscopy; laser scanning confocal microscopy; electron and optical petrography; core examinations of sedimentary architecture and fractures; geomechanical testing; and a noble gas profile through sealing lithologies into the reservoir, as preserved from fresh core. The combined data set is used as part of a performance assessment methodology. The authors gratefully acknowledge the U.S. Department of Energy's (DOE) National Energy Technology Laboratory for sponsoring this project through the SWP under Award No. DE-FC26-05NT42591. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  18. A novel medical image data-based multi-physics simulation platform for computational life sciences.

    PubMed

    Neufeld, Esra; Szczerba, Dominik; Chavannes, Nicolas; Kuster, Niels

    2013-04-01

    Simulating and modelling complex biological systems in computational life sciences requires specialized software tools that can perform medical image data-based modelling, jointly visualize the data and computational results, and handle large, complex, realistic and often noisy anatomical models. The required novel solvers must provide the power to model the physics, biology and physiology of living tissue within the full complexity of the human anatomy (e.g. neuronal activity, perfusion and ultrasound propagation). A multi-physics simulation platform satisfying these requirements has been developed for applications including device development and optimization, safety assessment, basic research, and treatment planning. This simulation platform consists of detailed, parametrized anatomical models, a segmentation and meshing tool, a wide range of solvers and optimizers, a framework for the rapid development of specialized and parallelized finite element method solvers, a visualization toolkit-based visualization engine, a Python scripting interface for customized applications, a coupling framework, and more. Core components are cross-platform compatible and use open formats. Several examples of applications are presented: hyperthermia cancer treatment planning, tumour growth modelling, evaluating the magneto-haemodynamic effect as a biomarker and physics-based morphing of anatomical models. PMID:24427518

  19. Multi-physical model of cation and water transport in ionic polymer-metal composite sensors

    NASA Astrophysics Data System (ADS)

    Zhu, Zicai; Chang, Longfei; Horiuchi, Tetsuya; Takagi, Kentaro; Aabloo, Alvo; Asaka, Kinji

    2016-03-01

    Ion-migration-based electrical potentials exist widely, not only in natural systems but also in ionic polymer materials. In this paper, we present a multi-physical model and investigate the transport of cations and water in ionic polymer-metal composites, based on a thorough understanding of the ionic sensing mechanisms. The whole transport process is described by transport equations accounting for convective flux under the total pressure gradient, electrical migration driven by the built-in electric field, and the coupling between cations and water. Through numerical analysis, the influence of the critical material parameters (the elastic modulus Ewet, the hydraulic permeability coefficient K, the cation and water diffusion coefficients dII and dWW, and the water drag coefficient ndW) on the distribution of cations and water was investigated. We determined how these parameters correlate with the voltage characteristics (both magnitude and response speed) under a step bending. Additionally, it was found that the effective relative dielectric constant ɛr has little influence on the voltage but is positively correlated with the current. With a series of optimized parameters, the predicted voltage agreed well with the experimental results, validating our model. Based on the physical model, it is suggested that an ionic polymer sensor can benefit from a higher modulus Ewet, a higher coefficient K, a lower coefficient dII, and a higher constant ɛr.

  20. Investigation of hemodynamics during cardiopulmonary bypass: A multiscale multiphysics fluid-structure-interaction study.

    PubMed

    Neidlin, Michael; Sonntag, Simon J; Schmitz-Rode, Thomas; Steinseifer, Ulrich; Kaufmann, Tim A S

    2016-04-01

    Neurological complications often occur during cardiopulmonary bypass (CPB). Hypoperfusion of brain tissue due to diminished cerebral autoregulation (CA) and thromboembolism from atherosclerotic plaque reduce the cerebral oxygen supply and increase the risk of perioperative stroke. To improve the outcome of cardiac surgeries, patient-specific computational fluid dynamics (CFD) models can be used to investigate the blood flow during CPB. In this study, we establish a computational model of CPB which includes cerebral autoregulation and movement of the aortic walls on the basis of in vivo measurements. First, the baroreflex mechanism, which plays a leading role in CA, is represented with a 0-D control circuit and coupled to the 3-D domain with differential equations as boundary conditions. Additionally, a two-way coupled fluid-structure interaction (FSI) model with CA is set up. The wall shear stress (WSS) distribution is computed for the whole FSI domain and a comparison to rigid wall CFD is made. Both constant-flow and pulsatile-flow CPB are considered. Rigid wall CFD delivers higher wall shear stress values than FSI simulations, especially during pulsatile perfusion. The flow rates through the supraaortic vessels are barely affected when considered as percentages of total cannula output. The developed multiphysics, multiscale framework allows deeper insight into the underlying mechanisms during CPB on a patient-specific basis. PMID:26908181

  1. Partitioned coupling strategies for multi-physically coupled radiative heat transfer problems

    NASA Astrophysics Data System (ADS)

    Wendt, Gunnar; Erbts, Patrick; Düster, Alexander

    2015-11-01

    This article proposes new aspects of a partitioned solution strategy for multi-physically coupled fields, including the physics of thermal radiation. In particular, we focus on the partitioned treatment of electro-thermo-mechanical problems with an additional fourth thermal radiation field. One of the main goals is to take advantage of the flexibility of the partitioned approach to enable combinations of different simulation software and solvers. Within the scope of this article, we limit ourselves to the case of nonlinear thermoelasticity at finite strains, using temperature-dependent material parameters. For the thermal radiation field, diffuse radiating surfaces and gray participating media are assumed. Moreover, we present a robust and fast partitioned coupling strategy for the four-field problem. Stability and efficiency of the implicit coupling algorithm are improved by drawing on several methods to stabilize and accelerate convergence. To review the effectiveness and advantages of the additional thermal radiation field, several numerical examples are considered to study the proposed algorithm. In particular, we focus on an industrial application, namely the electro-thermo-mechanical modeling of field-assisted sintering technology.
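
    One widely used way to stabilize and accelerate an implicit partitioned coupling loop is Aitken dynamic under-relaxation of the interface fixed-point iteration. The sketch below shows the idea on a scalar toy problem; it is not the authors' algorithm, and the stand-in field operator and initial relaxation factor are arbitrary.

        import math

        def field_operator(x):
            # composition of the partitioned single-field solves: interface value in, new value out
            return math.cos(1.5 * x) + 0.2 * x   # toy operator whose plain fixed-point iteration diverges

        def aitken_coupling(x0, omega0=0.5, tol=1e-12, max_it=100):
            x, omega, r_old = x0, omega0, None
            for k in range(1, max_it + 1):
                r = field_operator(x) - x                  # interface residual
                if abs(r) < tol:
                    return x, k
                if r_old is not None and r != r_old:       # Aitken update of the relaxation factor
                    omega = -omega * r_old / (r - r_old)
                x += omega * r                             # relaxed interface update
                r_old = r
            return x, max_it

        x, iters = aitken_coupling(0.0)
        print(f"converged to {x:.10f} in {iters} iterations")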

  2. Multiphysics modeling of two-phase film boiling within porous corrosion deposits

    NASA Astrophysics Data System (ADS)

    Jin, Miaomiao; Short, Michael

    2016-07-01

    Porous corrosion deposits on nuclear fuel cladding, known as CRUD, can cause multiple operational problems in light water reactors (LWRs). CRUD can accelerate corrosion of the fuel cladding, increase radiation fields (and hence the exposure risk to plant workers) once activated, and induce a downward axial power shift causing an imbalance in the core power distribution. In order to facilitate a better understanding of CRUD's effects, such as localized high cladding surface temperatures related to accelerated corrosion rates, we describe an improved, fully coupled, multiphysics model to simulate heat transfer, chemical reactions and transport, and two-phase fluid flow within these deposits. Our new model features a revised assumption of 2D, two-phase film boiling within the CRUD, correcting earlier models' assumptions of single-phase coolant flow with wick boiling under high heat fluxes. This model helps to better explain observed experimental values of the effective CRUD thermal conductivity. Finally, we propose a more complete set of boiling regimes, or a more detailed mechanism, to explain recent CRUD deposition experiments by suggesting the new concept of double dryout specifically in thick porous media with boiling chimneys.

  3. The Data Transfer Kit: A geometric rendezvous-based tool for multiphysics data transfer

    SciTech Connect

    Slattery, S. R.; Wilson, P. P. H.; Pawlowski, R. P.

    2013-07-01

    The Data Transfer Kit (DTK) is a software library designed to provide parallel data transfer services for arbitrary physics components based on the concept of geometric rendezvous. The rendezvous algorithm provides a means to geometrically correlate two geometric domains that may be arbitrarily decomposed in a parallel simulation. By repartitioning both domains such that they have the same geometric domain on each parallel process, efficient and load balanced search operations and data transfer can be performed at a desirable algorithmic time complexity with low communication overhead relative to other types of mapping algorithms. With the increased development efforts in multiphysics simulation and other multiple mesh and geometry problems, generating parallel topology maps for transferring fields and other data between geometric domains is a common operation. The algorithms used to generate parallel topology maps based on the concept of geometric rendezvous as implemented in DTK are described with an example using a conjugate heat transfer calculation and thermal coupling with a neutronics code. In addition, we provide the results of initial scaling studies performed on the Jaguar Cray XK6 system at Oak Ridge National Laboratory for a worst-case-scenario problem in terms of algorithmic complexity that shows good scaling on O(1 x 10⁴) cores for topology map generation and excellent scaling on O(1 x 10⁵) cores for the data transfer operation with meshes of O(1 x 10⁹) elements. (authors)

  4. A Three-Dimensional Multi-Mesh Lattice Boltzmann Model for Multiphysics Simulations

    NASA Astrophysics Data System (ADS)

    Hashemi, Amirreza; Eshraghi, Mohsen; Felicelli, Sergio

    2015-11-01

    The lattice Boltzmann method (LBM) is known as an attractive computational method for modeling fluid flow and, more recently, transport phenomena. As with any numerical method, the computational cost of LBM simulations depends on the density of the computational grids. The cost of simulations can become enormous when multiple equations are solved in three dimensions. In this work, the development of a multi-block multi-grid LBM model is discussed for three-dimensional (3D) multiphysics simulations. In a system of multiple coupled equations with different length scales, a multi-block mesh with different grids for each model would enhance the computational efficiency and stability of the model. Embedded-type grids facilitate the transfer of information between lattices while allowing larger time steps. In addition, a non-uniform mesh is considered within each model, allowing mesh refinement within each physical model when required. The multi-mesh method was developed to solve for transport phenomena including fluid flow, mass and heat transfer. The huge memory demands of LBM simulations in 3D were significantly reduced using this scheme. Moreover, by reducing the number of lattice points, the communication cost in parallel processing was greatly decreased.
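
    The basic collide-and-stream cycle that such multi-block, multi-grid models build on can be written in a few lines. The sketch below is a single-grid D1Q3 lattice Boltzmann diffusion solver, intended only to illustrate the cycle; the relaxation time, grid size and initial condition are arbitrary, and none of the multi-block machinery described above is included.

        import numpy as np

        nx, tau, steps = 200, 0.8, 2000
        w = np.array([2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0])          # D1Q3 weights (rest, +x, -x)
        rho = np.exp(-((np.arange(nx) - nx / 2.0) ** 2) / 50.0)  # initial Gaussian pulse
        f = w[:, None] * rho[None, :]                            # start from equilibrium

        for step in range(steps):
            rho = f.sum(axis=0)                  # macroscopic density
            feq = w[:, None] * rho[None, :]      # local equilibrium (pure diffusion, no advection)
            f += -(f - feq) / tau                # BGK collision
            f[1] = np.roll(f[1], 1)              # stream the +x population
            f[2] = np.roll(f[2], -1)             # stream the -x population

        print(f"lattice diffusivity {(tau - 0.5) / 3.0:.3f}, total mass conserved: {f.sum():.6f}")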

  5. Tailoring microfluidic systems for organ-like cell culture applications using multiphysics simulations

    NASA Astrophysics Data System (ADS)

    Hagmeyer, Britta; Schütte, Julia; Böttger, Jan; Gebhardt, Rolf; Stelzle, Martin

    2013-03-01

    Replacing animal testing with in vitro cocultures of human cells is a long-term goal in pre-clinical drug tests used to gain reliable insight into drug-induced cell toxicity. However, current state-of-the-art 2D or 3D cell cultures aiming at mimicking human organs in vitro still lack organ-like morphology and perfusion and thus organ-like functions. To this end, microfluidic systems enable construction of cell culture devices which can be designed to more closely resemble the smallest functional unit of organs. Multiphysics simulations represent a powerful tool to study the various relevant physical phenomena and their impact on functionality inside microfluidic structures. This is particularly useful as it allows for assessment of system functions already during the design stage prior to actual chip fabrication. In the HepaChip®, dielectrophoretic forces are used to assemble human hepatocytes and human endothelial cells in liver sinusoid-like structures. Numerical simulations of flow distribution, shear stress, electrical fields and heat dissipation inside the cell assembly chambers as well as surface wetting and surface tension effects during filling of the microchannel network supported the design of this human-liver-on-chip microfluidic system for cell culture applications. Based on the device design resulting thereof, a prototype chip was injection-moulded in COP (cyclic olefin polymer). Functional hepatocyte and endothelial cell cocultures were established inside the HepaChip® showing excellent metabolic and secretory performance.

  6. A multiphysics and multiscale model for low frequency electromagnetic direct-chill casting

    NASA Astrophysics Data System (ADS)

    Košnik, N.; Guštin, A. Z.; Mavrič, B.; Šarler, B.

    2016-03-01

    Simulation and control of macrosegregation, deformation and grain size in low frequency electromagnetic (EM) direct-chill casting (LFEMC) is important for downstream processing. Accordingly, a multiphysics and multiscale model is developed for the solution of the Lorentz force, temperature, velocity, concentration, deformation and grain structure of LFEMC-processed aluminum alloys, with a focus on axisymmetric billets. The mixture equations with the lever rule, a linearized phase diagram, and a stationary thermoelastic solid phase are assumed, together with the EM induction equation for the field imposed by the coil. An explicit diffuse approximate meshless solution procedure [1] is used for solving the EM field, and the explicit local radial basis function collocation method [2] is used for solving the coupled transport phenomena and thermomechanics fields. Pressure-velocity coupling is performed by the fractional step method [3]. The point automata method with a modified KGT model is used to estimate the grain structure [4] in a post-processing mode. Thermal, mechanical, EM and grain structure outcomes of the model are demonstrated. The model enables a systematic study of the complicated influences of the process parameters, including the intensity and frequency of the electromagnetic field. The meshless solution framework, currently implemented with the simplest physical models, will be further extended by including more sophisticated microsegregation and grain structure models, as well as a more realistic solid and solid-liquid phase rheology.

  7. How to accurately bypass damage

    PubMed Central

    Broyde, Suse; Patel, Dinshaw J.

    2016-01-01

    Ultraviolet radiation can cause cancer through DNA damage — specifically, by linking adjacent thymine bases. Crystal structures show how the enzyme DNA polymerase η accurately bypasses such lesions, offering protection. PMID:20577203

  8. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, David C.; Goorvitch, D.

    1994-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving Schrödinger's equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues, the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
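
    The idea can be illustrated with a minimal sketch: compute the ground-state eigenvalue of a 1-D harmonic oscillator (exact value 0.5) with a second-order finite-difference Hamiltonian on two grids whose spacing differs by a factor of two, then eliminate the leading O(h^2) error term by Richardson extrapolation. The grid sizes, domain and the assumption of a pure h^2 leading error are illustrative choices, not the paper's actual procedure.

        import numpy as np

        def ground_state_energy(n, xmax=8.0):
            x, h = np.linspace(-xmax, xmax, n, retstep=True)
            # -(1/2) psi'' + (1/2) x^2 psi = E psi with central second differences
            main = 1.0 / h**2 + 0.5 * x**2
            off = -0.5 / h**2 * np.ones(n - 1)
            H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
            return np.linalg.eigvalsh(H)[0]

        E_coarse = ground_state_energy(201)              # grid spacing h
        E_fine = ground_state_energy(401)                # grid spacing h/2
        E_richardson = (4.0 * E_fine - E_coarse) / 3.0   # cancel the O(h^2) error term
        print(f"coarse {E_coarse:.8f}  fine {E_fine:.8f}  extrapolated {E_richardson:.8f}  exact 0.5")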

  9. Development of high-fidelity multiphysics system for light water reactor analysis

    NASA Astrophysics Data System (ADS)

    Magedanz, Jeffrey W.

    There has been a tendency in recent years toward greater heterogeneity in reactor cores, due to the use of mixed-oxide (MOX) fuel, burnable absorbers, and longer cycles with consequently higher fuel burnup. The resulting asymmetry of the neutron flux and energy spectrum between regions with different compositions causes a need to account for the directional dependence of the neutron flux, instead of the traditional diffusion approximation. Furthermore, the presence of both MOX and high-burnup fuel in the core increases the complexity of the heat conduction. The heat transfer properties of the fuel pellet change with irradiation, and the thermal and mechanical expansion of the pellet and cladding strongly affect the size of the gap between them, and its consequent thermal resistance. These operational tendencies require higher fidelity multi-physics modeling capabilities, and this need is addressed by the developments performed within this PhD research. The dissertation describes the development of a High-Fidelity Multi-Physics System for Light Water Reactor Analysis. It consists of three coupled codes -- CTF for Thermal Hydraulics, TORT-TD for Neutron Kinetics, and FRAPTRAN for Fuel Performance. It is meant to address these modeling challenges in three ways: (1) by resolving the state of the system at the level of each fuel pin, rather than homogenizing entire fuel assemblies, (2) by using the multi-group Discrete Ordinates method to account for the directional dependence of the neutron flux, and (3) by using a fuel-performance code, rather than a Thermal Hydraulics code's simplified fuel model, to account for the material behavior of the fuel and its feedback to the hydraulic and neutronic behavior of the system. While the first two are improvements, the third, the use of a fuel-performance code for feedback, constitutes an innovation in this PhD project. Also important to this work is the manner in which such coupling is written. While coupling involves combining

  10. Multi-initial-conditions and Multi-physics Ensembles in the Weather Research and Forecasting Model to Improve Coastal Stratocumulus Forecasts for Solar Power Integration

    NASA Astrophysics Data System (ADS)

    Yang, H.

    2015-12-01

    In coastal Southern California, variation in solar energy production is predominantly due to the presence of stratocumulus clouds (Sc), as they greatly attenuate surface solar irradiance and cover most distributed photovoltaic systems on summer mornings. Correct prediction of the spatial coverage and lifetime of coastal Sc is therefore vital to the accuracy of solar energy forecasts in California. In Weather Research and Forecasting (WRF) model simulations, underprediction of Sc inherent in the initial conditions directly leads to an underprediction of Sc in the resulting forecasts. Hence, preprocessing methods were developed to create initial conditions more consistent with observational data and reduce spin-up time requirements. Mathiesen et al. (2014) previously developed a cloud data assimilation system to force WRF initial conditions to contain cloud liquid water based on CIMSS GOES Sounder cloud cover. The Well-mixed Preprocessor and Cloud Data Assimilation (WEMPPDA) package merges an initial guess of cloud liquid water content obtained from mixed-layer theory with assimilated CIMSS GOES Sounder cloud cover to more accurately represent the spatial coverage of Sc at initialization. The extent of Sc inland penetration is often constrained topographically; therefore, the low inversion base height (IBH) bias in NAM initial conditions decreases Sc inland penetration. The Inversion Base Height (IBH) package perturbs the initial IBH by the difference between model IBH and the 12Z radiosonde measurement. The performance of these multi-initial-condition configurations was evaluated over June, 2013 against SolarAnywhere satellite-derived surface irradiance data. Four configurations were run: 1) NAM initial conditions, 2) RAP initial conditions, 3) WEMPPDA applied to NAM, and 4) IBH applied to NAM. Both preprocessing methods showed significant improvement in the prediction of both spatial coverage and lifetime of coastal Sc. The best performing configuration was then

  11. Variability of West African monsoon patterns generated by a WRF multi-physics ensemble

    NASA Astrophysics Data System (ADS)

    Klein, Cornelia; Heinzeller, Dominikus; Bliefernicht, Jan; Kunstmann, Harald

    2015-11-01

    The credibility of regional climate simulations over West Africa stands and falls with the ability to reproduce the West African monsoon (WAM), whose precipitation plays a pivotal role for people's livelihood. In this study, we simulate the WAM for the wet year 1999 with a 27-member multi-physics ensemble of the Weather Research and Forecasting (WRF) model. We investigate the inter-member differences in a process-based manner in order to extract generalizable information on the behavior of the tested cumulus (CU), microphysics (MP), and planetary boundary layer (PBL) schemes. Precipitation, temperature and atmospheric dynamics are analyzed in comparison to the Tropical Rainfall Measuring Mission (TRMM) rainfall estimates, the Global Precipitation Climatology Centre (GPCC) gridded gauge-analysis, the Global Historical Climatology Network (GHCN) gridded temperature product and the forcing data (ERA-Interim) to explore interdependencies of processes leading to a certain WAM regime. We find that MP and PBL schemes contribute most to the ensemble spread (147 mm month⁻¹) for monsoon precipitation over the study region. Furthermore, PBL schemes have a strong influence on the movement of the WAM rainband because of their impact on the cloud fraction, which ranges from 8 to 20 % at 600 hPa during August. More low- and mid-level clouds result in less incoming radiation and a weaker monsoon. Ultimately, we identify the differing intensities of the moist Hadley-type meridional circulation that connects the monsoon winds to the Tropical Easterly Jet as the main source of inter-member differences. The ensemble spread of Sahel precipitation and associated dynamics for August 1999 is comparable to the observed inter-annual spread (1979-2010) between dry and wet years, emphasizing the strong potential impact of regional processes and the need for a careful selection of model parameterizations.

  12. Multiphysics Modeling of Microwave Heating of a Frozen Heterogeneous Meal Rotating on a Turntable.

    PubMed

    Pitchai, Krishnamoorthy; Chen, Jiajia; Birla, Sohan; Jones, David; Gonzalez, Ric; Subbiah, Jeyamkondan

    2015-12-01

    A 3-dimensional (3-D) multiphysics model was developed to understand the microwave heating process of a real heterogeneous food, multilayered frozen lasagna. Near-perfect 3-D geometries of the food package and microwave oven were used. A multiphase porous media model combining the electromagnetic heat source with heat and mass transfer, and incorporating the phase changes of melting and evaporation, was included in the finite element model. Discrete rotation of the food on the turntable was incorporated. The model simulated 6 min of microwave cooking of a 450 g frozen lasagna placed at the center of the rotating turntable in a 1200 W domestic oven. Temperature-dependent dielectric and thermal properties of lasagna ingredients were measured and provided as inputs to the model. Simulated temperature profiles were compared with experimental temperature profiles obtained using a thermal imaging camera and fiber-optic sensors. The total moisture loss in the lasagna was predicted and compared with the experimental moisture loss during cooking. The simulated spatial temperature patterns predicted at the top layer were in good agreement with the corresponding patterns observed in thermal images. Predicted point temperature profiles at 6 different locations within the meal were compared with experimental temperature profiles, and root mean square error (RMSE) values ranged from 6.6 to 20.0 °C. The predicted total moisture loss matched well, with an RMSE value of 0.54 g. Different layers of food components showed considerably different heating performance. Food product developers can use this model to design food products by understanding the effects of the thickness, order and material properties of each layer, and of the packaging shape, on cooking performance. PMID:26556025

  13. Mesh-free data transfer algorithms for partitioned multiphysics problems: Conservation, accuracy, and parallelism

    NASA Astrophysics Data System (ADS)

    Slattery, Stuart R.

    2016-02-01

    In this paper we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. These scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
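
    A minimal sketch of the spline-interpolation variant of such a transfer, using a compactly supported Wendland C2 radial basis function: weights are fitted so the transferred field is exact at the source points, then evaluated at the target points. The point clouds, support radius and test function below are arbitrary illustrations, and the dense linear algebra stands in for the sparse, parallel implementation discussed in the paper.

        import numpy as np

        def wendland_c2(r, radius):
            q = np.clip(r / radius, 0.0, 1.0)
            return (1.0 - q) ** 4 * (4.0 * q + 1.0)      # compact support: zero for r > radius

        rng = np.random.default_rng(0)
        src = rng.uniform(0.0, 1.0, size=(80, 2))        # scattered "source mesh" points
        tgt = rng.uniform(0.0, 1.0, size=(40, 2))        # scattered "target mesh" points
        field = lambda p: np.sin(2 * np.pi * p[:, 0]) * np.cos(np.pi * p[:, 1])

        radius = 0.35
        d_ss = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
        d_ts = np.linalg.norm(tgt[:, None, :] - src[None, :, :], axis=-1)

        weights = np.linalg.solve(wendland_c2(d_ss, radius), field(src))  # exact at source points
        transferred = wendland_c2(d_ts, radius) @ weights                 # evaluate on target points
        print("max transfer error:", float(np.max(np.abs(transferred - field(tgt)))))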

  14. Mesh-free data transfer algorithms for partitioned multiphysics problems: Conservation, accuracy, and parallelism

    DOE PAGESBeta

    Slattery, Stuart R.

    2015-12-02

    In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. Finally, these scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.

  15. Mesh-free data transfer algorithms for partitioned multiphysics problems: Conservation, accuracy, and parallelism

    SciTech Connect

    Slattery, Stuart R.

    2015-12-02

    In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. Finally, these scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.

  16. Towards a multi-physics modelling framework for thrombolysis under the influence of blood flow

    PubMed Central

    Piebalgs, Andris

    2015-01-01

    Thrombolytic therapy is an effective means of treating thromboembolic diseases but can also give rise to life-threatening side effects. The infusion of a high drug concentration can provoke internal bleeding while an insufficient dose can lead to artery reocclusion. It is hoped that mathematical modelling of the process of clot lysis can lead to a better understanding and improvement of thrombolytic therapy. To this end, a multi-physics continuum model has been developed to simulate the dissolution of clot over time upon the addition of tissue plasminogen activator (tPA). The transport of tPA and other lytic proteins is modelled by a set of reaction–diffusion–convection equations, while blood flow is described by volume-averaged continuity and momentum equations. The clot is modelled as a fibrous porous medium with its properties being determined as a function of the fibrin fibre radius and voidage of the clot. A unique feature of the model is that it is capable of simulating the entire lytic process from the initial phase of lysis of an occlusive thrombus (diffusion-limited transport), the process of recanalization, to post-canalization thrombolysis under the influence of convective blood flow. The model has been used to examine the dissolution of a fully occluding clot in a simplified artery at different pressure drops. Our predicted lytic front velocities during the initial stage of lysis agree well with experimental and computational results reported by others. Following canalization, clot lysis patterns are strongly influenced by local flow patterns, which are symmetric at low pressure drops, but asymmetric at higher pressure drops, which give rise to larger recirculation regions and extended areas of intense drug accumulation. PMID:26655469
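
    A reaction-diffusion-convection balance of the kind described above can be sketched in one dimension. The explicit finite-difference loop below is a toy stand-in, not the paper's multi-physics model; all parameter values are illustrative assumptions.

```python
# Toy 1D convection-diffusion-reaction update for a lytic drug concentration
# along a clot; illustrative only (parameters are not taken from the paper).
import numpy as np

L, nx = 0.01, 200                 # domain length [m], grid points
dx = L / (nx - 1)
D, u, k = 5e-10, 1e-4, 0.05       # diffusivity [m^2/s], velocity [m/s], uptake [1/s]
dt = 0.4 * min(dx**2 / (2 * D), dx / u)   # stable explicit time step
c = np.zeros(nx)
c[0] = 1.0                        # normalized tPA concentration at the inlet

for _ in range(20000):
    adv = -u * (c[1:-1] - c[:-2]) / dx                 # upwind convection
    dif = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2   # diffusion
    c[1:-1] += dt * (adv + dif - k * c[1:-1])          # first-order consumption
    c[0], c[-1] = 1.0, c[-2]                           # Dirichlet inlet, outflow

print("penetration depth (c > 0.01):", dx * np.count_nonzero(c > 0.01), "m")
```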

  17. Multiscale, multiphysics geomechanics for geodynamics applied to buckling instabilities in the middle of the Australian craton

    NASA Astrophysics Data System (ADS)

    Regenauer-Lieb, Klaus; Veveakis, Manolis; Poulet, Thomas; Paesold, Martin; Rosenbaum, Gideon; Weinberg, Roberto F.; Karrech, Ali

    2015-10-01

    We propose a new multi-physics, multi-scale Integrated Computational Materials Engineering framework for 'predictive' geodynamic simulations. A first multiscale application is presented that allows linking our existing advanced material characterization methods from nanoscale through laboratory-, field and geodynamic scales into a new rock simulation framework. The outcome of our example simulation is that the diachronous Australian intraplate orogenic events are found to be caused by one and the same process. This is the non-linear progression of a fundamental buckling instability of the Australian intraplate lithosphere subject to long-term compressive forces. We identify four major stages of the instability: (1) a long wavelength elasto-visco-plastic flexure of the lithosphere without localized failure (first 50 Myrs of loading); (2) an incipient thrust on the central hinge of the model (50-90 Myrs); (3) followed by a secondary and tertiary thrust (90-100 Myrs) 200 km away to either side of the central thrust; (4) a progression of subsidiary thrusts advancing towards the central thrust (? Myrs). The model is corroborated by multiscale observations which are: nano-micro CT analysis of deformed samples in the central thrust giving evidence of cavitation and creep fractures in the thrust; mm-cm size veins of melts (pseudotachylite) that are evidence of intermittent shear heating events in the thrust; and 1-10 km width of the thrust - known as the mylonitic Redbank shear zone - corresponding to the width of the steady state solution, where shear heating on the thrust exactly balances heat diffusion.

  18. Multiphysics model of a rat ventricular myocyte: A voltage-clamp study

    PubMed Central

    2012-01-01

    Background The objective of this study is to develop a comprehensive model of the electromechanical behavior of the rat ventricular myocyte to investigate the various factors influencing its contractile response. Methods Here, we couple a model of Ca2+ dynamics described in our previous work, with a well-known model of contractile mechanics developed by Rice, Wang, Bers and de Tombe to develop a composite multiphysics model of excitation-contraction coupling. This comprehensive cell model is studied under voltage clamp (VC) conditions, since it allows us to focus our study on the elaborate Ca2+ signaling system that controls the contractile mechanism. Results We examine the role of various factors influencing cellular contractile response. In particular, direct factors such as the amount of activator Ca2+ available to trigger contraction and the type of mechanical load applied (resulting in isosarcometric, isometric or unloaded contraction) are investigated. We also study the impact of temperature (22 to 38°C) on myofilament contractile response. The critical role of myofilament Ca2+ sensitivity in modulating developed force is likewise studied, as is the indirect coupling of the intracellular contractile mechanism with the plasma membrane via the Na+/Ca2+ exchanger (NCX). Finally, we demonstrate a key linear relationship between the rate of contraction and relaxation, which is shown here to be intrinsically coupled over the full range of physiological perturbations. Conclusions Extensive testing of the composite model elucidates the importance of various direct and indirect modulatory influences on cellular twitch response with wide agreement with measured data on all accounts. Thus, the model provides mechanistic insights into whole-cell responses to a wide variety of testing approaches used in studies of cardiac myofilament contractility that have appeared in the literature over the past several decades. PMID:23171697

  19. Osiris: A Modern, High-Performance, Coupled, Multi-Physics Code For Nuclear Reactor Core Analysis

    SciTech Connect

    Procassini, R J; Chand, K K; Clouse, C J; Ferencz, R M; Grandy, J M; Henshaw, W D; Kramer, K J; Parsons, I D

    2007-02-26

    To meet the simulation needs of the GNEP program, LLNL is leveraging a suite of high-performance codes to be used in the development of a multi-physics tool for modeling nuclear reactor cores. The Osiris code project, which began last summer, is employing modern computational science techniques in the development of the individual physics modules and the coupling framework. Initial development is focused on coupling thermal-hydraulics and neutral-particle transport, while later phases of the project will add thermal-structural mechanics and isotope depletion. Osiris will be applicable to the design of existing and future reactor systems through the use of first-principles, coupled physics models with fine-scale spatial resolution in three dimensions and fine-scale particle-energy resolution. Our intent is to replace an existing set of legacy, serial codes which require significant approximations and assumptions, with an integrated, coupled code that permits the design of a reactor core using a first-principles physics approach on a wide range of computing platforms, including the world's most powerful parallel computers. A key research activity of this effort deals with the efficient and scalable coupling of physics modules which utilize rather disparate mesh topologies. Our approach allows each code module to use a mesh topology and resolution that is optimal for the physics being solved, and employs a mesh-mapping and data-transfer module to effect the coupling. Additional research is planned in the area of scalable, parallel thermal-hydraulics, high-spatial-accuracy depletion and coupled-physics simulation using Monte Carlo transport.

  20. Accurate and high-resolution boundary conditions and flow fields in the first-class cabin of an MD-82 commercial airliner

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Wen, Jizhou; Chao, Jiangyue; Yin, Weiyou; Shen, Chen; Lai, Dayi; Lin, Chao-Hsin; Liu, Junjie; Sun, Hejiang; Chen, Qingyan

    2012-09-01

    Flow fields in commercial airliner cabins are crucial for creating a thermally comfortable and healthy cabin environment. Flow fields depend on the thermo-fluid boundary conditions at the diffusers, in addition to the cabin geometry and furnishing. To study the flow fields in cabins, this paper describes a procedure to obtain the cabin geometry, boundary conditions at the diffusers, and flow fields. This investigation used a laser tracking system and reverse engineering to generate a digital model of an MD-82 aircraft cabin. Even though the measurement error of the system was very small, approximations and assumptions were needed to reduce the workload and data size. The geometric model can also be easily used to calculate the space volume. A combination of hot-sphere anemometers (HSA) and ultrasonic anemometers (UA) was applied to obtain the velocity magnitude, velocity direction, and turbulence intensity at the diffusers. The measured results indicate that the flow boundary conditions in a real cabin were rather complex and the velocity magnitude, velocity direction, and turbulence intensity varied significantly from one slot opening to another. UAs were also applied to measure the three-dimensional air velocity at 20 Hz, from which the turbulence intensity could also be determined. Due to the instability of the flow, the velocity should be measured for at least 4 min to obtain accurate averaged velocity and turbulence information. It was found that the flow fields were of low speed and high turbulence intensity. This study provides high quality data for validating Computational Fluid Dynamics (CFD) models, including cabin geometry, boundary conditions of diffusers, and high-resolution flow field in the first-class cabin of a functional MD-82 commercial airliner.
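
    The averaging step mentioned above (a 4-minute record at 20 Hz, reduced to a mean velocity and a turbulence intensity) can be sketched as simple post-processing. The velocity signal below is synthetic and only stands in for the anemometer data.

```python
# Sketch: averaged velocity and turbulence intensity from a 20 Hz anemometer
# record over a 4-minute window (synthetic data stands in for measurements).
import numpy as np

fs, minutes = 20, 4                       # sampling rate [Hz], averaging window
n = fs * 60 * minutes
rng = np.random.default_rng(1)
u = 0.8 + 0.3 * rng.standard_normal(n)    # synthetic velocity components [m/s]
v = 0.1 + 0.1 * rng.standard_normal(n)
w = 0.05 + 0.1 * rng.standard_normal(n)

mean_speed = np.sqrt(u.mean()**2 + v.mean()**2 + w.mean()**2)
tke = 0.5 * (u.var() + v.var() + w.var())            # turbulent kinetic energy
ti = np.sqrt(2.0 * tke / 3.0) / mean_speed           # turbulence intensity
print(f"mean speed {mean_speed:.2f} m/s, turbulence intensity {100 * ti:.0f}%")
```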

  1. A Coupled Field Multiphysics Modeling Approach to Investigate RF MEMS Switch Failure Modes under Various Operational Conditions

    PubMed Central

    Sadek, Khaled; Lueke, Jonathan; Moussa, Walied

    2009-01-01

    In this paper, the reliability of capacitive shunt RF MEMS switches has been investigated using three dimensional (3D) coupled multiphysics finite element (FE) analysis. The coupled field analysis involved three consecutive multiphysics interactions. The first interaction is characterized as a two-way sequential electromagnetic (EM)-thermal field coupling. The second interaction represented a one-way sequential thermal-structural field coupling. The third interaction portrayed a two-way sequential structural-electrostatic field coupling. An automated substructuring algorithm was utilized to reduce the computational cost of the complicated coupled multiphysics FE analysis. The results of the substructured FE model with coupled field analysis are shown to be in good agreement with the outcome of previously published experimental and numerical studies. The current numerical results indicate that the pull-in voltage and the buckling temperature of the RF switch are functions of the microfabrication residual stress state, the switch operational frequency and the surrounding packaging temperature. Furthermore, the current results point out that by introducing proper mechanical approaches such as corrugated switches and through-holes in the switch membrane, it is possible to achieve reliable pull-in voltages, at various operating temperatures. The performed analysis also shows that by controlling the mean and gradient residual stresses, generated during microfabrication, in conjunction with the proposed mechanical approaches, the power handling capability of RF MEMS switches can be increased, at a wide range of operational frequencies. These design features of RF MEMS switches are of particular importance in applications where a high RF power (frequencies above 10 GHz) and large temperature variations are expected, such as in satellites and airplane condition monitoring. PMID:22408490

  2. A Coupled Field Multiphysics Modeling Approach to Investigate RF MEMS Switch Failure Modes under Various Operational Conditions.

    PubMed

    Sadek, Khaled; Lueke, Jonathan; Moussa, Walied

    2009-01-01

    In this paper, the reliability of capacitive shunt RF MEMS switches has been investigated using three dimensional (3D) coupled multiphysics finite element (FE) analysis. The coupled field analysis involved three consecutive multiphysics interactions. The first interaction is characterized as a two-way sequential electromagnetic (EM)-thermal field coupling. The second interaction represented a one-way sequential thermal-structural field coupling. The third interaction portrayed a two-way sequential structural-electrostatic field coupling. An automated substructuring algorithm was utilized to reduce the computational cost of the complicated coupled multiphysics FE analysis. The results of the substructured FE model with coupled field analysis are shown to be in good agreement with the outcome of previously published experimental and numerical studies. The current numerical results indicate that the pull-in voltage and the buckling temperature of the RF switch are functions of the microfabrication residual stress state, the switch operational frequency and the surrounding packaging temperature. Furthermore, the current results point out that by introducing proper mechanical approaches such as corrugated switches and through-holes in the switch membrane, it is possible to achieve reliable pull-in voltages, at various operating temperatures. The performed analysis also shows that by controlling the mean and gradient residual stresses, generated during microfabrication, in conjunction with the proposed mechanical approaches, the power handling capability of RF MEMS switches can be increased, at a wide range of operational frequencies. These design features of RF MEMS switches are of particular importance in applications where a high RF power (frequencies above 10 GHz) and large temperature variations are expected, such as in satellites and airplane condition monitoring. PMID:22408490
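
    The pull-in voltage discussed in the two records above is often first estimated with the classic one-degree-of-freedom parallel-plate model before any 3D coupled-field FE analysis. The sketch below uses that lumped model with illustrative geometry, not the paper's switch dimensions.

```python
# Back-of-envelope pull-in voltage of a parallel-plate electrostatic actuator
# (1-DOF lumped model); not the 3D coupled-field FE analysis of the papers above.
import math

eps0 = 8.854e-12        # vacuum permittivity [F/m]
k = 10.0                # effective spring constant of the membrane [N/m]
g0 = 3.0e-6             # initial gap [m]
area = 100e-6 * 100e-6  # electrode overlap area [m^2]

# Pull-in occurs at one third of gap closure: V_pi = sqrt(8 k g0^3 / (27 eps0 A))
v_pullin = math.sqrt(8.0 * k * g0**3 / (27.0 * eps0 * area))
print(f"estimated pull-in voltage: {v_pullin:.1f} V")
```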

  3. Multiphysics simulations of nanoarchitectures and analysis of germanium core-shell anode nanostructure for lithium-ion energy storage applications

    NASA Astrophysics Data System (ADS)

    Clancy, T.; Rohan, J. F.

    2015-12-01

    This paper reports multiphysics simulations (COMSOL) of relatively low-conductivity cathode oxide materials in nanoarchitectures that operate within the appropriate potential range (cut-off voltage 2.5 V) at 3 times the C-rate of micron scale thin film materials while still accessing 90% of the material. This paper also reports a novel anode fabrication of Ge sputtered on a Cu nanotube current collector for lithium-ion batteries. Ge on Cu nanotubes is shown to alleviate the effect of volume expansion, enhancing mechanical stability at the nanoscale and improving the electronic characteristics for increased rate capabilities.

  4. A mathematical model for predicting photo-induced voltage and photostriction of PLZT with coupled multi-physics fields and its application

    NASA Astrophysics Data System (ADS)

    Huang, J. H.; Wang, X. J.; Wang, J.

    2016-02-01

    The primary purpose of this paper is to propose a mathematical model of PLZT ceramic with coupled multi-physics fields, e.g. thermal, electric, mechanical and light field. To this end, the coupling relationships of multi-physics fields and the mechanism of some effects resulting in the photostrictive effect are analyzed theoretically, based on which a mathematical model considering coupled multi-physics fields is established. According to the analysis and experimental results, the mathematical model can explain the hysteresis phenomenon and the variation trend of the photo-induced voltage very well and is in agreement with the experimental curves. In addition, the PLZT bimorph is applied as an energy transducer for a photovoltaic-electrostatic hybrid actuated micromirror, and the relation of the rotation angle and the photo-induced voltage is discussed based on the novel photostrictive mathematical model.

  5. Design and Analysis of a New Hair Sensor for Multi-Physical Signal Measurement

    PubMed Central

    Yang, Bo; Hu, Di; Wu, Lei

    2016-01-01

    A new hair sensor for multi-physical signal measurements, including acceleration, angular velocity and air flow, is presented in this paper. The entire structure consists of a hair post, a torsional frame and a resonant signal transducer. The hair post is utilized to sense and deliver the physical signals of the acceleration and the air flow rate. The physical signals are converted into frequency signals by the resonant transducer. The structure is optimized through finite element analysis. The simulation results demonstrate that the hair sensor has a frequency of 240 Hz in the first mode for the acceleration or the air flow sense, 3115 Hz in the third and fourth modes for the resonant conversion, and 3467 Hz in the fifth and sixth modes for the angular velocity transformation, respectively. All the above frequencies present in a reasonable modal distribution and are separated from interference modes. The input-output analysis of the new hair sensor demonstrates that the scale factor of the acceleration is 12.35 Hz/g, the scale factor of the angular velocity is 0.404 nm/deg/s and the sensitivity of the air flow is 1.075 Hz/(m/s)2, which verifies the multifunction sensitive characteristics of the hair sensor. Besides, the structural optimization of the hair post is used to improve the sensitivity of the air flow rate and the acceleration. The analysis results illustrate that the hollow circular hair post can increase the sensitivity of the air flow and the II-shape hair post can increase the sensitivity of the acceleration. Moreover, the thermal analysis confirms the scheme of the frequency difference for the resonant transducer can prominently eliminate the temperature influences on the measurement accuracy. The air flow analysis indicates that the surface area increase of hair post is significantly beneficial for the efficiency improvement of the signal transmission. In summary, the structure of the new hair sensor is proved to be feasible by comprehensive

  6. Design and Analysis of a New Hair Sensor for Multi-Physical Signal Measurement.

    PubMed

    Yang, Bo; Hu, Di; Wu, Lei

    2016-01-01

    A new hair sensor for multi-physical signal measurements, including acceleration, angular velocity and air flow, is presented in this paper. The entire structure consists of a hair post, a torsional frame and a resonant signal transducer. The hair post is utilized to sense and deliver the physical signals of the acceleration and the air flow rate. The physical signals are converted into frequency signals by the resonant transducer. The structure is optimized through finite element analysis. The simulation results demonstrate that the hair sensor has a frequency of 240 Hz in the first mode for the acceleration or the air flow sense, 3115 Hz in the third and fourth modes for the resonant conversion, and 3467 Hz in the fifth and sixth modes for the angular velocity transformation, respectively. All the above frequencies present in a reasonable modal distribution and are separated from interference modes. The input-output analysis of the new hair sensor demonstrates that the scale factor of the acceleration is 12.35 Hz/g, the scale factor of the angular velocity is 0.404 nm/deg/s and the sensitivity of the air flow is 1.075 Hz/(m/s)², which verifies the multifunction sensitive characteristics of the hair sensor. Besides, the structural optimization of the hair post is used to improve the sensitivity of the air flow rate and the acceleration. The analysis results illustrate that the hollow circular hair post can increase the sensitivity of the air flow and the II-shape hair post can increase the sensitivity of the acceleration. Moreover, the thermal analysis confirms the scheme of the frequency difference for the resonant transducer can prominently eliminate the temperature influences on the measurement accuracy. The air flow analysis indicates that the surface area increase of hair post is significantly beneficial for the efficiency improvement of the signal transmission. In summary, the structure of the new hair sensor is proved to be feasible by comprehensive

  7. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    scalabilities showing almost linear speedup against number of processors up to over ten thousand cores. Generally this allows us to perform coupled multi-physics (THC) simulations on high resolution geologic models with multi-million grid in a practical time (e.g., less than a second per time step).

  8. Transient multi-physics analysis of a magnetorheological shock absorber with the inverse Jiles-Atherton hysteresis model

    NASA Astrophysics Data System (ADS)

    Zheng, Jiajia; Li, Yancheng; Li, Zhaochun; Wang, Jiong

    2015-10-01

    This paper presents multi-physics modeling of an MR absorber considering the magnetic hysteresis to capture the nonlinear relationship between the applied current and the generated force under impact loading. The magnetic field, temperature field, and fluid dynamics are represented by the Maxwell equations, conjugate heat transfer equations, and Navier-Stokes equations. These fields are coupled through the apparent viscosity and the magnetic force, both of which in turn depend on the magnetic flux density and the temperature. Based on a parametric study, an inverse Jiles-Atherton hysteresis model is used and implemented for the magnetic field simulation. The temperature rise of the MR fluid in the annular gap caused by core loss (i.e. eddy current loss and hysteresis loss) and fluid motion is computed to investigate the current-force behavior. A group of impulsive tests was performed for the manufactured MR absorber with step exciting currents. The numerical and experimental results showed good agreement, which validates the effectiveness of the proposed multi-physics FEA model.

  9. Interface COMSOL-PHREEQC (iCP), an efficient numerical framework for the solution of coupled multiphysics and geochemistry

    NASA Astrophysics Data System (ADS)

    Nardi, Albert; Idiart, Andrés; Trinchero, Paolo; de Vries, Luis Manuel; Molinero, Jorge

    2014-08-01

    This paper presents the development, verification and application of an efficient interface, denoted as iCP, which couples two standalone simulation programs: the general purpose Finite Element framework COMSOL Multiphysics® and the geochemical simulator PHREEQC. The main goal of the interface is to maximize the synergies between the aforementioned codes, providing a numerical platform that can efficiently simulate a wide number of multiphysics problems coupled with geochemistry. iCP is written in Java and uses the IPhreeqc C++ dynamic library and the COMSOL Java-API. Given the large computational requirements of the aforementioned coupled models, special emphasis has been placed on numerical robustness and efficiency. To this end, the geochemical reactions are solved in parallel by balancing the computational load over multiple threads. First, a benchmark exercise is used to test the reliability of iCP regarding flow and reactive transport. Then, a large scale thermo-hydro-chemical (THC) problem is solved to show the code capabilities. The results of the verification exercise are successfully compared with those obtained using PHREEQC and the application case demonstrates the scalability of a large scale model, at least up to 32 threads.
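
    The coupling pattern described above is a sequential operator-splitting loop: the transport step is advanced on the global mesh, then the chemistry is solved cell by cell (which is why it parallelizes well over threads). The sketch below uses placeholder step functions, not the actual COMSOL or PHREEQC (IPhreeqc) APIs.

```python
# Generic sequential (non-iterative) operator-splitting loop of the kind used to
# couple a transport solver with a geochemical solver. The two step functions
# are placeholders, not the COMSOL Java-API or IPhreeqc calls used by iCP.
import numpy as np

def transport_step(c, dt):
    """Placeholder: diffuse each component across the 1D cell array."""
    c_new = c.copy()
    c_new[:, 1:-1] += 0.1 * dt * (c[:, 2:] - 2 * c[:, 1:-1] + c[:, :-2])
    return c_new

def chemistry_step(c, dt):
    """Placeholder: relax each cell toward a local equilibrium value."""
    k_eq, rate = 0.5, 1.0
    return c + dt * rate * (k_eq - c)   # cell-by-cell, embarrassingly parallel

n_comp, n_cells, dt = 3, 50, 0.01
conc = np.ones((n_comp, n_cells))
for step in range(1000):
    conc = transport_step(conc, dt)     # physics solved on the global mesh
    conc = chemistry_step(conc, dt)     # chemistry solved per cell (threadable)
print("final mean concentrations:", conc.mean(axis=1))
```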

  10. Development of a multi-physics calculation platform dedicated to irradiation devices in a material testing reactor

    SciTech Connect

    Bonaccorsi, T.; Di Salvo, J.; Aggery, A.; D'Aletto, C.; Doederlein, C.; Sireta, P.; Willermoz, G.; Daniel, M.

    2006-07-01

    The physical phenomena involved in irradiation devices within material testing reactors are complex (neutron and photon interactions, nuclear heating, thermal hydraulics, ...). However, the simulation of these phenomena requires a high precision in order to control the condition of the experiment and the development of predictive models. Until now, physicists have used different tools with several approximations at each interface. The aim of this work is to develop a calculation platform dedicated to numerical multi-physics simulations of irradiation devices in the future European Jules Horowitz Reactor [1]. This platform is based on a multi-physics data model which describes geometries, materials and state parameters associated with a sequence of thematic (neutronics, thermal hydraulics...) computations of these devices. Once the computation is carried out, the results can be returned to the data model (DM). The DM is encapsulated in a dedicated module of the SALOME platform [2] and exchanges data with SALOME native modules. This method allows a parametric description of a study, independent of the code used to perform the simulation. The application proposed in this paper concerns the neutronic calculation of a fuel irradiation device with the new method of characteristics implemented in the APOLLO2 code [3]. The device is located at the periphery of the OSIRIS core. This choice is motivated by the possibility to compare the calculation with experimental results, which cannot be done for the Jules Horowitz Reactor, currently in the design study phase. (authors)

  11. Multi-physics design and analyses of long life reactors for lunar outposts

    NASA Astrophysics Data System (ADS)

    Schriener, Timothy M.

    event of a launch abort accident. Increasing the amount of fuel in the reactor core, and hence its operational life, would be possible by launching the reactor unfueled and fueling it on the Moon. Such a reactor would, thus, not be subject to launch criticality safety requirements. However, loading the reactor with fuel on the Moon presents a challenge, requiring special designs of the core and the fuel elements, which lend themselves to fueling on the lunar surface. This research investigates examples of both a solid core reactor that would be fueled at launch as well as an advanced concept which could be fueled on the Moon. Increasing the operational life of a reactor fueled at launch is exercised for the NaK-78 cooled Sectored Compact Reactor (SCoRe). A multi-physics design and analyses methodology is developed which iteratively couples together detailed Monte Carlo neutronics simulations with 3-D Computational Fluid Dynamics (CFD) and thermal-hydraulics analyses. Using this methodology the operational life of this compact, fast spectrum reactor is increased by reconfiguring the core geometry to reduce neutron leakage and parasitic absorption, for the same amount of HEU in the core, and meeting launch safety requirements. The multi-physics analyses determine the impacts of the various design changes on the reactor's neutronics and thermal-hydraulics performance. The option of increasing the operational life of a reactor by loading it on the Moon is exercised for the Pellet Bed Reactor (PeBR). The PeBR uses spherical fuel pellets and is cooled by He-Xe gas, allowing the reactor core to be loaded with fuel pellets and charged with working fluid on the lunar surface. The performed neutronics analyses ensure the PeBR design achieves a long operational life, and develops safe launch canister designs to transport the spherical fuel pellets to the lunar surface. The research also investigates loading the PeBR core with fuel pellets on the Moon using a transient Discrete

  12. Multi-physics design and analyses of long life reactors for lunar outposts

    NASA Astrophysics Data System (ADS)

    Schriener, Timothy M.

    event of a launch abort accident. Increasing the amount of fuel in the reactor core, and hence its operational life, would be possible by launching the reactor unfueled and fueling it on the Moon. Such a reactor would, thus, not be subject to launch criticality safety requirements. However, loading the reactor with fuel on the Moon presents a challenge, requiring special designs of the core and the fuel elements, which lend themselves to fueling on the lunar surface. This research investigates examples of both a solid core reactor that would be fueled at launch as well as an advanced concept which could be fueled on the Moon. Increasing the operational life of a reactor fueled at launch is exercised for the NaK-78 cooled Sectored Compact Reactor (SCoRe). A multi-physics design and analyses methodology is developed which iteratively couples together detailed Monte Carlo neutronics simulations with 3-D Computational Fluid Dynamics (CFD) and thermal-hydraulics analyses. Using this methodology the operational life of this compact, fast spectrum reactor is increased by reconfiguring the core geometry to reduce neutron leakage and parasitic absorption, for the same amount of HEU in the core, and meeting launch safety requirements. The multi-physics analyses determine the impacts of the various design changes on the reactor's neutronics and thermal-hydraulics performance. The option of increasing the operational life of a reactor by loading it on the Moon is exercised for the Pellet Bed Reactor (PeBR). The PeBR uses spherical fuel pellets and is cooled by He-Xe gas, allowing the reactor core to be loaded with fuel pellets and charged with working fluid on the lunar surface. The performed neutronics analyses ensure the PeBR design achieves a long operational life, and develops safe launch canister designs to transport the spherical fuel pellets to the lunar surface. The research also investigates loading the PeBR core with fuel pellets on the Moon using a transient Discrete

  13. Predict amine solution properties accurately

    SciTech Connect

    Cheng, S.; Meisen, A.; Chakma, A.

    1996-02-01

    Improved process design begins with using accurate physical property data. Especially in the preliminary design stage, physical property data such as density, viscosity, thermal conductivity and specific heat can affect the overall performance of absorbers, heat exchangers, reboilers and pumps. These properties can also influence temperature profiles in heat transfer equipment and thus control or affect the rate of amine breakdown. Aqueous-amine solution physical property data are available in graphical form; however, graphical data are not convenient to use in computer-based calculations. Developed equations allow improved correlations of derived physical property estimates with published data. Expressions are given which can be used to estimate physical properties of methyldiethanolamine (MDEA), monoethanolamine (MEA) and diglycolamine (DGA) solutions.
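
    The step from graphical or tabulated property data to equations usable in design calculations can be illustrated with a small fit. The data points and Arrhenius-type form below are synthetic placeholders, not the published MDEA/MEA/DGA correlations.

```python
# Sketch: turning tabulated viscosity data into an Arrhenius-type correlation
# mu = A * exp(B / T) for use in computer-based design calculations. The data
# points are synthetic, not the published amine-solution correlations.
import numpy as np

T = np.array([298.15, 313.15, 333.15, 353.15])     # temperature [K]
mu = np.array([25.0, 12.0, 6.0, 3.5]) * 1e-3       # viscosity [Pa s]

B, lnA = np.polyfit(1.0 / T, np.log(mu), 1)        # linear fit of ln(mu) vs 1/T
A = np.exp(lnA)

def viscosity(temp_K):
    """Evaluate the fitted correlation at an arbitrary temperature."""
    return A * np.exp(B / temp_K)

print(f"mu(323 K) ~ {viscosity(323.15) * 1e3:.1f} mPa s")
```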

  14. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
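
    The thickness extraction step behind these AFM measurements reduces to a step-height estimate across a line profile. The sketch below differences median heights of substrate and flake regions on a synthetic profile; it is illustrative and not the PeakForce-tapping protocol of the paper.

```python
# Sketch: flake thickness from an AFM line scan as the difference between the
# median flake height and the median substrate height (synthetic profile).
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 5.0, 500)                      # scan position [um]
profile = 0.05 * rng.standard_normal(x.size)        # substrate roughness [nm]
profile[(x > 2.0) & (x < 3.5)] += 0.8               # step of ~0.8 nm for the flake

substrate = profile[(x < 1.5) | (x > 4.0)]
flake = profile[(x > 2.2) & (x < 3.3)]
thickness = np.median(flake) - np.median(substrate)
print(f"apparent thickness: {thickness:.2f} nm")
```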

  15. A finite element technique for accurate determination of interfacial adhesion force in MEMS using electrostatic actuation

    NASA Astrophysics Data System (ADS)

    Shavezipur, M.; Li, G. H.; Laboriante, I.; Gou, W. J.; Carraro, C.; Maboudian, R.

    2011-11-01

    This paper reports on accurate analysis of adhesion force between polysilicon-polysilicon surfaces in micro-/nanoelectromechanical systems (M/NEMS). The measurement is carried out using double-clamped beams. Electrostatic actuation and structural restoring force are exploited to respectively initiate and terminate the contact between the two surfaces under investigation. The adhesion force is obtained by balancing the electrostatic and mechanical forces acting on the beam just before the separation of the two surfaces. Different finite element models are developed to simulate the coupled-field multiphysics problem. The effects of fringing field in the electrostatic domain and geometric nonlinearity and residual stress in the structural domain are taken into consideration. Moreover, the beam stiffness is directly obtained for the case of combined loading (electrostatic and adhesion). Therefore, the overall electrostatic and structural forces used to extract the actual adhesion force from measured data are determined with high accuracy leading to accurate values for the adhesion force. The finite element simulations presented in this paper are not limited to adhesion force measurement and can be used to design or characterize electrostatically actuated devices such as MEM tunable capacitors and micromirrors, RF switches and M/NEM relays.

  16. Multi-physics and multi-scale characterization of shale anisotropy

    NASA Astrophysics Data System (ADS)

    Sarout, J.; Nadri, D.; Delle Piane, C.; Esteban, L.; Dewhurst, D.; Clennell, M. B.

    2012-12-01

    Shales are the most abundant sedimentary rock type in the Earth's shallow crust. In the past decade or so, they have attracted increased attention from the petroleum industry as reservoirs, as well as more traditionally for their sealing capacity for hydrocarbon/CO2 traps or underground waste repositories. The effectiveness of both fundamental and applied shale research is currently limited by (i) the extreme variability of physical, mechanical and chemical properties observed for these rocks, and by (ii) the scarce data currently available. The variability in observed properties is poorly understood due to many factors that are often irrelevant for other sedimentary rocks. The relationships between these properties and the petrophysical measurements performed at the field and laboratory scales are not straightforward, translating to a scale dependency typical of shale behaviour. In addition, the complex and often anisotropic micro-/meso-structures of shales give rise to a directional dependency of some of the measured physical properties that are tensorial by nature such as permeability or elastic stiffness. Currently, fundamental understanding of the parameters controlling the directional and scale dependency of shale properties is far from complete. Selected results of a multi-physics laboratory investigation of the directional and scale dependency of some critical shale properties are reported. In particular, anisotropic features of shale micro-/meso-structures are related to the directional-dependency of elastic and fluid transport properties: - Micro-/meso-structure (μm to cm scale) characterization by electron microscopy and X-ray tomography; - Estimation of elastic anisotropy parameters on a single specimen using elastic wave propagation (cm scale); - Estimation of the permeability tensor using the steady-state method on orthogonal specimens (cm scale); - Estimation of the low-frequency diffusivity tensor using NMR method on orthogonal specimens (<

  17. Accurate ab Initio Spin Densities

    PubMed Central

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput. 2011, 7, 2740]. PMID:22707921

  18. Analysis of Thermal Field on Integrated LED Light Source Based on COMSOL Multi-physics Finite Element Simulation

    NASA Astrophysics Data System (ADS)

    Li, Jingsong; Yang, Qingxin; Niu, Pingjuan; Jin, Liang; Meng, Bo; Li, Yang; Xiao, Zhaoxia; Zhang, Xian

    This paper obtains the average integrated heat transfer coefficient for the thermal resistance of a typical integrated LED light source and its cooling fin-root on the basis of the thermal circuit method. A simulation analysis of the steady-state temperature field distribution was then carried out with the COMSOL Multi-physics finite element method, which offers high precision and intuitive results. An iterative method from numerical analysis is introduced into the procedure for the first time. The results support structural optimization of the LED cast light and clarify the effect of reduced heat coupling on the light-source temperature distribution. Comparison between thermocouple measurements and the calculated results confirms the correctness and validity of the proposed method. This study can guide the thermal analysis and design of other integrated LED lights.
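
    The thermal circuit method referred to above treats the heat path as series resistances between junction and ambient. The sketch below shows that lumped estimate with illustrative resistance values, not the coefficients derived in the paper.

```python
# Lumped thermal-circuit estimate of LED junction temperature: the dissipated
# power P crosses series thermal resistances from junction to ambient.
# All numbers are illustrative, not the paper's fitted values.
P = 8.0          # dissipated power [W]
T_amb = 25.0     # ambient temperature [C]
R_jc = 1.2       # junction-to-case resistance [K/W]
R_cs = 0.5       # case-to-heat-sink (interface) resistance [K/W]
R_sa = 3.0       # heat-sink-to-ambient resistance (fins + convection) [K/W]

T_junction = T_amb + P * (R_jc + R_cs + R_sa)
print(f"estimated junction temperature: {T_junction:.1f} C")   # 62.6 C here
```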

  19. Human heart conjugate cooling simulation: Unsteady thermo-fluid-stress analysis

    PubMed Central

    Abdoli, Abas; Dulikravich, George S.; Bajaj, Chandrajit; Stowe, David F.; Jahania, M. Salik

    2015-01-01

    The main objective of this work was to demonstrate computationally that realistic human hearts can be cooled much faster by performing conjugate heat transfer consisting of pumping a cold liquid through the cardiac chambers and major veins while keeping the heart submerged in cold gelatin filling a cooling container. The human heart geometry used for simulations was obtained from three-dimensional, high resolution MRI scans. Two fluid flow domains for the right (pulmonic) and left (systemic) heart circulations, and two solid domains for the heart tissue and gelatin solution were defined for multi-domain numerical simulation. Detailed unsteady temperature fields within the heart tissue were calculated during the conjugate cooling process. A linear thermoelasticity analysis was performed to assess the stresses applied on the heart due to the coolant fluid shear and normal forces and to examine the thermal stress caused by temperature variation inside the heart. It was demonstrated that a conjugate cooling effort with coolant temperature at +4°C is capable of reducing the average heart temperature from +37°C to +8°C in 25 minutes for cases in which the coolant was steadily pumped only through major heart inlet veins and cavities. PMID:25045006

  20. Modeling of plasma and thermo-fluid transport in hybrid welding

    NASA Astrophysics Data System (ADS)

    Ribic, Brandon D.

    Hybrid welding combines a laser beam and electrical arc in order to join metals within a single pass at welding speeds on the order of 1 m min -1. Neither autonomous laser nor arc welding can achieve the weld geometry obtained from hybrid welding for the same process parameters. Depending upon the process parameters, hybrid weld depth and width can each be on the order of 5 mm. The ability to produce a wide weld bead increases gap tolerance for square joints which can reduce machining costs and joint fitting difficulty. The weld geometry and fast welding speed of hybrid welding make it a good choice for application in ship, pipeline, and aerospace welding. Heat transfer and fluid flow influence weld metal mixing, cooling rates, and weld bead geometry. Cooling rate affects weld microstructure and subsequent weld mechanical properties. Fluid flow and heat transfer in the liquid weld pool are affected by laser and arc energy absorption. The laser and arc generate plasmas which can influence arc and laser energy absorption. Metal vapors introduced from the keyhole, a vapor filled cavity formed near the laser focal point, influence arc plasma light emission and energy absorption. However, hybrid welding plasma properties near the opening of the keyhole are not known nor is the influence of arc power and heat source separation understood. A sound understanding of these processes is important to consistently achieving sound weldments. By varying process parameters during welding, it is possible to better understand their influence on temperature profiles, weld metal mixing, cooling rates, and plasma properties. The current literature has shown that important process parameters for hybrid welding include: arc power, laser power, and heat source separation distance. However, their influence on weld temperatures, fluid flow, cooling rates, and plasma properties are not well understood. Modeling has shown to be a successful means of better understanding the influence of processes parameters on heat transfer, fluid flow, and plasma characteristics for arc and laser welding. However, numerical modeling of laser/GTA hybrid welding is just beginning. Arc and laser welding plasmas have been previously analyzed successfully using optical emission spectroscopy in order to better understand arc and laser plasma properties as a function of plasma radius. Variation of hybrid welding plasma properties with radial distance is not known. Since plasma properties can affect arc and laser energy absorption and weld integrity, a better understanding of the change in hybrid welding plasma properties as a function of plasma radius is important and necessary. Material composition influences welding plasma properties, arc and laser energy absorption, heat transfer, and fluid flow. The presence of surface active elements such as oxygen and sulfur can affect weld pool fluid flow and bead geometry depending upon the significance of heat transfer by convection. Easily vaporized and ionized alloying elements can influence arc plasma characteristics and arc energy absorption. The effects of surface active elements on heat transfer and fluid flow are well understood in the case of arc and conduction mode laser welding. However, the influence of surface active elements on heat transfer and fluid flow during keyhole mode laser welding and laser/arc hybrid welding are not well known. 
Modeling has been used to successfully analyze the influence of surface active elements during arc and conduction mode laser welding in the past and offers promise in the case of laser/arc hybrid welding. A critical review of the literature revealed several important areas for further research and unanswered questions. (1) The understanding of heat transfer and fluid flow during hybrid welding is still beginning and further research is necessary. (2) Why hybrid welding weld bead width is greater than that of laser or arc welding is not well understood. (3) The influence of arc power and heat source separation distance on cooling rates during hybrid welding are not known. (4) Convection during hybrid welding is not well understood despite its importance to weld integrity. (5) The influence of surface active elements on weld geometry, weld pool temperatures, and fluid flow during high power density laser and laser/arc hybrid welding are not known. (6) Although the arc power and heat source separation distance have been experimentally shown to influence arc stability and plasma light emission during hybrid welding, the influence of these parameters on plasma properties is unknown. (7) The electrical conductivity of hybrid welding plasmas is not known, despite its importance to arc stability and weld integrity. In this study, heat transfer and fluid flow are analyzed for laser, gas tungsten arc (GTA), and laser/GTA hybrid welding using an experimentally validated three dimensional phenomenological model. By evaluating arc and laser welding using similar process parameters, a better understanding of the hybrid welding process is expected. The role of arc power and heat source separation distance on weld depth, weld pool centerline cooling rates, and fluid flow profiles during CO2 laser/GTA hybrid welding of 321 stainless steel are analyzed. Laser power is varied for a constant heat source separation distance to evaluate its influence on weld temperatures, weld geometry, and fluid flow during Nd:YAG laser/GTA hybrid welding of A131 structural steel. The influence of oxygen and sulfur on keyhole and weld bead geometry, weld temperatures, and fluid flow are analyzed for high power density Yb doped fiber laser welding of (0.16 %C, 1.46 %Mn) mild steel. Optical emission spectroscopy was performed on GTA, Nd:YAG laser, and Nd:YAG laser/GTA hybrid welding plasmas for welding of 304L stainless steel. Emission spectroscopy provides a means of determining plasma temperatures and species densities using deconvoluted measured spectral intensities, which can then be used to calculate plasma electrical conductivity. In this study, hybrid welding plasma temperatures, species densities, and electrical conductivities were determined using various heat source separation distances and arc currents using an analytical method coupled calculated plasma compositions. As a result of these studies heat transfer by convection was determined to be dominant during hybrid welding of steels. The primary driving forces affecting hybrid welding fluid flow are the surface tension gradient and electromagnetic force. Fiber laser weld depth showed a negligible change when increasing the (0.16 %C, 1.46 %Mn) mild steel sulfur concentration from 0.006 wt% to 0.15 wt%. Increasing the dissolved oxygen content in weld pool from 0.0038 wt% to 0.0257 wt% increased the experimental weld depth from 9.3 mm to 10.8 mm. 
Calculated partial pressure of carbon monoxide increased from 0.1 atm to 0.75 atm with the 0.0219 wt% increase in dissolved oxygen in the weld metal and may explain the increase in weld depth. Nd:YAG laser/GTA hybrid welding plasma temperatures were calculated to be approximately between 7927 K and 9357 K. Increasing the Nd:YAG laser/GTA hybrid welding heat source separation distance from 4 mm to 6 mm reduced plasma temperatures between 500 K and 900 K. Hybrid welding plasma total electron densities and electrical conductivities were on the order of 1 × 10²² m⁻³ and 3000 S m⁻¹, respectively.
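
    The plasma-temperature determination from measured line intensities mentioned above is commonly done with a Boltzmann plot. The sketch below shows that standard relation on synthetic line data; the energies, gA values and wavelengths are placeholders, not the measured spectra of this work.

```python
# Boltzmann-plot sketch: excitation temperature from relative emission-line
# intensities via ln(I*lambda/(g*A)) = -E_upper/(kB*T) + const. The line data
# are synthetic placeholders, not the deconvoluted intensities of the study.
import numpy as np

kB = 8.617e-5                                   # Boltzmann constant [eV/K]
E_upper = np.array([13.3, 13.5, 14.5, 15.0])    # upper-level energies [eV]
gA = np.array([2.2e8, 1.1e8, 3.0e8, 1.5e8])     # statistical weight * A [1/s]
lam = np.array([696.5, 706.7, 750.4, 811.5])    # wavelengths [nm]
T_true = 9000.0                                 # temperature used to fake the data
intensity = gA / lam * np.exp(-E_upper / (kB * T_true))

slope, _ = np.polyfit(E_upper, np.log(intensity * lam / gA), 1)
print(f"Boltzmann-plot temperature: {-1.0 / (kB * slope):.0f} K")
```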

  1. Transient Thermo-fluid Model of Meniscus Behavior and Slag Consumption in Steel Continuous Casting

    NASA Astrophysics Data System (ADS)

    Jonayat, A. S. M.; Thomas, Brian G.

    2014-10-01

    The behavior of the slag layer between the oscillating mold wall, the slag rim, the slag/liquid steel interface, and the solidifying steel shell, is of immense importance for the surface quality of continuous-cast steel. A computational model of the meniscus region has been developed, that includes transient heat transfer, multi-phase fluid flow, solidification of the slag, and movement of the mold during an oscillation cycle. First, the model is applied to a lab experiment done with a "mold simulator" to verify the transient temperature-field predictions. Next, the model is verified by matching with available literature and plant measurements of slag consumption. A reasonable agreement has been observed for both temperature and flow-field. The predictions show that transient temperature behavior depends on the location of the thermocouple during the oscillation relative to the meniscus. During an oscillation cycle, heat transfer variations in a laboratory frame of reference are more severe than experienced by the moving mold thermocouples, and the local heat transfer rate is increased greatly when steel overflows the meniscus. Finally, the model is applied to conduct a parametric study on the effect of casting speed, stroke, frequency, and modification ratio on slag consumption. Slag consumption per unit area increases with increase of stroke and modification ratio, and decreases with increase of casting speed while the relation with frequency is not straightforward. The match between model predictions and literature trends suggests that this methodology can be used for further investigations.

  2. Computational thermo-fluid dynamics contributions to advanced gas turbine engine design

    NASA Technical Reports Server (NTRS)

    Graham, R. W.; Adamczyk, J. J.; Rohlik, H. E.

    1984-01-01

    The design practices for the gas turbine are traced throughout history with particular emphasis on the calculational or analytical methods. Three principal components of the gas turbine engine will be considered: namely, the compressor, the combustor and the turbine.

  3. A multiscale thermo-fluid computational model for a two-phase cooling system

    NASA Astrophysics Data System (ADS)

    Sacco, Riccardo; Carichino, Lucia; de Falco, Carlo; Verri, Maurizio; Agostini, Francesco; Gradinger, Thomas

    2014-12-01

    In this paper, we describe a mathematical model and a numerical simulation method for the condenser component of a novel two-phase thermosyphon cooling system for power electronics applications. The condenser consists of a set of roll-bonded vertically mounted fins among which air flows by either natural or forced convection. In order to deepen the understanding of the mechanisms that determine the performance of the condenser and to facilitate the further optimization of its industrial design, a multiscale approach is developed to reduce as much as possible the complexity of the simulation code while maintaining reasonable predictive accuracy. To this end, heat diffusion in the fins and its convective transport in air are modeled as 2D processes while the flow of the two-phase coolant within the fins is modeled as a 1D network of pipes. For the numerical solution of the resulting equations, a Dual Mixed-Finite Volume scheme with Exponential Fitting stabilization is used for 2D heat diffusion and convection while a Primal Mixed Finite Element discretization method with upwind stabilization is used for the 1D coolant flow. The mathematical model and the numerical method are validated through extensive simulations of realistic device structures which prove to be in excellent agreement with available experimental data.

  4. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  5. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  6. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  7. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  8. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  9. An optimized frequency-dependent multiphysics model for an ionic polymer-metal composite actuator with ethylene glycol as the solvent

    NASA Astrophysics Data System (ADS)

    Caponetto, R.; De Luca, V.; Graziani, S.; Sapuppo, F.

    2013-12-01

    IPMCs are electroactive polymers which can be used both as sensors and as actuators. The modeling of IPMC transducers is an open issue relevant to the development of effective applications. A multiphysics model of IPMC actuators is here implemented. It integrates the description of the electrical, mechanical, chemical and thermal coupled physics domains in a unique solution and, as a novelty, it allows study in the frequency domain and comparison with the experimental response of the IPMC device. The IPMC white box modeling requires several macro- and microscopic parameters, not always accessible via theoretical approaches or experimentation. This work presents a new model optimization procedure which integrates the Nelder-Mead simplex method with the COMSOL Multiphysics® models. The proposed procedure uses experimental data and fits model simulations to the real IPMC behavior to identify the microscopic parameters. The model is developed for IPMCs with ethylene glycol as the solvent.
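
    The optimization loop described above (Nelder-Mead adjusting model parameters until simulations match the measured frequency response) can be sketched with a surrogate model in place of the COMSOL simulation. The second-order response and parameter values below are assumptions for illustration only.

```python
# Sketch of the identification loop: Nelder-Mead adjusts model parameters until
# a surrogate frequency-response model matches "experimental" data. A closed-form
# second-order response stands in for the COMSOL multiphysics model.
import numpy as np
from scipy.optimize import minimize

freq = np.logspace(-1, 2, 60)                       # excitation frequencies [Hz]

def model_response(params, f):
    gain, f0, zeta = params                         # hypothetical IPMC parameters
    s = 1j * 2 * np.pi * f
    w0 = 2 * np.pi * f0
    return np.abs(gain * w0**2 / (s**2 + 2 * zeta * w0 * s + w0**2))

target = model_response([2.0e-3, 5.0, 0.3], freq)   # stand-in for measured data

def misfit(params):
    return np.sum((model_response(params, freq) - target) ** 2)

fit = minimize(misfit, x0=[1.0e-3, 2.0, 0.5], method="Nelder-Mead")
print("identified parameters:", fit.x)
```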

  10. Implementation of On-the-Fly Doppler Broadening in MCNP5 for Multiphysics Simulation of Nuclear Reactors

    SciTech Connect

    William Martin

    2012-11-16

    A new method to obtain Doppler broadened cross sections has been implemented into MCNP, removing the need to generate cross sections for isotopes at problem temperatures. Previous work had established the scientific feasibility of obtaining Doppler-broadened cross sections "on-the-fly" (OTF) during the random walk of the neutron. Thus, when a neutron of energy E enters a material region that is at some temperature T, the cross sections for that material at the exact temperature T are immediately obtained by interpolation using a high order functional expansion for the temperature dependence of the Doppler-broadened cross section for that isotope at the neutron energy E. A standalone Fortran code has been developed that generates the OTF library for any isotope that can be processed by NJOY. The OTF cross sections agree with the NJOY-based cross sections for all neutron energies and all temperatures in the range specified by the user, e.g., 250K - 3200K. The OTF methodology has been successfully implemented into the MCNP Monte Carlo code and has been tested on several test problems by comparing MCNP with conventional ACE cross sections versus MCNP with OTF cross sections. The test problems include the Doppler defect reactivity benchmark suite and two full-core VHTR configurations, including one with multiphysics coupling using RELAP5-3D/ATHENA for the thermal-hydraulic analysis. The comparison has been excellent, verifying that the OTF libraries can be used in place of the conventional ACE libraries generated at problem temperatures. In addition, it has been found that using OTF cross sections greatly reduces the complexity of the input for MCNP, especially for full-core temperature feedback calculations with many temperature regions. This results in an order of magnitude decrease in the number of input lines for full-core configurations, thus simplifying input preparation and reducing the potential for input errors. Finally, for full-core problems with multiphysics
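
    The on-the-fly idea above can be illustrated at a single neutron energy: pre-fit the temperature dependence of the Doppler-broadened cross section once, then evaluate the fit at the local material temperature during tracking. The data and the low-order polynomial-in-sqrt(T) form below are placeholders, not the actual OTF library format.

```python
# Sketch of on-the-fly Doppler broadening at one neutron energy E0: preprocess a
# fit of sigma(E0, T), then evaluate it at the material temperature during the
# random walk. Data and fit form are illustrative, not the OTF library itself.
import numpy as np

T_grid = np.array([250.0, 600.0, 1200.0, 2400.0, 3200.0])      # [K]
xs_grid = np.array([12.0, 13.1, 14.6, 16.8, 17.9])             # sigma(E0, T) [b]

coeffs = np.polyfit(np.sqrt(T_grid), xs_grid, 3)    # one-time preprocessing step

def xs_on_the_fly(T):
    """Evaluate sigma(E0, T) during tracking, with no per-temperature library."""
    return np.polyval(coeffs, np.sqrt(T))

print(f"sigma at 932 K: {xs_on_the_fly(932.0):.2f} b")
```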

  11. TerraFERMA: The Transparent Finite Element Rapid Model Assembler for multi-physics problems in the solid Earth sciences

    NASA Astrophysics Data System (ADS)

    Spiegelman, M. W.; Wilson, C. R.; Van Keken, P. E.

    2013-12-01

    We announce the release of a new software infrastructure, TerraFERMA, the Transparent Finite Element Rapid Model Assembler for the exploration and solution of coupled multi-physics problems. The design of TerraFERMA is driven by two overarching computational needs in Earth sciences. The first is the need for increased flexibility in both problem description and solution strategies for coupled problems where small changes in model assumptions can often lead to dramatic changes in physical behavior. The second is the need for software and models that are more transparent so that results can be verified, reproduced and modified in a manner such that the best ideas in computation and earth science can be more easily shared and reused. TerraFERMA leverages three advanced open-source libraries for scientific computation that provide high level problem description (FEniCS), composable solvers for coupled multi-physics problems (PETSc) and a science neutral options handling system (SPuD) that allows the hierarchical management of all model options. TerraFERMA integrates these libraries into an easier to use interface that organizes the scientific and computational choices required in a model into a single options file, from which a custom compiled application is generated and run. Because all models share the same infrastructure, models become more reusable and reproducible. TerraFERMA inherits much of its functionality from the underlying libraries. It currently solves partial differential equations (PDE) using finite element methods on simplicial meshes of triangles (2D) and tetrahedra (3D). The software is particularly well suited for non-linear problems with complex coupling between components. We demonstrate the design and utility of TerraFERMA through examples of thermal convection and magma dynamics. TerraFERMA has been tested successfully against over 45 benchmark problems from 7 publications in incompressible and compressible convection, magmatic solitary waves
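    TerraFERMA generates compiled applications from a single options file, but the flavor of the FEniCS layer it builds on can be conveyed with the generic legacy-DOLFIN fragment below: a plain Poisson solve written as a symbolic weak form, not one of the TerraFERMA benchmark models.

        # Generic legacy-FEniCS (DOLFIN) usage of the kind TerraFERMA wraps: the
        # weak form is stated symbolically in UFL and the library generates and
        # assembles the finite element code.  Plain Poisson problem, illustrative only.
        from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                            DirichletBC, Constant, Function, dot, grad, dx, solve)

        mesh = UnitSquareMesh(32, 32)                       # simplicial 2D mesh (triangles)
        V = FunctionSpace(mesh, "P", 1)                     # linear Lagrange elements

        u, v = TrialFunction(V), TestFunction(V)
        f = Constant(1.0)
        a = dot(grad(u), grad(v)) * dx                      # bilinear form
        L = f * v * dx                                      # linear form

        bc = DirichletBC(V, Constant(0.0), "on_boundary")   # homogeneous Dirichlet boundary
        u_h = Function(V)
        solve(a == L, u_h, bc)                              # PETSc-backed linear solve

    In TerraFERMA the corresponding forms, discretizations, and PETSc solver options would all be declared in the hierarchical options file rather than in a hand-written script.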

  12. A coupled heat and mass transfer model of pure metal freezing using COMSOL Multiphysics™

    NASA Astrophysics Data System (ADS)

    Pearce, J. V.

    2013-09-01

    The COMSOL Multiphysics™ finite element simulation package is employed to simulate the freezing of a zinc fixed point for standard platinum resistance thermometer (SPRT) calibrations. The liquid-solid interface is represented by the boundary of an adaptive mesh whose geometry adjusts itself to accommodate the propagating liquid-solid interface. This means that the temperature range of freezing can be arbitrarily narrow. The evolution of the mesh as a function of time is determined by the thermal conditions. The transport of heat and impurities, particularly at the liquid-solid interface, is modeled simultaneously, and the concentration of impurities in the liquid volume is evaluated as a function of time and location. Because this is a coupled simulation, the influence of the impurity distribution on the liquid-solid interface temperature can be characterized. Some results of the model are presented against the background of impurity effects on the freezing curves of ITS-90 fixed points. In particular, the model is employed to demonstrate the dependence of the freezing curve shape on the freezing rate, and that for low freezing rates the curve shape is well described by the Scheil theory of freezing. A new method of determining the endpoint of freezing from experimental data is shown and used to compare the model with measurements.
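    The Scheil limit mentioned above gives the liquid impurity concentration in closed form, C_L = C_0 (1 - f_s)^(k-1); the short sketch below evaluates it and the corresponding first-order depression of the interface temperature for an illustrative impurity (C_0, k, and the cryoscopic slope A are assumptions, not values from the paper).

        # Scheil freezing of a dilute impurity (distribution coefficient k < 1):
        # the impurity is rejected into the shrinking liquid, so the liquid
        # concentration and the freezing-point depression grow with solid fraction.
        # C_0, k, and the slope A are illustrative values only.
        import numpy as np

        def scheil_liquid_concentration(f_solid, c0, k):
            """C_L = C_0 * (1 - f_s)**(k - 1), the Scheil relation."""
            return c0 * (1.0 - np.asarray(f_solid, dtype=float)) ** (k - 1.0)

        f_s = np.linspace(0.0, 0.99, 100)               # solid fraction through the freeze
        c_liq = scheil_liquid_concentration(f_s, c0=1.0e-6, k=0.1)

        A = -500.0                                      # K per unit mole fraction (assumed slope)
        delta_T = A * c_liq                             # first-order interface-temperature depression
        print(delta_T[[0, 50, 98]])                     # depression deepens along the freeze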

  13. SMITHERS: An object-oriented modular mapping methodology for MCNP-based neutronic–thermal hydraulic multiphysics

    SciTech Connect

    Richard, Joshua; Galloway, Jack; Fensin, Michael; Trellue, Holly

    2015-04-04

    A novel object-oriented modular mapping methodology for externally coupled neutronics–thermal hydraulics multiphysics simulations was developed. The Simulator using MCNP with Integrated Thermal-Hydraulics for Exploratory Reactor Studies (SMITHERS) code performs on-the-fly mapping of material-wise power distribution tallies implemented by MCNP-based neutron transport/depletion solvers for use in estimating coolant temperature and density distributions with a separate thermal-hydraulic solver. The key development of SMITHERS is that it reconstructs the hierarchical geometry structure of the material-wise power generation tallies from the depletion solver automatically, with only a modicum of additional information required from the user. In addition, it performs the basis mapping from the combinatorial geometry of the depletion solver to the required geometry of the thermal-hydraulic solver in a generalizable manner, such that it can transparently accommodate varying levels of thermal-hydraulic solver geometric fidelity, from the nodal geometry of multi-channel analysis solvers to the pin-cell level of discretization for sub-channel analysis solvers.
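    Stripped to its essentials, the mapping step amounts to gathering fine-grained power tallies into the coarser regions a thermal-hydraulic solver works with; the sketch below shows that reduction with hypothetical pin and channel identifiers, not SMITHERS' actual data structures.

        # Sketch of the tally-to-channel mapping idea: material-wise power tallies
        # from the neutron transport/depletion solver are summed into the regions
        # (here, coolant channels) used by the thermal-hydraulic solver.
        # Identifiers and values are hypothetical.
        from collections import defaultdict

        pin_power = {"pin_A1": 6.1e4, "pin_A2": 5.8e4,          # W, tallied powers
                     "pin_B1": 6.4e4, "pin_B2": 6.0e4}

        cell_to_channel = {"pin_A1": "ch_1", "pin_A2": "ch_1",  # geometry correspondence,
                           "pin_B1": "ch_2", "pin_B2": "ch_2"}  # nodal or sub-channel level

        def map_power_to_channels(pin_power, cell_to_channel):
            """Aggregate fine power tallies onto thermal-hydraulic regions."""
            channel_power = defaultdict(float)
            for cell, power in pin_power.items():
                channel_power[cell_to_channel[cell]] += power
            return dict(channel_power)

        print(map_power_to_channels(pin_power, cell_to_channel))
        # The channel-wise heat sources feed the thermal-hydraulic solver, whose
        # coolant temperatures and densities update the next neutronics pass.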

  14. Multiphysics Modeling and Simulations of Mil A46100 Armor-Grade Martensitic Steel Gas Metal Arc Welding Process

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Ramaswami, S.; Snipes, J. S.; Yen, C.-F.; Cheeseman, B. A.; Montgomery, J. S.

    2013-10-01

    A multiphysics computational model has been developed for the conventional Gas Metal Arc Welding (GMAW) joining process and used to analyze butt-welding of MIL A46100, a prototypical high-hardness armor martensitic steel. The model consists of five distinct modules, each covering a specific aspect of the GMAW process, i.e., (a) dynamics of welding-gun behavior; (b) heat transfer from the electric arc and mass transfer from the electrode to the weld; (c) development of thermal and mechanical fields during the GMAW process; (d) the associated evolution and spatial distribution of the material microstructure throughout the weld region; and (e) the final spatial distribution of the as-welded material properties. To make the newly developed GMAW process model applicable to MIL A46100, the basic physical-metallurgy concepts and principles for this material have to be investigated and properly accounted for/modeled. The newly developed GMAW process model enables establishment of the relationship between the GMAW process parameters (e.g., open circuit voltage, welding current, electrode diameter, electrode-tip/weld distance, filler-metal feed speed, and gun travel speed), workpiece material chemistry, and the spatial distribution of as-welded material microstructure and properties. The predictions of the present GMAW model pertaining to the spatial distribution of the material microstructure and properties within the MIL A46100 weld region are found to be consistent with general expectations and prior observations.

  15. Multi-Physics Modeling of Molten Salt Transport in Solid Oxide Membrane (SOM) Electrolysis and Recycling of Magnesium

    SciTech Connect

    Powell, Adam; Pati, Soobhankar

    2012-03-11

    Solid Oxide Membrane (SOM) Electrolysis is a new energy-efficient zero-emissions process for producing high-purity magnesium and high-purity oxygen directly from industrial-grade MgO. SOM Recycling combines SOM electrolysis with electrorefining, continuously and efficiently producing high-purity magnesium from low-purity partially oxidized scrap. In both processes, electrolysis and/or electrorefining take place in the crucible, where raw material is continuously fed into the molten salt electrolyte, producing magnesium vapor at the cathode and oxygen at the inert anode inside the SOM. This paper describes a three-dimensional multi-physics finite-element model of ionic current, fluid flow driven by argon bubbling and thermal buoyancy, and heat and mass transport in the crucible. The model predicts the effects of stirring on the anode boundary layer and its time scale of formation, and the effect of natural convection at the outer wall. MOxST has developed this model as a tool for scale-up design of these closely-related processes.

  16. SMITHERS: An object-oriented modular mapping methodology for MCNP-based neutronic–thermal hydraulic multiphysics

    DOE PAGES Beta

    Richard, Joshua; Galloway, Jack; Fensin, Michael; Trellue, Holly

    2015-04-04

    A novel object-oriented modular mapping methodology for externally coupled neutronics–thermal hydraulics multiphysics simulations was developed. The Simulator using MCNP with Integrated Thermal-Hydraulics for Exploratory Reactor Studies (SMITHERS) code performs on-the-fly mapping of material-wise power distribution tallies implemented by MCNP-based neutron transport/depletion solvers for use in estimating coolant temperature and density distributions with a separate thermal-hydraulic solver. The key development of SMITHERS is that it reconstructs the hierarchical geometry structure of the material-wise power generation tallies from the depletion solver automatically, with only a modicum of additional information required from the user. In addition, it performs the basis mapping from the combinatorial geometry of the depletion solver to the required geometry of the thermal-hydraulic solver in a generalizable manner, such that it can transparently accommodate varying levels of thermal-hydraulic solver geometric fidelity, from the nodal geometry of multi-channel analysis solvers to the pin-cell level of discretization for sub-channel analysis solvers.

  17. Multiphysics Engineering Analysis for an Integrated Design of ITER Diagnostic First Wall and Diagnostic Shield Module Design

    SciTech Connect

    Zhai, Y.; Loesser, G.; Smith, M.; Udintsev, V.; Giacomin, T.; Khodak, A.; Johnson, D.; Feder, R.

    2015-07-01

    ITER diagnostic first walls (DFWs) and diagnostic shield modules (DSMs) inside the port plugs (PPs) are designed to protect diagnostic instruments and components from a harsh plasma environment and to provide structural support while allowing diagnostic access to the plasma. The design of the DFWs and DSMs is driven by (1) plasma radiation and nuclear heating during normal operation and (2) electromagnetic loads during plasma events and the associated component structural responses. A multi-physics engineering analysis protocol for the design has been established at the Princeton Plasma Physics Laboratory and was used for the design of the ITER DFWs and DSMs. The analyses were performed to address challenging design issues based on the resultant stresses and deflections of the DFW-DSM-PP assembly for the main load cases. The ITER Structural Design Criteria for In-Vessel Components (SDC-IC), required for design by analysis, and three major issues driving the mechanical design of the ITER DFWs are discussed. General guidelines for the DSM design have been established as a result of design parametric studies.

  18. Uncertainties propagation in the framework of a Rod Ejection Accident modeling based on a multi-physics approach

    SciTech Connect

    Le Pallec, J. C.; Crouzet, N.; Bergeaud, V.; Delavaud, C.

    2012-07-01

    The control of uncertainties in the field of reactor physics and their propagation in best-estimate modeling are a major issue in safety analysis. In this framework, the CEA is developing a methodology to perform multi-physics simulations including uncertainty analysis. The present paper aims to present and apply this methodology to the analysis of an accidental situation such as a Rod Ejection Accident (REA). This accident is characterized by a strong interaction between the different areas of reactor physics (neutronics, fuel thermal behavior, and thermal hydraulics). The modeling is performed with the CRONOS2 code. The uncertainty analysis has been conducted with the URANIE platform developed by the CEA: for each identified response of the model (output), and considering a set of key parameters with their uncertainties (input), a surrogate model in the form of a neural network has been produced. The set of neural networks is then used to carry out a sensitivity analysis, which consists of a global variance analysis with the determination of the Sobol indices for all responses. The sensitivity indices of the input parameters are obtained by an approach based on polynomial chaos. The present exercise helped to develop a methodological flow scheme and to consolidate the use of the URANIE tool in the framework of parallel calculations. Finally, the use of polynomial chaos allowed high-order sensitivity indices to be computed, thus highlighting and classifying the influence of the identified uncertainties on each response of the analysis (single and interaction effects). (authors)
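    The workflow above, sampling the uncertain inputs, evaluating a cheap surrogate, and decomposing the output variance into Sobol indices, can be illustrated generically with the sketch below; SALib stands in for the URANIE platform, an analytic function for the neural-network surrogate, and the input names and bounds are invented for illustration.

        # Generic surrogate-plus-Sobol workflow: Saltelli sampling of the uncertain
        # inputs, evaluation of a cheap surrogate, then variance decomposition.
        # SALib is used here as a stand-in for URANIE; inputs are illustrative.
        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["doppler_coeff", "ejected_rod_worth", "gap_conductance"],  # hypothetical inputs
            "bounds": [[-3.0, -1.0], [0.8, 1.2], [0.5, 1.5]],
        }

        X = saltelli.sample(problem, 1024)          # quasi-random input sample

        def surrogate(x):
            """Stand-in for the trained neural-network surrogate of one model response."""
            return x[:, 1] ** 2 + 0.3 * x[:, 0] * x[:, 2] + 0.1 * x[:, 2]

        Y = surrogate(X)
        Si = sobol.analyze(problem, Y)              # global variance decomposition
        print("first-order indices:", Si["S1"])     # single effects
        print("total-order indices:", Si["ST"])     # single + interaction effects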

  19. Numerical simulation and experimental validation of biofilm in a multi-physics framework using an SPH based method

    NASA Astrophysics Data System (ADS)

    Soleimani, Meisam; Wriggers, Peter; Rath, Henryke; Stiesch, Meike

    2016-06-01

    In this paper, a 3D computational model has been developed to investigate biofilms in a multi-physics framework using smoothed particle hydrodynamics (SPH) based on a continuum approach. Biofilm formation is a complex process in the sense that several physical phenomena are coupled and consequently different time-scales are involved. On one hand, biofilm growth is driven by biological reaction and nutrient diffusion and on the other hand, it is influenced by fluid flow causing biofilm deformation and interface erosion in the context of fluid and deformable solid interaction. The geometrical and numerical complexity arising from these phenomena poses serious complications and challenges in grid-based techniques such as finite element. Here the solution is based on SPH as one of the powerful meshless methods. SPH based computational modeling is quite new in the biological community and the method is uniquely robust in capturing the interface-related processes of biofilm formation such as erosion. The obtained results show a good agreement with experimental and published data which demonstrates that the model is capable of simulating and predicting overall spatial and temporal evolution of biofilm.
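    One generic ingredient behind any SPH discretization of the kind described above is the smoothing kernel that converts particle sums into field estimates; a common choice is the cubic spline, sketched below in its standard 3D form (this is the textbook ingredient, not the paper's specific formulation).

        # Cubic-spline smoothing kernel, the basic building block of an SPH
        # discretization: field values at a point are weighted sums over
        # neighbouring particles, with W providing the weights.  Generic 3D form.
        import numpy as np

        def cubic_spline_kernel(r, h):
            """Standard 3D cubic spline kernel W(r, h) with support radius 2h."""
            sigma = 1.0 / (np.pi * h ** 3)           # 3D normalization constant
            q = np.asarray(r, dtype=float) / h
            w = np.zeros_like(q)
            inner = q < 1.0
            outer = (q >= 1.0) & (q < 2.0)
            w[inner] = 1.0 - 1.5 * q[inner] ** 2 + 0.75 * q[inner] ** 3
            w[outer] = 0.25 * (2.0 - q[outer]) ** 3
            return sigma * w

        def sph_density(r_ij, m, h):
            """Density estimate at a particle from neighbour distances r_ij (equal masses m)."""
            return np.sum(m * cubic_spline_kernel(r_ij, h))

        print(sph_density(np.array([0.0, 0.3, 0.8, 1.5]), m=1.0e-3, h=1.0))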

  20. Multiphysics modeling of CO2 sequestration in a faulted saline formation in Italy

    NASA Astrophysics Data System (ADS)

    Castelletto, Nicola; Teatini, Pietro; Gambolati, Giuseppe; Bossie-Codreanu, Dan; Vincké, Olivier; Daniel, Jean-Marc; Battistelli, Alfredo; Marcolini, Marica; Donda, Federica; Volpi, Valentina

    2013-12-01

    The present work describes the results of a modeling study addressing the geological sequestration of carbon dioxide (CO2) in an offshore multi-compartment reservoir located in Italy. The study is part of a large scale project aimed at implementing carbon capture and storage (CCS) technology in a power plant in Italy within the framework of the European Energy Programme for Recovery (EEPR). The processes modeled include multiphase flow and geomechanical effects occurring in the storage formation and the sealing layers, along with near wellbore effects, fault/thrust reactivation and land surface stability, for a CO2 injection rate of 1 × 106 ton/a. Based on an accurate reproduction of the three-dimensional geological setting of the selected structure, two scenarios are discussed depending on a different distribution of the petrophysical properties of the formation used for injection, namely porosity and permeability. The numerical results help clarify the importance of: (i) facies models at the reservoir scale, properly conditioned on wellbore logs, in assessing the CO2 storage capacity; (ii) coupled wellbore-reservoir flow in allocating injection fluxes among permeable levels; and (iii) geomechanical processes, especially shear failure, in constraining the sustainable pressure buildup of a faulted reservoir.

  1. Multiphysics Computational Analysis of a Solid-Core Nuclear Thermal Engine Thrust Chamber

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Canabal, Francisco; Cheng, Gary; Chen, Yen-Sen

    2007-01-01

    The objective of this effort is to develop an efficient and accurate computational heat transfer methodology to predict thermal, fluid, and hydrogen environments for a hypothetical solid-core, nuclear thermal engine - the Small Engine. In addition, the effects of power profile and hydrogen conversion on heat transfer efficiency and thrust performance were also investigated. The computational methodology is based on an unstructured-grid, pressure-based, all-speed, chemically reacting computational fluid dynamics platform, while formulations of conjugate heat transfer were implemented to describe the heat transfer from solid to hydrogen inside the solid-core reactor. The computational domain covers the entire thrust chamber so that the aforementioned heat transfer effects impact the thrust performance directly. The results show that the computed core-exit gas temperature, specific impulse, and core pressure drop agree well with the design data for the Small Engine. Finite-rate chemistry is very important in predicting the proper energy balance, as naturally occurring hydrogen decomposition is endothermic. Locally strong hydrogen conversion associated with a centralized power profile gives poor heat transfer efficiency and lower thrust performance. On the other hand, uniform hydrogen conversion associated with a more uniform radial power profile achieves higher heat transfer efficiency and higher thrust performance.

  2. Development of Adaptive Model Refinement (AMoR) for Multiphysics and Multifidelity Problems

    SciTech Connect

    Turinsky, Paul

    2015-02-09

    This project investigated the development and utilization of Adaptive Model Refinement (AMoR) for nuclear systems simulation applications. AMoR refers to the utilization of several models of the physical phenomena which differ in prediction fidelity. If the highest fidelity model is judged to always provide or exceed the desired fidelity, then, if one can determine the difference in a Quantity of Interest (QoI) between the highest fidelity model and lower fidelity models, one could utilize the lowest fidelity model that still provides the desired accuracy in the QoI. Assuming lower fidelity models require fewer computational resources, computational efficiency can be realized in this manner provided the QoI value can be accurately and efficiently evaluated. This work utilized Generalized Perturbation Theory (GPT) to evaluate the QoI, by convolving the GPT solution with the residual of the highest fidelity model determined using the solution from the lower fidelity models. Specifically, a reactor core neutronics problem and a thermal-hydraulics problem were studied to develop and utilize AMoR. The highest fidelity neutronics model was based upon the 3D space-time, two-group, nodal diffusion equations as solved in the NESTLE computer code. Added to the NESTLE code was the ability to determine the time-dependent GPT neutron flux. The lower fidelity neutronics model was based upon the point kinetics equations along with a prolongation operator to determine the 3D space-time, two-group flux. The highest fidelity thermal-hydraulics model was based upon the space-time equations governing fluid flow in a closed channel around a heat-generating fuel rod. The Homogeneous Equilibrium Mixture (HEM) model was used for the fluid, and the Finite Difference Method was applied to both the coolant and fuel pin energy conservation equations. The lower fidelity thermal-hydraulic model was based upon the same equations as used for the highest fidelity model but now with coarse spatial
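    For a discrete linear high-fidelity system the GPT-style error estimate is exact: the QoI deficiency of a low-fidelity solution equals the inner product of the adjoint solution with the high-fidelity residual evaluated at that low-fidelity solution. The sketch below demonstrates that identity on a random system; it is purely schematic and not the NESTLE implementation.

        # Schematic of the GPT-based fidelity check: for a high-fidelity system
        # A x = b and a QoI q = c.x, the error induced by a low-fidelity solution
        # x_low is estimated as (adjoint).(residual), without solving the
        # high-fidelity system for x.  Illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 50
        A = np.eye(n) + 0.01 * rng.standard_normal((n, n))   # high-fidelity operator (illustrative)
        b = rng.standard_normal(n)
        c = rng.standard_normal(n)                           # QoI functional q = c @ x

        x_low = np.linalg.solve(np.diag(np.diag(A)), b)      # cheap "low-fidelity" solution
        residual = b - A @ x_low                             # high-fidelity residual at x_low

        adjoint = np.linalg.solve(A.T, c)                    # GPT/adjoint solution for the QoI
        delta_q_est = adjoint @ residual                     # estimated QoI deficiency

        x_high = np.linalg.solve(A, b)
        print(delta_q_est, c @ x_high - c @ x_low)           # estimate vs actual QoI difference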

  3. Keeping it Together: Advanced algorithms and software for magma dynamics (and other coupled multi-physics problems)

    NASA Astrophysics Data System (ADS)

    Spiegelman, M.; Wilson, C. R.

    2011-12-01

    A quantitative theory of magma production and transport is essential for understanding the dynamics of magmatic plate boundaries, intra-plate volcanism and the geochemical evolution of the planet. It also provides one of the most challenging computational problems in solid Earth science, as it requires consistent coupling of fluid and solid mechanics together with the thermodynamics of melting and reactive flows. Considerable work on these problems over the past two decades shows that small changes in assumptions of coupling (e.g. the relationship between melt fraction and solid rheology), can have profound changes on the behavior of these systems which in turn affects critical computational choices such as discretizations, solvers and preconditioners. To make progress in exploring and understanding this physically rich system requires a computational framework that allows more flexible, high-level description of multi-physics problems as well as increased flexibility in composing efficient algorithms for solution of the full non-linear coupled system. Fortunately, recent advances in available computational libraries and algorithms provide a platform for implementing such a framework. We present results from a new model building system that leverages functionality from both the FEniCS project (www.fenicsproject.org) and PETSc libraries (www.mcs.anl.gov/petsc) along with a model independent options system and gui, Spud (amcg.ese.ic.ac.uk/Spud). Key features from FEniCS include fully unstructured FEM with a wide range of elements; a high-level language (ufl) and code generation compiler (FFC) for describing the weak forms of residuals and automatic differentiation for calculation of exact and approximate jacobians. The overall strategy is to monitor/calculate residuals and jacobians for the entire non-linear system of equations within a global non-linear solve based on PETSc's SNES routines. PETSc already provides a wide range of solvers and preconditioners, from

  4. Multiphysics Simulations of the Complex 3D Geometry of the High Flux Isotope Reactor Fuel Elements Using COMSOL

    SciTech Connect

    Freels, James D; Jain, Prashant K

    2011-01-01

    A research and development project is ongoing to convert the currently operating High Flux Isotope Reactor (HFIR) of Oak Ridge National Laboratory (ORNL) from highly-enriched Uranium (HEU U3O8) fuel to low-enriched Uranium (LEU U-10Mo) fuel. Because LEU HFIR-specific testing and experiments will be limited, COMSOL is chosen to provide the needed multiphysics simulation capability to validate against the HEU design data and calculations, and predict the performance of the LEU fuel for design and safety analyses. The focus of this paper is on the unique issues associated with COMSOL modeling of the 3D geometry, meshing, and solution of the HFIR fuel plate and assembled fuel elements. Two parallel paths of 3D model development are underway. The first path follows the traditional route through examination of all flow and heat transfer details using the low-Reynolds-number k-ε turbulence model provided by COMSOL v4.2. The second path simplifies the fluid channel modeling by taking advantage of the wealth of knowledge provided by decades of design and safety analyses, data from experiments and tests, and HFIR operation. By simplifying the fluid channel, a significant level of complexity and computer resource requirements are reduced, while also expanding the level and type of analysis that can be performed with COMSOL. Comparison and confirmation of validity of the first (detailed) and second (simplified) 3D modeling paths with each other, and with available data, will enable an expanded level of analysis. The detailed model will be used to analyze hot-spots and other micro fuel behavior events. The simplified model will be used to analyze events such as routine heat-up and expansion of the entire fuel element, and flow blockage. Preliminary, coarse-mesh model results of the detailed individual fuel plate are presented. Examples of the solution for an entire fuel element consisting of multiple individual fuel plates produced by the simplified model are also presented.

  5. Musculoskeletal Modeling of the Lumbar Spine to Explore Functional Interactions between Back Muscle Loads and Intervertebral Disk Multiphysics

    PubMed Central

    Toumanidou, Themis; Noailly, Jérôme

    2015-01-01

    During daily activities, complex biomechanical interactions influence the biophysical regulation of intervertebral disks (IVDs), and transfers of mechanical loads are largely controlled by the stabilizing action of spine muscles. Muscle and other internal forces cannot be easily measured directly in the lumbar spine. Hence, biomechanical models are important tools for the evaluation of the loads in those tissues involved in low-back disorders. Muscle force estimations in most musculoskeletal models mainly rely, however, on inverse calculations and static optimizations that limit the predictive power of the numerical calculations. In order to contribute to the development of predictive systems, we coupled a predictive muscle model with the passive resistance of the spine tissues, in an L3–S1 musculoskeletal finite element model with osmo-poromechanical IVD descriptions. The model included 46 fascicles of the major back muscles that act on the lower spine. The muscle model interacted with activity-related loads imposed on the osteoligamentous structure, as standing position and night rest were simulated through distributed upper body mass and free IVD swelling, respectively. Calculations led to intradiscal pressure values within ranges of values measured in vivo. Disk swelling led to muscle activation and muscle force distributions that seemed particularly appropriate to counterbalance the anterior body mass effect in standing. Our simulations pointed out the likely existence of a functional balance between stretch-induced muscle activation and IVD multiphysics, toward an improved understanding of the mechanical stability of the lumbar spine. This balance suggests that proper night rest contributes to mechanically strengthen the spine during day activity. PMID:26301218

  6. Analysis of scheme interrelationships for model calibration and improvement using the Noah land surface model with multi-physics options

    NASA Astrophysics Data System (ADS)

    Hong, S.; Park, S. K.; Choi, Y.; Myoung, B.

    2013-12-01

    As the importance of land surface models (LSMs) has grown, owing to their pivotal role in the complete Earth environmental system linking the atmosphere, hydrosphere, and biosphere, modeling accuracy at regional scales has become important to ensure better representation of land surface heterogeneities as spatial resolution increases. However, every model has its own weaknesses, induced by such problems as unrealistic physical schemes arising from uncertain parameterization methods and even structural unreality from simplified model designs. One of the major uncertainties is the interrelationship between the implemented physical schemes and its impact on simulation accuracy. Using the new version of the Noah land surface model with multi-physics options (Noah-MP), which enables various scheme combinations, we examined how each scheme in different scheme combinations contributes to better simulations and how their interrelationships vary with changes in uncertain parameters. Targeting the long-term (5-year) monthly surface hydrology of the Han River watershed in South Korea, we mainly explored the simulation accuracy of runoff and evapotranspiration, and additionally that of leaf area index, in order to see the vegetation impact on surface water partitioning. The results indicate that the primary contributors to simulation accuracy were the surface heat exchange coefficient schemes. These schemes are very sensitive to the vegetation amount due to their different treatment of heat transfer over bare and vegetated surfaces. This study also demonstrated that further improvement is possible through calibration of uncertain parameters, and that combining scheme interrelationship analysis with parameter calibration promises improved model calibration. In addition, revealing the remaining uncertainty about the vegetation effect on surface energy and water partitioning, this study also showed that the scheme interrelationship analysis is useful for model

  7. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  8. Remote balance weighs accurately amid high radiation

    NASA Technical Reports Server (NTRS)

    Eggenberger, D. N.; Shuck, A. B.

    1969-01-01

    Commercial beam-type balance, modified and outfitted with electronic controls and digital readout, can be remotely controlled for use in high radiation environments. This allows accurate weighing of breeder-reactor fuel pieces when they are radioactively hot.

  9. Understanding the Code: keeping accurate records.

    PubMed

    Griffith, Richard

    2015-10-01

    In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met. PMID:26418404

  10. Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods

    NASA Astrophysics Data System (ADS)

    Kozdon, J. E.; Wilcox, L.

    2013-12-01

    Our goal is to develop scalable and adaptive (spatial and temporal) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite-volume-based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library and temporal adaptivity will be accomplished through local time stepping. In this presentation we will present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.

  11. A highly accurate interatomic potential for argon

    NASA Astrophysics Data System (ADS)

    Aziz, Ronald A.

    1993-09-01

    A modified potential based on the individually damped model of Douketis, Scoles, Marchetti, Zen, and Thakkar [J. Chem. Phys. 76, 3057 (1982)] is presented which fits, within experimental error, the accurate ultraviolet (UV) vibration-rotation spectrum of argon determined by UV laser absorption spectroscopy by Herman, LaRocque, and Stoicheff [J. Chem. Phys. 89, 4535 (1988)]. Other literature potentials fail to do so. The potential also is shown to predict a large number of other properties and is probably the most accurate characterization of the argon interaction constructed to date.

  12. Modelling dual-permeability hydrological system and slope stability of the Rocca Pitigliana landslide using COMSOL Multiphysics

    NASA Astrophysics Data System (ADS)

    Shao, Wei; Bogaard, Thom; Bakker, Mark; Berti, Matteo

    2014-05-01

    The accuracy of using hydrological-slope stability models for rainfall-induced landslide forecasting relies on the identification of realistic landslide triggering mechanisms and the correct mathematical description of these mechanisms. The subsurface hydrological processes in a highly heterogeneous slope are controlled by complex geological conditions. Preferential flow through macropores, fractures and other local high-permeability zones can change the infiltration pattern, resulting in more rapid and deeper water movement. Preferential flow has a significant impact on the pore water pressure distribution and consequently on slope stability. Increasingly sophisticated theories and models have been developed to simulate preferential flow in various environmental systems. It is necessary to integrate methods of slope stability analysis with preferential flow models, such as dual-permeability models, to investigate the hydrological and soil mechanical response to precipitation in landslide areas. In this study, a systematic modeling approach is developed by using COMSOL Multiphysics to couple a single-permeability model and a dual-permeability model with a soil mechanical model for slope stability analysis. The dual-permeability model is composed of two Richards equations to describe coupled matrix and preferential flow, which can be used to quantify the influence of preferential flow on the distribution and timing of pressure head in a slope. The hydrological models are coupled with a plane-strain elastic soil mechanics model and a local factor of safety method. The factor of safety is evaluated by applying the Mohr-Coulomb failure criterion to the effective stress field. The method is applied to the Rocca Pitigliana landslide located roughly 50 km south of Bologna. The landslide material consists of weathered clay with a thickness of 2-4 m overlying clay-shale bedrock. Three years of field data of pore pressure measurements provide a reliable description of the dynamic
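    The last step described above, the local factor of safety from the Mohr-Coulomb criterion on the effective stress field, is a pointwise ratio of available to mobilized shear strength; the sketch below evaluates it with illustrative soil parameters and stresses (not the Rocca Pitigliana values).

        # Pointwise Mohr-Coulomb local factor of safety on an effective stress field:
        # FS = (c' + sigma_n' * tan(phi')) / tau, with sigma_n' = sigma_n - u.
        # Soil parameters and stress values below are illustrative, not site data.
        import numpy as np

        def local_factor_of_safety(sigma_n, pore_pressure, tau, c_eff, phi_eff_deg):
            """Mohr-Coulomb FS from total normal stress, pore pressure and shear stress [kPa]."""
            sigma_n_eff = sigma_n - pore_pressure                 # effective normal stress
            strength = c_eff + sigma_n_eff * np.tan(np.radians(phi_eff_deg))
            return strength / tau

        sigma_n = np.array([60.0, 80.0, 120.0])    # kPa, from the mechanical model
        u = np.array([20.0, 45.0, 90.0])           # kPa, from the (dual-permeability) flow model
        tau = np.array([15.0, 20.0, 25.0])         # kPa, mobilized shear stress

        print(local_factor_of_safety(sigma_n, u, tau, c_eff=5.0, phi_eff_deg=22.0))
        # FS < 1 flags local failure; rising pore pressure from preferential flow
        # lowers the effective normal stress and hence the factor of safety.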

  13. Wind-Turbine Gear-Box Roller-Bearing Premature-Failure Caused by Grain-Boundary Hydrogen Embrittlement: A Multi-physics Computational Investigation

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Chenna, V.; Galgalikar, R.; Snipes, J. S.; Ramaswami, S.; Yavari, R.

    2014-11-01

    To help overcome the problem of horizontal-axis wind-turbine (HAWT) gear-box roller-bearing premature-failure, the root causes of this failure are currently being investigated using mainly laboratory and field-test experimental approaches. In the present work, an attempt is made to develop complementary computational methods and tools which can provide additional insight into the problem at hand (and do so with a substantially shorter turn-around time). Toward that end, a multi-physics computational framework has been developed which combines: (a) quantum-mechanical calculations of the grain-boundary hydrogen-embrittlement phenomenon and hydrogen bulk/grain-boundary diffusion (the two phenomena currently believed to be the main contributors to the roller-bearing premature-failure); (b) atomic-scale kinetic Monte Carlo-based calculations of the hydrogen-induced embrittling effect ahead of the advancing crack-tip; and (c) a finite-element analysis of the damage progression in, and the final failure of a prototypical HAWT gear-box roller-bearing inner raceway. Within this approach, the key quantities which must be calculated using each computational methodology are identified, as well as the quantities which must be exchanged between different computational analyses. The work demonstrates that the application of the present multi-physics computational framework enables prediction of the expected life of the most failure-prone HAWT gear-box bearing elements.

  14. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  15. Analysis of the PDFs of temperature from a multi-physics ensemble of climate change projections over the Iberian Peninsula

    NASA Astrophysics Data System (ADS)

    Jerez, Sonia; Montavez, Juan P.; Gomez-Navarro, Juan J.; Jimenez-Guerrero, Pedro; Lorente, Raquel; Garcia-Valero, Juan A.; Jimenez, Pedro A.; Gonzalez-Rouco, Jose F.; Zorita, Eduardo

    2010-05-01

    Regional climate change projections are affected by several sources of uncertainty. Some of them come from Global Circulation Models and scenarios; others come from the downscaling process. In the case of dynamical downscaling, mainly using Regional Climate Models (RCMs), the sources of uncertainty may involve nesting strategies related to the domain position and resolution, soil characterization, internal variability, methods of solving the equations, and the configuration of model physics. Therefore, a probabilistic approach seems recommendable when projecting regional climate change. This problem is usually faced by performing an ensemble of simulations. The aim of this study is to evaluate the range of uncertainty in regional climate projections associated with changing the physical configuration in an RCM (MM5), as well as its capability in reproducing the observed climate. The study is performed over the Iberian Peninsula and focuses on the reproduction of the Probability Density Functions (PDFs) of daily mean temperature. The experiments consist of a multi-physics ensemble of high resolution climate simulations (30 km over the target region) for the periods 1970-1999 (present) and 2070-2099 (future). Two sets of simulations for the present have been performed using ERA40 (MM5-ERA40) and ECHAM5-3CM run1 (MM5-E5-PR) as boundary conditions. The future experiments are driven by ECHAM5-A2-run1 (MM5-E5-A2). The ensemble has a total of eight members, resulting from combining the schemes for PBL (MRF and Eta), cumulus (Grell and Kain-Fritsch) and microphysics (Simple Ice and Mixed Phase). In a previous work this multi-physics ensemble was analyzed focusing on the seasonal mean values of both temperature and precipitation. The main results indicate that those physics configurations that better reproduce the observed climate project the most dramatic changes for the future (i.e., the largest temperature increase and precipitation decrease). Among the

  16. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  17. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  18. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  19. Accurate colorimetric feedback for RGB LED clusters

    NASA Astrophysics Data System (ADS)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
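    The statement that the spectral change with junction temperature is captured by first- or second-order equations amounts to fitting a low-order polynomial per measured quantity during calibration and evaluating it at run time; the sketch below does this for a single channel's relative flux with hypothetical calibration numbers.

        # Toy version of the temperature-compensation idea: fit a low-order
        # polynomial to how a measured quantity (here one channel's relative flux)
        # varies with junction temperature, then predict it from a temperature
        # reading at run time.  Calibration numbers are hypothetical.
        import numpy as np

        t_junction = np.array([25.0, 45.0, 65.0, 85.0])        # deg C, calibration points
        relative_flux = np.array([1.00, 0.93, 0.86, 0.78])     # red channel, normalized (assumed)

        coeffs = np.polyfit(t_junction, relative_flux, deg=2)  # second-order model

        def predicted_flux(temperature_C):
            return np.polyval(coeffs, temperature_C)

        # A feedback loop would scale the drive current by 1/predicted_flux to hold
        # the target chromaticity and intensity as the cluster heats up.
        print(predicted_flux(70.0))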

  20. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates the optical model imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask ebeam writing and mask process contributions. For advanced technology nodes, significant progress has been made to model mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model, enabling its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.

  1. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  2. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Among profile, helix and tooth thickness, pitch is one of the most important parameters of an involute gear measurement evaluation. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of a CMM, are suited for these kinds of gear measurements. Now the Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device and the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.

  3. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
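    A minimal instance of the mapping decision discussed above, assigning the cells of an irregular, weighted one-dimensional grid to processors so that workload is balanced, is the prefix-sum partition sketched below; it is a generic heuristic under assumed per-cell weights, not the performance model of the paper.

        # Greedy prefix-sum partition of a weighted 1D grid among P processors:
        # each cut is placed where the running workload crosses the next multiple
        # of the average per-processor load.  Generic heuristic, illustrative only.
        import numpy as np

        def partition_1d(work_per_cell, n_procs):
            """Return, for each cell, the rank of the processor it is assigned to."""
            work = np.asarray(work_per_cell, dtype=float)
            cumulative = np.cumsum(work)
            target = cumulative[-1] / n_procs                       # ideal load per rank
            ranks = np.minimum((cumulative / target).astype(int), n_procs - 1)
            return ranks

        work = np.array([1, 1, 4, 4, 4, 1, 1, 2, 2, 2], dtype=float)   # irregular workload
        ranks = partition_1d(work, n_procs=3)
        print(ranks)                                        # cell-to-processor map
        print([work[ranks == r].sum() for r in range(3)])   # per-processor load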

  4. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  5. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.

  6. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  7. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139

  8. Accurate Molecular Polarizabilities Based on Continuum Electrostatics

    PubMed Central

    Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.

    2013-01-01

    A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned errors in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach leads to an R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034

  9. Accurate phase-shift velocimetry in rock

    NASA Astrophysics Data System (ADS)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  10. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  11. Optimization and Parallelization of the Thermal-Hydraulic Sub-channel Code CTF for High-Fidelity Multi-physics Applications

    SciTech Connect

    Salko, Robert K; Schmidt, Rodney; Avramova, Maria N

    2014-01-01

    This paper describes major improvements to the computational infrastructure of the CTF sub-channel code so that full-core sub-channel-resolved simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy (DOE) Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high fidelity multi-physics simulation tools for nuclear energy design and analysis. A set of serial code optimizations--including fixing computational inefficiencies, optimizing the numerical approach, and making smarter data storage choices--is first described and shown to reduce both execution time and memory usage by about a factor of ten. Next, a Single Program Multiple Data (SPMD) parallelization strategy targeting distributed memory Multiple Instruction Multiple Data (MIMD) platforms and utilizing domain-decomposition is presented. In this approach, data communication between processors is accomplished by inserting standard MPI calls at strategic points in the code. The domain decomposition approach implemented assigns one MPI process to each fuel assembly, with each domain being represented by its own CTF input file. The creation of CTF input files, both for serial and parallel runs, is also fully automated through use of a pre-processor utility that takes a greatly reduced set of user input over the traditional CTF input file. To run CTF in parallel, two additional libraries are currently needed: MPI, for inter-processor message passing, and the Portable, Extensible Toolkit for Scientific Computation (PETSc), which is leveraged to solve the global pressure matrix in parallel. Results presented include a set of testing and verification calculations and performance tests assessing parallel scaling characteristics up to a full core, sub-channel-resolved model of Watts Bar Unit 1 under hot full-power conditions (193 17x17
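
    A minimal mpi4py sketch of the decomposition described above (one MPI rank per fuel assembly, each reading its own pre-processed input deck), with a global reduction standing in for the coupled pressure solve that CTF hands to PETSc; the file naming and the dummy per-assembly result are assumptions for illustration, not CTF's actual interfaces.

        # Illustrative sketch only; deck_<rank>.inp and solve_assembly() are hypothetical.
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, nprocs = comm.Get_rank(), comm.Get_size()

        # One MPI process per fuel assembly, each with its own pre-processed input file.
        my_deck = f"deck_{rank:03d}.inp"

        def solve_assembly(deck):
            """Placeholder for the sub-channel solve on one assembly's local domain."""
            return {"max_fuel_temp": 600.0 + rank}      # dummy local result

        local = solve_assembly(my_deck)

        # Inter-assembly coupling requires communication; a simple global reduction stands in
        # here for the parallel pressure-matrix solve that the real code delegates to PETSc.
        global_max_T = comm.allreduce(local["max_fuel_temp"], op=MPI.MAX)

        if rank == 0:
            print(f"{nprocs} assemblies, core-wide peak fuel temperature {global_max_T:.1f} K")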

  12. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity

  13. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining the transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. Initially the thermometers are at ambient temperature; they are then suddenly immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer that is widely available commercially. The temperature indicated by this thermometer was corrected by treating it as a first- or second-order inertia device. A new thermometer design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheathed thermocouple located at its center. The fluid temperature was determined from measurements taken at the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel were also carried out with the same thermometers. The proposed measurement technique provides more accurate results than industrial thermometers combined with a simple first- or second-order inertia correction. Comparison of the results demonstrates that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of fast-changing fluid temperatures is possible thanks to the low-inertia thermometer and the fast space marching method used to solve the inverse heat conduction problem.
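
    A small sketch of the simple correction mentioned above, assuming a first-order inertia model with a hypothetical time constant: if the thermometer obeys tau*dT/dt + T = T_fluid, the fluid temperature can be recovered from the indicated temperature and its time derivative.

        import numpy as np

        tau = 12.0                                  # s, assumed thermometer time constant
        t = np.linspace(0.0, 120.0, 601)            # s
        T_fluid_true, T0 = 100.0, 20.0              # boiling water, ambient start

        # Synthetic "indicated" temperature of an ideal first-order sensor plunged into the water.
        T_ind = T_fluid_true + (T0 - T_fluid_true) * np.exp(-t / tau)

        # Reconstruct the fluid temperature from the indication and its time derivative.
        dTdt = np.gradient(T_ind, t)
        T_rec = T_ind + tau * dTdt

        print("indicated at t=10 s :", round(np.interp(10.0, t, T_ind), 2), "degC")
        print("corrected at t=10 s :", round(np.interp(10.0, t, T_rec), 2), "degC")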

  14. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
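
    The following sketch illustrates one way the median function simplifies coding a monotonicity constraint, using the identity minmod(a, b) = median(0, a, b) in a piecewise-linear reconstruction for linear advection; it is a simplified illustration (scalar equation, first-order in time), not Huynh's full Euler scheme.

        import numpy as np

        def median3(a, b, c):
            """Elementwise median of three arrays."""
            return np.maximum(np.minimum(a, b), np.minimum(np.maximum(a, b), c))

        def advect(u, a, dx, dt, nsteps):
            """u_t + a u_x = 0 with a > 0 on a periodic grid: MUSCL reconstruction, upwind flux."""
            nu = a * dt / dx                 # CFL number; keep below ~2/3 for monotonicity here
            for _ in range(nsteps):
                dl = u - np.roll(u, 1)       # backward difference
                dr = np.roll(u, -1) - u      # forward difference
                s = median3(np.zeros_like(u), dl, dr)    # limited slope = minmod(dl, dr)
                u_face = u + 0.5 * s         # value at the right cell face (upwind side for a > 0)
                u = u - nu * (u_face - np.roll(u_face, 1))
            return u

        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # square pulse
        u1 = advect(u0.copy(), a=1.0, dx=x[1] - x[0], dt=0.4 * (x[1] - x[0]), nsteps=250)
        print("min/max after advection:", u1.min(), u1.max())  # stays within [0, 1]: no new extrema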

  15. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting insight into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  16. Are Kohn-Sham conductances accurate?

    PubMed

    Mera, H; Niquet, Y M

    2010-11-19

    We use Fermi-liquid relations to address the accuracy of conductances calculated from the single-particle states of exact Kohn-Sham (KS) density functional theory. We demonstrate a systematic failure of this procedure for the calculation of the conductance, and show how it originates from the lack of renormalization in the KS spectral function. In certain limits this failure can lead to a large overestimation of the true conductance. We also show, however, that the KS conductances can be accurate for single-channel molecular junctions and systems where direct Coulomb interactions are strongly dominant. PMID:21231333

  17. Accurate density functional thermochemistry for larger molecules.

    SciTech Connect

    Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.

    1997-06-20

    Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔH° (298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal/mol).

  18. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material. PMID:11366835

  19. Multi-Physics Feedback Simulations with Realistic Initial Conditions of the Formation of Star Clusters: From Large Scale Magnetized Clouds to Turbulent Clumps to Cores to Stars

    NASA Astrophysics Data System (ADS)

    Klein, R. I.; Li, P.; McKee, C. F.

    2015-10-01

    Multi-physics zoom-in adaptive mesh refinement simulations with feedback and realistic initial conditions, starting from large scale turbulent molecular clouds through the formation of clumps and cores to the formation of stellar clusters, are presented. I give a summary of results at the different scales undergoing gravitational collapse from cloud to core to cluster formation. Detailed comparisons with observations are made at each stage of the simulations. In particular, properties of the magnetized clumps are compared with recent observations of Crutcher et al. 2010 and Crutcher 2012 and the magnetic field orientation in cloud clumps relative to the global mean field of the inter-cloud medium (Li et al. 2009). The Initial Mass Function (IMF) obtained is compared with the Chabrier IMF and the protostellar mass function of the cluster is compared with different theories.

  20. Development of a multiphysics analysis system for sodium-water reaction phenomena in steam generators of sodium-cooled fast reactors

    NASA Astrophysics Data System (ADS)

    Uchibori, Akihiro; Kurihara, Akikazu; Ohshima, Hiroyuki

    2015-12-01

    A multiphysics analysis system for sodium-water reaction phenomena in a steam generator of sodium-cooled fast reactors was newly developed. The analysis system consists of the mechanistic numerical analysis codes, SERAPHIM, TACT, and RELAP5. The SERAPHIM code calculates the multicomponent multiphase flow and sodium-water chemical reaction caused by discharging of pressurized water vapor. Applicability of the SERAPHIM code was confirmed through the analyses of the experiment on water vapor discharging in liquid sodium. The TACT code was developed to calculate heat transfer from the reacting jet to the adjacent tube and to predict the tube failure occurrence. The numerical models integrated into the TACT code were verified through some related experiments. The RELAP5 code evaluates thermal hydraulic behavior of water inside the tube. The original heat transfer correlations were corrected for the tube rapidly heated by the reacting jet. The developed system enables evaluation of the wastage environment and the possibility of the failure propagation.

  1. A high-fidelity multiphysics model for the new solid oxide iron-air redox battery. part I: Bridging mass transport and charge transfer with redox cycle kinetics

    NASA Astrophysics Data System (ADS)

    Jin, Xinfang; Zhao, Xuan; Huang, Kevin

    2015-04-01

    A high-fidelity two-dimensional axial symmetrical multi-physics model is described in this paper as an effort to simulate the cycle performance of a recently discovered solid oxide metal-air redox battery (SOMARB). The model collectively considers mass transport, charge transfer and chemical redox cycle kinetics occurring across the components of the battery, and is validated by experimental data obtained from independent research. In particular, the redox kinetics at the energy storage unit is well represented by Johnson-Mehl-Avrami-Kolmogorov (JMAK) and Shrinking Core models. The results explicitly show that the reduction of Fe3O4 during the charging cycle limits the overall performance. Distributions of electrode potential, overpotential, Nernst potential, and H2/H2O-concentration across various components of the battery are also systematically investigated.
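
    For reference, the textbook forms of the two kinetic models named above, evaluated with illustrative parameters rather than the values fitted in the paper:

        import numpy as np

        def jmak_conversion(t, k, n):
            """JMAK nucleation-and-growth kinetics: X(t) = 1 - exp(-(k t)^n)."""
            return 1.0 - np.exp(-(k * t) ** n)

        def shrinking_core_conversion(t, tau):
            """Reaction-controlled shrinking core: t/tau = 1 - (1 - X)^(1/3), inverted for X."""
            frac = np.clip(1.0 - t / tau, 0.0, 1.0)
            return 1.0 - frac ** 3

        t = np.linspace(0.0, 3600.0, 7)       # s, an assumed charging half-cycle duration
        print("JMAK          :", np.round(jmak_conversion(t, k=1.0e-3, n=2.0), 3))
        print("shrinking core:", np.round(shrinking_core_conversion(t, tau=3000.0), 3))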

  2. Integration of the DRAGON5/DONJON5 codes in the SALOME platform for performing multi-physics calculations in nuclear engineering

    NASA Astrophysics Data System (ADS)

    Hébert, Alain

    2014-06-01

    We are presenting the computer science techniques involved in the integration of codes DRAGON5 and DONJON5 in the SALOME platform. This integration brings new capabilities in designing multi-physics computational schemes, with the possibility to couple our reactor physics codes with thermal-hydraulics or thermo-mechanics codes from other organizations. A demonstration is presented where two code components are coupled using the YACS module of SALOME, based on the CORBA protocol. The first component is a full-core 3D steady-state neutronic calculation in a PWR performed using DONJON5. The second component implements a set of 1D thermal-hydraulics calculations, each performed over a single assembly.

  3. Multi-physics simulation and fabrication of a compact 128 × 128 micro-electro-mechanical system Fabry-Perot cavity tunable filter array for infrared hyperspectral imager.

    PubMed

    Meng, Qinghua; Chen, Sihai; Lai, Jianjun; Huang, Ying; Sun, Zhenjun

    2015-08-01

    This paper demonstrates the design and fabrication of a 128×128 micro-electro-mechanical systems Fabry-Perot (F-P) cavity filter array for application in a hyperspectral imager. To obtain better mechanical performance of the filters, the F-P cavity supporting structures are analyzed by multi-physics finite element modeling. The simulation results indicate that the Z-arm is the key component of the structure. An F-P cavity array with Z-arm structures was also fabricated. The experimental results show excellent parallelism of the bridge deck, in agreement with the simulation results. We conclude that Z-arm supporting structures are important for the hyperspectral imaging system, as they achieve a large tuning range and high fill factor compared to straight-arm structures. The filter arrays have the potential to replace the traditional dispersive element. PMID:26368101

  4. Development of a multiphysics analysis system for sodium-water reaction phenomena in steam generators of sodium-cooled fast reactors

    SciTech Connect

    Uchibori, Akihiro; Kurihara, Akikazu; Ohshima, Hiroyuki

    2015-12-31

    A multiphysics analysis system for sodium-water reaction phenomena in a steam generator of sodium-cooled fast reactors was newly developed. The analysis system consists of the mechanistic numerical analysis codes, SERAPHIM, TACT, and RELAP5. The SERAPHIM code calculates the multicomponent multiphase flow and sodium-water chemical reaction caused by discharging of pressurized water vapor. Applicability of the SERAPHIM code was confirmed through the analyses of the experiment on water vapor discharging in liquid sodium. The TACT code was developed to calculate heat transfer from the reacting jet to the adjacent tube and to predict the tube failure occurrence. The numerical models integrated into the TACT code were verified through some related experiments. The RELAP5 code evaluates thermal hydraulic behavior of water inside the tube. The original heat transfer correlations were corrected for the tube rapidly heated by the reacting jet. The developed system enables evaluation of the wastage environment and the possibility of the failure propagation.

  5. A high-fidelity multiphysics model for the new solid oxide iron-air redox battery part I: Bridging mass transport and charge transfer with redox cycle kinetics

    SciTech Connect

    Jin, XF; Zhao, X; Huang, K

    2015-04-15

    A high-fidelity two-dimensional axial symmetrical multi-physics model is described in this paper as an effort to simulate the cycle performance of a recently discovered solid oxide metal-air redox battery (SOMARB). The model collectively considers mass transport, charge transfer and chemical redox cycle kinetics occurring across the components of the battery, and is validated by experimental data obtained from independent research. In particular, the redox kinetics at the energy storage unit is well represented by Johnson-Mehl-Avrami-Kolmogorov (JMAK) and Shrinking Core models. The results explicitly show that the reduction of Fe3O4 during the charging cycle limits the overall performance. Distributions of electrode potential, overpotential, Nernst potential, and H2/H2O-concentration across various components of the battery are also systematically investigated. (C) 2015 Elsevier B.V. All rights reserved.

  6. Accurate basis set truncation for wavefunction embedding

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  7. Accurate radiative transfer calculations for layered media.

    PubMed

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700

  8. Fast and accurate propagation of coherent light

    PubMed Central

    Lewis, R. D.; Beylkin, G.; Monzón, L.

    2013-01-01

    We describe a fast algorithm to propagate, for any user-specified accuracy, a time-harmonic electromagnetic field between two parallel planes separated by a linear, isotropic and homogeneous medium. The analytical formulation of this problem (ca 1897) requires the evaluation of the so-called Rayleigh–Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is very large, it can be accurately approximated by asymptotic methods. In the large intermediate region of practical interest, where the oscillatory Rayleigh–Sommerfeld kernel must be applied directly, current numerical methods can be highly inaccurate without indicating this fact to the user. In our approach, for any user-specified accuracy ϵ>0, we approximate the kernel by a short sum of Gaussians with complex-valued exponents, and then efficiently apply the result to the input data using the unequally spaced fast Fourier transform. The resulting algorithm has computational complexity , where we evaluate the solution on an N×N grid of output points given an M×M grid of input samples. Our algorithm maintains its accuracy throughout the computational domain. PMID:24204184

  9. How Accurately can we Calculate Thermal Systems?

    SciTech Connect

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as k-eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.

  10. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  11. Accurate pose estimation for forensic identification

    NASA Astrophysics Data System (ADS)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  12. Accurate determination of characteristic relative permeability curves

    NASA Astrophysics Data System (ADS)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
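
    A small sketch of the standard steady-state interpretation referred to above, with assumed core dimensions and flood conditions rather than the paper's data: at steady state each phase obeys Darcy's law across the core, so kr = q*mu*L/(k*A*dP).

        # All numbers below are hypothetical, in SI units.
        def relative_permeability(q, mu, L, k, A, dP):
            """Effective relative permeability of one phase from a steady-state coreflood."""
            return q * mu * L / (k * A * dP)

        L, A, k = 0.10, 1.963e-3, 100e-15          # m, m^2 (5 cm diameter core), m^2 (about 100 mD)
        dP = 2.0e5                                  # Pa, pressure drop measured at steady state
        q_w, mu_w = 1.0e-7, 1.0e-3                  # m^3/s, Pa*s (brine)
        q_g, mu_g = 2.0e-6, 6.0e-5                  # m^3/s, Pa*s (CO2)

        print("kr_brine =", round(relative_permeability(q_w, mu_w, L, k, A, dP), 3))
        print("kr_CO2   =", round(relative_permeability(q_g, mu_g, L, k, A, dP), 3))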

  13. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  14. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  15. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.

  16. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  17. The thermodynamic cost of accurate sensory adaptation

    NASA Astrophysics Data System (ADS)

    Tu, Yuhai

    2015-03-01

    Living organisms need to obtain and process environment information accurately in order to make decisions critical for their survival. Much progress have been made in identifying key components responsible for various biological functions, however, major challenges remain to understand system-level behaviors from the molecular-level knowledge of biology and to unravel possible physical principles for the underlying biochemical circuits. In this talk, we will present some recent works in understanding the chemical sensory system of E. coli by combining theoretical approaches with quantitative experiments. We focus on addressing the questions on how cells process chemical information and adapt to varying environment, and what are the thermodynamic limits of key regulatory functions, such as adaptation.

  18. Accurate numerical solutions of conservative nonlinear oscillators

    NASA Astrophysics Data System (ADS)

    Khan, Najeeb Alam; Nasir Uddin, Khan; Nadeem Alam, Khan

    2014-12-01

    The objective of this paper is to analyze the vibration of a conservative nonlinear oscillator of the form u'' + λu + u^(2n-1) + (1 + ε²u^(4m))^(1/2) = 0 for arbitrary powers n and m. The method converts the differential equation to sets of algebraic equations that are solved numerically. Results are presented for three different cases: a higher-order Duffing equation, an equation with an irrational restoring force, and a plasma physics equation. The method is found to be valid for any arbitrary order of n and m. Comparisons with results found in the literature show that the method gives accurate results.
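
    A hedged sketch for comparison: a direct numerical integration of the oscillator family as quoted above (illustrative parameters λ = 1, n = m = 1, ε = 0.1), of the kind such semi-analytic solutions are typically checked against; it is not the authors' algebraic method.

        import numpy as np
        from scipy.integrate import solve_ivp

        lam, n, m, eps = 1.0, 1, 1, 0.1      # assumed parameter values for illustration

        def rhs(t, y):
            u, v = y
            acc = -(lam * u + u ** (2 * n - 1) + np.sqrt(1.0 + eps ** 2 * u ** (4 * m)))
            return [v, acc]

        sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], rtol=1e-10, atol=1e-12, dense_output=True)
        t = np.linspace(0.0, 20.0, 9)
        print(np.round(sol.sol(t)[0], 6))    # displacement u(t) at sample times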

  19. Accurate Telescope Mount Positioning with MEMS Accelerometers

    NASA Astrophysics Data System (ADS)

    Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.

    2014-08-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate, and stateless positioning of telescope mounts. This provides a completely independent method from other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the subarcminute range, which is considerably smaller than the field-of-view of conventional imaging telescope systems. Here we present how this subarcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented in order to become part of a telescope control system.
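
    A minimal sketch of the underlying idea, not the authors' calibration pipeline: a 3-axis MEMS accelerometer fixed to the telescope tube senses the static gravity vector, so the tube's altitude follows from the measured components; averaging many noisy samples is what pushes such estimates toward the subarcminute regime.

        import numpy as np

        def altitude_from_accel(ax, ay, az):
            """Tilt of the sensor's x axis above the horizontal plane, in degrees."""
            return np.degrees(np.arctan2(ax, np.hypot(ay, az)))

        # Simulated single reading at a true altitude of 30 deg, with 0.5 mg noise per axis.
        rng = np.random.default_rng(1)
        alt_true = np.radians(30.0)
        g = np.array([np.sin(alt_true), 0.0, np.cos(alt_true)]) + rng.normal(0.0, 0.0005, 3)
        print("recovered altitude: %.3f deg" % altitude_from_accel(*g))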

  20. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception. PMID:24549293

  1. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  2. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  3. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  4. The importance of accurate atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Payne, Dylan; Schroeder, John; Liang, Pang

    2014-11-01

    This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example will demonstrate how real conditions for several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Sciences, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970's. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate the atmospheric transmission and radiance. Frequently default conditions are used, which can produce errors of as much as 75% in these values. This can have a significant impact on remote sensing applications.

  5. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities. PMID:12747164

  6. Accurate Weather Forecasting for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/ rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
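
    A schematic of the final radiative-transfer step described above, with made-up layer values in place of the Liebe-model output: summing the layer opacities gives the zenith opacity, and weighting each layer's emission by the transmission of the air below it gives the atmosphere's contribution to the system temperature.

        import numpy as np

        alpha = np.array([0.012, 0.008, 0.004, 0.002, 0.001])   # nepers/km, absorption per layer (illustrative)
        dz    = np.array([1.0,   1.5,   2.0,   3.0,   5.0])     # km, layer thicknesses
        T     = np.array([283.0, 275.0, 265.0, 250.0, 225.0])   # K, layer mean temperatures

        tau_layer = alpha * dz                                   # opacity of each layer
        tau_below = np.concatenate(([0.0], np.cumsum(tau_layer)[:-1]))   # opacity between layer and telescope

        tau_zenith = tau_layer.sum()
        T_atm = np.sum(T * (1.0 - np.exp(-tau_layer)) * np.exp(-tau_below))

        print("zenith opacity:", round(tau_zenith, 4), "nepers")
        print("atmospheric contribution to Tsys:", round(T_atm, 1), "K")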

  7. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information can bring negative effects, especially in the delayed-information case. Travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, which decreases the capacity, increases oscillations, and drives the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes have equal probability of being chosen. The bounded rationality is helpful to improve the efficiency in terms of capacity, oscillation, and the gap from the system equilibrium.
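
    A toy day-to-day sketch of the mechanism described above (all parameters illustrative): travellers see route travel times only after a reporting delay, and a bounded-rationality threshold BR leaves them indifferent when the reported difference is smaller than BR, which keeps the flows near the equilibrium split instead of oscillating.

        import numpy as np

        rng = np.random.default_rng(0)
        N, days, delay, BR = 1000, 60, 2, 3.0        # travellers, steps, information delay, threshold (min)
        t0, slope = 15.0, 0.02                       # free-flow time (min) and congestion slope per vehicle

        def travel_time(flow):
            return t0 + slope * flow

        history = [(travel_time(N / 2), travel_time(N / 2))] * delay   # seed the delayed feed
        flows = []
        for day in range(days):
            rep1, rep2 = history[-delay]             # travellers only see delayed information
            if abs(rep1 - rep2) < BR:
                p1 = 0.5                             # indifferent: split evenly on average
            else:
                p1 = 0.9 if rep1 < rep2 else 0.1     # most travellers chase the faster report
            f1 = rng.binomial(N, p1)
            flows.append(f1)
            history.append((travel_time(f1), travel_time(N - f1)))

        print("mean flow on route 1:", np.mean(flows), " std (oscillation):", round(np.std(flows), 1))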

  8. Higher order accurate partial implicitization: An unconditionally stable fourth-order-accurate explicit numerical technique

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1975-01-01

    The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. The Von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.

  9. Nonstationary plasma-thermo-fluid dynamics and transition in processes of deep penetration laser beam-matter interaction

    NASA Astrophysics Data System (ADS)

    Golubev, Vladimir S.; Banishev, Alexander F.; Azharonok, V. V.; Zabelin, Alexandre M.

    1994-09-01

    A qualitative analysis of the role of some hydrodynamic flows and instabilities in the process of deep penetration interaction between a laser beam and a metal sample is presented. The forces of vapor pressure, melt surface tension and thermocapillary forces can determine a number of oscillatory and nonstationary phenomena in the keyhole and weld pool. The dynamics of keyhole formation in metal plates has been studied under the effect of a laser beam pulse (λ = 1.06 μm). Velocities of the keyhole bottom motion have been determined at laser power densities of 0.5 × 10^5 - 10^6 W/cm^2. An oscillatory regime of plate break-down has been found. Small-dimensional structures with a period of order λ were found on the frozen cavity walls, which, in our opinion, can contribute significantly to laser beam absorption. A new form of periodic structure on the frozen pattern, a helix-shaped modulation of the keyhole walls and bottom relief, has been revealed. Temperature oscillations related to capillary oscillations in the melt layer were discovered in the cavity. Interaction of a CW CO2 laser beam with matter during beam penetration into a moving metal sample has also been studied. The pulsed and thermodynamic parameters of the surface plasma were investigated by optical and spectroscopic methods. The frequencies of plasma jet pulsations (in the 10 - 10^5 Hz range) are related to possible melt surface instabilities of the keyhole.

  10. Thermo-fluid-dynamics of turbulent boundary layer over a moving continuous flat sheet in a parallel free stream

    NASA Astrophysics Data System (ADS)

    Afzal, Bushra; Noor Afzal Team; Bushra Afzal Team

    2014-11-01

    The momentum and thermal turbulent boundary layers over a continuously moving sheet subjected to a free stream have been analyzed with a two-layer (inner wall and outer wake) theory at large Reynolds number. The present work is based on the open Reynolds equations of momentum and heat transfer without any closure model (such as eddy viscosity or mixing length). The matching of the inner and outer layers has been carried out using the Izakson-Millikan-Kolmogorov hypothesis. The matching for velocity and temperature profiles yields logarithmic laws and power laws in the overlap region of the inner and outer layers, along with friction factor and heat transfer laws. A uniformly valid solution for velocity, Reynolds shear stress, temperature and thermal Reynolds heat flux has been proposed by introducing outer wake functions for the momentum and thermal boundary layers. Comparisons with experimental data for the velocity profile, temperature profile, skin friction and heat transfer are presented. In the outer non-linear layers, the lowest order momentum and thermal boundary layer equations have also been analyzed using an eddy viscosity closure model, and the results are compared with experimental data. Retired Professor, Embassy Hotel, Rasal Ganj, Aligarh 202001 India.

  11. Accurate Fission Data for Nuclear Safety

    NASA Astrophysics Data System (ADS)

    Solders, A.; Gorelov, D.; Jokinen, A.; Kolhinen, V. S.; Lantz, M.; Mattera, A.; Penttilä, H.; Pomp, S.; Rakopoulos, V.; Rinta-Antila, S.

    2014-05-01

    The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyväskylä. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power of the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies 1 - 30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons for benchmarking and to study the energy dependence of fission yields. The scientific program is extensive and is planned to start in 2013 with a measurement of isomeric yield ratios of proton induced fission in uranium. This will be followed by studies of independent yields of thermal and fast neutron induced fission of various actinides.

  12. Fast and Provably Accurate Bilateral Filtering

    NASA Astrophysics Data System (ADS)

    Chaudhury, Kunal N.; Dabhade, Swapnil D.

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with state-of-the-art methods in terms of speed and accuracy.
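
    A hedged illustration of the general principle (replacing the per-pixel range sum by a handful of spatial filterings); this particular sketch uses range quantization with linear interpolation between levels rather than the authors' expansion of the Gaussian range kernel.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def fast_bilateral(img, sigma_s, sigma_r, n_bins=8):
            """Approximate Gaussian-range bilateral filter using n_bins spatial filterings."""
            levels = np.linspace(img.min(), img.max(), n_bins)
            num = np.zeros((n_bins,) + img.shape)
            den = np.zeros_like(num)
            for k, q in enumerate(levels):
                w = np.exp(-((img - q) ** 2) / (2.0 * sigma_r ** 2))   # range weights at level q
                num[k] = gaussian_filter(w * img, sigma_s)              # spatial filtering only
                den[k] = gaussian_filter(w, sigma_s)
            pb = num / np.maximum(den, 1e-12)                           # bilateral result at each level
            # Linearly interpolate between the two nearest levels at every pixel.
            idx = np.clip(np.searchsorted(levels, img) - 1, 0, n_bins - 2)
            a = (img - levels[idx]) / (levels[idx + 1] - levels[idx])
            rows, cols = np.indices(img.shape)
            return (1 - a) * pb[idx, rows, cols] + a * pb[idx + 1, rows, cols]

        img = np.random.default_rng(0).random((64, 64))
        out = fast_bilateral(img, sigma_s=3.0, sigma_r=0.1)
        print(out.shape, float(out.min()), float(out.max()))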

  13. Accurate Prediction of Docked Protein Structure Similarity.

    PubMed

    Akbal-Delibas, Bahar; Pomplun, Marc; Haspel, Nurit

    2015-09-01

    One of the major challenges for protein-protein docking methods is to accurately discriminate native-like structures. The protein docking community agrees on the existence of a relationship between various favorable intermolecular interactions (e.g. Van der Waals, electrostatic, desolvation forces, etc.) and the similarity of a conformation to its native structure. Different docking algorithms often formulate this relationship as a weighted sum of selected terms and calibrate their weights against specific training data to evaluate and rank candidate structures. However, the exact form of this relationship is unknown and the accuracy of such methods is impaired by the pervasiveness of false positives. Unlike conventional scoring functions, we propose a novel machine learning approach that not only ranks the candidate structures relative to each other but also indicates how similar each candidate is to the native conformation. We trained the AccuRMSD neural network with an extensive dataset using the back-propagation learning algorithm. Our method predicts RMSDs of unbound docked complexes to within a 0.4 Å error margin. PMID:26335807

  14. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs, and reveals a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085

  15. Fast and Provably Accurate Bilateral Filtering.

    PubMed

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S . The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
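
    The decomposition idea can be sketched in a few lines. The snippet below is not the shiftable approximation from the paper; it uses the older Taylor-series factorization of the Gaussian range kernel, in which every retained term costs one spatial filtering of the whole image (each such filtering can itself be made O(1) per pixel with box or recursive Gaussian filters). The parameter names and values (sigma_s, sigma_r, the order N) are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fast_bilateral_taylor(img, sigma_s=3.0, sigma_r=40.0, N=30):
    """Approximate a bilateral filter with Gaussian spatial and range kernels.

    The range kernel is factored as exp(-(p-q)^2/2s^2) =
    exp(-p^2/2s^2) exp(-q^2/2s^2) exp(pq/s^2) and the last factor is
    Taylor-expanded; the truncation order N must grow with (max|f|/sigma_r)^2.
    """
    f = img.astype(np.float64)
    c = 0.5 * (f.max() + f.min())      # centring keeps the expansion well conditioned
    f = f - c
    a = f / sigma_r
    ea = np.exp(-0.5 * a * a)          # exp(-f^2 / (2 sigma_r^2))
    num = np.zeros_like(f)
    den = np.zeros_like(f)
    an = np.ones_like(f)               # a^n, built up term by term
    coef = 1.0                         # 1/n!
    for n in range(N + 1):
        if n > 0:
            coef /= n
            an *= a
        num += coef * an * gaussian_filter(ea * an * f, sigma_s)
        den += coef * an * gaussian_filter(ea * an, sigma_s)
    return num / np.maximum(den, 1e-12) + c

if __name__ == "__main__":
    # Smooth a noisy step edge while preserving it.
    x = np.zeros((64, 64)); x[:, 32:] = 100.0
    noisy = x + 5.0 * np.random.default_rng(0).standard_normal(x.shape)
    print(np.abs(fast_bilateral_taylor(noisy) - x).mean())
```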

  16. How Accurate are SuperCOSMOS Positions?

    NASA Astrophysics Data System (ADS)

    Schaefer, Adam; Hunstead, Richard; Johnston, Helen

    2014-02-01

    Optical positions from the SuperCOSMOS Sky Survey have been compared in detail with accurate radio positions that define the second realisation of the International Celestial Reference Frame (ICRF2). The comparison was limited to the IIIaJ plates from the UK/AAO and Oschin (Palomar) Schmidt telescopes. A total of 1 373 ICRF2 sources was used, with the sample restricted to stellar objects brighter than BJ = 20 and Galactic latitudes |b| > 10°. Position differences showed an rms scatter of 0.16 arcsec in right ascension and declination. While overall systematic offsets were < 0.1 arcsec in each hemisphere, both the systematics and scatter were greater in the north.

  17. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  18. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  19. MEMS accelerometers in accurate mount positioning systems

    NASA Astrophysics Data System (ADS)

    Mészáros, László; Pál, András.; Jaskó, Attila

    2014-07-01

    In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback from the mount position is provided by electronic, optical or magneto-mechanical systems or via real-time astrometric solution based on the acquired images. Hence, MEMS-based systems are completely independent from these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, which is well below the field-of-view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Basically, these sensors yield raw output within an accuracy of a few degrees. We show what kind of calibration procedures could exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the previously mentioned accuracy level. We also demonstrate how our implementation can be inserted into a telescope control system. Although this attainable precision is less than both the resolution of telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
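
    The spherical constraint mentioned above (a static accelerometer should read an acceleration of magnitude g in any orientation) already supports a simple calibration. The sketch below fits per-axis offsets and gains with an axis-aligned ellipsoid fit; it is a hedged illustration of the idea, not the paper's procedure, and it ignores cross-axis misalignment. All numbers are synthetic.

```python
import numpy as np

def calibrate_accelerometer(readings, g=9.81):
    """Fit per-axis offsets and gains from static readings in many orientations,
    using the constraint that the true acceleration magnitude is always g."""
    r = np.asarray(readings, dtype=float)
    x, y, z = r[:, 0], r[:, 1], r[:, 2]
    # Linear least squares for A x^2 + B y^2 + C z^2 + D x + E y + F z = 1
    M = np.column_stack([x * x, y * y, z * z, x, y, z])
    p, *_ = np.linalg.lstsq(M, np.ones(len(r)), rcond=None)
    A, B, C, D, E, F = p
    bias = np.array([-D / (2 * A), -E / (2 * B), -F / (2 * C)])
    R = 1 + D**2 / (4 * A) + E**2 / (4 * B) + F**2 / (4 * C)
    gain = g * np.sqrt(np.array([A, B, C]) / R)
    return bias, gain   # calibrated acceleration: a = gain * (raw - bias)

if __name__ == "__main__":
    # Synthetic check: recover a known bias/gain from noisy static orientations.
    rng = np.random.default_rng(1)
    true_bias, true_gain = np.array([0.12, -0.30, 0.05]), np.array([1.02, 0.97, 1.05])
    dirs = rng.standard_normal((200, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    raw = 9.81 * dirs / true_gain + true_bias + 0.01 * rng.standard_normal((200, 3))
    print(calibrate_accelerometer(raw))
```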

  20. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

    The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes in the 500 km range. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; and finally the performance of the sensor is reported under laboratory conditions, with the sensor installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.

  1. Accurate lineshape spectroscopy and the Boltzmann constant.

    PubMed

    Truong, G-W; Anstie, J D; May, E F; Stace, T M; Luiten, A N

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085

  2. Fast and Accurate Exhaled Breath Ammonia Measurement

    PubMed Central

    Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.

    2014-01-01

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides a rationale for future innovations. PMID:24962141

  3. Accurate orbit propagation with planetary close encounters

    NASA Astrophysics Data System (ADS)

    Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca

    2015-08-01

    We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging, both from the point of view of the dynamical stability of the formulation and of the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, formulation and initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee the numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and of the propagator represented by a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).

  4. How flatbed scanners upset accurate film dosimetry.

    PubMed

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and of the dose delivered to the film. PMID:26689962

  5. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed (the pseudo-Thellier protocol), which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  6. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis upon which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates, resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.

  7. Towards Accurate Application Characterization for Exascale (APEX)

    SciTech Connect

    Hammond, Simon David

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources including ASCI Red and RedStorm were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.

  8. How flatbed scanners upset accurate film dosimetry

    NASA Astrophysics Data System (ADS)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and of the dose delivered to the film.

  9. Bayesian estimation of magma supply, storage, and eruption rates using a multiphysical volcano model: Kīlauea Volcano, 2000–2012

    USGS Publications Warehouse

    Anderson, Kyle R.; Poland, Michael

    2016-01-01

    Estimating rates of magma supply to the world's volcanoes remains one of the most fundamental aims of volcanology. Yet, supply rates can be difficult to estimate even at well-monitored volcanoes, in part because observations are noisy and are usually considered independently rather than as part of a holistic system. In this work we demonstrate a technique for probabilistically estimating time-variable rates of magma supply to a volcano through probabilistic constraint on storage and eruption rates. This approach utilizes Bayesian joint inversion of diverse datasets using predictions from a multiphysical volcano model, and independent prior information derived from previous geophysical, geochemical, and geological studies. The solution to the inverse problem takes the form of a probability density function which takes into account uncertainties in observations and prior information, and which we sample using a Markov chain Monte Carlo algorithm. Applying the technique to Kīlauea Volcano, we develop a model which relates magma flow rates with deformation of the volcano's surface, sulfur dioxide emission rates, lava flow field volumes, and composition of the volcano's basaltic magma. This model accounts for effects and processes mostly neglected in previous supply rate estimates at Kīlauea, including magma compressibility, loss of sulfur to the hydrothermal system, and potential magma storage in the volcano's deep rift zones. We jointly invert data and prior information to estimate rates of supply, storage, and eruption during three recent quasi-steady-state periods at the volcano. Results shed new light on the time-variability of magma supply to Kīlauea, which we find to have increased by 35–100% between 2001 and 2006 (from 0.11–0.17 to 0.18–0.28 km3/yr), before subsequently decreasing to 0.08–0.12 km3/yr by 2012. Changes in supply rate directly impact hazard at the volcano, and were largely responsible for an increase in eruption rate of 60–150% between

  10. Bayesian estimation of magma supply, storage, and eruption rates using a multiphysical volcano model: Kīlauea Volcano, 2000-2012

    NASA Astrophysics Data System (ADS)

    Anderson, Kyle R.; Poland, Michael P.

    2016-08-01

    Estimating rates of magma supply to the world's volcanoes remains one of the most fundamental aims of volcanology. Yet, supply rates can be difficult to estimate even at well-monitored volcanoes, in part because observations are noisy and are usually considered independently rather than as part of a holistic system. In this work we demonstrate a technique for probabilistically estimating time-variable rates of magma supply to a volcano through probabilistic constraint on storage and eruption rates. This approach utilizes Bayesian joint inversion of diverse datasets using predictions from a multiphysical volcano model, and independent prior information derived from previous geophysical, geochemical, and geological studies. The solution to the inverse problem takes the form of a probability density function which takes into account uncertainties in observations and prior information, and which we sample using a Markov chain Monte Carlo algorithm. Applying the technique to Kīlauea Volcano, we develop a model which relates magma flow rates with deformation of the volcano's surface, sulfur dioxide emission rates, lava flow field volumes, and composition of the volcano's basaltic magma. This model accounts for effects and processes mostly neglected in previous supply rate estimates at Kīlauea, including magma compressibility, loss of sulfur to the hydrothermal system, and potential magma storage in the volcano's deep rift zones. We jointly invert data and prior information to estimate rates of supply, storage, and eruption during three recent quasi-steady-state periods at the volcano. Results shed new light on the time-variability of magma supply to Kīlauea, which we find to have increased by 35-100% between 2001 and 2006 (from 0.11-0.17 to 0.18-0.28 km3/yr), before subsequently decreasing to 0.08-0.12 km3/yr by 2012. Changes in supply rate directly impact hazard at the volcano, and were largely responsible for an increase in eruption rate of 60-150% between 2001 and
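
    The sampling machinery can be illustrated with a deliberately tiny stand-in for the multiphysical model: magma supply Q is split into a stored fraction f and an erupted fraction (1 - f), noisy rate "observations" constrain the posterior of (Q, f), and a random-walk Metropolis sampler draws from it. The two-parameter model and all numbers below are assumptions for the sketch, not the authors' model or data.

```python
import numpy as np

def log_posterior(theta, obs_store, obs_erupt, sig_store, sig_erupt):
    """Gaussian likelihoods for storage and eruption rates, flat priors with bounds."""
    Q, f = theta
    if Q <= 0.0 or not 0.0 < f < 1.0:
        return -np.inf
    return (-0.5 * ((f * Q - obs_store) / sig_store) ** 2
            - 0.5 * (((1 - f) * Q - obs_erupt) / sig_erupt) ** 2)

def metropolis(logp, theta0, step, n_samples, rng):
    """Random-walk Metropolis sampling of a log-posterior."""
    theta, lp = np.asarray(theta0, float), logp(theta0)
    step = np.asarray(step, float)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = logp(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logp = lambda th: log_posterior(th, obs_store=0.05, obs_erupt=0.13,
                                    sig_store=0.02, sig_erupt=0.03)    # km^3/yr
    chain = metropolis(logp, theta0=(0.2, 0.3), step=(0.02, 0.05),
                       n_samples=20000, rng=rng)[5000:]                # drop burn-in
    print("supply rate Q: %.3f +/- %.3f km^3/yr" % (chain[:, 0].mean(), chain[:, 0].std()))
```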

  11. Accurate theoretical chemistry with coupled pair models.

    PubMed

    Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan

    2009-05-19

    Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many particle Schrodinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found-even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now

  12. An overset grid method for integration of fully 3D fluid dynamics and geophysics fluid dynamics models to simulate multiphysics coastal ocean flows

    NASA Astrophysics Data System (ADS)

    Tang, H. S.; Qu, K.; Wu, X. G.

    2014-09-01

    It is now becoming important to develop our capabilities to simulate coastal ocean flows involved with distinct physical phenomena occurring at a vast range of spatial and temporal scales. This paper presents a hybrid modeling system for such simulation. The system consists of a fully three-dimensional (3D) fluid dynamics model and a geophysical fluid dynamics model, which couple with each other in a two-way fashion and march in time simultaneously. Particularly, in the hybrid system, the solver for incompressible flow on overset meshes (SIFOM) resolves fully 3D small-scale local flow phenomena, while the unstructured grid finite volume coastal ocean model (FVCOM) captures large-scale background flows. The integration of the two models is realized via domain decomposition implemented with an overset grid method. Numerical experiments on performance of the system in resolving flow patterns and solution convergence rate show that the SIFOM-FVCOM system works as intended, and its solutions compare reasonably with data obtained with measurements and other computational approaches. Its unparalleled capabilities to predict multiphysics and multiscale phenomena with high fidelity are demonstrated by three typical applications that are beyond the reach of other currently existing models. It is anticipated that the SIFOM-FVCOM system will serve as a new platform to study many emerging coastal ocean problems.

  13. A multiscale and multiphysics model of strain development in a 1.5 T MRI magnet designed with 36 filament composite MgB2 superconducting wire

    NASA Astrophysics Data System (ADS)

    Amin, Abdullah Al; Baig, Tanvir; Deissler, Robert J.; Yao, Zhen; Tomsic, Michael; Doll, David; Akkus, Ozan; Martens, Michael

    2016-05-01

    Work on high-temperature superconductors such as MgB2 focuses on conduction cooling of electromagnets, which eliminates the use of liquid helium. With the recent advances in the strain sustainability of MgB2, a full-body 1.5 T conduction-cooled magnetic resonance imaging (MRI) magnet shows promise. In this article, a 36 filament MgB2 superconducting wire is considered for a 1.5 T full-body MRI system and is analyzed in terms of strain development. In order to facilitate analysis, this composite wire is homogenized and the orthotropic wire material properties are employed to solve for strain development using a 2D-axisymmetric finite element analysis (FEA) model of the entire set of MRI magnet coils. The entire multiscale multiphysics analysis is considered from the wire to the magnet bundles, addressing winding, cooling, and electromagnetic excitation. The FEA solution is verified with proven analytical equations and acceptable agreement is reported. The results show a maximum mechanical strain development of 0.06%, which is within the failure criteria of -0.6% to 0.4% (-0.3% to 0.2% for design) for the 36 filament MgB2 wire. Therefore, the study indicates the safe operation of the conduction-cooled MgB2-based MRI magnet as far as strain development is concerned.

  14. 78 FR 34604 - Submitting Complete and Accurate Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-10

    ... COMMISSION 10 CFR Part 50 Submitting Complete and Accurate Information AGENCY: Nuclear Regulatory Commission... accurate information as would a licensee or an applicant for a license.'' DATES: Submit comments by August... may submit comments by any of the following methods (unless this document describes a different...

  15. Tube dimpling tool assures accurate dip-brazed joints

    NASA Technical Reports Server (NTRS)

    Beuyukian, C. S.; Heisman, R. M.

    1968-01-01

    Portable, hand-held dimpling tool assures accurate brazed joints between tubes of different diameters. Prior to brazing, the tool performs precise dimpling and nipple forming and also provides control and accurate measuring of the height of nipples and depth of dimples so formed.

  16. 31 CFR 205.24 - How are accurate estimates maintained?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false How are accurate estimates maintained... Treasury-State Agreement § 205.24 How are accurate estimates maintained? (a) If a State has knowledge that an estimate does not reasonably correspond to the State's cash needs for a Federal assistance...

  17. It's the parameters, stupid! Moving beyond multi-model and multi-physics approaches to characterize and reduce predictive uncertainty in process-based hydrological models

    NASA Astrophysics Data System (ADS)

    Clark, Martyn; Samaniego, Luis; Freer, Jim

    2014-05-01

    Multi-model and multi-physics approaches are a popular tool in environmental modelling, with many studies focusing on optimally combining output from multiple model simulations to reduce predictive errors and better characterize predictive uncertainty. However, a careful and systematic analysis of different hydrological models reveals that individual models are simply small permutations of a master modeling template, and inter-model differences are overwhelmed by uncertainty in the choice of the parameter values in the model equations. Furthermore, inter-model differences do not explicitly represent the uncertainty in modeling a given process, leading to many situations where different models provide the wrong results for the same reasons. In other cases, the available morphological data does not support the very fine spatial discretization of the landscape that typifies many modern applications of process-based models. To make the uncertainty characterization problem worse, the uncertain parameter values in process-based models are often fixed (hard-coded), and the models lack the agility necessary to represent the tremendous heterogeneity in natural systems. This presentation summarizes results from a systematic analysis of uncertainty in process-based hydrological models, where we explicitly analyze the myriad of subjective decisions made throughout both the model development and parameter estimation process. Results show that much of the uncertainty is aleatory in nature - given a "complete" representation of dominant hydrologic processes, uncertainty in process parameterizations can be represented using an ensemble of model parameters. Epistemic uncertainty associated with process interactions and scaling behavior is still important, and these uncertainties can be represented using an ensemble of different spatial configurations. Finally, uncertainty in forcing data can be represented using ensemble methods for spatial meteorological analysis. Our systematic

  18. Accurate calculation of diffraction-limited encircled and ensquared energy.

    PubMed

    Andersen, Torben B

    2015-09-01

    Mathematical properties of the encircled and ensquared energy functions for the diffraction-limited point-spread function (PSF) are presented. These include power series and a set of linear differential equations that facilitate the accurate calculation of these functions. Asymptotic expressions are derived that provide very accurate estimates for the relative amount of energy in the diffraction PSF that falls outside a large square or rectangular detector. Tables with accurate values of the encircled and ensquared energy functions are also presented. PMID:26368873
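
    For the ideal circular-aperture (Airy) PSF there is a classical closed form for the encircled energy, EE(v) = 1 - J0(v)^2 - J1(v)^2 with v = pi r / (lambda N), which makes a convenient numerical cross-check. The wavelength and focal ratio below are illustrative, and this is not the series/differential-equation machinery of the paper.

```python
import numpy as np
from scipy.special import j0, j1

def encircled_energy_airy(r, wavelength, f_number):
    """Fraction of Airy-pattern energy inside radius r in the focal plane."""
    v = np.pi * r / (wavelength * f_number)
    return 1.0 - j0(v) ** 2 - j1(v) ** 2

if __name__ == "__main__":
    lam, N = 550e-9, 8.0                      # visible light, f/8 (example values)
    r_first_dark = 1.22 * lam * N             # radius of the first Airy minimum
    print(encircled_energy_airy(r_first_dark, lam, N))   # ~0.838
```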

  19. Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations

    SciTech Connect

    Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim

    2011-03-23

    A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.

  20. Accurate wavelength calibration method for flat-field grating spectrometers.

    PubMed

    Du, Xuewei; Li, Chaoyang; Xu, Zhe; Wang, Qiuping

    2011-09-01

    A portable spectrometer prototype is built to study wavelength calibration for flat-field grating spectrometers. An accurate calibration method called parameter fitting is presented. Both optical and structural parameters of the spectrometer are included in the wavelength calibration model, which accurately describes the relationship between wavelength and pixel position. Along with higher calibration accuracy, the proposed calibration method can provide information about errors in the installation of the optical components, which will be helpful for spectrometer alignment. PMID:21929865
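
    A hedged sketch of what "parameter fitting" can look like (not the authors' exact model): the pixel-to-wavelength map is written in terms of physical parameters of the spectrometer (grating period, incidence angle, detector distance, diffraction angle at the detector centre), and the free parameters are fitted to reference lines of known wavelength. Every instrument number below is assumed purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

GROOVE_D = 1e-3 / 600.0          # grating period, 600 lines/mm (assumed)
PIXEL_W  = 14e-6                 # pixel pitch in metres (assumed)
ALPHA    = np.deg2rad(10.0)      # incidence angle, treated as known (assumed)
P_CENTRE = 1024.0                # detector-centre pixel (assumed)
ORDER    = 1

def wavelength_model(pixel, beta0, L):
    """Grating equation m*lambda = d*(sin alpha + sin beta); beta follows from
    where the ray lands on a flat detector at distance L from the grating."""
    beta = beta0 + np.arctan((pixel - P_CENTRE) * PIXEL_W / L)
    return GROOVE_D / ORDER * (np.sin(ALPHA) + np.sin(beta))

if __name__ == "__main__":
    # Synthetic calibration data: place a few known lines on the detector with a
    # "true" geometry, add centroiding noise, then re-fit the parameters.
    true_beta0, true_L = np.deg2rad(8.0), 0.10
    ref_wl = np.array([404.66, 435.83, 546.07, 576.96, 579.07]) * 1e-9
    grid = np.linspace(0, 2047, 40000)
    ref_pix = np.interp(ref_wl, wavelength_model(grid, true_beta0, true_L), grid)
    ref_pix += 0.2 * np.random.default_rng(0).standard_normal(ref_pix.size)
    (beta0_fit, L_fit), _ = curve_fit(wavelength_model, ref_pix, ref_wl,
                                      p0=[np.deg2rad(6.0), 0.12])
    rms = np.sqrt(np.mean((wavelength_model(ref_pix, beta0_fit, L_fit) - ref_wl) ** 2))
    print("beta0 = %.3f deg, L = %.4f m, rms = %.3g nm"
          % (np.degrees(beta0_fit), L_fit, rms * 1e9))
```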

  1. Analysis of a Cylindrical Specimen Heated by an Impinging Hot Hydrogen Jet

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Luong, Van; Foote, John; Litchford, Ron; Chen, Yen-Sen

    2006-01-01

    A computational conjugate heat transfer methodology was developed, as a first step towards an efficient and accurate multiphysics, thermo-fluid computational methodology to predict environments for hypothetical solid-core, nuclear thermal engine thrust chamber and components. A solid conduction heat transfer procedure was implemented onto a pressure-based, multidimensional, finite-volume, turbulent, chemically reacting, thermally radiating, and unstructured grid computational fluid dynamics formulation. The conjugate heat transfer of a cylindrical material specimen heated by an impinging hot hydrogen jet inside an enclosed test fixture was simulated and analyzed. The solid conduction heat transfer procedure was anchored with a standard solid heat transfer code. Transient analyses were then performed with variable thermal conductivities representing three composites of a material utilized as a flow element in a legacy engine test. It was found that material thermal conductivity strongly influences the transient heat conduction characteristics. In addition, it was observed that high thermal gradients occur inside the cylindrical specimen during an impulsive or a 10 s ramp start sequence, but not during steady-state operations.
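
    The strong influence of k(T) on the transient response can be reproduced with a far simpler model than the full conjugate analysis. The sketch below marches 1-D transient conduction with a temperature-dependent conductivity using an explicit finite-volume update; the geometry, material properties, boundary conditions, and the k(T) law are placeholders, not the flow-element composites from the test.

```python
import numpy as np

def conduct_1d(T0, t_end, dx, rho, cp, k_of_T, T_left, q_right=0.0):
    """March T(x,t) explicitly with harmonic-mean face conductivities."""
    T, t = T0.copy(), 0.0
    while t < t_end:
        k = k_of_T(T)
        k_face = 2 * k[:-1] * k[1:] / (k[:-1] + k[1:])       # harmonic mean at faces
        dt = min(0.4 * rho * cp * dx**2 / k.max(), t_end - t) # explicit stability limit
        flux = -k_face * (T[1:] - T[:-1]) / dx                # interior face fluxes
        dTdt = np.zeros_like(T)
        dTdt[1:-1] = -(flux[1:] - flux[:-1]) / (rho * cp * dx)
        T += dt * dTdt
        T[0] = T_left                                         # fixed hot-wall temperature
        T[-1] = T[-2] + q_right * dx / k[-1]                  # prescribed flux (0 = adiabatic)
        t += dt
    return T

if __name__ == "__main__":
    x = np.linspace(0.0, 0.05, 101)                  # 5 cm specimen (illustrative)
    T0 = np.full_like(x, 300.0)                      # uniform 300 K start
    k_of_T = lambda T: 40.0 + 0.02 * (T - 300.0)     # placeholder k(T), W/(m K)
    T = conduct_1d(T0, t_end=10.0, dx=x[1] - x[0], rho=8000.0, cp=500.0,
                   k_of_T=k_of_T, T_left=2500.0)
    print("peak gradient after 10 s: %.0f K/m" % np.abs(np.diff(T) / (x[1] - x[0])).max())
```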

  2. Nonexposure accurate location K-anonymity algorithm in LBS.

    PubMed

    Jia, Jinying; Zhang, Fengli

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS) where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existent cloaking algorithms, need not have all the users reporting their locations all the time, and can generate smaller ASR. PMID:24605060
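
    The grid-ID idea is easy to illustrate: users report only the ID of the grid cell they occupy, and the anonymizer grows a block of cells around the querier's cell until it covers at least K reported users, returning that block as the anonymous spatial region. This is a minimal sketch of the concept, not either of the paper's two algorithms; the grid size and reports are made up.

```python
from collections import Counter

def cloak(user_cell, reported_cells, K, grid_size):
    """Grow a square block of grid cells around user_cell until >= K users are covered."""
    counts = Counter(reported_cells)           # cell ID -> number of users in it
    cx, cy = user_cell
    for radius in range(grid_size):
        block = [(x, y)
                 for x in range(max(0, cx - radius), min(grid_size, cx + radius + 1))
                 for y in range(max(0, cy - radius), min(grid_size, cy + radius + 1))]
        if sum(counts[c] for c in block) >= K:
            return block                       # the anonymous spatial region (ASR)
    return None                                # fewer than K users on the whole grid

if __name__ == "__main__":
    # 8x8 grid; each tuple is the cell ID that one user reported.
    reports = [(1, 1), (1, 2), (2, 1), (2, 2), (5, 5), (5, 6), (6, 5), (6, 6)]
    print(cloak(user_cell=(1, 1), reported_cells=reports, K=4, grid_size=8))
```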

  3. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    PubMed Central

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS) where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existent cloaking algorithms, need not have all the users reporting their locations all the time, and can generate smaller ASR. PMID:24605060

  4. Accurate Fiber Length Measurement Using Time-of-Flight Technique

    NASA Astrophysics Data System (ADS)

    Terra, Osama; Hussein, Hatem

    2016-06-01

    Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper, accurate length measurement of different fiber lengths using the time-of-flight technique is performed. A setup is proposed to accurately measure lengths from 1 to 40 km at 1,550 and 1,310 nm using a high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to the meter by definition), by locking the time interval counter to the Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of the United Kingdom (NPL). Finally, a method is proposed to apply a relative correction to the fiber refractive index to allow accurate fiber length measurement.
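
    The underlying arithmetic is the time-of-flight relation L = c * dt / n_g, plus the option of calibrating the group index on a fibre of well-known length. A one-way measurement through the fibre is assumed, and the numbers below are illustrative only.

```python
C0 = 299_792_458.0            # speed of light in vacuum, m/s

def fiber_length(delay_s, n_group):
    """Fibre length from a one-way propagation delay and the group refractive index."""
    return C0 * delay_s / n_group

def calibrated_group_index(known_length_m, measured_delay_s):
    """Group index inferred from a reference fibre of well-known length
    (a relative correction in the spirit of the method mentioned above)."""
    return C0 * measured_delay_s / known_length_m

if __name__ == "__main__":
    n_g = 1.4682                           # nominal group index at 1550 nm (typical value)
    print("length = %.3f km" % (fiber_length(48.957e-6, n_g) / 1e3))
    # Re-derive the index from a 10.000 km reference fibre, then reuse it:
    print("corrected n_g = %.5f" % calibrated_group_index(10_000.0, 48.99e-6))
```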

  5. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
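
    A small worked example (not taken from the paper) shows the flavour of the approach. For a rectangular-section cantilever the tip deflection scales as h^-3 with the section height h, so the sensitivity relation du/dh = -3u/h, treated as a differential equation, integrates to the closed form u(h) = u0 (h0/h)^3; the linear Taylor approximation keeps only the first-order term and degrades quickly for large height changes.

```python
def deb_approx(u0, h0, h):
    """Closed-form solution of the sensitivity ODE du/dh = -3 u / h."""
    return u0 * (h0 / h) ** 3

def taylor_approx(u0, h0, h):
    """Linear Taylor series about the baseline design h0."""
    return u0 * (1.0 - 3.0 * (h - h0) / h0)

if __name__ == "__main__":
    u0, h0 = 10.0, 0.05                # baseline tip deflection (mm) and section height (m)
    for h in (0.055, 0.060, 0.070):    # 10 %, 20 %, 40 % height increases
        exact = u0 * (h0 / h) ** 3     # for this simple response the DEB form is exact
        print(f"h={h:.3f}  exact={exact:6.3f}  DEB={deb_approx(u0, h0, h):6.3f}  "
              f"Taylor={taylor_approx(u0, h0, h):6.3f}")
```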

  6. Multiphysics Modeling for Dimensional Analysis of a Self-Heated Molten Regolith Electrolysis Reactor for Oxygen and Metals Production on the Moon and Mars

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Sibille, Laurent

    2010-01-01

    The technology of direct electrolysis of molten lunar regolith to produce oxygen and molten metal alloys has progressed greatly in the last few years. The development of long-lasting inert anodes and cathode designs as well as techniques for the removal of molten products from the reactor has been demonstrated. The containment of chemically aggressive oxide and metal melts is very difficult at operating temperatures of ca. 1600 °C. Containing the molten oxides in a regolith shell can solve this technical issue and can be achieved by designing a self-heating reactor in which the electrolytic currents generate enough Joule heat to create a molten bath. In a first phase, a thermal analysis model was built to study the formation of a melt of lunar basaltic regolith irradiated by a focused solar beam. This mode of heating was selected because it relies on radiative heat transfer, which is the dominant mode of transfer of energy in melts at 1600 °C. Knowing and setting the Gaussian-type heat flux from the concentrated solar beam and the phase- and temperature-dependent thermal properties, the model predicts the dimensions and temperature profile of the melt. A validation of the model is presented in this paper through the experimental formation of a spherical cap melt realized by others. The Orbitec/PSI experimental setup uses a 3.6-cm-diameter concentrated solar beam to create a hemispheric melt in a bed of lunar regolith simulant contained in a large pot. Upon cooling, the dimensions of the vitrified melt are measured to validate the thermal model. In a second phase, the model is augmented by multiphysics components to compute the passage of electrical currents between electrodes inserted in the molten regolith. The current through the melt generates Joule heating due to the high resistivity of the medium and this energy is transferred into the melt by conduction, convection and primarily by radiation. The model faces challenges in two major areas, the change of phase as

  7. Accurate upwind-monotone (nonoscillatory) methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1992-01-01

    The well-known MUSCL scheme of Van Leer is constructed using a piecewise linear approximation. The MUSCL scheme is second-order accurate in smooth parts of the solution, except at extrema where the accuracy degenerates to first order due to the monotonicity constraint. To construct accurate schemes which are free from oscillations, the author introduces the concept of upwind monotonicity. Several classes of schemes that are upwind monotone and of uniform second- or third-order accuracy are then presented. Results for advection with constant speed are shown. It is also shown that the new scheme compares favorably with state of the art methods.
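
    The starting point of such constructions, a MUSCL-type scheme for linear advection with a minmod-limited piecewise-linear reconstruction, fits in a few lines. The sketch below is that classical second-order TVD scheme, not the uniformly accurate upwind-monotone variants introduced in the paper; grid, CFL number, and initial data are illustrative.

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: smallest-magnitude argument if signs agree, else zero."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_advect(u, a, dx, dt, n_steps):
    """Advance u_t + a u_x = 0 (a > 0, periodic domain) with limited MUSCL fluxes."""
    nu = a * dt / dx                                          # CFL number, must be <= 1
    for _ in range(n_steps):
        du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)    # limited cell slopes
        u_face = u + 0.5 * (1.0 - nu) * du                    # upwind state at face i+1/2
        flux = a * u_face
        u = u - nu / a * (flux - np.roll(flux, 1))
    return u

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u0 = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)            # square pulse
    u1 = muscl_advect(u0.copy(), a=1.0, dx=x[1] - x[0], dt=0.4 * (x[1] - x[0]), n_steps=250)
    print("min/max after advection:", u1.min(), u1.max())     # stays within [0, 1]
```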

  8. Accurate stress resultants equations for laminated composite deep thick shells

    SciTech Connect

    Qatu, M.S.

    1995-11-01

    This paper derives accurate equations for the normal and shear force as well as bending and twisting moment resultants for laminated composite deep, thick shells. The stress resultant equations for laminated composite thick shells are shown to be different from those of plates. This is due to the fact that the stresses over the thickness of the shell have to be integrated on a trapezoidal-like shell element to obtain the stress resultants. Numerical results are obtained and show that accurate stress resultants are needed for laminated composite deep thick shells, especially if the curvature is not spherical.

  9. Must Kohn-Sham oscillator strengths be accurate at threshold?

    SciTech Connect

    Yang Zenghui; Burke, Kieron; Faassen, Meta van

    2009-09-21

    The exact ground-state Kohn-Sham (KS) potential for the helium atom is known from accurate wave function calculations of the ground-state density. The threshold for photoabsorption from this potential matches the physical system exactly. By carefully studying its absorption spectrum, we show the answer to the title question is no. To address this problem in detail, we generate a highly accurate simple fit of a two-electron spectrum near the threshold, and apply the method to both the experimental spectrum and that of the exact ground-state Kohn-Sham potential.

  10. Monitoring circuit accurately measures movement of solenoid valve

    NASA Technical Reports Server (NTRS)

    Gillett, J. D.

    1966-01-01

    A solenoid-operated valve in a control system powered by direct current is used to accurately measure the valve travel. This system is currently in operation with a 28-vdc power system used for control of fluids in liquid rocket motor test facilities.

  11. A Self-Instructional Device for Conditioning Accurate Prosody.

    ERIC Educational Resources Information Center

    Buiten, Roger; Lane, Harlan

    1965-01-01

    A self-instructional device for conditioning accurate prosody in second-language learning is described in this article. The Speech Auto-Instructional Device (SAID) is electro-mechanical and performs three functions: SAID (1) presents to the student tape-recorded pattern sentences that are considered standards in prosodic performance; (2) processes…

  12. Second-order accurate nonoscillatory schemes for scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1989-01-01

    Explicit finite difference schemes for the computation of weak solutions of nonlinear scalar conservation laws are presented and analyzed. These schemes are uniformly second-order accurate and nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time.

  13. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...

  14. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    ERIC Educational Resources Information Center

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  15. Accurate Measurements of the Local Deuterium Abundance from HST Spectra

    NASA Technical Reports Server (NTRS)

    Linsky, Jeffrey L.

    1996-01-01

    An accurate measurement of the primordial value of D/H would provide a critical test of nucleosynthesis models for the early universe and the baryon density. I briefly summarize the ongoing HST observations of the interstellar H and D Lyman-alpha absorption for lines of sight to nearby stars and comment on recent reports of extragalactic D/H measurements.

  16. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  17. Benchmarking accurate spectral phase retrieval of single attosecond pulses

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Le, Anh-Thu; Morishita, Toru; Yu, Chao; Lin, C. D.

    2015-02-01

    A single extreme-ultraviolet (XUV) attosecond pulse or pulse train in the time domain is fully characterized if its spectral amplitude and phase are both determined. The spectral amplitude can be easily obtained from photoionization of simple atoms where accurate photoionization cross sections have been measured from, e.g., synchrotron radiations. To determine the spectral phase, at present the standard method is to carry out XUV photoionization in the presence of a dressing infrared (IR) laser. In this work, we examine the accuracy of current phase retrieval methods (PROOF and iPROOF) where the dressing IR is relatively weak such that photoelectron spectra can be accurately calculated by second-order perturbation theory. We suggest a modified method named swPROOF (scattering wave phase retrieval by omega oscillation filtering) which utilizes accurate one-photon and two-photon dipole transition matrix elements and removes the approximations made in PROOF and iPROOF. We show that the swPROOF method can in general retrieve accurate spectral phase compared to other simpler models that have been suggested. We benchmark the accuracy of these phase retrieval methods through simulating the spectrogram by solving the time-dependent Schrödinger equation numerically using several known single attosecond pulses with a fixed spectral amplitude but different spectral phases.

  18. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    EPA Science Inventory

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...

  19. Device accurately measures and records low gas-flow rates

    NASA Technical Reports Server (NTRS)

    Branum, L. W.

    1966-01-01

    Free-floating piston in a vertical column accurately measures and records low gas-flow rates. The system may be calibrated, using an adjustable flow-rate gas supply, a low pressure gage, and a sequence recorder. From the calibration rates, a nomograph may be made for easy reduction. Temperature correction may be added for further accuracy.

  20. A Simple and Accurate Method for Measuring Enzyme Activity.

    ERIC Educational Resources Information Center

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  1. Precise and Accurate Density Determination of Explosives Using Hydrostatic Weighing

    SciTech Connect

    B. Olinger

    2005-07-01

    Precise and accurate density determination requires weight measurements in air and water using sufficiently precise analytical balances, knowledge of the densities of air and water, knowledge of thermal expansions, availability of a density standard, and a method to estimate the time to achieve thermal equilibrium with water. Density distributions in pressed explosives are inferred from the densities of elements from a central slice.
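
    The buoyancy arithmetic behind hydrostatic weighing is compact: the loss of weight between air and water gives the volume, and the mass (with the air-buoyancy term restored) over that volume gives the density. The balance readings below are illustrative, not measurements of any explosive.

```python
def density_hydrostatic(m_air, m_water, rho_water=0.99823, rho_air=0.0012):
    """Sample density from balance readings in air and water.

    m_air, m_water are apparent masses in grams; densities in g/cm^3
    (water near 20 C, air at roughly ambient conditions).
    """
    volume = (m_air - m_water) / (rho_water - rho_air)   # cm^3, from the weight loss
    mass = m_air + rho_air * volume                      # restore the air-buoyancy term
    return mass / volume

if __name__ == "__main__":
    print("%.4f g/cm^3" % density_hydrostatic(m_air=15.4321, m_water=6.7130))
```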

  2. Is a Writing Sample Necessary for "Accurate Placement"?

    ERIC Educational Resources Information Center

    Sullivan, Patrick; Nielsen, David

    2009-01-01

    The scholarship about assessment for placement is extensive and notoriously ambiguous. Foremost among the questions that continue to be unresolved in this scholarship is this one: Is a writing sample necessary for "accurate placement"? Using a robust data sample of student assessment essays and ACCUPLACER test scores, we put this question to the…

  3. Efficient and accurate sound propagation using adaptive rectangular decomposition.

    PubMed

    Raghuvanshi, Nikunj; Narain, Rahul; Lin, Ming C

    2009-01-01

    Accurate sound rendering can add significant realism to complement visual display in interactive applications, as well as facilitate acoustic predictions for many engineering applications, like accurate acoustic analysis for architectural design. Numerical simulation can provide this realism most naturally by modeling the underlying physics of wave propagation. However, wave simulation has traditionally posed a tough computational challenge. In this paper, we present a technique which relies on an adaptive rectangular decomposition of 3D scenes to enable efficient and accurate simulation of sound propagation in complex virtual environments. It exploits the known analytical solution of the Wave Equation in rectangular domains, and utilizes an efficient implementation of the Discrete Cosine Transform on Graphics Processors (GPU) to achieve at least a 100-fold performance gain compared to a standard Finite-Difference Time-Domain (FDTD) implementation with comparable accuracy, while also being 10-fold more memory efficient. Consequently, we are able to perform accurate numerical acoustic simulation on large, complex scenes in the kilohertz range. To the best of our knowledge, it was not previously possible to perform such simulations on a desktop computer. Our work thus enables acoustic analysis on large scenes and auditory display for complex virtual environments on commodity hardware. PMID:19590105
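
    The core of the rectangular-decomposition idea can be sketched for a single rigid-walled partition: a discrete cosine transform turns the wave equation into independent harmonic oscillators, each advanced exactly with a 2 cos(w dt) recurrence. Interface handling between partitions, absorption, and the GPU implementation are omitted, and the room size and time step are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def ard_step(M_curr, M_prev, cos_wdt):
    """Exact modal update M^{n+1} = 2 cos(w dt) M^n - M^{n-1}."""
    return 2.0 * cos_wdt * M_curr - M_prev

if __name__ == "__main__":
    nx, ny = 64, 48
    lx, ly = 6.0, 4.5                  # room size in metres (illustrative)
    c, dt = 343.0, 1.0 / 8000.0        # speed of sound, time step
    kx = np.pi * np.arange(nx) / lx
    ky = np.pi * np.arange(ny) / ly
    w = c * np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)   # modal angular frequencies
    cos_wdt = np.cos(w * dt)

    p = np.zeros((nx, ny))
    p[nx // 3, ny // 2] = 1.0          # initial pressure impulse
    M_curr = dctn(p, type=2, norm="ortho")
    M_prev = M_curr.copy()             # zero initial velocity
    for _ in range(200):
        M_curr, M_prev = ard_step(M_curr, M_prev, cos_wdt), M_curr
    p_now = idctn(M_curr, type=2, norm="ortho")
    print("pressure field energy:", float(np.sum(p_now ** 2)))
```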

  4. Accurate momentum transfer cross section for the attractive Yukawa potential

    SciTech Connect

    Khrapak, S. A.

    2014-04-15

    An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to better than ±2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.

  5. Instrument accurately measures small temperature changes on test surface

    NASA Technical Reports Server (NTRS)

    Harvey, W. D.; Miller, H. B.

    1966-01-01

    Calorimeter apparatus accurately measures very small temperature rises on a test surface subjected to aerodynamic heating. A continuous thin sheet of a sensing material is attached to a base support plate through which a series of holes of known diameter have been drilled for attaching thermocouples to the material.

  6. On the importance of having accurate data for astrophysical modelling

    NASA Astrophysics Data System (ADS)

    Lique, Francois

    2016-06-01

    The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from the far infrared to the sub-millimeter, with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data and show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for molecular line modelling beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star forming conditions, have allowed solving the problem of their respective abundances in cold molecular clouds. I will also present the latest collisional data computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present recent work on ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para-H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.

  7. DNA barcode data accurately assign higher spider taxa.

    PubMed

    Coddington, Jonathan A; Agnarsson, Ingi; Cheng, Ren-Chung; Čandek, Klemen; Driskell, Amy; Frick, Holger; Gregorič, Matjaž; Kostanjšek, Rok; Kropf, Christian; Kweskin, Matthew; Lokovšek, Tjaša; Pipan, Miha; Vidergar, Nina; Kuntner, Matjaž

    2016-01-01

    The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios "barcodes" (whether single or multiple, organelle or nuclear, loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families-taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75-100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However, the quality of the
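
    The reported thresholds translate directly into a simple decision rule; the sketch below applies them to the best BLAST hit of a query. The record structure and field names are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch of threshold-based higher-taxon assignment from BLAST hits,
# using the heuristic cut-offs quoted in the abstract (genus: PIdent > 95,
# family: PIdent >= 91). Hit records and field names are illustrative.
def assign_higher_taxon(hits):
    """hits: list of dicts like {'pident': 96.3, 'genus': 'Araneus', 'family': 'Araneidae'},
    sorted by descending percent identity (e.g. the top ten BLAST hits)."""
    if not hits:
        return {'genus': None, 'family': None}
    best = hits[0]
    return {
        'genus': best['genus'] if best['pident'] > 95.0 else None,
        'family': best['family'] if best['pident'] >= 91.0 else None,
    }

# Example: a query whose best hit is 93% identical is assigned a family but no genus.
print(assign_higher_taxon([{'pident': 93.0, 'genus': 'Nephila', 'family': 'Nephilidae'}]))
```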

  8. DNA barcode data accurately assign higher spider taxa

    PubMed Central

    Coddington, Jonathan A.; Agnarsson, Ingi; Cheng, Ren-Chung; Čandek, Klemen; Driskell, Amy; Frick, Holger; Gregorič, Matjaž; Kostanjšek, Rok; Kropf, Christian; Kweskin, Matthew; Lokovšek, Tjaša; Pipan, Miha; Vidergar, Nina

    2016-01-01

    The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios “barcodes” (whether single or multiple, organelle or nuclear, loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However, the quality of

  9. SUMOylation in Control of Accurate Chromosome Segregation during Mitosis

    PubMed Central

    Wan, Jun; Subramonian, Divya; Zhang, Xiang-Dong

    2012-01-01

    Posttranslational protein modification by small ubiquitin-related modifier (SUMO) has emerged as an important regulatory mechanism for chromosome segregation during mitosis. This review focuses on how SUMOylation regulates centromere and kinetochore activities to achieve accurate chromosome segregation during mitosis. Kinetochores are assembled on specialized chromatin domains called centromeres and serve as the sites for attaching spindle microtubules to segregate sister chromatids to daughter cells. Many proteins associated with mitotic centromeres and kinetochores have recently been found to be modified by SUMO. Although we are still at an early stage of elucidating how SUMOylation controls chromosome segregation during mitosis, substantial progress has been achieved over the past decade. Furthermore, a major theme that has emerged from recent studies of SUMOylation in mitosis is that both SUMO conjugation and deconjugation are critical for kinetochore assembly and disassembly. Lastly, we propose a model in which SUMOylation coordinates multiple centromere and kinetochore activities to ensure accurate chromosome segregation. PMID:22812528

  10. Accurate and robust estimation of camera parameters using RANSAC

    NASA Astrophysics Data System (ADS)

    Zhou, Fuqiang; Cui, Yi; Wang, Yexin; Liu, Liu; Gao, He

    2013-03-01

    Camera calibration plays an important role in the field of machine vision applications. The popularly used calibration approach based on 2D planar target sometimes fails to give reliable and accurate results due to the inaccurate or incorrect localization of feature points. To solve this problem, an accurate and robust estimation method for camera parameters based on RANSAC algorithm is proposed to detect the unreliability and provide the corresponding solutions. Through this method, most of the outliers are removed and the calibration errors that are the main factors influencing measurement accuracy are reduced. Both simulative and real experiments have been carried out to evaluate the performance of the proposed method and the results show that the proposed method is robust under large noise condition and quite efficient to improve the calibration accuracy compared with the original state.
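
    The abstract does not give implementation details, but the core RANSAC idea it relies on can be sketched generically: repeatedly fit a model to a random minimal sample of feature points and keep the model with the largest consensus set. The callables, array layout, and parameter values below are illustrative assumptions.

```python
# Generic RANSAC skeleton for rejecting unreliable feature points before calibration.
# 'data' is a NumPy array of observations; 'fit' builds a model from a minimal sample
# and 'error' returns per-point residuals for a candidate model.
import numpy as np

def ransac(data, fit, error, n_sample, n_iter=1000, threshold=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = np.zeros(len(data), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(data), size=n_sample, replace=False)
        model = fit(data[idx])                      # model from the random minimal sample
        inliers = error(model, data) < threshold    # points consistent with that model
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    if not best_inliers.any():
        return None, best_inliers
    return fit(data[best_inliers]), best_inliers    # refit on the full consensus set
```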

  11. Accurate Development of Thermal Neutron Scattering Cross Section Libraries

    SciTech Connect

    Hawari, Ayman; Dunn, Michael

    2014-06-10

    The objective of this project is to develop a holistic (fundamental and accurate) approach for generating thermal neutron scattering cross section libraries for a collection of important neutron moderators and reflectors. The primary components of this approach are the physical accuracy and completeness of the generated data libraries. Consequently, for the first time, thermal neutron scattering cross section data libraries will be generated that are based on accurate theoretical models, that are carefully benchmarked against experimental and computational data, and that contain complete covariance information that can be used in propagating the data uncertainties through the various components of the nuclear design and execution process. To achieve this objective, computational and experimental investigations will be performed on a carefully selected subset of materials that play a key role in all stages of the nuclear fuel cycle.

  12. Accurate adjoint design sensitivities for nano metal optics.

    PubMed

    Hansen, Paul; Hesselink, Lambertus

    2015-09-01

    We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature we obtain highly-accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics. PMID:26368483

  13. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    PubMed Central

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  14. Light Field Imaging Based Accurate Image Specular Highlight Removal

    PubMed Central

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation, by comparison with existing methods on our light field dataset together with the Stanford light field archive, verifies the effectiveness of the proposed algorithm. PMID:27253083

  15. Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2001-01-01

    A prerequisite for full utilization of composite materials in aerospace components is accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need for composite design and life prediction tools by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, thus enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.

  16. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components into which we integrate an open-source autopilot, a customized mass-market camera, and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  17. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method can achieve good performance, which can be further applied to EEG source localization applications on the human brain. PMID:24803954

  18. Accurate nuclear radii and binding energies from a chiral interaction

    DOE PAGES Beta

    Ekstrom, Jan A.; Jansen, G. R.; Wendt, Kyle A.; Hagen, Gaute; Papenbrock, Thomas F.; Carlsson, Boris; Forssen, Christian; Hjorth-Jensen, M.; Navratil, Petr; Nazarewicz, Witold

    2015-05-01

    With the goal of developing predictive ab initio capability for light and medium-mass nuclei, two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective Jπ=3- states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.

  19. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    NASA Technical Reports Server (NTRS)

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
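
    A minimal sketch of the predicted spectral pattern is given below, assuming the usual two-beam interference model in which each wavelength is modulated by a cosine set by the fixed path-length difference; the path difference and fringe visibility values are assumptions for illustration.

```python
# Predicted white-light spectrum after an unbalanced Michelson interferometer under the
# simple two-beam model I(lambda) ~ 0.5*(1 + V*cos(2*pi*delta_L/lambda)). Comparing this
# pattern with the measured spectrum exposes errors in the pixel-to-wavelength assignment.
import numpy as np

def predicted_fringe_pattern(wavelengths_nm, delta_L_nm=50000.0, visibility=0.8):
    return 0.5 * (1.0 + visibility * np.cos(2.0 * np.pi * delta_L_nm / wavelengths_nm))

# Example: expected modulation across an assumed 400-800 nm spectrometer range.
pattern = predicted_fringe_pattern(np.linspace(400.0, 800.0, 2048))
```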

  20. Accurate parameter estimation for unbalanced three-phase system.

    PubMed

    Chen, Yuan; So, Hing Cheung

    2014-01-01

    Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS. PMID:25162056
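
    The αβ-transformation referred to above is the standard (amplitude-invariant) Clarke transformation; a minimal sketch is given below. Only the transformation itself is shown, not the NLS/Newton-Raphson estimator.

```python
# Clarke (alpha-beta) transformation: converts three-phase samples into a pair of
# orthogonal signals, the preprocessing step named in the abstract.
import numpy as np

def clarke_transform(va, vb, vc):
    v_alpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
    v_beta = (vb - vc) / np.sqrt(3.0)
    return v_alpha, v_beta
```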

  1. Accurate nuclear radii and binding energies from a chiral interaction

    SciTech Connect

    Ekstrom, Jan A.; Jansen, G. R.; Wendt, Kyle A.; Hagen, Gaute; Papenbrock, Thomas F.; Carlsson, Boris; Forssen, Christian; Hjorth-Jensen, M.; Navratil, Petr; Nazarewicz, Witold

    2015-05-01

    With the goal of developing predictive ab initio capability for light and medium-mass nuclei, two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective Jπ=3- states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.

  2. An accurate link correlation estimator for improving wireless protocol performance.

    PubMed

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  3. Groundtruth approach to accurate quantitation of fluorescence microarrays

    SciTech Connect

    Mascio-Kegelmeyer, L; Tomascik-Cheeseman, L; Burnett, M S; van Hummelen, P; Wyrobek, A J

    2000-12-01

    To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using groundtruth samples comprised of known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results significantly closer to the known ground-truth for these samples.

  4. Multimodal spatial calibration for accurately registering EEG sensor positions.

    PubMed

    Zhang, Jianhua; Chen, Jian; Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method can achieve good performance, which can be further applied to EEG source localization applications on the human brain. PMID:24803954

  5. Note-accurate audio segmentation based on MPEG-7

    NASA Astrophysics Data System (ADS)

    Wellhausen, Jens

    2003-12-01

    Segmenting audio data into the smallest musical components is the basis for many further metadata extraction algorithms. For example, an automatic music transcription system needs to know where the exact boundaries of each tone are. In this paper a note-accurate audio segmentation algorithm based on MPEG-7 low level descriptors is introduced. For a reliable detection of different notes, features in both the time and the frequency domain are used. Because of this, polyphonic instrument mixes and even melodies characterized by human voices can be examined with this algorithm. For testing and verification of the note-accurate segmentation, a simple music transcription system was implemented. The dominant frequency within each segment is used to build a MIDI file representing the processed audio data.
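
    As a rough illustration of the final transcription step, the sketch below estimates the dominant frequency of one detected segment by FFT and maps it to the nearest MIDI note; the MPEG-7 feature extraction and the segmentation itself are not reproduced, and the windowing choice is an assumption.

```python
# Hedged sketch: dominant frequency of a segment via FFT, mapped to a MIDI note number
# with the standard rule midi = 69 + 12*log2(f/440).
import numpy as np

def segment_to_midi(samples, sample_rate):
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    f0 = freqs[np.argmax(spectrum[1:]) + 1]     # skip the DC bin
    return int(round(69 + 12 * np.log2(f0 / 440.0)))
```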

  6. Accurate Method for Determining Adhesion of Cantilever Beams

    SciTech Connect

    Michalske, T.A.; de Boer, M.P.

    1999-01-08

    Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying.

  7. Multigrid time-accurate integration of Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.

    1993-01-01

    Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.
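
    A minimal sketch of a four-stage explicit Runge-Kutta update of the kind used in such solvers is given below; the stage coefficients (1/4, 1/3, 1/2, 1) are a common choice and an assumption here, and local time stepping, residual smoothing, and multigrid are omitted.

```python
# Four-stage explicit Runge-Kutta update for du/dt = R(u), of the low-storage type
# commonly used in explicit flow solvers. Stage coefficients are an assumed common choice.
def rk4_stage_update(u0, residual, dt):
    u = u0
    for alpha in (0.25, 1.0 / 3.0, 0.5, 1.0):
        u = u0 + alpha * dt * residual(u)
    return u
```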

  8. Shock Emergence in Supernovae: Limiting Cases and Accurate Approximations

    NASA Astrophysics Data System (ADS)

    Ro, Stephen; Matzner, Christopher D.

    2013-08-01

    We examine the dynamics of accelerating normal shocks in stratified planar atmospheres, providing accurate fitting formulae for the scaling index relating shock velocity to the initial density and for the post-shock acceleration factor as functions of the polytropic and adiabatic indices which parameterize the problem. In the limit of a uniform initial atmosphere, there are analytical formulae for these quantities. In the opposite limit of a very steep density gradient, the solutions match the outcome of shock acceleration in exponential atmospheres.

  9. SHOCK EMERGENCE IN SUPERNOVAE: LIMITING CASES AND ACCURATE APPROXIMATIONS

    SciTech Connect

    Ro, Stephen; Matzner, Christopher D.

    2013-08-10

    We examine the dynamics of accelerating normal shocks in stratified planar atmospheres, providing accurate fitting formulae for the scaling index relating shock velocity to the initial density and for the post-shock acceleration factor as functions of the polytropic and adiabatic indices which parameterize the problem. In the limit of a uniform initial atmosphere, there are analytical formulae for these quantities. In the opposite limit of a very steep density gradient, the solutions match the outcome of shock acceleration in exponential atmospheres.

  10. A method for producing large, accurate, economical female molds

    SciTech Connect

    Guenter, A.; Guenter, B.

    1996-11-01

    A process in which lightweight, highly accurate, economical molds can be produced for prototype and low production runs of large parts for use in composites molding has been developed. This has been achieved by developing existing milling technology, using new materials and innovative material applications to CNC mill large female molds directly. Any step that can be eliminated in the mold building process translates into savings in tooling costs through reduced labor and material requirements.

  11. Accurate far-infrared rotational frequencies of carbon monoxide

    NASA Technical Reports Server (NTRS)

    Varberg, Thomas D.; Evenson, Kenneth M.

    1992-01-01

    This study presents high-resolution measurements of the pure rotational absorption spectrum of CO in its ground state for the range J″ = 5-37. A least-squares fit to this data set, augmented by previous microwave measurements of the J″ = 0-4 rotational transitions in the literature, determined accurate values for the molecular constants. A table of calculated CO rotational frequencies is provided for the range J″ = 0-45.

  12. A robust and accurate formulation of molecular and colloidal electrostatics.

    PubMed

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y C

    2016-08-01

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries with the benefits of the following: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) being able to use quadratic or spline function surface elements to represent the surface more accurately and the variation of the functions within each element is represented to a consistent level of precision by appropriate interpolation functions, (iii) being able to calculate electric fields, even at boundaries, accurately and directly from the potential without having to solve hypersingular integral equations and this imparts high precision in calculating the Maxwell stress tensor and consequently, intermolecular or colloidal forces, (iv) a reliable way to handle geometric configurations in which different parts of the boundary can be very close together without being affected by numerical instabilities, therefore potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) having the simplicity of a formulation that does not require complex algorithms to handle singularities will result in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics. PMID:27497538

  13. Strategy Guideline. Accurate Heating and Cooling Load Calculations

    SciTech Connect

    Burdick, Arlan

    2011-06-01

    This guide presents the key criteria required to create accurate heating and cooling load calculations and offers examples of the implications when inaccurate adjustments are applied to the HVAC design process. The guide shows, through realistic examples, how various defaults and arbitrary safety factors can lead to significant increases in the load estimate. Emphasis is placed on the risks incurred from inaccurate adjustments or ignoring critical inputs of the load calculation.

  14. A time-accurate multiple-grid algorithm

    NASA Technical Reports Server (NTRS)

    Jespersen, D. C.

    1985-01-01

    A time-accurate multiple-grid algorithm is described. The algorithm allows one to take much larger time steps with an explicit time-marching scheme than would otherwise be the case. Sample calculations of a scalar advection equation and the Euler equations for an oscillating airfoil are shown. For the oscillating airfoil, time steps an order of magnitude larger than the single-grid algorithm are possible.

  15. Accurate equilibrium structures of fluoro- and chloroderivatives of methane

    NASA Astrophysics Data System (ADS)

    Vogt, Natalja; Demaison, Jean; Rudolph, Heinz Dieter

    2014-11-01

    This work is a systematic study of molecular structure of fluoro-, chloro-, and fluorochloromethanes. For the first time, the accurate ab initio structure is computed for 10 molecules (CF4, CClF3, CCl2F2, CCl3F, CHClF2, CHCl2F, CH2F2, CH2ClF, CH2Cl2, and CCl4) at the coupled cluster level of electronic structure theory including single and double excitations augmented by a perturbational estimate of the effects of connected triple excitations [CCSD(T)] with all electrons being correlated and Gaussian basis sets of at least quadruple-ζ quality. Furthermore, when possible, namely for the molecules CH2F2, CH2Cl2, CH2ClF, CHClF2, and CCl2F2, accurate semi-experimental equilibrium (rSEe) structure has also been determined. This is achieved through a least-squares structural refinement procedure based on the equilibrium rotational constants of all available isotopomers, determined by correcting the experimental ground-state rotational constants with computed ab initio vibration-rotation interaction constants and electronic g-factors. The computed and semi-experimental equilibrium structures are in excellent agreement with each other, but the rSEe structure is generally more accurate, in particular for the CF and CCl bond lengths. The carbon-halogen bond length is discussed within the framework of the ligand close-packing model as a function of the atomic charges. For this purpose, the accurate equilibrium structures of some other molecules with alternative ligands, such as CH3Li, CF3CCH, and CF3CN, are also computed.

  16. Accurate Insertion Loss Measurements of the Juno Patch Array Antennas

    NASA Technical Reports Server (NTRS)

    Chamberlain, Neil; Chen, Jacqueline; Hodges, Richard; Demas, John

    2010-01-01

    This paper describes two independent methods for estimating the insertion loss of patch array antennas that were developed for the Juno Microwave Radiometer instrument. One method is based principally on pattern measurements while the other is based solely on network analyzer measurements. The methods are accurate to within 0.1 dB for the measured antennas and show good agreement (to within 0.1 dB) with separate radiometric measurements.

  17. Accurate method for determining adhesion of cantilever beams

    SciTech Connect

    de Boer, M.P.; Michalske, T.A.

    1999-07-01

    Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying.

  18. A robust and accurate formulation of molecular and colloidal electrostatics

    NASA Astrophysics Data System (ADS)

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y. C.

    2016-08-01

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries with the benefits of the following: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) being able to use quadratic or spline function surface elements to represent the surface more accurately and the variation of the functions within each element is represented to a consistent level of precision by appropriate interpolation functions, (iii) being able to calculate electric fields, even at boundaries, accurately and directly from the potential without having to solve hypersingular integral equations and this imparts high precision in calculating the Maxwell stress tensor and consequently, intermolecular or colloidal forces, (iv) a reliable way to handle geometric configurations in which different parts of the boundary can be very close together without being affected by numerical instabilities, therefore potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) having the simplicity of a formulation that does not require complex algorithms to handle singularities will result in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics.

  19. Water wave model with accurate dispersion and vertical vorticity

    NASA Astrophysics Data System (ADS)

    Bokhove, Onno

    2010-05-01

    Cotter and Bokhove (Journal of Engineering Mathematics 2010) derived a variational water wave model with accurate dispersion and vertical vorticity. In one limit, it leads to Luke's variational principle for potential flow water waves. In another limit, it leads to the depth-averaged shallow water equations including vertical vorticity. Here, the focus is placed on the Hamiltonian formulation of the variational model and its boundary conditions.

  20. Strategy Guideline: Accurate Heating and Cooling Load Calculations

    SciTech Connect

    Burdick, A.

    2011-06-01

    This guide presents the key criteria required to create accurate heating and cooling load calculations and offers examples of the implications when inaccurate adjustments are applied to the HVAC design process. The guide shows, through realistic examples, how various defaults and arbitrary safety factors can lead to significant increases in the load estimate. Emphasis is placed on the risks incurred from inaccurate adjustments or ignoring critical inputs of the load calculation.

  1. Accurate thermoelastic tensor and acoustic velocities of NaCl

    NASA Astrophysics Data System (ADS)

    Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.

    2015-12-01

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  2. Accurate and occlusion-robust multi-view stereo

    NASA Astrophysics Data System (ADS)

    Zhu, Zhaokun; Stamatopoulos, Christos; Fraser, Clive S.

    2015-11-01

    This paper proposes an accurate multi-view stereo method for image-based 3D reconstruction that features robustness in the presence of occlusions. The new method offers improvements in dealing with two fundamental image matching problems. The first concerns the selection of the support window model, while the second centers upon accurate visibility estimation for each pixel. The support window model is based on an approximate 3D support plane described by a depth and two per-pixel depth offsets. For the visibility estimation, the multi-view constraint is initially relaxed by generating separate support plane maps for each support image using a modified PatchMatch algorithm. Then the most likely visible support image, which represents the minimum visibility of each pixel, is extracted via a discrete Markov Random Field model and it is further augmented by parameter clustering. Once the visibility is estimated, multi-view optimization taking into account all redundant observations is conducted to achieve optimal accuracy in the 3D surface generation for both depth and surface normal estimates. Finally, multi-view consistency is utilized to eliminate any remaining observational outliers. The proposed method is experimentally evaluated using well-known Middlebury datasets, and results obtained demonstrate that it is amongst the most accurate of the methods thus far reported via the Middlebury MVS website. Moreover, the new method exhibits a high completeness rate.

  3. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, Bahram A.; Maestre, Marcos F.; Fish, Richard H.; Johnston, William E.

    1997-01-01

    An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage, having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage.

  4. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, B.A.; Maestre, M.F.; Fish, R.H.; Johnston, W.E.

    1997-09-23

    An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage. 11 figs.

  5. Accurate pose estimation using single marker single camera calibration system

    NASA Astrophysics Data System (ADS)

    Pati, Sarthak; Erat, Okan; Wang, Lejing; Weidert, Simon; Euler, Ekkehard; Navab, Nassir; Fallavollita, Pascal

    2013-03-01

    Visual marker based tracking is one of the most widely used tracking techniques in Augmented Reality (AR) applications. Generally, multiple square markers are needed to perform robust and accurate tracking. Various marker based methods for calibrating relative marker poses have already been proposed. However, the calibration accuracy of these methods relies on the order of the image sequence and pre-evaluation of pose-estimation errors, making the method offline. Several studies have shown that the accuracy of pose estimation for an individual square marker depends on camera distance and viewing angle. We propose a method to accurately model the error in the estimated pose and translation of a camera using a single marker via an online method based on the Scaled Unscented Transform (SUT). Thus, the pose of each marker can be estimated with highly accurate calibration results, independent of the order of the image sequence, compared to cases when this knowledge is not used. This removes the need for multiple markers and an offline estimation system to calculate camera pose in an AR application.
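
    For orientation, the plain (unscaled) unscented transform underlying the approach can be sketched as below: a Gaussian described by a mean and covariance is propagated through a nonlinear function via sigma points. The scaled variant used in the paper adds tuning parameters and weightings that are not reproduced here.

```python
# Plain unscented transform: propagate (mean, cov) through a nonlinear vector function f.
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)        # columns are sigma-point offsets
    sigma = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])              # transformed sigma points
    y_mean = w @ y
    y_cov = sum(wi * np.outer(yi - y_mean, yi - y_mean) for wi, yi in zip(w, y))
    return y_mean, y_cov
```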

  6. Accurate projector calibration method by using an optical coaxial camera.

    PubMed

    Huang, Shujun; Xie, Lili; Wang, Zhangying; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian

    2015-02-01

    Digital light processing (DLP) projectors have been widely utilized to project digital structured-light patterns in 3D imaging systems. In order to obtain accurate 3D shape data, it is important to calibrate DLP projectors to obtain the internal parameters. The existing projector calibration methods have complicated procedures or low accuracy of the obtained parameters. This paper presents a novel method to accurately calibrate a DLP projector by using an optical coaxial camera. The optical coaxial geometry is realized by a plate beam splitter, so the DLP projector can be treated as a true inverse camera. A plate having discrete markers on the surface is used to calibrate the projector. The corresponding projector pixel coordinate of each marker on the plate is determined by projecting vertical and horizontal sinusoidal fringe patterns on the plate surface and calculating the absolute phase. The internal parameters of the DLP projector are obtained by the corresponding point pair between the projector pixel coordinate and the world coordinate of discrete markers. Experimental results show that the proposed method can accurately calibrate the internal parameters of a DLP projector. PMID:25967789
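
    The abstract does not state which phase-shifting scheme is used; as one common possibility, a standard four-step calculation of the wrapped phase from fringe images shifted by 90 degrees is sketched below. Phase unwrapping to absolute phase is a separate step not shown.

```python
# Standard four-step phase-shifting formula: wrapped phase from four fringe images
# captured with 0, 90, 180 and 270 degree phase shifts.
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    return np.arctan2(i4 - i2, i1 - i3)
```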

  7. Can blind persons accurately assess body size from the voice?

    PubMed

    Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka

    2016-04-01

    Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. PMID:27095264

  8. Accurate forced-choice recognition without awareness of memory retrieval.

    PubMed

    Voss, Joel L; Baym, Carol L; Paller, Ken A

    2008-06-01

    Recognition confidence and the explicit awareness of memory retrieval commonly accompany accurate responding in recognition tests. Memory performance in recognition tests is widely assumed to measure explicit memory, but the generality of this assumption is questionable. Indeed, whether recognition in nonhumans is always supported by explicit memory is highly controversial. Here we identified circumstances wherein highly accurate recognition was unaccompanied by hallmark features of explicit memory. When memory for kaleidoscopes was tested using a two-alternative forced-choice recognition test with similar foils, recognition was enhanced by an attentional manipulation at encoding known to degrade explicit memory. Moreover, explicit recognition was most accurate when the awareness of retrieval was absent. These dissociations between accuracy and phenomenological features of explicit memory are consistent with the notion that correct responding resulted from experience-dependent enhancements of perceptual fluency with specific stimuli--the putative mechanism for perceptual priming effects in implicit memory tests. This mechanism may contribute to recognition performance in a variety of frequently-employed testing circumstances. Our results thus argue for a novel view of recognition, in that analyses of its neurocognitive foundations must take into account the potential for both (1) recognition mechanisms allied with implicit memory and (2) recognition mechanisms allied with explicit memory. PMID:18519546

  9. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.

  10. Accurate thermochemistry for medium-sized and large molecules

    SciTech Connect

    Raghavachari, K.; Stefanov, B.B.; Curtiss, L.A.

    1997-12-31

    Accurate techniques such as Gaussian-2 (G2) theory have been proposed in recent years to evaluate the thermochemistry of small molecules from first principles. However, as the molecules get larger, the errors in G2 theory and similar approaches tend to accumulate. For example, the computed heats of formation of benzene and naphthalene with G2 and G2(MP2) theories, respectively, have errors of 3.9 and 7.2 kcal/mol. In this work, we explore strategies for computing accurate heats of formation for medium-sized and large molecules. In our first scheme, G2 theory is combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. For a test set of 40 molecules composed of H, C, O, and N, our method yields enthalpies of formation, ΔHf°(298 K), with a mean absolute deviation from experiment of only 0.5 kcal/mol. This is an improvement of a factor of three over the deviation of 1.5 kcal/mol seen in standard G2 theory.

  11. Accurately measuring volcanic plume velocity with multiple UV spectrometers

    USGS Publications Warehouse

    Williams-Jones, G.; Horton, K.A.; Elias, T.; Garbeil, H.; Mouginis-Mark, P. J.; Sutton, A.J.; Harris, A.J.L.

    2006-01-01

    A fundamental problem with all ground-based remotely sensed measurements of volcanic gas flux is the difficulty in accurately measuring the velocity of the gas plume. Since a representative wind speed and direction are used as proxies for the actual plume velocity, there can be considerable uncertainty in reported gas flux values. Here we present a method that uses at least two time-synchronized simultaneously recording UV spectrometers (FLYSPECs) placed a known distance apart. By analyzing the time varying structure of SO2 concentration signals at each instrument, the plume velocity can accurately be determined. Experiments were conducted on Ki??lauea (USA) and Masaya (Nicaragua) volcanoes in March and August 2003 at plume velocities between 1 and 10 m s-1. Concurrent ground-based anemometer measurements differed from FLYSPEC-measured plume speeds by up to 320%. This multi-spectrometer method allows for the accurate remote measurement of plume velocity and can therefore greatly improve the precision of volcanic or industrial gas flux measurements. ?? Springer-Verlag 2006.
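
    One straightforward way to realize the time-structure comparison described above is a cross-correlation of the two SO2 records: the lag that maximizes the correlation gives the travel time between the instruments, and velocity follows from the known separation. The function below is a hedged sketch under that assumption; the sampling interval and signal names are illustrative.

```python
# Plume speed from two time-synchronized SO2 records taken a known distance apart:
# velocity = separation / (lag of peak cross-correlation).
import numpy as np

def plume_velocity(so2_upwind, so2_downwind, separation_m, dt_s):
    a = so2_upwind - np.mean(so2_upwind)
    b = so2_downwind - np.mean(so2_downwind)
    xcorr = np.correlate(b, a, mode='full')
    lag_samples = np.argmax(xcorr) - (len(a) - 1)    # positive if the downwind record lags
    lag_s = lag_samples * dt_s
    return separation_m / lag_s if lag_s > 0 else float('nan')
```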

  12. Can clinicians accurately assess esophageal dilation without fluoroscopy?

    PubMed

    Bailey, A D; Goldner, F

    1990-01-01

    This study questioned whether clinicians could determine the success of esophageal dilation accurately without the aid of fluoroscopy. Twenty patients were enrolled with the diagnosis of distal esophageal stenosis, including benign peptic stricture (17), Schatzki's ring (2), and squamous cell carcinoma of the esophagus (1). Dilation attempts using only Maloney dilators were monitored fluoroscopically by the principal investigator, with the physician and patient unaware of the findings. Physicians then predicted whether or not their dilations were successful, and they examined various features to determine their usefulness in predicting successful dilation. They were able to predict successful dilation accurately in 97% of the cases studied; however, their predictions of unsuccessful dilation were correct only 60% of the time. Features helpful in predicting passage included easy passage of the dilator (98%) and the patient feeling the dilator in the stomach (95%). Excessive resistance suggesting unsuccessful passage was an unreliable feature and was often due to the dilator curling in the stomach. When Maloney dilators are used to dilate simple distal strictures, if the physician predicts successful passage, he is reliably accurate without the use of fluoroscopy; however, if unsuccessful passage is suspected, fluoroscopy must be used for confirmation. PMID:2210278

  13. Efficient and accurate computation of generalized singular-value decompositions

    NASA Astrophysics Data System (ADS)

    Drmac, Zlatko

    2001-11-01

    We present a new family of algorithms for accurate floating-point computation of the singular value decomposition (SVD) of various forms of products (quotients) of two or three matrices. The main goal of such an algorithm is to compute all singular values to high relative accuracy. This means that we are seeking a guaranteed number of accurate digits even in the smallest singular values. We also want to achieve computational efficiency, while maintaining high accuracy. To illustrate, consider the SVD of the product A = B^T S C. The new algorithm uses certain preconditioning (based on diagonal scalings, the LU and QR factorizations) to replace A with A' = (B')^T S' C', where A and A' have the same singular values and the matrix A' is computed explicitly. Theoretical analysis and numerical evidence show that, in the case of full rank B, C, S, the accuracy of the new algorithm is unaffected by replacing B, S, C with, respectively, D1 B, D2 S D3, D4 C, where Di, i = 1, ..., 4, are arbitrary diagonal matrices. As an application, the paper proposes new accurate algorithms for computing the (H,K)-SVD and (H1,K)-SVD of S.

  14. Accurate thermoelastic tensor and acoustic velocities of NaCl

    SciTech Connect

    Marcondes, Michel L.; Shukla, Gaurav; Silveira, Pedro da; Wentzcovitch, Renata M.

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  15. A unique approach to accurately measure thickness in thick multilayers.

    PubMed

    Shi, Bing; Hiller, Jon M; Liu, Yuzi; Liu, Chian; Qian, Jun; Gades, Lisa; Wieczorek, Michael J; Marander, Albert T; Maser, Jorg; Assoufid, Lahsen

    2012-05-01

    X-ray optics called multilayer Laue lenses (MLLs) provide a promising path to focusing hard X-rays with high efficiency at a resolution between 5 nm and 20 nm. MLLs consist of thousands of depth-graded thin layers. The thickness of each layer obeys the linear zone plate law. X-ray beamline tests have been performed on magnetron sputter-deposited WSi2/Si MLLs at the Advanced Photon Source/Center for Nanoscale Materials 26-ID nanoprobe beamline. However, it is still very challenging to accurately grow each layer at the designed thickness during deposition; errors introduced during thickness measurements of thousands of layers lead to inaccurate MLL structures. Here, a new metrology approach that can accurately measure thickness by introducing regular marks on the cross section of thousands of layers using a focused ion beam is reported. This new measurement method is compared with a previous method. More accurate results are obtained using the new measurement approach. PMID:22514179

  16. Fast and accurate estimation for astrophysical problems in large databases

    NASA Astrophysics Data System (ADS)

    Richards, Joseph W.

    2010-10-01

    A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems
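
    The connectivity-preserving, random-walk-based parametrization described here is in the family of diffusion maps. The sketch below shows one common construction of such an embedding; it is a generic illustration under that assumption, not the thesis' exact pipeline, and the kernel bandwidth and toy data are arbitrary.

```python
import numpy as np

# Hedged sketch of a diffusion-map style embedding built from a Markov random
# walk over the data set; details of the actual construction may differ.

def diffusion_map(X, eps, n_components=2, t=1):
    """Embed rows of X using eigenvectors of a random-walk transition matrix."""
    # Pairwise squared distances and a Gaussian affinity kernel.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    # Row-normalize to obtain the Markov transition matrix P.
    P = K / K.sum(axis=1, keepdims=True)
    # Right eigenvectors of P; the leading one is trivial (constant).
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Diffusion coordinates: lambda_k^t * psi_k, skipping the trivial eigenpair.
    return vals[1:n_components + 1] ** t * vecs[:, 1:n_components + 1]

# Toy usage: points on a noisy circle map onto a smooth 2-D parametrization.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
noise = 0.02 * np.random.default_rng(2).standard_normal((200, 2))
X = np.c_[np.cos(theta), np.sin(theta)] + noise
coords = diffusion_map(X, eps=0.1)
print(coords.shape)   # (200, 2)
```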

  17. Fast and Accurate Construction of Confidence Intervals for Heritability.

    PubMed

    Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran

    2016-06-01

    Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052

  18. D-BRAIN: Anatomically Accurate Simulated Diffusion MRI Brain Data.

    PubMed

    Perrone, Daniele; Jeurissen, Ben; Aelterman, Jan; Roine, Timo; Sijbers, Jan; Pizurica, Aleksandra; Leemans, Alexander; Philips, Wilfried

    2016-01-01

    Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume effect, and a limited spatial and angular resolution. The difficulty lies in the lack of a realistic brain phantom on the one hand, and a sufficiently accurate way of modeling the acquisition-related degradation on the other. This paper proposes a software phantom that approximates a human brain to a high degree of realism and that can incorporate complex brain-like structural features. We refer to it as a Diffusion BRAIN (D-BRAIN) phantom. Also, we propose an accurate model of a (DW) MRI acquisition protocol to allow for validation of methods in realistic conditions with data imperfections. The phantom model simulates anatomical and diffusion properties for multiple brain tissue components, and can serve as a ground-truth to evaluate FT algorithms, among others. The simulation of the acquisition process allows one to include noise, partial volume effects, and limited spatial and angular resolution in the images. In this way, the effect of image artifacts on, for instance, fiber tractography can be investigated with great detail. The proposed framework enables reliable and quantitative evaluation of DW-MR image processing and FT algorithms at the level of large-scale WM structures. The effect of noise levels and other data characteristics on cortico-cortical connectivity and tractography-based grey matter parcellation can be investigated as well. PMID:26930054

  19. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy

    PubMed Central

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  20. An accurate analytic representation of the water pair potential.

    PubMed

    Cencek, Wojciech; Szalewicz, Krzysztof; Leforestier, Claude; van Harrevelt, Rob; van der Avoird, Ad

    2008-08-28

    The ab initio water dimer interaction energies obtained from coupled cluster calculations and used in the CC-pol water pair potential (Bukowski et al., Science, 2007, 315, 1249) have been refitted to a site-site form containing eight symmetry-independent sites in each monomer and denoted as CC-pol-8s. Initially, the site-site functions were assumed in a B-spline form, which allowed a precise optimization of the positions of the sites. Next, these functions were assumed in the standard exponential plus inverse powers form. The root mean square error of the CC-pol-8s fit with respect to the 2510 ab initio points is 0.10 kcal mol(-1), compared to 0.42 kcal mol(-1) of the CC-pol fit (0.010 kcal mol(-1) compared to 0.089 kcal mol(-1) for points with negative interaction energies). The energies of the stationary points in the CC-pol-8s potential are considerably more accurate than in the case of CC-pol. The water dimer vibration-rotation-tunneling spectrum predicted by the CC-pol-8s potential agrees substantially and systematically better with experiment than the already very accurate spectrum predicted by CC-pol, while specific features that could not be accurately predicted previously now agree very well with experiment. This shows that the uncertainties of the fit were the largest source of error in the previous predictions and that the present potential sets a new standard of accuracy in investigations of the water dimer. PMID:18688514
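
    To make the fitted functional form concrete, the sketch below evaluates a generic site-site interaction of the "exponential plus inverse powers" type. The site geometry, the single shared parameter set, and all numerical values are illustrative placeholders, not the published CC-pol-8s parameters.

```python
import numpy as np

# Hedged sketch of a generic site-site potential of the form
#   E = sum_{a in A} sum_{b in B} [ A_ab * exp(-beta_ab * r) + sum_n C_n,ab / r^n ]
# with made-up site positions and parameters (not the CC-pol-8s values).

def pair_energy(sites_A, sites_B, params):
    """sites_*: (n_sites, 3) arrays; params[(i, j)] = (A, beta, {n: C_n})."""
    E = 0.0
    for i, ra in enumerate(sites_A):
        for j, rb in enumerate(sites_B):
            A, beta, cdict = params[(i, j)]
            r = np.linalg.norm(ra - rb)
            E += A * np.exp(-beta * r)
            E += sum(Cn / r ** n for n, Cn in cdict.items())
    return E

# Two fictitious 3-site monomers sharing a single illustrative parameter set.
sites_A = np.array([[0.0, 0.0, 0.0], [0.76, 0.59, 0.0], [-0.76, 0.59, 0.0]])
sites_B = sites_A + np.array([0.0, 0.0, 3.0])
params = {(i, j): (1.0e4, 3.5, {6: -1.0e2, 8: -5.0e2})
          for i in range(3) for j in range(3)}
print(f"E_int = {pair_energy(sites_A, sites_B, params):.3f} (arbitrary units)")
```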

  1. D-BRAIN: Anatomically Accurate Simulated Diffusion MRI Brain Data

    PubMed Central

    Perrone, Daniele; Jeurissen, Ben; Aelterman, Jan; Roine, Timo; Sijbers, Jan; Pizurica, Aleksandra; Leemans, Alexander; Philips, Wilfried

    2016-01-01

    Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume effect, and a limited spatial and angular resolution. The difficulty lies in the lack of a realistic brain phantom on the one hand, and a sufficiently accurate way of modeling the acquisition-related degradation on the other. This paper proposes a software phantom that approximates a human brain to a high degree of realism and that can incorporate complex brain-like structural features. We refer to it as a Diffusion BRAIN (D-BRAIN) phantom. Also, we propose an accurate model of a (DW) MRI acquisition protocol to allow for validation of methods in realistic conditions with data imperfections. The phantom model simulates anatomical and diffusion properties for multiple brain tissue components, and can serve as a ground-truth to evaluate FT algorithms, among others. The simulation of the acquisition process allows one to include noise, partial volume effects, and limited spatial and angular resolution in the images. In this way, the effect of image artifacts on, for instance, fiber tractography can be investigated with great detail. The proposed framework enables reliable and quantitative evaluation of DW-MR image processing and FT algorithms at the level of large-scale WM structures. The effect of noise levels and other data characteristics on cortico-cortical connectivity and tractography-based grey matter parcellation can be investigated as well. PMID:26930054

  2. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    PubMed

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  3. The importance of accurate convergence in addressing stereoscopic visual fatigue

    NASA Astrophysics Data System (ADS)

    Mayhew, Christopher A.

    2015-03-01

    Visual fatigue (asthenopia) continues to be a problem in extended viewing of stereoscopic imagery. Poorly converged imagery may contribute to this problem. In 2013, the Author reported that in a study sample a surprisingly high number of 3D feature films released as stereoscopic Blu-rays contained obvious convergence errors [1]. The placement of stereoscopic image convergence can be an "artistic" call, but upon close examination, the sampled films seemed to have simply missed their intended convergence location. This failure may be because some stereoscopic editing tools do not have the necessary fidelity to enable a 3D editor to obtain a high degree of image alignment or set an exact point of convergence. Compounding this matter further is the fact that a large number of stereoscopic editors may not believe that pixel-accurate alignment and convergence is necessary. The Author asserts that setting a pixel-accurate point of convergence on an object at the start of any given stereoscopic scene will improve the viewer's ability to fuse the left and right images quickly. The premise is that stereoscopic performance (acuity) increases when an accurately converged object is available in the image for the viewer to fuse immediately. Furthermore, this increased viewer stereoscopic performance should reduce the amount of visual fatigue associated with longer-term viewing because less mental effort will be required to perceive the imagery. To test this concept, we developed special stereoscopic imagery to measure viewer visual performance with and without specific objects for convergence. The Company Team conducted a series of visual tests with 24 participants between 25 and 60 years of age. This paper reports the results of these tests.

  4. Accurate evaluation of homogenous and nonhomogeneous gas emissivities

    NASA Technical Reports Server (NTRS)

    Tiwari, S. N.; Lee, K. P.

    1984-01-01

    Spectral transmittance and total band absorptance of selected infrared bands of carbon dioxide and water vapor are calculated by using the line-by-line and quasi-random band models, and these are compared with available experimental results to establish the validity of the quasi-random band model. Various wide-band model correlations are employed to calculate the total band absorptance and total emissivity of these two gases under homogeneous and nonhomogeneous conditions. These results are compared with available experimental results under identical conditions. From these comparisons, it is found that the quasi-random band model can provide quite accurate results and is quite suitable for most atmospheric applications.

  5. Accurate pressure gradient calculations in hydrostatic atmospheric models

    NASA Technical Reports Server (NTRS)

    Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet

    1987-01-01

    A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.

  6. Accurate strain measurements in highly strained Ge microbridges

    NASA Astrophysics Data System (ADS)

    Gassenq, A.; Tardif, S.; Guilloy, K.; Osvaldo Dias, G.; Pauc, N.; Duchemin, I.; Rouchon, D.; Hartmann, J.-M.; Widiez, J.; Escalante, J.; Niquet, Y.-M.; Geiger, R.; Zabel, T.; Sigg, H.; Faist, J.; Chelnokov, A.; Rieutord, F.; Reboud, V.; Calvo, V.

    2016-06-01

    Ge under high strain is predicted to become a direct bandgap semiconductor. Very large deformations can be introduced using microbridge devices. However, at the microscale, strain values are commonly deduced from Raman spectroscopy using empirical linear models established only up to ε_100 = 1.2% for uniaxial stress. In this work, we calibrate the Raman-strain relation at higher strain using synchrotron-based microdiffraction. The Ge microbridges show unprecedented high tensile strain up to 4.9%, corresponding to an unexpected Δω = 9.9 cm⁻¹ Raman shift. We demonstrate experimentally and theoretically that the Raman-strain relation is not linear, and we provide a more accurate expression.

  7. Accurate analysis of EBSD data for phase identification

    NASA Astrophysics Data System (ADS)

    Palizdar, Y.; Cochrane, R. C.; Brydson, R.; Leary, R.; Scott, A. J.

    2010-07-01

    This paper aims to investigate the reliability of software default settings in the analysis of EBSD results. To study the effect of software settings on the EBSD results, the presence of different phases in high-Al steel has been investigated by EBSD. The results show the importance of appropriate automated analysis parameters for valid and reliable phase discrimination. Specifically, the importance of the minimum number of indexed bands and the maximum solution error has been investigated, with values of 7-9 and 1.0-1.5°, respectively, found to be needed for accurate analysis.

  8. Accurately Determining the Risks of Rising Sea Level

    NASA Astrophysics Data System (ADS)

    Marbaix, Philippe; Nicholls, Robert J.

    2007-10-01

    With the highest density of people and the greatest concentration of economic activity located in the coastal regions, sea level rise is an important concern as the climate continues to warm. Subsequent flooding may potentially disrupt industries, populations, and livelihoods, particularly in the long term if the climate is not quickly stabilized [McGranahan et al., 2007; Tol et al., 2006]. To help policy makers understand these risks, a more accurate description of hazards posed by rising sea levels is needed at the global scale, even though the impacts in specific regions are better known.

  9. Beam Profile Monitor With Accurate Horizontal And Vertical Beam Profiles

    DOEpatents

    Havener, Charles C. [Knoxville, TN]; Al-Rejoub, Riad [Oak Ridge, TN]

    2005-12-26

    A widely used scanner device that rotates a single helically shaped wire probe in and out of a particle beam at different beamline positions to give a pair of mutually perpendicular beam profiles is modified by the addition of a second wire probe. As a result, a pair of mutually perpendicular beam profiles is obtained at a first beamline position, and a second pair of mutually perpendicular beam profiles is obtained at a second beamline position. The simple modification not only provides more accurate beam profiles, but also provides a measurement of the beam divergence and quality in a single compact device.

  10. Vibration of clamped right triangular thin plates: Accurate simplified solutions

    NASA Astrophysics Data System (ADS)

    Saliba, H. T.

    1994-12-01

    Use of the superposition techniques in the free-vibration analyses of thin plates, as they were first introduced by Gorman, has provided simple and effective solutions to a vast number of rectangular plate problems. A modified superposition method is presented that is a noticeable improvement over existing techniques. It deals only with simple support conditions, leading to a simple, highly accurate, and very economical solution to the free-vibration problem of simply-supported right angle triangular plates. The modified method is also applicable to clamped-edge conditions.

  11. Accurate LTE abundances for some lambda Boo stars

    NASA Astrophysics Data System (ADS)

    Andrievsky, S. M.; Chernyshova, I. V.; Klochkova, V. G.; Panchuk, V. E.

    1998-04-01

    High-resolution and high S/N CCD spectra were analyzed to determine accurate LTE abundances in four lambda Boo stars: pi1 Ori, 29 Cyg, HR 8203 and 15 And. In general, 14 chemical elements were investigated. The main results are the following: all stars have a strong deficiency of the majority of investigated metals. Oxygen exhibits a moderate deficiency. The carbon abundance is close to the solar one. The results obtained support an accretion/diffusion model, which is currently adopted for the explanation of the lambda Boo phenomenon.

  12. Pink-Beam, Highly-Accurate Compact Water Cooled Slits

    SciTech Connect

    Lyndaker, Aaron; Deyhim, Alex; Jayne, Richard; Waterman, Dave; Caletka, Dave; Steadman, Paul; Dhesi, Sarnjeet

    2007-01-19

    Advanced Design Consulting, Inc. (ADC) has designed accurate compact slits for applications where high precision is required. The system consists of vertical and horizontal slit mechanisms, a vacuum vessel which houses them, water cooling lines with vacuum guards connected to the individual blades, stepper motors with linear encoders, limit (home position) switches and electrical connections including internal wiring for a drain current measurement system. The total slit size is adjustable from 0 to 15 mm both vertically and horizontally. Each of the four blades are individually controlled and motorized. In this paper, a summary of the design and Finite Element Analysis of the system are presented.

  13. Detection and accurate localization of harmonic chipless tags

    NASA Astrophysics Data System (ADS)

    Dardari, Davide

    2015-12-01

    We investigate the detection and localization properties of harmonic tags working at microwave frequencies. A two-tone interrogation signal and a dedicated signal processing scheme at the receiver are proposed to eliminate phase ambiguities caused by the short signal wavelength and to provide accurate distance/position estimation even in the presence of clutter and multipath. The theoretical limits on tag detection and localization accuracy are investigated starting from a concise characterization of harmonic backscattered signals. Numerical results show that accuracies in the order of centimeters are feasible within an operational range of a few meters in the RFID UHF band.
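
    The role of the two-tone interrogation in removing phase ambiguity can be seen in a simplified ranging sketch: each tone's round-trip phase wraps many times, but the phase difference between two closely spaced tones varies slowly with distance. The frequencies and distance below are assumed values, and the calculation is a generic two-tone illustration rather than the paper's processing scheme.

```python
import numpy as np

# Hedged, simplified illustration of two-tone ranging: each tone's phase is
# ambiguous modulo 2*pi, but the *difference* of the two phases resolves the
# ambiguity as long as the round-trip distance stays below c / (f2 - f1).

c = 3e8
f1, f2 = 2.40e9, 2.41e9          # interrogation tones (assumed values)
d_true = 3.2                      # one-way tag distance in metres

# Round-trip phases a receiver would measure (wrapped to [0, 2*pi)).
phi1 = (2 * np.pi * f1 * 2 * d_true / c) % (2 * np.pi)
phi2 = (2 * np.pi * f2 * 2 * d_true / c) % (2 * np.pi)

# Distance recovered from the wrapped phase difference.
dphi = (phi2 - phi1) % (2 * np.pi)
d_est = dphi * c / (4 * np.pi * (f2 - f1))
print(f"max unambiguous range: {c / (2 * (f2 - f1)):.1f} m")
print(f"estimated distance:    {d_est:.2f} m (true {d_true} m)")
```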

  14. Accurate and Sensitive Peptide Identification with Mascot Percolator

    PubMed Central

    Brosch, Markus; Yu, Lu; Hubbard, Tim; Choudhary, Jyoti

    2009-01-01

    Sound scoring methods for sequence database search algorithms such as Mascot and Sequest are essential for sensitive and accurate peptide and protein identifications from proteomic tandem mass spectrometry data. In this paper, we present a software package that interfaces Mascot with Percolator, a well-performing machine learning method for rescoring database search results, and demonstrate it to be amenable for both low- and high-accuracy mass spectrometry data, outperforming all available Mascot scoring schemes as well as providing reliable significance measures. Mascot Percolator can be readily used as a stand-alone tool or integrated into existing data analysis pipelines. PMID:19338334

  15. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    PubMed

    Shortis, Mark

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172

  16. Accurate method of modeling cluster scaling relations in modified gravity

    NASA Astrophysics Data System (ADS)

    He, Jian-hua; Li, Baojiu

    2016-06-01

    We propose a new method to model cluster scaling relations in modified gravity. Using a suite of nonradiative hydrodynamical simulations, we show that the scaling relations of accumulated gas quantities, such as the Sunyaev-Zel'dovich effect (Compton-y parameter) and the x-ray Compton-y parameter, can be accurately predicted using the known results in the ΛCDM model with a precision of ~3%. This method provides a reliable way to analyze the gas physics in modified gravity using the less demanding and much more efficient pure cold dark matter simulations. Our results therefore have important theoretical and practical implications in constraining gravity using cluster surveys.

  17. Accurate energy levels for singly ionized platinum (Pt II)

    NASA Technical Reports Server (NTRS)

    Reader, Joseph; Acquista, Nicolo; Sansonetti, Craig J.; Engleman, Rolf, Jr.

    1988-01-01

    New observations of the spectrum of Pt II have been made with hollow-cathode lamps. The region from 1032 to 4101 A was observed photographically with a 10.7-m normal-incidence spectrograph. The region from 2245 to 5223 A was observed with a Fourier-transform spectrometer. Wavelength measurements were made for 558 lines. The uncertainties vary from 0.0005 to 0.004 A. From these measurements and three parity-forbidden transitions in the infrared, accurate values were determined for 28 even and 72 odd energy levels of Pt II.

  18. Accurate dynamics in an azimuthally-symmetric accelerating cavity

    NASA Astrophysics Data System (ADS)

    Appleby, R. B.; Abell, D. T.

    2015-02-01

    We consider beam dynamics in azimuthally-symmetric accelerating cavities, using the EMMA FFAG cavity as an example. By fitting a vector potential to the field map, we represent the linear and non-linear dynamics using truncated power series and mixed-variable generating functions. The analysis provides an accurate model for particle trajectories in the cavity, reveals potentially significant and measurable effects on the dynamics, and shows differences between cavity focusing models. The approach provides a unified treatment of transverse and longitudinal motion, and facilitates detailed map-based studies of motion in complex machines like FFAGs.

  19. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    PubMed Central

    Shortis, Mark

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172

  20. Accurate Excited State Geometries within Reduced Subspace TDDFT/TDA.

    PubMed

    Robinson, David

    2014-12-01

    A method for the calculation of TDDFT/TDA excited state geometries within a reduced subspace of Kohn-Sham orbitals has been implemented and tested. Accurate geometries are found for all of the fluorophore-like molecules tested, with at most all occupied valence orbitals and half of the virtual orbitals included, and for some molecules even fewer orbitals. Efficiency gains of between 15 and 30% are found for essentially the same level of accuracy as a standard TDDFT/TDA excited state geometry optimization calculation. PMID:26583218

  1. Highly Accurate Inverse Consistent Registration: A Robust Approach

    PubMed Central

    Reuter, Martin; Rosas, H. Diana; Fischl, Bruce

    2010-01-01

    The registration of images is a task that is at the core of many applications in computer vision. In computational neuroimaging where the automated segmentation of brain structures is frequently used to quantify change, a highly accurate registration is necessary for motion correction of images taken in the same session, or across time in longitudinal studies where changes in the images can be expected. This paper, inspired by Nestares and Heeger (2000), presents a method based on robust statistics to register images in the presence of differences, such as jaw movement, differential MR distortions and true anatomical change. The approach we present guarantees inverse consistency (symmetry), can deal with different intensity scales and automatically estimates a sensitivity parameter to detect outlier regions in the images. The resulting registrations are highly accurate due to their ability to ignore outlier regions and show superior robustness with respect to noise, to intensity scaling and outliers when compared to state-of-the-art registration tools such as FLIRT (in FSL) or the coregistration tool in SPM. PMID:20637289

  2. Strategy for accurate liver intervention by an optical tracking system

    PubMed Central

    Lin, Qinyong; Yang, Rongqian; Cai, Ken; Guan, Peifeng; Xiao, Weihu; Wu, Xiaoming

    2015-01-01

    Image-guided navigation for radiofrequency ablation of liver tumors requires the accurate guidance of needle insertion into a tumor target. The main challenge of image-guided navigation for radiofrequency ablation of liver tumors is the occurrence of liver deformations caused by respiratory motion. This study reports a strategy of real-time automatic registration to track custom fiducial markers glued onto the surface of a patient’s abdomen to find the respiratory phase, in which the static preoperative CT is performed. Custom fiducial markers are designed. Real-time automatic registration method consists of the automatic localization of custom fiducial markers in the patient and image spaces. The fiducial registration error is calculated in real time and indicates if the current respiratory phase corresponds to the phase of the static preoperative CT. To demonstrate the feasibility of the proposed strategy, a liver simulator is constructed and two volunteers are involved in the preliminary experiments. An ex-vivo porcine liver model is employed to further verify the strategy for liver intervention. Experimental results demonstrate that real-time automatic registration method is rapid, accurate, and feasible for capturing the respiratory phase from which the static preoperative CT anatomical model is generated by tracking the movement of the skin-adhered custom fiducial markers. PMID:26417501
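
    A minimal sketch of the point-based registration step that underlies such a system is given below: the fiducial markers localized in patient space are rigidly aligned to their image-space positions with the Kabsch/Procrustes method, and the fiducial registration error (FRE) is reported. Marker coordinates, noise level, and units are assumptions; the actual real-time pipeline (automatic marker localization, respiratory gating) is considerably richer.

```python
import numpy as np

# Hedged sketch: rigid point-based registration of fiducials and FRE reporting.

def rigid_register(P, Q):
    """Return R, t minimizing ||R @ P_i + t - Q_i|| over all markers."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

rng = np.random.default_rng(3)
markers_ct = rng.uniform(0, 100, size=(6, 3))             # image-space fiducials
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
markers_patient = (markers_ct - 50) @ R_true.T + 200 + rng.normal(0, 0.5, (6, 3))

R, t = rigid_register(markers_patient, markers_ct)
fre = np.linalg.norm(markers_patient @ R.T + t - markers_ct, axis=1)
print(f"FRE per marker (mm, assumed units): {np.round(fre, 2)}")
print(f"RMS FRE: {np.sqrt((fre ** 2).mean()):.2f}")
```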

  3. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate, even with a low-resolution facial image size of 64 × 64 pixels. An operation speed of less than 10 ms was achieved using a personal computer with a central processing unit (CPU) of 3 GHz and 2 GB memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: 0% false acceptance rate and 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low-resolution images such as web-based images or faces captured by a monitoring camera.
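
    For readers unfamiliar with filtering correlation, the sketch below implements a generic software phase-only correlation matcher built from FFTs. It illustrates the kind of operation involved, but it is not the authors' FARCO filter, and the 64 × 64 "faces" are random stand-in images.

```python
import numpy as np

# Hedged sketch: generic phase-only correlation; two images are compared via
# the sharpness of the normalized cross-correlation peak computed with FFTs.

def phase_correlation_peak(img_a, img_b, eps=1e-9):
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = Fa * np.conj(Fb)
    corr = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
    return corr.max()          # close to 1 for a matching (shifted) pair

rng = np.random.default_rng(4)
face = rng.random((64, 64))                      # stand-in for a 64x64 face image
same = np.roll(face, (3, -2), axis=(0, 1))       # same "face", slightly shifted
other = rng.random((64, 64))                     # different "face"

print(f"match score   : {phase_correlation_peak(face, same):.3f}")
print(f"impostor score: {phase_correlation_peak(face, other):.3f}")
```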

  4. Mouse models of human AML accurately predict chemotherapy response

    PubMed Central

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.

    2009-01-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691

  5. Accurate and efficient reconstruction of deep phylogenies from structured RNAs

    PubMed Central

    Stocsits, Roman R.; Letsch, Harald; Hertel, Jana; Misof, Bernhard; Stadler, Peter F.

    2009-01-01

    Ribosomal RNA (rRNA) genes are probably the most frequently used data source in phylogenetic reconstruction. Individual columns of rRNA alignments are not independent as a consequence of their highly conserved secondary structures. Unless explicitly taken into account, these correlations can distort the phylogenetic signal and/or lead to gross overestimates of tree stability. Maximum likelihood and Bayesian approaches are of course amenable to using RNA-specific substitution models that treat conserved base pairs appropriately, but require accurate secondary structure models as input. So far, however, no accurate and easy-to-use tool has been available for computing structure-aware alignments and consensus structures that can deal with the large rRNAs. The RNAsalsa approach is designed to fill this gap. Capitalizing on the improved accuracy of pairwise consensus structures and informed by a priori knowledge of group-specific structural constraints, the tool provides both alignments and consensus structures that are of sufficient accuracy for routine phylogenetic analysis based on RNA-specific substitution models. The power of the approach is demonstrated using two rRNA data sets: a mitochondrial rRNA set of 26 Mammalia, and a collection of 28S nuclear rRNAs representative of the five major echinoderm groups. PMID:19723687

  6. Accurate and efficient reconstruction of deep phylogenies from structured RNAs.

    PubMed

    Stocsits, Roman R; Letsch, Harald; Hertel, Jana; Misof, Bernhard; Stadler, Peter F

    2009-10-01

    Ribosomal RNA (rRNA) genes are probably the most frequently used data source in phylogenetic reconstruction. Individual columns of rRNA alignments are not independent as a consequence of their highly conserved secondary structures. Unless explicitly taken into account, these correlations can distort the phylogenetic signal and/or lead to gross overestimates of tree stability. Maximum likelihood and Bayesian approaches are of course amenable to using RNA-specific substitution models that treat conserved base pairs appropriately, but require accurate secondary structure models as input. So far, however, no accurate and easy-to-use tool has been available for computing structure-aware alignments and consensus structures that can deal with the large rRNAs. The RNAsalsa approach is designed to fill this gap. Capitalizing on the improved accuracy of pairwise consensus structures and informed by a priori knowledge of group-specific structural constraints, the tool provides both alignments and consensus structures that are of sufficient accuracy for routine phylogenetic analysis based on RNA-specific substitution models. The power of the approach is demonstrated using two rRNA data sets: a mitochondrial rRNA set of 26 Mammalia, and a collection of 28S nuclear rRNAs representative of the five major echinoderm groups. PMID:19723687

  7. Accurate phylogenetic classification of DNA fragments based onsequence composition

    SciTech Connect

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
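
    The composition-based idea can be illustrated with a toy sketch: represent each fragment by its normalized k-mer frequency vector and train a classifier on fragments of known origin. PhyloPythia itself uses support vector machines over oligonucleotide composition; the logistic-regression model, synthetic "genomes", and all parameters below are simplified stand-ins for illustration.

```python
from itertools import product
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hedged sketch: k-mer composition features plus a simple classifier standing
# in for the SVM-based approach described in the abstract.

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]
INDEX = {k: i for i, k in enumerate(KMERS)}

def kmer_profile(seq, k=4):
    """Normalized k-mer frequency vector of a DNA fragment."""
    counts = np.zeros(len(KMERS))
    for i in range(len(seq) - k + 1):
        j = INDEX.get(seq[i:i + k])
        if j is not None:
            counts[j] += 1
    return counts / max(counts.sum(), 1)

def random_fragment(gc, length, rng):
    """Synthetic fragment from a mock 'genome' with a given GC content."""
    p = [(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]   # A, C, G, T
    return "".join(rng.choice(list("ACGT"), size=length, p=p))

rng = np.random.default_rng(5)
X = [kmer_profile(random_fragment(gc, 1000, rng))
     for gc in (0.35, 0.65) for _ in range(100)]
y = [0] * 100 + [1] * 100                              # two mock "clades"

clf = LogisticRegression(max_iter=1000).fit(X, y)
test = kmer_profile(random_fragment(0.65, 1000, rng))
print("predicted clade:", clf.predict([test])[0])
```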

  8. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

    During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
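
    One ingredient of such a correction can be sketched in a few lines: a first-order normalization of the backscattering coefficient for the true local scattering area, using the local incidence angle derived from the DEM. This is a simplified, assumed form of the area correction only; the antenna-gain-pattern correction from aircraft position and attitude described in the paper is omitted.

```python
import numpy as np

# Hedged sketch: first-order area normalization of sigma0 using the local
# incidence angle from a DEM (a common simplified correction, not the full
# AIRSAR processing chain).

def sigma0_terrain_corrected(sigma0_ellipsoid_db, theta_ellipsoid, theta_local):
    """Scale sigma0 by the ratio of assumed to true scattering area (angles in rad)."""
    s_lin = 10 ** (sigma0_ellipsoid_db / 10.0)
    # Flat-ellipsoid processing assumes an area ~ 1/sin(theta_ellipsoid);
    # the true area on a tilted facet goes as ~ 1/sin(theta_local).
    s_corr = s_lin * np.sin(theta_local) / np.sin(theta_ellipsoid)
    return 10 * np.log10(s_corr)

theta_e = np.deg2rad(35.0)             # incidence angle w.r.t. the ellipsoid
theta_l = np.deg2rad(22.0)             # local incidence angle from the DEM
print(f"{sigma0_terrain_corrected(-8.0, theta_e, theta_l):.2f} dB")
```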

  9. Accurate perception of negative emotions predicts functional capacity in schizophrenia.

    PubMed

    Abram, Samantha V; Karpouzian, Tatiana M; Reilly, James L; Derntl, Birgit; Habel, Ute; Smith, Matthew J

    2014-04-30

    Several studies suggest facial affect perception (FAP) deficits in schizophrenia are linked to poorer social functioning. However, whether reduced functioning is associated with inaccurate perception of specific emotional valence or a global FAP impairment remains unclear. The present study examined whether impairment in the perception of specific emotional valences (positive, negative) and neutrality were uniquely associated with social functioning, using a multimodal social functioning battery. A sample of 59 individuals with schizophrenia and 41 controls completed a computerized FAP task, and measures of functional capacity, social competence, and social attainment. Participants also underwent neuropsychological testing and symptom assessment. Regression analyses revealed that only accurately perceiving negative emotions explained significant variance (7.9%) in functional capacity after accounting for neurocognitive function and symptoms. Partial correlations indicated that accurately perceiving anger, in particular, was positively correlated with functional capacity. FAP for positive, negative, or neutral emotions was not related to social competence or social attainment. Our findings were consistent with prior literature suggesting negative emotions are related to functional capacity in schizophrenia. Furthermore, the observed relationship between perceiving anger and performance of everyday living skills is novel and warrants further exploration. PMID:24524947

  10. Accurate Determination of Membrane Dynamics with Line-Scan FCS

    PubMed Central

    Ries, Jonas; Chiantia, Salvatore; Schwille, Petra

    2009-01-01

    Here we present an efficient implementation of line-scan fluorescence correlation spectroscopy (i.e., one-dimensional spatio-temporal image correlation spectroscopy) using a commercial laser scanning microscope, which allows the accurate measurement of diffusion coefficients and concentrations in biological lipid membranes within seconds. Line-scan fluorescence correlation spectroscopy is a calibration-free technique. Therefore, it is insensitive to optical artifacts, saturation, or incorrect positioning of the laser focus. In addition, it is virtually unaffected by photobleaching. Correction schemes for residual inhomogeneities and depletion of fluorophores due to photobleaching extend the applicability of line-scan fluorescence correlation spectroscopy to more demanding systems. This technique enabled us to measure accurate diffusion coefficients and partition coefficients of fluorescent lipids in phase-separating supported bilayers of three commonly used raft-mimicking compositions. Furthermore, we probed the temperature dependence of the diffusion coefficient in several model membranes, and in human embryonic kidney cell membranes not affected by temperature-induced optical aberrations. PMID:19254560

  11. Anisotropic Turbulence Modeling for Accurate Rod Bundle Simulations

    SciTech Connect

    Baglietto, Emilio

    2006-07-01

    An improved anisotropic eddy viscosity model has been developed for accurate predictions of the thermal hydraulic performances of nuclear reactor fuel assemblies. The proposed model adopts a non-linear formulation of the stress-strain relationship in order to reproduce the anisotropic phenomena, in combination with an optimized low-Reynolds-number formulation based on Direct Numerical Simulation (DNS) to produce correct damping of the turbulent viscosity in the near-wall region. This work underlines the importance of accurate anisotropic modeling to faithfully reproduce the scale of the turbulence-driven secondary flows inside the bundle subchannels, by comparison with various isothermal and heated experimental cases. The very low scale secondary motion is responsible for the increased turbulence transport which produces a noticeable homogenization of the velocity distribution and consequently of the circumferential cladding temperature distribution, which is of main interest in bundle design. Various fully developed bare bundle test cases are shown for different geometrical and flow conditions, where the proposed model shows clearly improved predictions, in close agreement with experimental findings, for regular as well as distorted geometries. Finally, the applicability of the model for practical bundle calculations is evaluated through its application in the high-Reynolds form on coarse grids, with excellent results. (author)

  12. Exploring accurate Poisson–Boltzmann methods for biomolecular simulations

    PubMed Central

    Wang, Changhao; Wang, Jun; Cai, Qin; Li, Zhilin; Zhao, Hong-Kai; Luo, Ray

    2013-01-01

    Accurate and efficient treatment of electrostatics is a crucial step in computational analyses of biomolecular structures and dynamics. In this study, we have explored a second-order finite-difference numerical method to solve the widely used Poisson–Boltzmann equation for electrostatic analyses of realistic bio-molecules. The so-called immersed interface method was first validated and found to be consistent with the classical weighted harmonic averaging method for a diversified set of test biomolecules. The numerical accuracy and convergence behaviors of the new method were next analyzed in its computation of numerical reaction field grid potentials, energies, and atomic solvation forces. Overall similar convergence behaviors were observed as those by the classical method. Interestingly, the new method was found to deliver more accurate and better-converged grid potentials than the classical method on or nearby the molecular surface, though the numerical advantage of the new method is reduced when grid potentials are extrapolated to the molecular surface. Our exploratory study indicates the need for further improving interpolation/extrapolation schemes in addition to the developments of higher-order numerical methods that have attracted most attention in the field. PMID:24443709
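
    For orientation, the sketch below sets up the classical weighted-harmonic-averaging discretization, which the study uses as its reference, on a 1-D toy version of the variable-dielectric Poisson problem. The geometry, dielectric values, and source term are arbitrary, and the real solvers of course work on the full 3-D Poisson-Boltzmann equation.

```python
import numpy as np

# Hedged sketch: 1-D finite-difference Poisson solver with harmonic averaging
# of a discontinuous dielectric constant at interface grid edges.

n, L = 200, 1.0
h = L / (n + 1)
x = np.linspace(h, L - h, n)
eps = np.where(x < 0.5, 2.0, 80.0)        # "solute" vs "solvent" dielectric
rho = np.exp(-((x - 0.25) / 0.05) ** 2)   # a smooth source term

# Dielectric at half grid points via harmonic averaging across the interface.
eps_half = np.empty(n + 1)
eps_half[1:-1] = 2.0 * eps[:-1] * eps[1:] / (eps[:-1] + eps[1:])
eps_half[0], eps_half[-1] = eps[0], eps[-1]

# Assemble -d/dx( eps(x) d phi/dx ) = rho with phi = 0 at both ends.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = (eps_half[i] + eps_half[i + 1]) / h ** 2
    if i > 0:
        A[i, i - 1] = -eps_half[i] / h ** 2
    if i < n - 1:
        A[i, i + 1] = -eps_half[i + 1] / h ** 2

phi = np.linalg.solve(A, rho)
print(f"potential near the interface: {phi[n // 2]:.4f} (arbitrary units)")
```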

  13. Mouse models of human AML accurately predict chemotherapy response.

    PubMed

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S; Zhao, Zhen; Rappaport, Amy R; Luo, Weijun; McCurrach, Mila E; Yang, Miao-Miao; Dolan, M Eileen; Kogan, Scott C; Downing, James R; Lowe, Scott W

    2009-04-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691

  14. An accurate model potential for alkali neon systems.

    PubMed

    Zanuttini, D; Jacquet, E; Giglio, E; Douady, J; Gervais, B

    2009-12-01

    We present a detailed investigation of the ground and lowest excited states of M-Ne dimers, for M=Li, Na, and K. We show that the potential energy curves of these Van der Waals dimers can be obtained accurately by considering the alkali neon systems as one-electron systems. Following previous authors, the model describes the evolution of the alkali valence electron in the combined potentials of the alkali and neon cores by means of core polarization pseudopotentials. The key parameter for an accurate model is the M(+)-Ne potential energy curve, which was obtained by means of ab initio CCSD(T) calculation using a large basis set. For each MNe dimer, a systematic comparison with ab initio computation of the potential energy curve for the X, A, and B states shows the remarkable accuracy of the model. The vibrational analysis and the comparison with existing experimental data strengthens this conclusion and allows for a precise assignment of the vibrational levels. PMID:19968334

  15. Ultra-accurate collaborative information filtering via directed user similarity

    NASA Astrophysics Data System (ADS)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge of collaborative filtering (CF) is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users would be larger than the ones in the opposite direction, the large-degree users' selections are recommended extensively by the traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to depress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm specifically to address the challenge of accuracy and diversity of the CF algorithm. The numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms the state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score could reach 0.0767 and 0.0402, which is enhanced by 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also enhanced greatly. Without relying on any context-specific information, tuning the similarity direction of CF algorithms can yield accurate and diverse recommendations. This work suggests that the user similarity direction is an important factor to improve the personalized recommendation performance.
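
    The flavor of a directed (asymmetric) user similarity can be conveyed with a small sketch: each neighbour's overlap with the target user is normalized by that neighbour's own degree, so S(u, v) differs from S(v, u) and large-degree users no longer dominate the recommendation. This is a simplified illustration on a toy selection matrix, not the paper's HDCF algorithm.

```python
import numpy as np

# Hedged sketch of user-based CF with a directed (asymmetric) similarity.

R = np.array([[1, 1, 1, 0, 0],      # toy user-item selection matrix (5 users x 5 items)
              [1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1],
              [0, 1, 1, 1, 1],
              [1, 1, 1, 1, 0]], dtype=float)

degree = R.sum(axis=1)                       # number of items each user collected
overlap = R @ R.T                            # common items between user pairs
S = overlap / degree[np.newaxis, :]          # S[u, v] = overlap(u, v) / k_v  (directed)
np.fill_diagonal(S, 0.0)

scores = S @ R                               # predicted interest of users in items
scores[R > 0] = -np.inf                      # mask items already collected
target_user = 2
ranking = np.argsort(-scores[target_user])
print("recommended items for user 2:", ranking[:3])
```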

  16. An Accurate Temperature Correction Model for Thermocouple Hygrometers

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241

  17. An accurate temperature correction model for thermocouple hygrometers.

    PubMed

    Savage, M J; Cass, A; de Jager, J M

    1982-02-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38 degrees C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25 degrees C, if the calibration slopes are corrected for temperature. PMID:16662241
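    Both versions of this abstract hinge on applying a temperature-dependent calibration slope rather than a single fixed one. The sketch below illustrates only that bookkeeping step, assuming a simple linear slope-versus-temperature relation from a two-temperature calibration; the published models instead derive the slope from the thermojunction radius and the theoretical voltage sensitivity, which is not reproduced here. All numbers are hypothetical.

```python
def corrected_slope(T, cal1, cal2):
    """Calibration slope (uV per MPa) at sample temperature T (deg C).

    cal1, cal2: (temperature, slope) pairs from a two-temperature calibration.
    A straight-line interpolation between the two calibration points is assumed
    here purely for illustration.
    """
    (T1, s1), (T2, s2) = cal1, cal2
    return s1 + (s2 - s1) * (T - T1) / (T2 - T1)

def water_potential(output_uV, T, cal1, cal2):
    """Convert a psychrometer reading (uV) into water potential (MPa)."""
    return output_uV / corrected_slope(T, cal1, cal2)

if __name__ == "__main__":
    # hypothetical calibration: 0.47 uV/MPa at 15 C and 0.75 uV/MPa at 35 C
    cal_15, cal_35 = (15.0, 0.47), (35.0, 0.75)
    print(water_potential(output_uV=-1.2, T=25.0, cal1=cal_15, cal2=cal_35))
```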

  18. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    DOE PAGES Beta

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-04-14

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.

  19. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    SciTech Connect

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-04-14

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.
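    As a loose illustration of the ML-SQC idea in the two records above, the following sketch uses kernel ridge regression to learn a per-molecule correction from descriptor vectors. Note the simplification: the published approach learns corrections to the OM2 parameters themselves, whereas this sketch learns a correction to the method's output; the data here are synthetic stand-ins.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: a descriptor vector per molecule and the error of a
# cheap baseline method against a reference energy (all values are made up).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 30))                       # "molecular descriptors"
baseline_error = X[:, 0] * 2.0 + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, baseline_error,
                                                     test_size=0.2, random_state=0)

# Kernel ridge regression learns the systematic part of the baseline error;
# hyperparameters would normally be chosen by cross-validation.
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.05)
model.fit(X_train, y_train)

mae_before = np.abs(y_test).mean()                    # error of the uncorrected baseline
mae_after = np.abs(y_test - model.predict(X_test)).mean()
print(f"MAE before correction: {mae_before:.3f}, after ML correction: {mae_after:.3f}")
```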

  20. Accurate optical CD profiler based on specialized finite element method

    NASA Astrophysics Data System (ADS)

    Carrero, Jesus; Perçin, Gökhan

    2012-03-01

    As the semiconductor industry moves to very low-k1 patterning solutions, the metrology problems facing process engineers are becoming much more complex. Choosing the right optical critical dimension (OCD) metrology technique is essential for bridging the metrology gap and achieving the required manufacturing volume throughput. Critical dimension scanning electron microscope (CD-SEM) measurements are usually distorted by the high aspect ratio of the photoresist and hard mask layers. CD-SEM measurements cease to correlate with complex three-dimensional profiles, as in the cases of double patterning and FinFETs, thus necessitating sophisticated, accurate and fast computational methods to bridge the gap. In this work, a suite of computational methods that complement advanced OCD equipment and enable it to operate at higher accuracy has been developed. In this article, a novel method for accurately modeling OCD profiles is presented. A finite element formulation in primal form is used to discretize the equations. The implementation uses specialized finite element spaces to solve the Maxwell equations in two dimensions.

  1. Accurate interlaminar stress recovery from finite element analysis

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Riggs, H. Ronald

    1994-01-01

    The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing element discretization, producing superior-accuracy strains and their first gradients. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of the equilibrium equations to obtain accurate interlaminar shear stresses. The test problem is a simply supported rectangular plate under a doubly sinusoidal load, which has an exact analytic solution that serves as a measure of goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.

  2. An Accurate and Dynamic Computer Graphics Muscle Model

    NASA Technical Reports Server (NTRS)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  3. Accurate 3D quantification of the bronchial parameters in MDCT

    NASA Astrophysics Data System (ADS)

    Saragaglia, A.; Fetita, C.; Preteux, F.; Brillet, P. Y.; Grenier, P. A.

    2005-08-01

    The assessment of bronchial reactivity and wall remodeling in asthma plays a crucial role in better understanding such a disease and evaluating therapeutic responses. Today, multi-detector computed tomography (MDCT) makes it possible to perform an accurate estimation of bronchial parameters (lumen and wall areas) by allowing a quantitative analysis in a cross-section plane orthogonal to the bronchus axis. This paper provides the tools for such an analysis by developing a 3D investigation method which relies on 3D reconstruction of the bronchial lumen and central axis computation. Cross-section images at bronchial locations interactively selected along the central axis are generated at appropriate spatial resolution. An automated approach is then developed for accurately segmenting the inner and outer bronchi contours on the cross-section images. It combines mathematical morphology operators, such as "connection cost", and energy-controlled propagation in order to overcome the difficulties raised by vessel adjacencies and wall irregularities. The segmentation accuracy was validated with respect to a 3D mathematically-modeled phantom of a bronchus-vessel pair which mimics the characteristics of real data in terms of gray-level distribution, caliber and orientation. When applying the developed quantification approach to such a model with calibers ranging from 3 to 10 mm in diameter, the lumen area relative errors varied from 3.7% to 0.15%, while the bronchus area was estimated with a relative error of less than 5.1%.

  4. Accurate and Precise Zinc Isotope Ratio Measurements in Urban Aerosols

    NASA Astrophysics Data System (ADS)

    Weiss, D.; Gioia, S. M. C. L.; Coles, B.; Arnold, T.; Babinski, M.

    2009-04-01

    We developed an analytical method and constrained procedural boundary conditions that enable accurate and precise Zn isotope ratio measurements in urban aerosols. We also demonstrate the potential of this new isotope system for air pollutant source tracing. The procedural blank is around 5 ng and significantly lower than published methods due to a tailored ion chromatographic separation. Accurate mass bias correction using external correction with Cu is limited to a Zn sample content of approximately 50 ng due to the combined effect of the blank contribution of Cu and Zn from the ion exchange procedure and the need to maintain a Cu/Zn ratio of approximately 1. Mass bias is corrected for by applying the common analyte internal standardization approach. Comparison with other mass bias correction methods demonstrates the accuracy of the method. The average precision of δ66Zn determinations in aerosols is around 0.05 per mil per atomic mass unit. The method was tested on aerosols collected in Sao Paulo City, Brazil. The measurements reveal significant variations in δ66Zn, ranging between -0.96 and -0.37 per mil in coarse and between -1.04 and 0.02 per mil in fine particulate matter. This variability suggests that Zn isotopic compositions can distinguish atmospheric sources. The isotopically light signature suggests traffic as the main source.

  5. More-Accurate Model of Flows in Rocket Injectors

    NASA Technical Reports Server (NTRS)

    Hosangadi, Ashvin; Chenoweth, James; Brinckman, Kevin; Dash, Sanford

    2011-01-01

    An improved computational model for simulating flows in liquid-propellant injectors in rocket engines has been developed. Models like this one are needed for predicting fluxes of heat in, and performances of, the engines. An important part of predicting performance is predicting fluctuations of temperature, fluctuations of concentrations of chemical species, and effects of turbulence on diffusion of heat and chemical species. Customarily, diffusion effects are represented by parameters known in the art as the Prandtl and Schmidt numbers. Prior formulations include ad hoc assumptions of constant values of these parameters, but these assumptions and, hence, the formulations, are inaccurate for complex flows. In the improved model, these parameters are neither constant nor specified in advance: instead, they are variables obtained as part of the solution. Consequently, this model represents the effects of turbulence on diffusion of heat and chemical species more accurately than prior formulations do, and may enable more-accurate prediction of mixing and flows of heat in rocket-engine combustion chambers. The model has been implemented within CRUNCH CFD, a proprietary computational fluid dynamics (CFD) computer program, and has been tested within that program. The model could also be implemented within other CFD programs.

  6. Novel dispersion tolerant interferometry method for accurate measurements of displacement

    NASA Astrophysics Data System (ADS)

    Bradu, Adrian; Maria, Michael; Leick, Lasse; Podoleanu, Adrian G.

    2015-05-01

    We demonstrate that the recently proposed master-slave interferometry method is able to provide truly dispersion-free depth profiles in a spectrometer-based set-up that can be used for accurate displacement measurements in sensing and optical coherence tomography. The proposed technique is based on correlating the channelled spectra produced by the linear camera in the spectrometer with previously recorded masks. As the technique is not based on Fourier transformations (FT), it does not require any resampling of data and is immune to any amount of dispersion left unbalanced in the system. In order to prove the tolerance of the technique to dispersion, different lengths of optical fiber are used in the interferometer to introduce dispersion, and it is demonstrated that neither the sensitivity profile versus optical path difference (OPD) nor the depth resolution is affected. In contrast, it is shown that the classical FT-based methods using calibrated data provide less accurate optical path length measurements and exhibit a quicker decay of sensitivity with OPD.

  7. Accurate Anharmonic IR Spectra from Integrated Cc/dft Approach

    NASA Astrophysics Data System (ADS)

    Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Carnimeo, Ivan; Puzzarini, Cristina

    2014-06-01

    The recent implementation of the computation of infrared (IR) intensities beyond the double harmonic approximation [1] paved the route to routine calculations of infrared spectra for a wide set of molecular systems. Contrary to common beliefs, second-order perturbation theory is able to deliver results of high accuracy provided that anharmonic resonances are properly managed [1,2]. It has already been shown for several small closed- and open-shell molecular systems that the differences between coupled cluster (CC) and DFT anharmonic wavenumbers are mainly due to the harmonic terms, paving the route to effective yet accurate hybrid CC/DFT schemes [2]. In this work we show that hybrid CC/DFT models can also be applied to the IR intensities, leading to the simulation of highly accurate, fully anharmonic IR spectra for medium-size molecules, including ones of atmospheric interest, showing in all cases good agreement with experiment even in the spectral ranges where non-fundamental transitions are predominant [3]. [1] J. Bloino and V. Barone, J. Chem. Phys. 136, 124108 (2012) [2] V. Barone, M. Biczysko, J. Bloino, Phys. Chem. Chem. Phys., 16, 1759-1787 (2014) [3] I. Carnimeo, C. Puzzarini, N. Tasinato, P. Stoppa, A. P. Charmet, M. Biczysko, C. Cappelli and V. Barone, J. Chem. Phys., 139, 074310 (2013)

  8. Accurate measurements of dynamics and reproducibility in small genetic networks

    PubMed Central

    Dubuis, Julien O; Samanta, Reba; Gregor, Thomas

    2013-01-01

    Quantification of gene expression has become a central tool for understanding genetic networks. In many systems, the only viable way to measure protein levels is by immunofluorescence, which is notorious for its limited accuracy. Using the early Drosophila embryo as an example, we show that careful identification and control of experimental error allows for highly accurate gene expression measurements. We generated antibodies in different host species, allowing for simultaneous staining of four Drosophila gap genes in individual embryos. Careful error analysis of hundreds of expression profiles reveals that less than ∼20% of the observed embryo-to-embryo fluctuations stem from experimental error. These measurements make it possible to extract not only very accurate mean gene expression profiles but also their naturally occurring fluctuations of biological origin and corresponding cross-correlations. We use this analysis to extract gap gene profile dynamics with ∼1 min accuracy. The combination of these new measurements and analysis techniques reveals a twofold increase in profile reproducibility owing to a collective network dynamics that relays positional accuracy from the maternal gradients to the pair-rule genes. PMID:23340845

  9. Quality metric for accurate overlay control in <20nm nodes

    NASA Astrophysics Data System (ADS)

    Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki

    2013-04-01

    The semiconductor industry is moving toward 20nm nodes and below. As the overlay (OVL) budget gets tighter at these advanced nodes, accuracy in each nanometer of OVL error becomes critical. When process owners select OVL targets and methods for their process, they must do so wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going toward the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named 'Qmerit' for its imaging-based OVL (IBO) targets, which is obtained on-the-fly for each OVL measurement point in X & Y. This Qmerit score will enable process owners to select compatible targets which provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.

  10. Accurate tremor locations from coherent S and P waves

    NASA Astrophysics Data System (ADS)

    Armbruster, John G.; Kim, Won-Young; Rubin, Allan M.

    2014-06-01

    Nonvolcanic tremor is an important component of the slow slip processes which load faults from below, but accurately locating tremor has proven difficult because tremor rarely contains clear P or S wave arrivals. Here we report the observation of coherence in the shear and compressional waves of tremor at widely separated stations which allows us to detect and accurately locate tremor events. An event detector using data from two stations sees the onset of tremor activity in the Cascadia tremor episodes of February 2003, July 2004, and September 2005 and confirms the previously reported south to north migration of the tremor. Event detectors using data from three and four stations give S and P arrival times of high accuracy. The hypocenters of the tremor events fall at depths of ˜30 to ˜40 km and define a narrow plane dipping at a shallow angle to the northeast, consistent with the subducting plate interface. The S wave polarizations and P wave first motions define a source mechanism in agreement with the northeast convergence seen in geodetic observations of slow slip. Tens of thousands of locations determined by constraining the events to the plate interface show tremor sources highly clustered in space with a strongly similar pattern of sources in the three episodes examined. The deeper sources generate tremor in minor episodes as well. The extent to which the narrow bands of tremor sources overlap between the three major episodes suggests relative epicentral location errors as small as 1-2 km.

  11. Individual Differences in Accurately Judging Personality From Text.

    PubMed

    Hall, Judith A; Goh, Jin X; Mast, Marianne Schmid; Hagedorn, Christian

    2016-08-01

    This research examines correlates of accuracy in judging Big Five traits from first-person text excerpts. Participants in six studies were recruited from psychology courses or online. In each study, participants performed a task of judging personality from text and performed other ability tasks and/or filled out questionnaires. Participants who were more accurate in judging personality from text were more likely to be female; had personalities that were more agreeable, conscientious, and feminine, and less neurotic and dominant (all controlling for participant gender); scored higher on empathic concern; self-reported more interest in, and attentiveness to, people's personalities in their daily lives; and reported reading more for pleasure, especially fiction. Accuracy was not associated with SAT scores but had a significant relation to vocabulary knowledge. Accuracy did not correlate with tests of judging personality and emotion based on audiovisual cues. This research is the first to address individual differences in accurate judgment of personality from text, thus adding to the literature on correlates of the good judge of personality. PMID:25720617

  12. A general, accurate procedure for calculating molecular interaction force.

    PubMed

    Yang, Pinghai; Qian, Xiaoping

    2009-09-15

    The determination of molecular interaction forces, e.g., the van der Waals force, between macroscopic bodies is of fundamental importance for understanding sintering, adhesion and fracture processes. In this paper, we develop an accurate, general procedure for van der Waals force calculation. This approach extends a surface formulation that converts a six-dimensional (6D) volume integral into a 4D surface integral for the force calculation. It uses non-uniform rational B-spline (NURBS) surfaces to represent object surfaces. Surface integrals are then carried out on the parametric domain of the NURBS surfaces. The approach combines the advantages of NURBS surface representation and the surface formulation: (1) molecular interactions between arbitrarily shaped objects can be represented and evaluated by the NURBS model; furthermore, common geometries such as spheres, cones and planes can be represented exactly, so interaction forces are calculated accurately; (2) calculation efficiency is improved by converting the volume integral to the surface integral. This approach is implemented and validated via comparison with analytical solutions for simple geometries. Calculation of the van der Waals force between complex geometries with surface roughness is also demonstrated. A tutorial on the NURBS approach is given in Appendix A. PMID:19596335

  13. Accurate Evaluation Method of Molecular Binding Affinity from Fluctuation Frequency

    NASA Astrophysics Data System (ADS)

    Hoshino, Tyuji; Iwamoto, Koji; Ode, Hirotaka; Ohdomari, Iwao

    2008-05-01

    Exact estimation of the molecular binding affinity is significantly important for drug discovery. The energy calculation is a direct method to compute the strength of the interaction between two molecules. This energetic approach is, however, not accurate enough to evaluate a slight difference in binding affinity when distinguishing a prospective substance from dozens of candidates for medicine. Hence more accurate estimation of drug efficacy in a computer is currently demanded. Previously we proposed a concept of estimating molecular binding affinity, focusing on the fluctuation at an interface between two molecules. The aim of this paper is to demonstrate the compatibility between the proposed computational technique and experimental measurements, through several examples for computer simulations of an association of human immunodeficiency virus type-1 (HIV-1) protease and its inhibitor (an example for a drug-enzyme binding), a complexation of an antigen and its antibody (an example for a protein-protein binding), and a combination of estrogen receptor and its ligand chemicals (an example for a ligand-receptor binding). The proposed affinity estimation has proven to be a promising technique in the advanced stage of the discovery and the design of drugs.

  14. Isomerism of Cyanomethanimine: Accurate Structural, Energetic, and Spectroscopic Characterization.

    PubMed

    Puzzarini, Cristina

    2015-11-25

    The structures, relative stabilities, and rotational and vibrational parameters of the Z-C-, E-C-, and N-cyanomethanimine isomers have been evaluated using state-of-the-art quantum-chemical approaches. Equilibrium geometries have been calculated by means of a composite scheme based on coupled-cluster calculations that accounts for the extrapolation to the complete basis set limit and core-correlation effects. The latter approach is proved to provide molecular structures with an accuracy of 0.001-0.002 Å and 0.05-0.1° for bond lengths and angles, respectively. Systematically extrapolated ab initio energies, accounting for electron correlation through coupled-cluster theory, including up to single, double, triple, and quadruple excitations, and corrected for core-electron correlation and anharmonic zero-point vibrational energy, have been used to accurately determine relative energies and the Z-E isomerization barrier with an accuracy of about 1 kJ/mol. Vibrational and rotational spectroscopic parameters have been investigated by means of hybrid schemes that allow us to obtain rotational constants accurate to about a few megahertz and vibrational frequencies with a mean absolute error of ∼1%. Where available, for all properties considered, a very good agreement with experimental data has been observed. PMID:26529434

  15. Reverse radiance: a fast accurate method for determining luminance

    NASA Astrophysics Data System (ADS)

    Moore, Kenneth E.; Rykowski, Ronald F.; Gangadhara, Sanjay

    2012-10-01

    Reverse ray tracing from a region of interest backward to the source has long been proposed as an efficient method of determining luminous flux. The idea is to trace rays only from where the final flux needs to be known back to the source, rather than tracing in the forward direction from the source outward to see where the light goes. Once the reverse ray reaches the source, the radiance the equivalent forward ray would have represented is determined and the resulting flux computed. Although reverse ray tracing is conceptually simple, the method critically depends upon an accurate source model in both the near and far field. An overly simplified source model, such as an ideal Lambertian surface, substantially detracts from the accuracy and thus the benefit of the method. This paper will introduce an improved method of reverse ray tracing that we call Reverse Radiance that avoids assumptions about the source properties. The new method uses measured data from a Source Imaging Goniometer (SIG) that simultaneously measures near- and far-field luminous data. Incorporating this data into a fast reverse ray tracing integration method yields fast, accurate data for a wide variety of illumination problems.

  16. Accurate camera calibration method specialized for virtual studios

    NASA Astrophysics Data System (ADS)

    Okubo, Hidehiko; Yamanouchi, Yuko; Mitsumine, Hideki; Fukaya, Takashi; Inoue, Seiki

    2008-02-01

    Virtual studios are a popular technology for TV programs, making it possible to synchronize computer graphics (CG) with the real-shot image as the camera moves. Because high geometric matching accuracy between CG and the real-shot image is usually not expected from a real-time system, productions sometimes compromise on camera directions so that the mismatch does not become apparent. We therefore developed a hybrid camera calibration method and CG generation system to achieve accurate geometric matching of CG and real shots in a virtual studio. Our calibration method is intended for camera systems on a platform and tripod with rotary encoders that can measure pan/tilt angles. To solve the camera model and initial pose, we enhanced the bundle adjustment algorithm to fit the camera model, using pan/tilt data as known parameters and optimizing all other parameters to be invariant against the pan/tilt values. This initialization yields highly accurate camera position and orientation consistent with any pan/tilt values. We also created a CG generator that implements the lens distortion function with GPU programming. By applying the lens distortion parameters obtained in the camera calibration process, we obtained good compositing results.

  17. Simple and accurate optical height sensor for wafer inspection systems

    NASA Astrophysics Data System (ADS)

    Shimura, Kei; Nakai, Naoya; Taniguchi, Koichi; Itoh, Masahide

    2016-02-01

    An accurate method for measuring the wafer surface height is required for wafer inspection systems to adjust the focus of inspection optics quickly and precisely. A method of projecting a laser spot onto the wafer surface obliquely and detecting its image displacement using a one-dimensional position-sensitive detector is known, and a variety of methods have been proposed for improving the accuracy by compensating for the measurement error due to surface patterns. We have developed a simple and accurate method in which an image of a reticle with eight slits is projected onto the wafer surface and its reflected image is detected using an image sensor. The surface height is calculated by averaging the coordinates of the slit images in both directions of the captured image. Pattern-related measurement error was reduced by applying this coordinate averaging to the multiple-slit-projection method. An accuracy of better than 0.35 μm was achieved for a patterned wafer at the reference height and at ±0.1 mm from the reference height, in a simple configuration.
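    The height recovery described above reduces to averaging slit-image coordinates and converting the mean shift into a height. The sketch below assumes an idealized oblique specular-reflection geometry in which a height change dh displaces the reflected image laterally by 2*dh*sin(theta); a real sensor would use its own calibration in place of this factor, and all numbers are hypothetical.

```python
import numpy as np

def surface_height(slit_centroids, ref_centroids, incidence_deg, magnification=1.0):
    """Estimate the wafer height change from the shift of projected slit images.

    slit_centroids, ref_centroids: (N, 2) arrays of (x, y) image coordinates of the
    N slit images at the measurement and reference heights, in micrometres.
    Averaging over all slit images suppresses pattern-related errors on any single
    slit; the conversion factor 2*sin(theta) assumes ideal oblique reflection.
    """
    slit_centroids = np.asarray(slit_centroids, dtype=float)
    ref_centroids = np.asarray(ref_centroids, dtype=float)
    # mean displacement along the measurement (x) axis, corrected for magnification
    shift = (slit_centroids - ref_centroids).mean(axis=0)[0] / magnification
    return shift / (2.0 * np.sin(np.radians(incidence_deg)))

if __name__ == "__main__":
    ref = np.array([[10.0 * i, 0.0] for i in range(8)])      # 8 reference slit images
    meas = ref + np.array([0.62, 0.0])                       # uniform 0.62 um image shift
    print(surface_height(meas, ref, incidence_deg=70.0))     # ~0.33 um height change
```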

  18. A fast and accurate decoder for underwater acoustic telemetry

    NASA Astrophysics Data System (ADS)

    Ingraham, J. M.; Deng, Z. D.; Li, X.; Fu, T.; McMichael, G. A.; Trumbo, B. A.

    2014-07-01

    The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system.

  19. A fast and accurate decoder for underwater acoustic telemetry.

    PubMed

    Ingraham, J M; Deng, Z D; Li, X; Fu, T; McMichael, G A; Trumbo, B A

    2014-07-01

    The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system. PMID:25085162

  20. Accurate Computation of Survival Statistics in Genome-Wide Studies

    PubMed Central

    Vandin, Fabio; Papoutsaki, Alexandra; Raphael, Benjamin J.; Upfal, Eli

    2015-01-01

    A key challenge in genomics is to identify genetic variants that distinguish patients with different survival times following diagnosis or treatment. While the log-rank test is widely used for this purpose, nearly all implementations of the log-rank test rely on an asymptotic approximation that is not appropriate in many genomics applications, because the two populations determined by a genetic variant may have very different sizes, and the evaluation of many possible variants demands highly accurate computation of very small p-values. We demonstrate this problem for cancer genomics data, where the standard log-rank test leads to many false positive associations between somatic mutations and survival time. We develop and analyze a novel algorithm, Exact Log-rank Test (ExaLT), that accurately computes the p-value of the log-rank statistic under an exact distribution that is appropriate for populations of any size. We demonstrate the advantages of ExaLT on data from published cancer genomics studies, finding significant differences from the reported p-values. We analyze somatic mutations in six cancer types from The Cancer Genome Atlas (TCGA), finding mutations with known associations to survival as well as several novel associations. In contrast, standard implementations of the log-rank test report dozens to hundreds of likely false positive associations as more significant than these known associations. PMID:25950620
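    For orientation, the sketch below computes the standard two-group log-rank statistic with its asymptotic chi-square p-value, i.e. exactly the approximation the abstract criticizes; ExaLT replaces the final step with an exact p-value computation, which is not reproduced here. The toy data are made up.

```python
import numpy as np
from scipy.stats import chi2

def logrank_statistic(time, event, group):
    """Two-group log-rank statistic with the usual chi-square(1) approximation.

    time  : array of follow-up times
    event : 1 if the event (e.g. death) was observed, 0 if censored
    group : 0/1 group labels (e.g. mutation absent/present)
    """
    time, event, group = map(np.asarray, (time, event, group))
    obs_minus_exp, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):          # distinct event times
        at_risk = time >= t
        n = at_risk.sum()                          # total subjects at risk just before t
        n1 = (at_risk & (group == 1)).sum()        # group-1 subjects at risk
        d = ((time == t) & (event == 1)).sum()     # events at t
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        obs_minus_exp += d1 - d * n1 / n           # observed minus expected group-1 events
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    stat = obs_minus_exp ** 2 / var
    return stat, chi2.sf(stat, df=1)               # asymptotic p-value

if __name__ == "__main__":
    t = [5, 8, 12, 14, 20, 21, 25, 30, 31, 40]
    e = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]
    g = [0, 1, 0, 0, 1, 0, 1, 1, 0, 1]
    print(logrank_statistic(t, e, g))
```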

  1. Robust ODF smoothing for accurate estimation of fiber orientation.

    PubMed

    Beladi, Somaieh; Pathirana, Pubudu N; Brotchie, Peter

    2010-01-01

    Q-ball imaging was presented as a model-free, linear and multimodal diffusion-sensitive approach to reconstruct the diffusion orientation distribution function (ODF) using diffusion-weighted MRI data. The ODFs are widely used to estimate fiber orientations. However, a smoothness constraint was proposed to achieve a balance between angular resolution and noise stability for ODF constructs. Different regularization methods were proposed for this purpose. However, these methods are not robust and are quite sensitive to the global regularization parameter. Although numerical methods such as the L-curve test are used to define a globally appropriate regularization parameter, it cannot serve as a universal value suitable for all regions of interest. This may result in over-smoothing and potentially end up neglecting an existing fiber population. In this paper, we propose to include an interpolation step prior to the spherical harmonic decomposition. This interpolation, based on Delaunay triangulation, provides a reliable, robust and accurate smoothing approach. The method is easy to implement and does not require other numerical methods to define the required parameters. Also, the fiber orientations estimated using this approach are more accurate compared to other common approaches. PMID:21096202

  2. Learning fast accurate movements requires intact frontostriatal circuits

    PubMed Central

    Shabbott, Britne; Ravindran, Roshni; Schumacher, Joseph W.; Wasserman, Paula B.; Marder, Karen S.; Mazzoni, Pietro

    2013-01-01

    The basal ganglia are known to play a crucial role in movement execution, but their importance for motor skill learning remains unclear. Obstacles to our understanding include the lack of a universally accepted definition of motor skill learning (definition confound), and difficulties in distinguishing learning deficits from execution impairments (performance confound). We studied how healthy subjects and subjects with a basal ganglia disorder learn fast accurate reaching movements. We addressed the definition and performance confounds by: (1) focusing on an operationally defined core element of motor skill learning (speed-accuracy learning), and (2) using normal variation in initial performance to separate movement execution impairment from motor learning abnormalities. We measured motor skill learning as performance improvement in a reaching task with a speed-accuracy trade-off. We compared the performance of subjects with Huntington's disease (HD), a neurodegenerative basal ganglia disorder, to that of premanifest carriers of the HD mutation and of control subjects. The initial movements of HD subjects were less skilled (slower and/or less accurate) than those of control subjects. To factor out these differences in initial execution, we modeled the relationship between learning and baseline performance in control subjects. Subjects with HD exhibited a clear learning impairment that was not explained by differences in initial performance. These results support a role for the basal ganglia in both movement execution and motor skill learning. PMID:24312037

  3. A spectrally accurate algorithm for electromagnetic scattering in three dimensions

    NASA Astrophysics Data System (ADS)

    Ganesh, M.; Hawkins, S.

    2006-09-01

    In this work we develop, implement and analyze a high-order spectrally accurate algorithm for computation of the echo area, and monostatic and bistatic radar cross-section (RCS) of a three dimensional perfectly conducting obstacle through simulation of the time-harmonic electromagnetic waves scattered by the conductor. Our scheme is based on a modified boundary integral formulation (of the Maxwell equations) that is tolerant to basis functions that are not tangential on the conductor surface. We test our algorithm with extensive computational experiments using a variety of three dimensional perfect conductors described in spherical coordinates, including benchmark radar targets such as the metallic NASA almond and ogive. The monostatic RCS measurements for non-convex conductors require hundreds of incident waves (boundary conditions). We demonstrate that the monostatic RCS of small (to medium) sized conductors can be computed using over one thousand incident waves within a few minutes (to a few hours) of CPU time. We compare our results with those obtained using method of moments based industrial standard three dimensional electromagnetic codes CARLOS, CICERO, FE-IE, FERM, and FISC. Finally, we prove the spectrally accurate convergence of our algorithm for computing the surface current, far-field, and RCS values of a class of conductors described globally in spherical coordinates.

  4. Accurate eye center location through invariant isocentric patterns.

    PubMed

    Valenti, Roberto; Gevers, Theo

    2012-09-01

    Locating the center of the eyes allows valuable information to be captured and used in a wide range of applications. Accurate eye center location can be determined using commercial eye-gaze trackers, but additional constraints and expensive hardware make these existing solutions unattractive and impossible to use on standard (i.e., visible wavelength), low-resolution images of eyes. Systems based solely on appearance are proposed in the literature, but their accuracy does not allow us to accurately locate and distinguish eye center movements in these low-resolution settings. Our aim is to bridge this gap by locating the center of the eye within the area of the pupil on low-resolution images taken from a webcam or a similar device. The proposed method makes use of isophote properties to gain invariance to linear lighting changes (contrast and brightness), to achieve in-plane rotational invariance, and to keep computational costs low. To further gain scale invariance, the approach is applied to a scale-space pyramid. In this paper, we extensively test our approach for its robustness to changes in illumination, head pose, scale, occlusion, and eye rotation. We demonstrate that our system can achieve a significant improvement in accuracy over state-of-the-art techniques for eye center location in standard low-resolution imagery. PMID:22813958
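    A minimal sketch of the isophote idea follows: each pixel votes for a candidate center displaced along the image gradient by the local isophote curvature radius, computed from Gaussian derivatives. The weighting, scale-space handling, and post-processing of the published method are omitted, and the displacement formula is the commonly cited isophote-curvature form rather than a verbatim reproduction of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def isophote_center_votes(image, sigma=2.0):
    """Vote map for isophote centers, built from Gaussian image derivatives.

    Every pixel casts a vote displaced along the gradient by (roughly) the local
    isophote curvature radius; for a roughly circular pupil the votes pile up
    near its center.  Sign conventions and curvedness weighting are simplified.
    """
    L = np.asarray(image, dtype=float)
    Lx = gaussian_filter(L, sigma, order=(0, 1))   # d/dx (x = axis 1)
    Ly = gaussian_filter(L, sigma, order=(1, 0))   # d/dy (y = axis 0)
    Lxx = gaussian_filter(L, sigma, order=(0, 2))
    Lyy = gaussian_filter(L, sigma, order=(2, 0))
    Lxy = gaussian_filter(L, sigma, order=(1, 1))

    grad2 = Lx ** 2 + Ly ** 2
    denom = Ly ** 2 * Lxx - 2.0 * Lx * Ly * Lxy + Lx ** 2 * Lyy
    good = np.abs(denom) > 1e-12                   # flat regions cast no vote

    dx = np.zeros_like(L)
    dy = np.zeros_like(L)
    dx[good] = -(Lx * grad2)[good] / denom[good]   # displacement toward isophote center
    dy[good] = -(Ly * grad2)[good] / denom[good]

    h, w = L.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx = np.rint(xs + dx).astype(int)
    cy = np.rint(ys + dy).astype(int)
    valid = good & (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)

    votes = np.zeros_like(L)
    np.add.at(votes, (cy[valid], cx[valid]), np.sqrt(grad2[valid]))  # gradient-weighted votes
    return votes   # np.unravel_index(votes.argmax(), votes.shape) gives the center estimate
```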

  5. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and A Posteriori Error Estimation Methods

    SciTech Connect

    Ginting, Victor

    2014-03-15

    It was demonstrated that a posteriori analyses in general, and in particular those using adjoint methods, can accurately and efficiently compute numerical error estimates and sensitivities for critical quantities of interest (QoIs) that depend on a large number of parameters. Activities include: analysis and implementation of several time integration techniques for solving systems of ODEs as typically obtained from spatial discretization of PDE systems; multirate integration methods for ordinary differential equations; formulation and analysis of an iterative multi-discretization Galerkin finite element method for multi-scale reaction-diffusion equations; investigation of an inexpensive postprocessing technique to estimate the error of finite element solutions of second-order quasi-linear elliptic problems measured in some global metrics; investigation of an application of residual-based a posteriori error estimates to the symmetric interior penalty discontinuous Galerkin method for solving a class of second-order quasi-linear elliptic problems; a posteriori analysis of explicit time integrations for systems of linear ordinary differential equations; derivation of accurate a posteriori goal-oriented error estimates for a user-defined quantity of interest for two classes of first- and second-order IMEX schemes for advection-diffusion-reaction problems; postprocessing of finite element solutions; and a Bayesian framework for uncertainty quantification of porous media flows.

  6. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    SciTech Connect

    Kearny, C.H.

    2001-11-20

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument always can be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: "The KFM, A Homemade Yet Accurate and Dependable Fallout Meter" was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these instructions, the builder can verify the

  7. Microscale Investigation of Thermo-Fluid Transport in the Transition FIL, Region of an Evaporating Capillary Meniscus Using a Microgravity Environment

    NASA Technical Reports Server (NTRS)

    Kihm, K. D.; Allen, J. S.; Hallinan, K. P.; Pratt, D. M.

    2004-01-01

    In order to enhance the fundamental understanding of thin film evaporation and thereby improve the critical design concept for two-phase heat transfer devices, microscale heat and mass transport is to be investigated for the transition film region using state-of-the-art optical diagnostic techniques. By utilizing a microgravity environment, the length scales of the transition film region can be extended sufficiently, from submicron to micron, to probe and measure the microscale transport fields which are affected by intermolecular forces. Extension of the thin film dimensions under microgravity will be achieved by using a conical evaporator made of a thin silicon substrate under which concentric and individually controlled micro-heaters are vapor-deposited to maintain either a constant surface temperature or a controlled temperature variation. Local heat transfer rates, required to maintain the desired wall temperature boundary condition, will be measured and recorded by the concentric thermoresistance heaters controlled by a Wheatstone bridge circuit, The proposed experiment employs a novel technique to maintain a constant liquid volume and liquid pressure in the capillary region of the evaporating meniscus so as to maintain quasi-stationary conditions during measurements on the transition film region. Alternating use of Fizeau interferometry via white and monochromatic light sources will measure the thin film slope and thickness variation, respectively. Molecular Fluorescence Tracking Velocimetry (MFTV), utilizing caged fluorophores of approximately 10-nm in size as seeding particles, will be used to measure the velocity profiles in the thin film region. An optical sectioning technique using confocal microscopy will allow submicron depthwise resolution for the velocity measurements within the film for thicknesses on the order of a few microns. Digital analysis of the fluorescence image-displacement PDFs, as described in the main proposal, can further enhance the depthwise resolution.

  8. Accurate and efficient spin integration for particle accelerators

    NASA Astrophysics Data System (ADS)

    Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; Barber, Desmond P.

    2015-02-01

    Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code gpuSpinTrack. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
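    As a minimal illustration of the quaternion-based spin rotation mentioned above (not the gpuSpinTrack integrators themselves; the rotation axis and angle per step are placeholder inputs), a single spin-rotation step can be written as:

```python
import numpy as np

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians about `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))

def quat_multiply(p, q):
    """Hamilton product of two quaternions."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw * qw - px * qx - py * qy - pz * qz,
        pw * qx + px * qw + py * qz - pz * qy,
        pw * qy - px * qz + py * qw + pz * qx,
        pw * qz + px * qy - py * qx + pz * qw,
    ])

def rotate_spin(spin, axis, angle):
    """Rotate a classical spin vector as q s q*.

    One such rotation per lattice element (with the axis and angle supplied by
    the orbital/spin dynamics) advances the spin; composing many steps amounts
    to multiplying quaternions, which stays numerically well conditioned.
    """
    q = quat_from_axis_angle(axis, angle)
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    s = np.concatenate(([0.0], spin))                 # embed the vector as a pure quaternion
    return quat_multiply(quat_multiply(q, s), q_conj)[1:]

if __name__ == "__main__":
    spin = np.array([0.0, 0.0, 1.0])                  # vertical spin
    print(rotate_spin(spin, axis=[0.0, 1.0, 0.0], angle=np.pi / 2))   # -> approx [1, 0, 0]
```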

  9. New Claus catalyst tests accurately reflect process conditions

    SciTech Connect

    Maglio, A.; Schubert, P.F.

    1988-09-12

    Methods for testing Claus catalysts are developed that more accurately represent the actual operating conditions in commercial sulfur recovery units. For measuring catalyst activity, an aging method has been developed that results in more meaningful activity data after the catalyst has been aged, because all catalysts undergo rapid initial deactivation in commercial units. An activity test method has been developed where catalysts can be compared at less than equilibrium conversion. A test has also been developed to characterize abrasion loss of Claus catalysts, in contrast to the traditional method of determining physical properties by measuring crush strengths. Test results from a wide range of materials correlated well with actual pneumatic conveyance attrition. Substantial differences in Claus catalyst properties were observed as a result of using these tests.

  10. A new generalized correlation for accurate vapor pressure prediction

    NASA Astrophysics Data System (ADS)

    An, Hui; Yang, Wenming

    2012-08-01

    An accurate knowledge of the vapor pressure of organic liquids is very important for oil and gas processing operations. In combustion modeling, the accuracy of numerical predictions is also highly dependent on fuel properties such as vapor pressure. In this Letter, a new generalized correlation is proposed, based on the Lee-Kesler method, in which a fuel-dependent parameter 'A' is introduced. The proposed method only requires the critical temperature, normal boiling temperature and acentric factor of the fluid as input parameters. With this method, vapor pressures have been calculated and compared with the data reported in data compilations for 42 organic liquids over 1366 data points, and the overall average absolute percentage deviation is only 1.95%.
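    For context, the baseline on which such correlations build is the classical Lee-Kesler vapor-pressure form, sketched below; the fuel-dependent parameter 'A' introduced in the Letter is not reproduced here, and the coefficients shown are the standard Lee-Kesler values.

```python
import math

def lee_kesler_psat(T, Tc, Pc, omega):
    """Saturation pressure from the classical Lee-Kesler correlation.

    T, Tc in K; Pc in the desired pressure unit; omega is the acentric factor.
    Returns P_sat in the same unit as Pc.  This is only the standard correlation;
    the Letter's fuel-dependent modification is not implemented here.
    """
    Tr = T / Tc
    f0 = 5.92714 - 6.09648 / Tr - 1.28862 * math.log(Tr) + 0.169347 * Tr ** 6
    f1 = 15.2518 - 15.6875 / Tr - 13.4721 * math.log(Tr) + 0.43577 * Tr ** 6
    return Pc * math.exp(f0 + omega * f1)

if __name__ == "__main__":
    # n-heptane (approximate constants): Tc ~ 540.2 K, Pc ~ 27.4 bar, omega ~ 0.35
    # evaluated at the normal boiling point (~371.6 K); the result is close to 1 bar
    print(lee_kesler_psat(T=371.6, Tc=540.2, Pc=27.4, omega=0.35))
```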

  11. Accurate multiplex gene synthesis from programmable DNA microchips

    NASA Astrophysics Data System (ADS)

    Tian, Jingdong; Gong, Hui; Sheng, Nijing; Zhou, Xiaochuan; Gulari, Erdogan; Gao, Xiaolian; Church, George

    2004-12-01

    Testing the many hypotheses from genomics and systems biology experiments demands accurate and cost-effective gene and genome synthesis. Here we describe a microchip-based technology for multiplex gene synthesis. Pools of thousands of `construction' oligonucleotides and tagged complementary `selection' oligonucleotides are synthesized on photo-programmable microfluidic chips, released, amplified and selected by hybridization to reduce synthesis errors ninefold. A one-step polymerase assembly multiplexing reaction assembles these into multiple genes. This technology enabled us to synthesize all 21 genes that encode the proteins of the Escherichia coli 30S ribosomal subunit, and to optimize their translation efficiency in vitro through alteration of codon bias. This is a significant step towards the synthesis of ribosomes in vitro and should have utility for synthetic biology in general.

  12. Accurate Determination of Conformational Transitions in Oligomeric Membrane Proteins

    PubMed Central

    Sanz-Hernández, Máximo; Vostrikov, Vitaly V.; Veglia, Gianluigi; De Simone, Alfonso

    2016-01-01

    The structural dynamics governing collective motions in oligomeric membrane proteins play key roles in vital biomolecular processes at cellular membranes. In this study, we present a structural refinement approach that combines solid-state NMR experiments and molecular simulations to accurately describe concerted conformational transitions identifying the overall structural, dynamical, and topological states of oligomeric membrane proteins. The accuracy of the structural ensembles generated with this method is shown to reach the statistical error limit, and is further demonstrated by correctly reproducing orthogonal NMR data. We demonstrate the accuracy of this approach by characterising the pentameric state of phospholamban, a key player in the regulation of calcium uptake in the sarcoplasmic reticulum, and by probing its dynamical activation upon phosphorylation. Our results underline the importance of using an ensemble approach to characterise the conformational transitions that are often responsible for the biological function of oligomeric membrane protein states. PMID:26975211

  13. A new accurate pill recognition system using imprint information

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyuan; Kamata, Sei-ichiro

    2013-12-01

    Great achievements in modern medicine benefit human beings. They have also brought about an explosive growth in the number of pharmaceuticals currently on the market. In daily life, pharmaceuticals sometimes confuse people when they are found unlabeled. In this paper, we propose an automatic pill recognition technique to solve this problem. It works mainly from the imprint feature of the pills, which is extracted by the proposed MSWT (modified stroke width transform) and described by WSC (weighted shape context). Experiments show that our proposed pill recognition method can reach an accuracy rate of up to 92.03% within the top 5 ranks when classifying more than 10 thousand query pill images into around 2000 categories.

  14. Accurate derivative evaluation for any Grad-Shafranov solver

    NASA Astrophysics Data System (ADS)

    Ricketson, L. F.; Cerfon, A. J.; Rachh, M.; Freidberg, J. P.

    2016-01-01

    We present a numerical scheme that can be combined with any fixed boundary finite element based Poisson or Grad-Shafranov solver to compute the first and second partial derivatives of the solution to these equations with the same order of convergence as the solution itself. At the heart of our scheme is an efficient and accurate computation of the Dirichlet to Neumann map through the evaluation of a singular volume integral and the solution to a Fredholm integral equation of the second kind. Our numerical method is particularly useful for magnetic confinement fusion simulations, since it allows the evaluation of quantities such as the magnetic field, the parallel current density and the magnetic curvature with much higher accuracy than has been previously feasible on the affordable coarse grids that are usually implemented.

  15. Accurate bond dissociation energies (D0) for FHF- isotopologues

    NASA Astrophysics Data System (ADS)

    Stein, Christopher; Oswald, Rainer; Sebald, Peter; Botschwina, Peter; Stoll, Hermann; Peterson, Kirk A.

    2013-09-01

    Accurate bond dissociation energies (D0) are determined for three isotopologues of the bifluoride ion (FHF-). While the zero-point vibrational contributions are taken from our previous work (P. Sebald, A. Bargholz, R. Oswald, C. Stein, P. Botschwina, J. Phys. Chem. A, DOI: 10.1021/jp3123677), the equilibrium dissociation energy (De) of the reaction FHF- → HF + F- was obtained by a composite method including frozen-core (fc) CCSD(T) calculations with basis sets up to cardinal number n = 7, followed by extrapolation to the complete basis set limit. Smaller terms beyond fc-CCSD(T) cancel each other almost completely. The D0 values of FHF-, FDF-, and FTF- are predicted to be 15,176, 15,191, and 15,198 cm-1, respectively, with an uncertainty of ca. 15 cm-1.
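    The complete-basis-set step can be illustrated with the common two-point 1/n**3 extrapolation of correlation energies; the specific formula and the numbers below are assumptions for illustration, since the abstract does not state which extrapolation scheme was applied.

      def cbs_extrapolate(e_n, e_m, n, m):
          # Two-point extrapolation assuming E(n) = E_CBS + A / n**3,
          # e.g. fc-CCSD(T) correlation energies with cardinal numbers n = 6, m = 7.
          return (n**3 * e_n - m**3 * e_m) / (n**3 - m**3)

      # Illustrative (made-up) correlation energies in hartree:
      e_cbs = cbs_extrapolate(-0.7105, -0.7112, 6, 7)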

  16. Direct computation of parameters for accurate polarizable force fields

    SciTech Connect

    Verstraelen, Toon; Vandenbrande, Steven; Ayers, Paul W.

    2014-11-21

    We present an improved electronic linear response model to incorporate polarization and charge-transfer effects in polarizable force fields. This model is a generalization of the Atom-Condensed Kohn-Sham Density Functional Theory (DFT), approximated to second order (ACKS2): it can now be defined with any underlying variational theory (next to KS-DFT) and it can include atomic multipoles and off-center basis functions. Parameters in this model are computed efficiently as expectation values of an electronic wavefunction, obviating the need for their calibration, regularization, and manual tuning. In the limit of a complete density and potential basis set in the ACKS2 model, the linear response properties of the underlying theory for a given molecular geometry are reproduced exactly. A numerical validation with a test set of 110 molecules shows that very accurate models can already be obtained with fluctuating charges and dipoles. These features greatly facilitate the development of polarizable force fields.
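    ACKS2 generalises simpler fluctuating-charge (electronegativity-equalisation) response models. The sketch below solves such a simpler model, minimising a quadratic charge energy under total-charge conservation; it is meant only to convey the flavour of a polarizable linear-response model, is not the ACKS2 equations themselves, and uses illustrative parameter names.

      import numpy as np

      def fluctuating_charges(chi, eta, coulomb, total_charge=0.0):
          # Minimises E(q) = chi.q + 0.5 * q^T (diag(eta) + J) q  subject to sum(q) = Q,
          # where chi are electronegativities, eta hardnesses, J the Coulomb matrix.
          n = len(chi)
          H = np.asarray(coulomb) + np.diag(eta)
          A = np.zeros((n + 1, n + 1))
          A[:n, :n] = H
          A[:n, n] = 1.0                       # Lagrange multiplier column (charge conservation)
          A[n, :n] = 1.0
          b = np.concatenate([-np.asarray(chi), [total_charge]])
          return np.linalg.solve(A, b)[:n]     # atomic charges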

  17. Accurate localization of needle entry point in interventional MRI.

    PubMed

    Daanen, V; Coste, E; Sergent, G; Godart, F; Vasseur, C; Rousseau, J

    2000-10-01

    In interventional magnetic resonance imaging (MRI), the systems designed to help the surgeon during biopsy must provide accurate knowledge of the positions of the target and also the entry point of the needle on the skin of the patient. In some cases, this needle entry point can be outside the B0 homogeneity area, where the distortions may be larger than a few millimeters. In that case, major correction for geometric deformation must be performed. Moreover, the use of markers to highlight the needle entry point is inaccurate. The aim of this study was to establish a three-dimensional coordinate correction according to the position of the entry point of the needle. We also describe a 2-degree-of-freedom electromechanical device that is used to determine the needle entry point on the patient's skin with a laser spot. PMID:11042649

  18. Accurate oscillator strengths for interstellar ultraviolet lines of Cl I

    NASA Technical Reports Server (NTRS)

    Schectman, R. M.; Federman, S. R.; Beideck, D. J.; Ellis, D. J.

    1993-01-01

    Analyses on the abundance of interstellar chlorine rely on accurate oscillator strengths for ultraviolet transitions. Beam-foil spectroscopy was used to obtain f-values for the astrophysically important lines of Cl I at 1088, 1097, and 1347 Å. In addition, the line at 1363 Å was studied. Our f-values for 1088, 1097 Å represent the first laboratory measurements for these lines; the values are f(1088) = 0.081 +/- 0.007 (1 sigma) and f(1097) = 0.0088 +/- 0.0013 (1 sigma). These results resolve the issue regarding the relative strengths for 1088, 1097 Å in favor of those suggested by astronomical measurements. For the other lines, our results of f(1347) = 0.153 +/- 0.011 (1 sigma) and f(1363) = 0.055 +/- 0.004 (1 sigma) are the most precisely measured values available. The f-values are somewhat greater than previous experimental and theoretical determinations.

  19. Accurate 12D dipole moment surfaces of ethylene

    NASA Astrophysics Data System (ADS)

    Delahaye, Thibault; Nikitin, Andrei V.; Rey, Michael; Szalay, Péter G.; Tyuterev, Vladimir G.

    2015-10-01

    Accurate ab initio full-dimensional dipole moment surfaces of ethylene are computed using the coupled-cluster approach and its explicitly correlated counterpart CCSD(T)-F12, combined respectively with the cc-pVQZ and cc-pVTZ-F12 basis sets. Their analytical representations are provided through 4th-order normal mode expansions. First-principles predictions of the line intensities using a variational method up to J = 30 are in excellent agreement with the experimental data in the range of 0-3200 cm-1. Errors of 0.25-6.75% in integrated intensities for fundamental bands are comparable with experimental uncertainties. The overall calculated C2H4 opacity in the 600-3300 cm-1 range agrees with the experimental determination to better than 0.5%.

  20. Accurate ab initio energy gradients in chemical compound space.

    PubMed

    Anatole von Lilienfeld, O

    2009-10-28

    Analytical potential energy derivatives, based on the Hellmann-Feynman theorem, are presented for any pair of isoelectronic compounds. Since energies are not necessarily monotonic functions between compounds, these derivatives can fail to predict the right trends of the effect of an alchemical mutation. However, quantitative estimates can be made without additional self-consistency calculations when the Hellmann-Feynman derivative is multiplied by a linearization coefficient that is obtained from a reference pair of compounds. These results suggest that accurate predictions can be made regarding any molecule's energetic properties as long as energies and gradients of three other molecules have been provided. The linearization coefficient can be interpreted as a quantitative measure of chemical similarity. The numerical evidence presented includes predictions of electronic eigenvalues of saturated and aromatic molecular hydrocarbons. PMID:19894922
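    A schematic reading of that recipe is sketched below: the Hellmann-Feynman derivative along the alchemical mutation is rescaled by a linearization coefficient calibrated on a reference pair. This is a sketch under stated assumptions (the coefficient is taken here as the ratio of the reference energy difference to the reference derivative), not the paper's implementation, and all names are illustrative.

      def predict_alchemical_energy(e_A, dE_A, e_refA, e_refB, dE_refA):
          # e_A:      energy of compound A
          # dE_A:     Hellmann-Feynman derivative of A along the A -> B mutation
          # e_refA/B: energies of the reference pair of isoelectronic compounds
          # dE_refA:  Hellmann-Feynman derivative of refA along refA -> refB
          c = (e_refB - e_refA) / dE_refA      # linearization coefficient from the reference pair
          return e_A + c * dE_A                # first-order estimate of the energy of compound B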