Multiphysics Thermal-Fluid Analysis of a Non-Nuclear Tester for Hot-Hydrogen Materials Development
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Foote, John; Litchford, Ron
2006-01-01
The objective of this effort is to analyze the thermal field of a non-nuclear tester, as a first step towards developing an efficient and accurate multiphysics, thermo-fluid computational methodology to predict environments for hypothetical solid-core, nuclear thermal engine thrust chamber design and analysis. The computational methodology is based on a multidimensional, finite-volume, turbulent, chemically reacting, radiating, unstructured-grid, and pressure-based formulation. The multiphysics invoked in this study include hydrogen dissociation kinetics and thermodynamics, turbulent flow, and convective, radiative, and conjugate heat transfer.
Computational thermo-fluid analysis of a disk brake
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Kuraishi, Takashi; Tabata, Shinichiro; Takagi, Hirokazu
2016-06-01
We present computational thermo-fluid analysis of a disk brake, including thermo-fluid analysis of the flow around the brake and heat conduction analysis of the disk. The computational challenges include proper representation of the small-scale thermo-fluid behavior, high-resolution representation of the thermo-fluid boundary layers near the spinning solid surfaces, and bringing the heat transfer coefficient (HTC) calculated in the thermo-fluid analysis of the flow to the heat conduction analysis of the spinning disk. The disk brake model used in the analysis closely represents the actual configuration, and this adds to the computational challenges. The components of the method we have developed for computational analysis of this class of problems include the Space-Time Variational Multiscale method for coupled incompressible flow and thermal transport, the ST Slip Interface method for high-resolution representation of the thermo-fluid boundary layers near spinning solid surfaces, and a set of projection methods for bringing the HTC calculated in the thermo-fluid analysis to the different parts of the disk. With the HTC coming from the thermo-fluid analysis of the flow around the brake, we carry out the heat conduction analysis of the disk, from the start of the braking until the disk stops spinning, demonstrating how the method developed works in computational analysis of this complex and challenging problem.
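The projection step described above, transferring the HTC from the fluid-side discretization to a differently discretized solid-side surface, can be sketched with a nearest-neighbour lookup. This is only an illustrative stand-in, not the paper's actual ST projection methods, and all data values below are made up.

```python
# Minimal sketch of projecting an HTC field from fluid-side surface
# sample points onto solid-side surface nodes (nearest-neighbour;
# hypothetical data layout, not the paper's implementation).
def project_htc(fluid_pts, fluid_htc, solid_pts):
    projected = []
    for sx, sy in solid_pts:
        # pick the closest fluid-side sample for each solid node
        d2 = [(sx - fx) ** 2 + (sy - fy) ** 2 for fx, fy in fluid_pts]
        projected.append(fluid_htc[d2.index(min(d2))])
    return projected

fluid_pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
fluid_htc = [120.0, 180.0, 150.0]   # W/(m^2 K), made-up values
solid_pts = [(0.1, 0.0), (1.9, 0.0)]
h = project_htc(fluid_pts, fluid_htc, solid_pts)   # -> [120.0, 150.0]
```

In practice such projections are done consistently over surface integrals rather than pointwise, but the data-transfer idea is the same.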
Multiphysics Nuclear Thermal Rocket Thrust Chamber Analysis
NASA Technical Reports Server (NTRS)
Wang, Ten-See
2005-01-01
The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for hypothetical thrust chamber design and analysis. The current task scope is to perform multidimensional, multiphysics analysis of thrust performance and heat transfer for a hypothetical solid-core, nuclear thermal engine, including the thrust chamber and nozzle. The multiphysics aspects of the model include real fluid dynamics, chemical reactivity, turbulent flow, and conjugate heat transfer. The model will be designed to identify thermal, fluid, and hydrogen environments in all flow paths and materials. This model would then be used to perform non-nuclear reproduction of the flow element failures demonstrated in the Rover/NERVA testing, investigate performance of specific configurations, and assess potential issues and enhancements. A two-pronged approach will be employed in this effort: detailed analysis of a multi-channel flow element, and global modeling of the entire thrust chamber assembly with a porosity modeling technique. It is expected that the detailed analysis of a single flow element will provide detailed fluid, thermal, and hydrogen environments for stress analysis, while the global thrust chamber assembly analysis will promote understanding of the effects of hydrogen dissociation and heat transfer on thrust performance. These modeling activities will be validated as much as possible against testing performed by other related efforts.
Effects of physical properties on thermo-fluids cavitating flows
NASA Astrophysics Data System (ADS)
Chen, T. R.; Wang, G. Y.; Huang, B.; Li, D. Q.; Ma, X. J.; Li, X. L.
2015-12-01
The aims of this paper are to study thermo-fluid cavitating flows and to evaluate the effects of physical properties on cavitation behaviour. The Favre-averaged Navier-Stokes equations with the energy equation are applied to numerically investigate liquid nitrogen cavitating flows around a NASA hydrofoil. Meanwhile, the thermodynamic parameter Σ is used to assess the thermodynamic effects on cavitating flows. The results indicate that thermodynamic effects significantly influence the cavitation behaviour, including the pressure and temperature distributions, the variation of physical properties, and the cavity structures. The thermodynamic effects can be evaluated from the physical properties under the same free-stream conditions. The global sensitivity analysis of liquid nitrogen suggests that ρv, Cl and L significantly influence the temperature drop and cavity structure in the existing numerical framework, while pv plays the dominant role when these properties vary with temperature. The liquid viscosity μl slightly affects the flow structure, equivalently to a change in the Reynolds number Re, but it hardly affects the temperature distribution.
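The thermodynamic parameter Σ used above can be evaluated directly from fluid properties. A sketch follows using one common definition (Brennen's Σ = ρv²L²/(ρl²·cpl·T∞·√αl), assumed here; the paper may use a variant), with rough handbook property values for water at 20 °C rather than the paper's liquid nitrogen data.

```python
# Illustrative evaluation of the thermodynamic cavitation parameter
# Sigma (Brennen-style definition assumed; property values below are
# approximate figures for water at 20 C, chosen only for illustration).
import math

def sigma(rho_v, rho_l, L_heat, c_pl, T_inf, alpha_l):
    """Sigma = rho_v^2 L^2 / (rho_l^2 c_pl T_inf sqrt(alpha_l)), units m/s^1.5."""
    return (rho_v ** 2 * L_heat ** 2) / (rho_l ** 2 * c_pl * T_inf * math.sqrt(alpha_l))

S = sigma(rho_v=0.017,      # vapour density, kg/m^3
          rho_l=998.0,      # liquid density, kg/m^3
          L_heat=2.45e6,    # latent heat, J/kg
          c_pl=4182.0,      # liquid specific heat, J/(kg K)
          T_inf=293.0,      # free-stream temperature, K
          alpha_l=1.4e-7)   # liquid thermal diffusivity, m^2/s
# larger Sigma -> stronger thermodynamic (temperature-depression) effects
```

Cryogens such as liquid nitrogen yield Σ values orders of magnitude larger than water, which is why thermodynamic effects dominate their cavitation behaviour.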
Standardization of Thermo-Fluid Modeling in Modelica.Fluid
Franke, Rudiger; Casella, Francesco; Sielemann, Michael; Proelss, Katrin; Otter, Martin; Wetter, Michael
2009-09-01
This article discusses the Modelica.Fluid library that has been included in the Modelica Standard Library 3.1. Modelica.Fluid provides interfaces and basic components for the device-oriented modeling of one-dimensional thermo-fluid flow in networks containing vessels, pipes, fluid machines, valves, and fittings. A unique feature of Modelica.Fluid is that the component equations, the media models, and the pressure loss and heat transfer correlations are decoupled from each other. All components are implemented such that they can be used with media from the Modelica.Media library. This means that an incompressible or compressible medium, or a single- or multiple-substance medium with one or more phases, can be used with one and the same model, as long as the modeling assumptions hold. Furthermore, trace substances are supported. Modeling assumptions can be configured globally in an outer System object. This covers in particular the initialization, uni- or bi-directional flow, and dynamic or steady-state formulation of the mass, energy, and momentum balances. All assumptions can be locally refined for every component. While Modelica.Fluid contains a reasonable set of component models, the goal of the library is not to provide a comprehensive set of models, but rather to provide interfaces and best practices for the treatment of issues such as connector design and implementation of energy, mass, and momentum balances. Applications from various domains are presented.
An Integrated Solution for Performing Thermo-fluid Conjugate Analysis
NASA Technical Reports Server (NTRS)
Kornberg, Oren
2009-01-01
A method has been developed which integrates a fluid flow analyzer and a thermal analyzer to produce both steady-state and transient results for 1-D, 2-D, and 3-D analysis models. The Generalized Fluid System Simulation Program (GFSSP) is a one-dimensional, general-purpose fluid analysis code which computes pressures and flow distributions in complex fluid networks. The MSC Systems Improved Numerical Differencing Analyzer (MSC.SINDA) is a one-dimensional, general-purpose thermal analyzer that solves network representations of thermal systems. Both GFSSP and MSC.SINDA have graphical user interfaces which are used to build the respective model and prepare it for analysis. The SINDA/GFSSP Conjugate Integrator (SGCI) is a form-based graphical integration program used to set input parameters for the conjugate analyses and run the models. This paper describes SGCI and its thermo-fluid conjugate analysis techniques and capabilities by presenting results from example models, including the cryogenic chilldown of a copper pipe, a bar between two walls in a fluid stream, and a solid plate creating a phase change in a flowing fluid.
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Foote, John; Litchford, Ron
2006-01-01
The objective of this effort is to perform design analyses for a non-nuclear hot-hydrogen materials tester, as a first step towards developing an efficient and accurate multiphysics, thermo-fluid computational methodology to predict environments for hypothetical solid-core, nuclear thermal engine thrust chamber design and analysis. The computational methodology is based on a multidimensional, finite-volume, turbulent, chemically reacting, thermally radiating, unstructured-grid, and pressure-based formulation. The multiphysics invoked in this study include hydrogen dissociation kinetics and thermodynamics, turbulent flow, and convective and thermal radiative heat transfer. The goals of the design analyses are to maintain maximum hot-hydrogen jet impingement energy and to minimize chamber wall heating. The results of analyses on three test fixture configurations and the rationale for the final selection are presented. The interrogation of the physics revealed that the hydrogen dissociation and recombination reactions are highly correlated with local temperature and are necessary for accurate prediction of the hot-hydrogen jet temperature.
PREFACE: 32nd UIT (Italian Union of Thermo-fluid-dynamics) Heat Transfer Conference
NASA Astrophysics Data System (ADS)
2014-11-01
The annual Conference of the ''Unione Italiana di Termofluidodinamica'' (UIT) aims to promote cooperation in the field of heat transfer and thermal sciences by bringing together scientists and engineers working in related areas. The 32nd UIT Conference was held in Pisa, from the 23rd to the 25th of June, 2014, in the buildings of the School of Engineering, just a few months after the celebration of the 100th anniversary of the first institution of the School of Engineering at the University of Pisa. The response was very good, with more than 100 participants and 80 high-quality contributions from 208 authors on seven different heat transfer related topics: Heat transfer and efficiency in energy systems, environmental technologies, and buildings (25 papers); Micro and nano scale thermo-fluid dynamics (9 papers); Multi-phase fluid dynamics, heat transfer and interface phenomena (14 papers); Computational fluid dynamics and heat transfer (10 papers); Heat transfer in nuclear plants (8 papers); Natural, forced and mixed convection (10 papers); and Conduction and radiation (4 papers). To encourage debate, the Conference Program scheduled 16 oral sessions (44 papers), three ample poster sessions (36 papers), and four invited lectures given by experts in the various fields, both from industry and from universities. Keynote Lectures were given by Dr. Roberto Parri (ENEL, Italy), Prof. Peter Stephan (TU Darmstadt, Germany), Prof. Bruno Panella (Politecnico di Torino), and Prof. Sara Rainieri (Università di Parma). This special volume collects a selection of the scientific contributions discussed during the conference. A total of 46 contributions, two keynote lectures and 44 papers from both oral and poster sessions, have been selected for publication in this special issue after a second, careful review process. These works give a good overview of the state of the art of Italian research in heat transfer related topics to date. The editors of the
NASA Technical Reports Server (NTRS)
Majumdar, Alok; Leclair, Andre; Moore, Ric; Schallhorn, Paul
2011-01-01
GFSSP stands for Generalized Fluid System Simulation Program. It is a general-purpose computer program to compute pressure, temperature, and flow distribution in a flow network: GFSSP calculates pressures, temperatures, and concentrations at nodes, and flow rates through branches. It was primarily developed for internal flow analysis of a turbopump and transient flow analysis of a propulsion system. GFSSP development started in 1994 with the objective of providing a generalized and easy-to-use flow analysis tool for thermo-fluid systems.
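The node-and-branch idea behind a 1-D network code of this kind can be sketched in a few lines. This is an illustrative toy, not GFSSP's actual formulation or API: boundary nodes carry fixed pressures, each branch obeys an assumed resistance law dP = K·Q·|Q|, and the unknown interior pressure is found by driving the mass imbalance at the interior node to zero with Newton iteration.

```python
# Toy two-branch flow network: find the interior node pressure that
# balances inflow and outflow (illustrative, not GFSSP's solver).
def branch_flow(p_up, p_dn, K):
    """Flow through a branch with assumed resistance law dP = K * Q * |Q|."""
    dp = p_up - p_dn
    return (abs(dp) / K) ** 0.5 * (1.0 if dp >= 0 else -1.0)

def solve_interior_pressure(p_in, p_out, K1, K2, tol=1e-6):
    """Newton iteration on the mass balance Q_in(p) - Q_out(p) = 0."""
    p = 0.5 * (p_in + p_out)                       # initial guess
    for _ in range(100):
        r = branch_flow(p_in, p, K1) - branch_flow(p, p_out, K2)
        h = 1e-6 * max(abs(p), 1.0)                # FD step for dr/dp
        r_h = branch_flow(p_in, p + h, K1) - branch_flow(p + h, p_out, K2)
        p_step = r / ((r_h - r) / h)
        p -= p_step
        if abs(p_step) < tol:
            break
    return p

# equal branch resistances put the interior node midway in pressure
p_mid = solve_interior_pressure(200e3, 100e3, K1=1e6, K2=1e6)
# a more resistive outlet branch pushes the interior pressure upward
p_skew = solve_interior_pressure(200e3, 100e3, K1=1e6, K2=4e6)
```

A real network code assembles this balance at every node simultaneously and adds energy and species equations, but the fixed-point structure is the same.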
NASA Technical Reports Server (NTRS)
Perrell, Eric R.
2005-01-01
The recent bold initiatives to expand the human presence in space require innovative approaches to the design of propulsion systems whose underlying technology is not yet mature. The space propulsion community has identified a number of candidate concepts. A short list includes solar sails, high-energy-density chemical propellants, electric and electromagnetic accelerators, and solar-thermal and nuclear-thermal expanders. For each of these, the underlying physics are relatively well understood. One could easily cite authoritative texts addressing both the governing equations and practical solution methods for, e.g., electromagnetic fields, heat transfer, radiation, thermophysics, structural dynamics, particulate kinematics, nuclear energy, power conversion, and fluid dynamics. One could also easily cite scholarly works in which complete equation sets for any one of these physical processes have been accurately solved relative to complex engineered systems. The Advanced Concepts and Analysis Office (ACAO), Space Transportation Directorate, NASA Marshall Space Flight Center, has recently released the first alpha version of a set of computer utilities for performing the applicable physical analyses relative to candidate deep-space propulsion systems such as those listed above. PARSEC, Preliminary Analysis of Revolutionary in-Space Engineering Concepts, enables rapid iterative calculations using several physics tools developed in-house. A complete cycle of the entire tool set takes about twenty minutes. PARSEC is a level-zero/level-one design tool. For PARSEC's proof-of-concept and preliminary design decision-making, assumptions that significantly simplify the governing equation sets are necessary. To proceed to level two, one wishes to retain modeling of the underlying physics as close as practical to known applicable first principles. This report describes results of collaboration between ACAO and Embry-Riddle Aeronautical University (ERAU), to begin building a set of
PREFACE: 33rd UIT (Italian Union of Thermo-fluid dynamics) Heat Transfer Conference
NASA Astrophysics Data System (ADS)
Paoletti, Domenica; Ambrosini, Dario; Sfarra, Stefano
2015-11-01
The 33rd UIT (Italian Union of Thermo-Fluid Dynamics) Heat Transfer Conference was organized by the Dept. of Industrial and Information Engineering and Economics, University of L'Aquila (Italy) and was held at the Engineering Campus of Monteluco di Roio, L'Aquila, June 22-24, 2015. The annual UIT conference, which has grown over time, came back to L'Aquila after 21 years. The scope of the conference covers a range of major topics in theoretical, numerical and experimental heat transfer and related areas, ranging from energy efficiency to nuclear plants. This year, there was an emphasis on IR thermography, which is growing in importance both in scientific research and industrial applications. 2015 is also the International Year of Light. The Organizing Committee honored this event by introducing a new section, Technical Seminars, which in this edition was mainly devoted to optical flow visualization (also the subject of three different national workshops organized in L'Aquila by UIT in 2003, 2005 and 2008). The conference was held in the recently repaired Engineering buildings, six years after the 2009 earthquake and 50 years after the beginning of the Engineering courses in L'Aquila. Despite some logistical difficulties, 92 papers were submitted by about 270 authors, on eight different topics: heat transfer and efficiency in energy systems, environmental technologies and buildings (32 papers); micro and nano scale thermo-fluid dynamics (5 papers); multi-phase fluid dynamics, heat transfer and interface phenomena (16 papers); computational fluid dynamics and heat transfer (15 papers); heat transfer in nuclear plants (6 papers); natural, forced and mixed convection (6 papers); IR thermography (4 papers); conduction and radiation (3 papers). The conference program scheduled plenary, oral and poster sessions. The three invited plenary Keynote Lectures were given by Prof. Antonio Barletta (University of Bologna, Italy), Prof. Jean-Christophe Batsale (Arts et Metiers
Effects of finiteness on the thermo-fluid-dynamics of natural convection above horizontal plates
NASA Astrophysics Data System (ADS)
Guha, Abhijit; Sengupta, Sayantan
2016-06-01
A rigorous and systematic computational and theoretical study, the first of its kind, for the laminar natural convective flow above rectangular horizontal surfaces of various aspect ratios ϕ (from 1 to ∞) is presented. Two-dimensional computational fluid dynamic (CFD) simulations (for ϕ → ∞) and three-dimensional CFD simulations (for 1 ≤ ϕ < ∞) are performed to establish and elucidate the role of finiteness of the horizontal planform on the thermo-fluid-dynamics of natural convection. Great care is taken here to ensure grid independence and domain independence of the presented solutions. The results of the CFD simulations are compared with experimental data and similarity theory to understand how the existing simplified results fit, in the appropriate limiting cases, with the complex three-dimensional solutions revealed here. The present computational study establishes the region of a high-aspect-ratio planform over which the results of the similarity theory are approximately valid, the extent of this region depending on the Grashof number. There is, however, a region near the edge of the plate and another region near the centre of the plate (where a plume forms) in which the similarity theory results do not apply. The sizes of these non-compliance zones decrease as the Grashof number is increased. The present study also shows that the similarity velocity profile is not strictly obtained at any location over the plate because of the entrainment effect of the central plume. The 3-D CFD simulations of the present paper are coordinated to clearly reveal the separate and combined effects of three important aspects of finiteness: the presence of leading edges, the presence of planform centre, and the presence of physical corners in the planform. It is realised that the finiteness due to the presence of physical corners in the planform arises only for a finite value of ϕ in the case of 3-D CFD simulations (and not in 2-D CFD simulations or similarity theory
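The similarity results and non-compliance zones discussed above are organized by the Grashof number, which can be evaluated with a one-line helper. The property values below are rough figures for air near 300 K, chosen only for illustration; they are not taken from the paper.

```python
# Grashof number: ratio of buoyancy to viscous forces in natural
# convection (illustrative property values for air near 300 K).
def grashof(g, beta, dT, L, nu):
    """Gr = g * beta * dT * L**3 / nu**2."""
    return g * beta * dT * L ** 3 / nu ** 2

# 10 cm heated horizontal plate, 20 K above ambient air
Gr = grashof(g=9.81,            # gravity, m/s^2
             beta=1.0 / 300.0,  # thermal expansion of an ideal gas, 1/K
             dT=20.0,           # plate-to-ambient temperature difference, K
             L=0.1,             # plate length scale, m
             nu=1.6e-5)         # kinematic viscosity, m^2/s
# Gr is of order 1e6 here; per the study above, the edge and plume
# regions where similarity theory fails shrink as Gr increases
```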
Multiphysics Application Coupling Toolkit
Campbell, Michael T.
2013-12-02
This particular consortium implementation of the software integration infrastructure will, in large part, refactor portions of the Rocstar multiphysics infrastructure. Development of this infrastructure originated at the University of Illinois DOE ASCI Center for Simulation of Advanced Rockets (CSAR) to support the center's massively parallel multiphysics simulation application, Rocstar, and has continued at IllinoisRocstar, a small company formed near the end of the university-based program. IllinoisRocstar is now licensing these new developments as free, open source software, in the hope of improving their own and others' access to infrastructure that can be readily utilized in developing coupled or composite software systems, with particular attention to more rapid production and utilization of multiphysics applications in the HPC environment. There are two major pieces to the consortium implementation: the Application Component Toolkit (ACT), and the Multiphysics Application Coupling Toolkit (MPACT). The current development focus is the ACT, which is (and will be) the substrate for MPACT. The ACT itself is built up from the components described in the technical approach. In particular, the ACT has the following major components: 1. The Component Object Manager (COM): the COM package provides encapsulation of user applications and their data, and also provides the inter-component function call mechanism. 2. The System Integration Manager (SIM): the SIM package provides constructs and mechanisms for orchestrating composite systems of multiply integrated pieces.
3D time dependent thermo-fluid dynamic model of ground deformation at Campi Flegrei caldera
NASA Astrophysics Data System (ADS)
Castaldo, R.; Tizzani, P.; Manconi, A.; Manzo, M.; Pepe, S.; Pepe, A.; Lanari, R.
2012-04-01
In active volcanic areas, deformation signals are generally characterized by non-linear spatial and temporal variations [Tizzani P. et al., 2007]. This behaviour has been revealed in the last two decades by the so-called advanced DInSAR processing algorithms, developed to analyze surface deformation phenomena [Berardino P. et al., 2002; Ferretti C. et al., 2001]. Notwithstanding, most of the inverse modelling attempts to characterize the evolution of volcanic sources are based on the assumption that the Earth's crust behaves as a homogeneous linear elastic material. However, the behaviour of the upper lithosphere in thermally anomalous regions (as active volcanoes are) might be better described as a non-Newtonian fluid, where some of the material properties of the rocks (i.e., apparent viscosities) can change over time [Pinkerton H. et al., 1995]. In this context, we considered the thermal properties and mechanical heterogeneities of the upper crust in order to develop a new 3D time dependent thermo-fluid dynamic model of Campi Flegrei (CF) caldera, Southern Italy. More specifically, following Tizzani P. et al. (2010), we integrated in a FEM environment the geophysical information (gravimetric, seismic, and borehole data) available for the considered area and performed two FEM optimization procedures to constrain the 3D distribution of unknown physical parameters (temperature and viscosity distributions) that might help explain the data observed at the surface (geothermal wells and DInSAR measurements). First, we searched for the heat production, volume source distribution, and surface emissivity parameters providing the best fit of the geothermal profiles measured at six boreholes [Agip ESGE, 1986], by solving the Fourier heat equation over time (about 40 kyr). The 3D thermal field resulting from this optimization was used to calculate the 3D brittle-ductile transition. This analysis revealed the presence of a ductile region, located beneath the centre of
Multiphysics Analysis of a Solid-Core Nuclear Thermal Engine Thrust Chamber
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Canabal, Francisco; Cheng, Gary; Chen, Yen-Sen
2006-01-01
The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for a hypothetical solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics methodology. Formulations for heat transfer in solids and porous media were implemented and anchored. A two-pronged approach was employed in this effort: A detailed thermo-fluid analysis on a multi-channel flow element for mid-section corrosion investigation; and a global modeling of the thrust chamber to understand the effect of hydrogen dissociation and recombination on heat transfer and thrust performance. The formulations and preliminary results on both aspects are presented.
NASA Technical Reports Server (NTRS)
Majumdar, Alok; Schallhorn, Paul
1998-01-01
This paper describes a finite volume computational thermo-fluid dynamics method to solve the Navier-Stokes equations in conjunction with the energy equation and a thermodynamic equation of state in an unstructured coordinate system. The system of equations has been solved by a simultaneous Newton-Raphson method and compared with several benchmark solutions. Excellent agreement has been obtained in each case, and the method has been found to be significantly faster than conventional Computational Fluid Dynamics (CFD) methods; it therefore has the potential for implementation in multi-disciplinary analysis and design optimization of fluid and thermal systems. The paper also describes an algorithm for design optimization based on the Newton-Raphson method which has recently been tested in a turbomachinery application.
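The simultaneous Newton-Raphson idea described above, assembling all conservation residuals into one vector R(u) and solving J·Δu = -R each iteration, can be sketched on a toy two-equation system standing in for the coupled mass, momentum, energy, and state residuals (illustrative only; the actual solver assembles far larger residual vectors over the finite-volume mesh).

```python
# Simultaneous Newton-Raphson on a toy coupled residual system
# (stand-in for coupled flow/energy/state residuals; 2x2 Jacobian
# inverted with Cramer's rule for brevity).
def newton(residual, jacobian, u, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        r = residual(u)
        if max(abs(c) for c in r) < tol:
            break
        (a, b), (c, d) = jacobian(u)
        det = a * d - b * c
        # solve J * du = -r for the 2x2 case
        du0 = (-r[0] * d + r[1] * b) / det
        du1 = (-r[1] * a + r[0] * c) / det
        u = [u[0] + du0, u[1] + du1]
    return u

# toy coupled "conservation" residuals: x^2 + y - 3 = 0, x + y^2 - 5 = 0
res = lambda u: [u[0] ** 2 + u[1] - 3.0, u[0] + u[1] ** 2 - 5.0]
jac = lambda u: [[2.0 * u[0], 1.0], [1.0, 2.0 * u[1]]]
x, y = newton(res, jac, [1.0, 1.0])   # converges to the root (1, 2)
```

Solving all residuals simultaneously is what gives the method its speed advantage over segregated pressure-correction iterations, at the price of forming and factoring a coupled Jacobian.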
Mingus Discontinuous Multiphysics
Pat Notz, Dan Turner
2014-05-13
Mingus provides hybrid coupled local/non-local mechanics analysis capabilities that extend several traditional methods to applications with inherent discontinuities. Its primary features include adaptations of solid mechanics, fluid dynamics, and digital image correlation that naturally accommodate disjointed data or irregular solution fields by assimilating a variety of discretizations (such as control-volume finite elements, peridynamics, and meshless control point clouds). The goal of this software is to provide an analysis framework for multiphysics engineering problems with an integrated image correlation capability that can be used for experimental validation and model
Multiphysics Simulations: Challenges and Opportunities
Michael Pernice
2013-02-01
We consider multiphysics applications from algorithmic and architectural perspectives, where "algorithmic" includes both mathematical analysis and computational complexity, and "architectural" includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities.
Multiphysics Simulations: Challenges and Opportunities
Keyes, David E; McInnes, Lois; Woodward, Carol; Evans, Katherine J; Hill, Judith C
2013-01-01
We consider multiphysics applications from algorithmic and architectural perspectives, where algorithmic includes both mathematical analysis and computational complexity and architectural includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities. We also initiate a modest suite of test problems encompassing features present in many applications.
Multiphysics Simulations: Challenges and Opportunities
Keyes, David; McInnes, Lois C.; Woodward, Carol; Gropp, William; Myra, Eric; Pernice, Michael; Bell, John; Brown, Jed; Clo, Alain; Connors, Jeffrey; Constantinescu, Emil; Estep, Don; Evans, Kate; Farhat, Charbel; Hakim, Ammar; Hammond, Glenn E.; Hansen, Glen; Hill, Judith; Isaac, Tobin; Jiao, Xiangmin; Jordan, Kirk; Kaushik, Dinesh; Kaxiras, Efthimios; Koniges, Alice; Lee, Ki Hwan; Lott, Aaron; Lu, Qiming; Magerlein, John; Maxwell, Reed M.; McCourt, Michael; Mehl, Miriam; Pawlowski, Roger; Randles, Amanda; Reynolds, Daniel; Riviere, Beatrice; Rude, Ulrich; Scheibe, Timothy D.; Shadid, John; Sheehan, Brendan; Shephard, Mark; Siegel, Andrew; Smith, Barry; Tang, Xianzhu; Wilson, Cian; Wohlmuth, Barbara
2013-02-12
We consider multiphysics applications from algorithmic and architectural perspectives, where ‘‘algorithmic’’ includes both mathematical analysis and computational complexity, and ‘‘architectural’’ includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities.
Multiphysics simulations: challenges and opportunities.
Keyes, D.; McInnes, L. C.; Woodward, C.; Gropp, W.; Myra, E.; Pernice, M.
2012-11-29
This report is an outcome of the workshop Multiphysics Simulations: Challenges and Opportunities, sponsored by the Institute of Computing in Science (ICiS). Additional information about the workshop, including relevant reading and presentations on multiphysics issues in applications, algorithms, and software, is available via https://sites.google.com/site/icismultiphysics2011/. We consider multiphysics applications from algorithmic and architectural perspectives, where 'algorithmic' includes both mathematical analysis and computational complexity and 'architectural' includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities. We also initiate a modest suite of test problems encompassing features present in many applications.
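The "common algebraic coupling paradigm" these reports describe can be illustrated on a toy pair of linearly coupled fields, advanced either monolithically (one implicit solve for both unknowns) or in a partitioned, staggered fashion where each field sees the other's latest value. The system and coefficients below are assumed for illustration, not one of the reports' test problems.

```python
# Two linearly coupled fields:  du/dt = -2u + v,  dv/dt = u - 2v.
# Monolithic vs. partitioned (staggered) backward-Euler time stepping.
def monolithic_step(u, v, dt):
    # solve (I - dt*A) [u_new, v_new]^T = [u, v]^T in one coupled solve
    a11, a12 = 1.0 + 2.0 * dt, -dt
    a21, a22 = -dt, 1.0 + 2.0 * dt
    det = a11 * a22 - a12 * a21
    return ((u * a22 - v * a12) / det, (v * a11 - u * a21) / det)

def partitioned_step(u, v, dt):
    # staggered: advance u implicitly with frozen v, then v with new u
    u_new = (u + dt * v) / (1.0 + 2.0 * dt)
    v_new = (v + dt * u_new) / (1.0 + 2.0 * dt)
    return u_new, v_new

results = {}
for name, step in (("monolithic", monolithic_step),
                   ("partitioned", partitioned_step)):
    u, v = 1.0, 0.0
    for _ in range(100):
        u, v = step(u, v, dt=0.1)
    results[name] = (u, v)
# both schemes decay toward the (0, 0) equilibrium for this stable system
```

The trade-off the reports analyze appears even here: the partitioned scheme reuses single-field solvers but introduces a splitting error and can lose stability for stiff coupling, while the monolithic scheme requires assembling and solving the coupled system.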
Multiphysics analysis of liquid metal annular linear induction pumps: A project overview
Maidana, Carlos Omar; Nieminen, Juha E.
2016-03-14
Liquid metal-cooled fission reactors are both moderated and cooled by a liquid metal solution. These reactors are typically very compact and they can be used in regular electric power production, for naval and space propulsion systems or in fission surface power systems for planetary exploration. The coupling between the electromagnetics and thermo-fluid mechanical phenomena observed in liquid metal thermo-magnetic systems for nuclear and space applications gives rise to complex engineering magnetohydrodynamics and numerical problems. It is known that electromagnetic pumps have a number of advantages over rotating mechanisms: absence of moving parts, low noise and vibration level, simplicity of flow rate regulation, easy maintenance and so on. However, while developing annular linear induction pumps, we are faced with a significant problem of magnetohydrodynamic instability arising in the device. The complex flow behavior in this type of device includes a time-varying Lorentz force and pressure pulsations due to the time-varying electromagnetic fields and the induced convective currents that originate from the liquid metal flow, leading to instability problems along the device geometry. Determining the geometry and electrical configuration of liquid metal thermo-magnetic devices gives rise to a complex inverse magnetohydrodynamic field problem where techniques for global optimization should be used, magnetohydrodynamic instabilities understood or quantified, and multiphysics models developed and analyzed. Lastly, we present a project overview as well as a few computational models developed to study liquid metal annular linear induction pumps using first principles, and a few results of our multiphysics analysis.
Yu, Y. Q.; Shemon, E. R.; Mahadevan, Vijay S.; Rahaman, Ronald O.
2016-02-29
SHARP, developed under the NEAMS Reactor Product Line, is an advanced modeling and simulation toolkit for the analysis of advanced nuclear reactors. SHARP is comprised of three physics modules, currently including neutronics, thermal hydraulics, and structural mechanics. SHARP empowers designers to produce accurate results for modeling physical phenomena that have been identified as important for nuclear reactor analysis. SHARP can use existing physics codes and take advantage of existing infrastructure capabilities in the MOAB framework and the coupling driver/solver library, the Coupled Physics Environment (CouPE), which utilizes the widely used, scalable PETSc library. This report demonstrates the coupled-physics simulation capability of SHARP by introducing the demonstration example called sahex in advance of the SHARP release expected by March 2016. sahex consists of 6 fuel pins with cladding, 1 control rod, sodium coolant, and an outer duct wall that encloses all the other components. This example is carefully chosen to demonstrate the proof of concept for solving more complex demonstration examples such as the EBR-II assembly and the ABTR full core. The workflow of preparing the input files, running the case, and analyzing the results is demonstrated in this report. Moreover, an extension of the sahex model called sahex_core, which adds six homogenized neighboring assemblies to the fully heterogeneous sahex model, is presented to test homogenization capabilities in both Nek5000 and PROTEUS. Preliminary information on the configuration and build aspects of the SHARP toolkit, which includes the capability to auto-download dependencies and configure/install with optimal flags in an architecture-aware fashion, is also covered in this report. Step-by-step instructions are provided to help users create their own cases. Details on these processes will be provided in the SHARP user manual that will accompany the first release.
Donald Estep; Michael Holst; Simon Tavener
2010-02-08
This project was concerned with accurate computational error estimation for numerical solutions of multiphysics, multiscale systems that couple different physical processes acting across a large range of scales relevant to the interests of the DOE. Multiscale, multiphysics models are characterized by intimate interactions between different physics across a wide range of scales. This poses significant computational challenges addressed by the project, including: (1) accurate and efficient computation; (2) complex stability; and (3) linking different physics. The research in this project focused on Multiscale Operator Decomposition (MOD) methods for solving multiphysics problems. The general approach is to decompose a multiphysics problem into components involving simpler physics over a relatively limited range of scales, and then to seek the solution of the entire system through some sort of iterative procedure involving solutions of the individual components. MOD is a very widely used technique for solving multiphysics, multiscale problems; it is heavily used throughout the DOE computational landscape. This project made a major advance in the analysis of the solution of multiscale, multiphysics problems.
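Operator decomposition of the kind studied here can be sketched with first-order (Lie) splitting of a scalar ODE u' = f(u) + g(u): each sub-flow is solved exactly on its own, and the coupling error is the splitting error. The ODE, coefficients, and the first-order convergence check below are illustrative assumptions, not taken from the project report.

```python
import math

def lie_split(u0=1.0, t_end=1.0, h=0.1):
    """Lie (first-order) operator splitting for u' = f(u) + g(u) with
    f(u) = -u    (exact sub-flow: u*exp(-h)) and
    g(u) = -u**2 (exact sub-flow: u/(1 + h*u)).
    Each substep is one 'component solve' of the decomposition."""
    u = u0
    for _ in range(round(t_end / h)):
        u = u * math.exp(-h)   # exact flow of the linear component
        u = u / (1 + h * u)    # exact flow of the nonlinear component
    return u

def exact(t, u0=1.0):
    # u' = -u - u**2 is separable: u/(1+u) = (u0/(1+u0)) * exp(-t)
    r = (u0 / (1 + u0)) * math.exp(-t)
    return r / (1 - r)

err_h = abs(lie_split(h=0.1) - exact(1.0))
err_h2 = abs(lie_split(h=0.05) - exact(1.0))
```

Halving the step roughly halves the error, the O(h) signature of first-order decomposition; the a posteriori error estimation developed in this project targets exactly this kind of coupling error, which component-wise solves hide from the individual solvers.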
Multiphysics Integrated Coupling Environment (MICE) User Manual
Varija Agarwal; Donna Post Guillen
2013-08-01
The complex, multi-part nature of waste glass melters used in nuclear waste vitrification poses significant modeling challenges. The focus of this project has been to couple a 1D MATLAB model of the cold cap region within a melter with a 3D STAR-CCM+ model of the melter itself. The Multiphysics Integrated Coupling Environment (MICE) has been developed to create a cohesive simulation of a waste glass melter that accurately represents the cold cap. The one-dimensional mathematical model of the cold cap uses material properties, axial heat, and mass fluxes to obtain a temperature profile for the cold cap, the region where feed-to-glass conversion occurs. The results from MATLAB are used to update simulation data in the three-dimensional STAR-CCM+ model so that the cold cap is appropriately incorporated into the 3D simulation. The two processes are linked through ModelCenter integration software using time steps that are specified for each process. Data are exchanged cyclically between the two models, as the inputs and outputs of each model depend on the other.
Rao, T S; Kora, Aruna Jyothi; Chandramohan, P; Panigrahi, B S; Narasimhan, S V
2009-10-01
This article discusses aspects of biofouling and corrosion in the thermo-fluid heat exchanger (TFHX) and in the cooling water system of a nuclear test reactor. During inspection, it was observed that >90% of the TFHX tube bundle was clogged with thick fouling deposits. Both X-ray diffraction and Mössbauer analyses of the fouling deposit demonstrated iron corrosion products. The exterior of the tubercle showed the presence of a calcium and magnesium carbonate mixture along with iron oxides. Raman spectroscopy analysis confirmed the presence of calcium carbonate scale in the calcite phase. The interior of the tubercle contained significant iron sulphide, magnetite and iron-oxy-hydroxide. A microbiological assay showed a considerable population of iron oxidizing bacteria and sulphate reducing bacteria (10^5 to 10^6 cfu g^-1 of deposit). As the temperature of the TFHX is in the range of 45-50 °C, the microbiota isolated/assayed from the fouling deposit are designated as thermo-tolerant bacteria. The mean corrosion rate of the carbon steel (CS) coupons exposed online was approximately 2.0 mpy, and the microbial counts of various corrosion-causing bacteria were in the range 10^3 to 10^5 cfu ml^-1 in the cooling water and 10^6 to 10^8 cfu ml^-1 in the biofilm.
NASA Astrophysics Data System (ADS)
Qu, Zuopeng; Aravind, P. V.; Dekker, N. J. J.; Janssen, A. H. H.; Woudstra, N.; Verkooijen, A. H. M.
This paper presents a three-dimensional model of an anode-supported planar solid oxide fuel cell with corrugated bipolar plates serving as gas channels and current collector above the active area of the cell. Conservation equations of mass, momentum, energy and species are solved incorporating the electrochemical reactions. Heat transfer due to conduction, convection and radiation is included. An empirical equation for cell resistance with measured values for different parameters is used for the calculations. Distributions of temperature and gas concentrations in the PEN (positive electrode/electrolyte/negative electrode) structure and gas channels are investigated. Variation of current density over the cell is studied. Furthermore, the effect of radiation on the temperature distribution is studied and discussed. Modeling results show that a relatively uniform current density is achieved at the given conditions for the proposed design and that the inclusion of thermal radiation is required for accurate prediction of the temperature field in the single cell unit.
Multiphysics Object Oriented Simulation Environment
2014-02-12
The Multiphysics Object Oriented Simulation Environment (MOOSE) software library developed at Idaho National Laboratory is a tool. MOOSE, like other tools, doesn't actually complete a task. Instead, MOOSE seeks to reduce the effort required to create engineering simulation applications. MOOSE itself is a software library: a blank canvas upon which you write equations and then MOOSE can help you solve them. MOOSE is comparable to a spreadsheet application. A spreadsheet, by itself, doesn't do anything. Only once equations are entered into it will a spreadsheet application compute anything. Such is the same for MOOSE. An engineer or scientist can utilize the equation solvers within MOOSE to solve equations related to their area of study. For instance, a geomechanical scientist can input equations related to water flow in underground reservoirs and MOOSE can solve those equations to give the scientist an idea of how water could move over time. An engineer might input equations related to the forces in steel beams in order to understand the load bearing capacity of a bridge. Because MOOSE is a blank canvas it can be useful in many scientific and engineering pursuits.
Multiphysics Applications of ACE3P
K.H. Lee, C. Ko, Z. Li, C.-K. Ng, L. Xiao, G. Cheng, H. Wang
2012-07-01
The TEM3P module of ACE3P, a parallel finite-element electromagnetic code suite from SLAC, provides multiphysics simulation capabilities, including thermal and mechanical analysis for accelerator applications. In this paper, thermal analysis of coupler feedthroughs to superconducting rf (SRF) cavities is presented. For realistic simulation, an internal boundary condition is implemented to capture RF heating effects on the surface shared by a dielectric and a conductor. The multiphysics simulation with TEM3P matched the measurement within 0.4%.
Scalable Adaptive Multilevel Solvers for Multiphysics Problems
Xu, Jinchao
2014-11-26
In this project, we carried out many studies on adaptive and parallel multilevel methods for the numerical modeling of various applications, including magnetohydrodynamics (MHD) and complex fluids. We made significant advances in adaptive multilevel methods for multiphysics problems: multigrid methods, adaptive finite element methods, and applications.
Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models
Cetiner, Mustafa Sacit; none,; Flanagan, George F.; Poore III, Willis P.; Muhlheim, Michael David
2014-07-30
An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster-than-real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C#, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.
Structure-coupled multiphysics imaging in geophysical sciences
NASA Astrophysics Data System (ADS)
Gallardo, Luis A.; Meju, Max A.
2011-03-01
Multiphysics imaging or data inversion is of growing importance in many branches of science and engineering. In geophysical sciences, there is a need to combine information from multiple images acquired using different imaging devices and/or modalities, because doing so offers the potential for more accurate predictions. The major challenges are how to combine disparate data from unrelated physical phenomena while accounting for the different spatial scales of the measurement devices and model complexities, and how to quantify the associated uncertainties. This review paper summarizes the role played by the structural gradients-based approach for coupling fundamentally different physical fields in (mainly) geophysical inversion, develops further understanding of this approach to guide newcomers to the field, and defines the main challenges and directions for future research that may be useful in other fields of science and engineering.
Multiphysics simulation of corona discharge induced ionic wind
Cagnoni, Davide; Agostini, Francesco; Christen, Thomas; Parolini, Nicola; Stevanović, Ivica; Falco, Carlo de
2013-12-21
Ionic wind devices or electrostatic fluid accelerators are becoming of increasing interest as tools for thermal management, in particular for semiconductor devices. In this work, we present a numerical model for predicting the performance of such devices; its main benefit is the ability to accurately predict the amount of charge injected from the corona electrode. Our multiphysics numerical model consists of a highly nonlinear, strongly coupled set of partial differential equations including the Navier-Stokes equations for fluid flow, Poisson's equation for electrostatic potential, charge continuity, and heat transfer equations. To solve this system we employ a staggered solution algorithm that generalizes Gummel's algorithm for charge transport in semiconductors. Predictions of our simulations are verified and validated by comparison with experimental measurements of integral physical quantities, which are shown to closely match.
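A staggered (Gummel-style) solution strategy of the kind described above can be sketched on a much simpler toy than the corona-discharge system: a 0-D electro-thermal problem where an electrical solve with temperature frozen alternates with a thermal solve with Joule heating frozen. All component values are invented for illustration; this is not the paper's model.

```python
def staggered_electrothermal(V=10.0, R0=100.0, alpha=0.004,
                             T_amb=300.0, theta=5.0, tol=1e-10):
    """Gummel-style staggered iteration for a 0-D electro-thermal toy:
    electrical subproblem: R(T) = R0*(1 + alpha*(T - T_amb));
    thermal subproblem:    T = T_amb + theta*P, with Joule power
    P = V**2 / R held fixed during the thermal solve.
    The two solves are repeated until self-consistent."""
    T = T_amb
    for it in range(1, 1000):
        R = R0 * (1.0 + alpha * (T - T_amb))  # "electrical" solve, T frozen
        P = V * V / R                          # Joule heating source
        T_new = T_amb + theta * P              # "thermal" solve, P frozen
        if abs(T_new - T) < tol:
            return T_new, R, it
        T = T_new
    raise RuntimeError("staggered iteration did not converge")

T, R, iters = staggered_electrothermal()
```

In practice, strongly coupled problems like the corona model require under-relaxation or damping of the exchanged quantities, which is one way Gummel's original scheme is generalized.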
MASSIVE HYBRID PARALLELISM FOR FULLY IMPLICIT MULTIPHYSICS
Cody J. Permann; David Andrs; John W. Peterson; Derek R. Gaston
2013-05-01
As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in substantial hours of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them to both take advantage of current HPC architectures, and efficiently prepare for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and the internal object-oriented hybrid parallel design are given. Representative massively parallel results from several applications areas are presented, and a brief discussion of future areas of research for the framework are provided.
An Anisotropic Multiphysics Model for Intervertebral Disk
Gao, Xin; Zhu, Qiaoqiao; Gu, Weiyong
2016-01-01
Intervertebral disk (IVD) is the largest avascular structure in human body, consisting of three types of charged hydrated soft tissues. Its mechanical behavior is nonlinear and anisotropic, due mainly to nonlinear interactions among different constituents within tissues. In this study, a more realistic anisotropic multiphysics model was developed based on the continuum mixture theory and employed to characterize the couplings of multiple physical fields in the IVD. Numerical simulations demonstrate that this model is capable of systematically predicting the mechanical and electrochemical signals within the disk under various loading conditions, which is essential in understanding the mechanobiology of IVD. PMID:27099402
Time-parallel multiscale/multiphysics framework
Frantziskonis, G.; Muralidharan, Krishna; Deymier, Pierre; Simunovic, Srdjan; Nukala, Phani K; Pannala, Sreekanth
2009-01-01
We introduce the time-parallel compound wavelet matrix method (tpCWM) for modeling the temporal evolution of multiscale and multiphysics systems. The method couples time-parallel (TP) and CWM methods operating at different spatial and temporal scales. We demonstrate the efficiency of our approach on two examples: a chemical reaction kinetic system and a nonlinear predator-prey system. Our results indicate that the tpCWM technique is capable of accelerating time-to-solution by two to three orders of magnitude and is amenable to efficient parallel implementation.
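The tpCWM algorithm itself involves compound wavelet matrices, but its time-parallel (TP) ingredient can be illustrated with the best-known member of that family, the Parareal iteration: a cheap sequential coarse propagator corrects fine solves that may run concurrently over time slices. The decay ODE, propagators, and slice counts below are illustrative assumptions, not the tpCWM method.

```python
import math

def parareal(lam=-1.0, T=2.0, N=8, K=8, u0=1.0):
    """Parareal iteration for u' = lam*u on N time slices.
    fine:   exact propagator over one slice (stand-in for an expensive
            solver that can run concurrently across slices);
    coarse: one cheap backward-Euler step, applied sequentially."""
    dt = T / N
    fine = lambda u: u * math.exp(lam * dt)
    coarse = lambda u: u / (1.0 - lam * dt)

    U = [u0]                                   # initial coarse sweep
    for _ in range(N):
        U.append(coarse(U[-1]))
    for _ in range(K):
        F = [fine(U[n]) for n in range(N)]     # parallel-in-time solves
        G_old = [coarse(U[n]) for n in range(N)]
        U_new = [u0]
        for n in range(N):                     # sequential correction
            U_new.append(coarse(U_new[-1]) + F[n] - G_old[n])
        U = U_new
    return U

U = parareal()
```

A known property of this iteration is finite termination: after K = N corrections it reproduces the sequential fine solution exactly, so the speedup comes from converging in far fewer than N iterations.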
Advanced laser modeling with BLAZE multiphysics
NASA Astrophysics Data System (ADS)
Palla, Andrew D.; Carroll, David L.; Gray, Michael I.; Suzuki, Lui
2017-01-01
The BLAZE Multiphysics™ software simulation suite was specifically developed to model highly complex multiphysical systems in a computationally efficient and highly scalable manner. These capabilities are of particular use when applied to the complexities associated with high energy laser systems that combine subsonic/transonic/supersonic fluid dynamics, chemically reacting flows, laser electronics, heat transfer, optical physics, and in some cases plasma discharges. In this paper we present detailed cw and pulsed gas laser calculations using the BLAZE model with comparisons to data. Simulations of DPAL, XPAL, ElectricOIL (EOIL), and the optically pumped rare gas laser were found to be in good agreement with experimental data.
Schobeiri, T. (Dept. of Mechanical Engineering)
1990-03-01
The results of a study of the optimum thermo-fluid dynamic design concept are presented for turbine units operating within open-cycle ocean thermal energy conversion (OC-OTEC) systems. The concept is applied to the first OC-OTEC net power producing experiment (NPPE) facility to be installed at the Natural Energy Laboratory of Hawaii. Detailed efficiency and performance calculations were performed for the radial turbine design concept with single- and double-inflow arrangements. To complete the study, the calculation results for a single-stage axial steam turbine design are also presented. In contrast to the axial-flow design, which had a relatively low unit efficiency, higher efficiency was achieved for single-inflow turbines. The highest efficiency was calculated for a double-inflow radial design, which opens new perspectives for energy generation from OC-OTEC systems.
Multidimensional Multiphysics Simulation of TRISO Particle Fuel
J. D. Hales; R. L. Williamson; S. R. Novascone; D. M. Perez; B. W. Spencer; G. Pastore
2013-11-01
Multidimensional multiphysics analysis of TRISO-coated particle fuel using the BISON finite element-based nuclear fuels code is described. The governing equations and material models applicable to particle fuel and implemented in BISON are outlined. Code verification based on a recent IAEA benchmarking exercise is described, and excellent comparisons are reported. Multiple TRISO-coated particles of increasing geometric complexity are considered. The code's ability to perform large-scale parallel computations permits application to complex 3D phenomena, while very efficient solutions for 1D spherically symmetric or 2D axisymmetric geometries remain straightforward. Additionally, the flexibility to easily include new physical and material models and the uncomplicated ability to couple to lower length scale simulations make BISON a powerful tool for simulation of coated-particle fuel. Future code development activities and potential applications are identified.
Multidimensional multiphysics simulation of TRISO particle fuel
NASA Astrophysics Data System (ADS)
Hales, J. D.; Williamson, R. L.; Novascone, S. R.; Perez, D. M.; Spencer, B. W.; Pastore, G.
2013-11-01
Multidimensional multiphysics analysis of TRISO-coated particle fuel using the BISON finite element nuclear fuels code is described. The governing equations and material models applicable to particle fuel and implemented in BISON are outlined. Code verification based on a recent IAEA benchmarking exercise is described, and excellent comparisons are reported. Multiple TRISO-coated particles of increasing geometric complexity are considered. The code's ability to use the same algorithms and models to solve problems of varying dimensionality from 1D through 3D is demonstrated. The code provides rapid solutions of 1D spherically symmetric and 2D axially symmetric models, and its scalable parallel processing capability allows for solutions of large, complex 3D models. Additionally, the flexibility to easily include new physical and material models and straightforward ability to couple to lower length scale simulations makes BISON a powerful tool for simulation of coated-particle fuel. Future code development activities and potential applications are identified.
Multiphysics and Multiscale Analysis for Chemotherapeutic Drug
Zhang, Linan; Kim, Sung Youb; Kim, Dongchoul
2015-01-01
This paper presents a three-dimensional dynamic model for chemotherapy design based on a multiphysics and multiscale approach. The model incorporates cancer cells, matrix degrading enzymes (MDEs) secreted by cancer cells, degrading extracellular matrix (ECM), and a chemotherapeutic drug. Multiple mechanisms related to each component possible in chemotherapy are systematically integrated for high reliability of the computational analysis of chemotherapy. Moreover, the fidelity of the estimated efficacy of chemotherapy is enhanced by atomic information associated with the diffusion characteristics of the chemotherapeutic drug, which is obtained from atomic simulations. With the developed model, the invasion process of cancer cells under chemotherapy treatment is quantitatively investigated. The performed simulations suggest a substantial potential of the presented model as a reliable design technology for chemotherapy treatment. PMID:26491672
COMSOL MULTIPHYSICS MODEL FOR DWPF CANISTER FILLING
Kesterson, M.
2011-03-31
The purpose of this work was to develop a model that can be used to predict temperatures of the glass in the Defense Waste Processing Facility (DWPF) canisters during filling and cooldown. Past attempts to model these processes resulted in large (>200 K) differences between predicted and experimentally measured temperatures. This work was therefore also intended to generate a model capable of reproducing the experimentally measured trends of the glass/canister temperature during filling and subsequent cooldown of DWPF canisters. To accomplish this, a simplified model was created using the finite element modeling software COMSOL Multiphysics, which accepts user-defined constants or expressions to describe material properties. The model results were compared to existing experimental data for validation. A COMSOL Multiphysics model was developed to predict temperatures of the glass within DWPF canisters during filling and cooldown. The model simulations and experimental data were in good agreement. The largest temperature deviations were approximately 40 °C at the 87-inch thermocouple location at 3000 minutes and during the initial cooldown at the 51-inch location, occurring at approximately 600 minutes. Additionally, the model described in this report predicts the general trends in temperatures during filling and cooling observed experimentally. However, the model was developed using parameters designed to fit a single set of experimental data; therefore, Q-loss is not currently a function of pour rate and pour temperature. Future work utilizing the existing model should include modifying the Q-loss term to vary with flow rate and pour temperature. Further enhancements could include replacing the Q-loss term with user-defined convection, so that the Navier-Stokes equations do not need to be solved in order to include convective heat transfer.
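The role of a lumped Q-loss term can be sketched with a zero-dimensional cooldown model: instead of solving the Navier-Stokes equations for convection, heat loss is represented by a single coefficient. The values below (initial glass temperature, ambient temperature, loss coefficient, time step) are invented for illustration and are not DWPF parameters.

```python
import math

def cooldown(T0=1100.0, T_amb=25.0, UA=50.0, C=5.0e4, dt=1.0, t_end=600.0):
    """Zero-dimensional cooldown with a lumped loss term:
    C * dT/dt = -Q_loss(T),  with  Q_loss(T) = UA * (T - T_amb).
    Explicit-Euler time stepping; returns the temperature history."""
    k = UA / C                 # inverse time constant
    T, hist = T0, [T0]
    for _ in range(int(t_end / dt)):
        T += dt * (-k * (T - T_amb))
        hist.append(T)
    return hist

hist = cooldown()
# analytic solution of the same linear model, for comparison
T_exact = 25.0 + (1100.0 - 25.0) * math.exp(-(50.0 / 5.0e4) * 600.0)
```

Making UA depend on pour rate and pour temperature, as the report's future work suggests, amounts to replacing the constant coefficient with a correlation or lookup evaluated inside the time loop.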
Multiscale multiphysics and multidomain models—Flexibility and rigidity
Xia, Kelin; Opron, Kristopher; Wei, Guo-Wei
2013-11-21
The emerging complexity of large macromolecules has led to challenges in their full scale theoretical description and computer simulation. Multiscale multiphysics and multidomain models have been introduced to reduce the number of degrees of freedom while maintaining modeling accuracy and achieving computational efficiency. A total energy functional is constructed to put energies for polar and nonpolar solvation, chemical potential, fluid flow, molecular mechanics, and elastic dynamics on an equal footing. The variational principle is utilized to derive coupled governing equations for the above mentioned multiphysical descriptions. Among these governing equations is the Poisson-Boltzmann equation which describes continuum electrostatics with atomic charges. The present work introduces the theory of continuum elasticity with atomic rigidity (CEWAR). The essence of CEWAR is to formulate the shear modulus as a continuous function of atomic rigidity. As a result, the dynamics complexity of a macromolecular system is separated from its static complexity so that the more time-consuming dynamics is handled with continuum elasticity theory, while the less time-consuming static analysis is pursued with atomic approaches. We propose a simple method, flexibility-rigidity index (FRI), to analyze macromolecular flexibility and rigidity in atomic detail. The construction of FRI relies on the fundamental assumption that protein functions, such as flexibility, rigidity, and energy, are entirely determined by the structure of the protein and its environment, although the structure is in turn determined by all the interactions. As such, the FRI measures the topological connectivity of protein atoms or residues and characterizes the geometric compactness of the protein structure. As a consequence, the FRI does not resort to the interaction Hamiltonian and bypasses matrix diagonalization, which underpins most other flexibility analysis methods. FRI's computational complexity is of O
Multiscale multiphysics and multidomain models—Flexibility and rigidity
Xia, Kelin; Opron, Kristopher; Wei, Guo-Wei
2013-01-01
The emerging complexity of large macromolecules has led to challenges in their full scale theoretical description and computer simulation. Multiscale multiphysics and multidomain models have been introduced to reduce the number of degrees of freedom while maintaining modeling accuracy and achieving computational efficiency. A total energy functional is constructed to put energies for polar and nonpolar solvation, chemical potential, fluid flow, molecular mechanics, and elastic dynamics on an equal footing. The variational principle is utilized to derive coupled governing equations for the above mentioned multiphysical descriptions. Among these governing equations is the Poisson-Boltzmann equation which describes continuum electrostatics with atomic charges. The present work introduces the theory of continuum elasticity with atomic rigidity (CEWAR). The essence of CEWAR is to formulate the shear modulus as a continuous function of atomic rigidity. As a result, the dynamics complexity of a macromolecular system is separated from its static complexity so that the more time-consuming dynamics is handled with continuum elasticity theory, while the less time-consuming static analysis is pursued with atomic approaches. We propose a simple method, flexibility-rigidity index (FRI), to analyze macromolecular flexibility and rigidity in atomic detail. The construction of FRI relies on the fundamental assumption that protein functions, such as flexibility, rigidity, and energy, are entirely determined by the structure of the protein and its environment, although the structure is in turn determined by all the interactions. As such, the FRI measures the topological connectivity of protein atoms or residues and characterizes the geometric compactness of the protein structure. As a consequence, the FRI does not resort to the interaction Hamiltonian and bypasses matrix diagonalization, which underpins most other flexibility analysis methods. FRI's computational complexity is of
Multiscale multiphysics and multidomain models--flexibility and rigidity.
Xia, Kelin; Opron, Kristopher; Wei, Guo-Wei
2013-11-21
The emerging complexity of large macromolecules has led to challenges in their full scale theoretical description and computer simulation. Multiscale multiphysics and multidomain models have been introduced to reduce the number of degrees of freedom while maintaining modeling accuracy and achieving computational efficiency. A total energy functional is constructed to put energies for polar and nonpolar solvation, chemical potential, fluid flow, molecular mechanics, and elastic dynamics on an equal footing. The variational principle is utilized to derive coupled governing equations for the above mentioned multiphysical descriptions. Among these governing equations is the Poisson-Boltzmann equation which describes continuum electrostatics with atomic charges. The present work introduces the theory of continuum elasticity with atomic rigidity (CEWAR). The essence of CEWAR is to formulate the shear modulus as a continuous function of atomic rigidity. As a result, the dynamics complexity of a macromolecular system is separated from its static complexity so that the more time-consuming dynamics is handled with continuum elasticity theory, while the less time-consuming static analysis is pursued with atomic approaches. We propose a simple method, flexibility-rigidity index (FRI), to analyze macromolecular flexibility and rigidity in atomic detail. The construction of FRI relies on the fundamental assumption that protein functions, such as flexibility, rigidity, and energy, are entirely determined by the structure of the protein and its environment, although the structure is in turn determined by all the interactions. As such, the FRI measures the topological connectivity of protein atoms or residues and characterizes the geometric compactness of the protein structure. As a consequence, the FRI does not resort to the interaction Hamiltonian and bypasses matrix diagonalization, which underpins most other flexibility analysis methods. FRI's computational complexity is of O
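The FRI idea summarized above (rigidity as a kernel-weighted measure of how densely an atom is packed by its neighbors, flexibility as its reciprocal) can be sketched in a few lines. The Gaussian-type kernel and the scale parameter `eta` below are illustrative assumptions; the abstract does not fix a particular correlation function.

```python
import numpy as np

def fri_flexibility(coords, eta=3.0):
    """Flexibility-rigidity index (FRI) sketch with an assumed Gaussian kernel.

    coords: (N, 3) array of atomic positions.
    Rigidity index  mu_i = sum_j exp(-(||r_i - r_j|| / eta)^2)
    Flexibility index f_i = 1 / mu_i
    """
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    mu = np.exp(-(dist / eta) ** 2).sum(axis=1)  # rigidity per atom
    return 1.0 / mu                               # flexibility per atom
```

Consistent with the abstract, there is no interaction Hamiltonian and no matrix diagonalization: the cost is dominated by the pairwise distance evaluation over the structure.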
The Role of Multiphysics Simulation in Multidisciplinary Analysis
NASA Technical Reports Server (NTRS)
Rifai, Steven M.; Ferencz, Robert M.; Wang, Wen-Ping; Spyropoulos, Evangelos T.; Lawrence, Charles; Melis, Matthew E.
1998-01-01
This article describes the applications of the Spectrum(TM) Solver in Multidisciplinary Analysis (MDA). Spectrum, a multiphysics simulation software based on the finite element method, addresses compressible and incompressible fluid flow, structural, and thermal modeling as well as the interaction between these disciplines. Multiphysics simulation is based on a single computational framework for the modeling of multiple interacting physical phenomena. Interaction constraints are enforced in a fully-coupled manner using the augmented-Lagrangian method. Within the multiphysics framework, the finite element treatment of fluids is based on the Galerkin-Least-Squares (GLS) method with discontinuity capturing operators. The arbitrary Lagrangian-Eulerian method is utilized to account for deformable fluid domains. The finite element treatment of solids and structures is based on the Hu-Washizu variational principle. The multiphysics architecture lends itself naturally to high-performance parallel computing. Aeroelastic, propulsion, thermal management and manufacturing applications are presented.
Parallel multiphysics algorithms and software for computational nuclear engineering
NASA Astrophysics Data System (ADS)
Gaston, D.; Hansen, G.; Kadioglu, S.; Knoll, D. A.; Newman, C.; Park, H.; Permann, C.; Taitano, W.
2009-07-01
There is a growing trend in nuclear reactor simulation to consider multiphysics problems. This can be seen in reactor analysis where analysts are interested in coupled flow, heat transfer and neutronics, and in fuel performance simulation where analysts are interested in thermomechanics with contact coupled to species transport and chemistry. These more ambitious simulations usually motivate some level of parallel computing. Many of the coupling efforts to date utilize simple code coupling or first-order operator splitting, often referred to as loose coupling. While these approaches can produce answers, they usually leave questions of accuracy and stability unanswered. Additionally, the different physics often reside on separate grids which are coupled via simple interpolation, again leaving open questions of stability and accuracy. Utilizing state-of-the-art mathematics and software development techniques we are deploying next generation tools for nuclear engineering applications. The Jacobian-free Newton-Krylov (JFNK) method combined with physics-based preconditioning provides the underlying mathematical structure for our tools. JFNK is understood to be a modern multiphysics algorithm, but we are also utilizing its unique properties as a scale bridging algorithm. To facilitate rapid development of multiphysics applications we have developed the Multiphysics Object-Oriented Simulation Environment (MOOSE). Examples will be presented from two MOOSE-based applications: PRONGHORN, our multiphysics gas-cooled reactor simulation tool, and BISON, our multiphysics, multiscale fuel performance simulation tool.
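The JFNK method mentioned above can be illustrated compactly: the Krylov solver only ever needs Jacobian-vector products, which are approximated by finite-differencing the nonlinear residual, so no Jacobian matrix is ever formed. This is a minimal unpreconditioned sketch using SciPy's GMRES as the Krylov solver; the actual MOOSE implementation adds physics-based preconditioning and much more.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(F, u0, tol=1e-10, max_newton=50):
    """Jacobian-free Newton-Krylov sketch.

    Each Newton step solves J du = -F(u) with GMRES, where the action
    J @ v is approximated by (F(u + eps*v) - F(u)) / eps, so the
    Jacobian matrix is never assembled.
    """
    u = np.asarray(u0, dtype=float)
    for _ in range(max_newton):
        Fu = F(u)
        if np.linalg.norm(Fu) < tol:
            break
        eps = 1e-7 * (1.0 + np.linalg.norm(u))
        J = LinearOperator(
            (u.size, u.size),
            matvec=lambda v: (F(u + eps * v) - Fu) / eps)
        du, _ = gmres(J, -Fu, atol=1e-12)
        u = u + du
    return u
```

For a toy residual such as F(u) = (u0^2 + u1^2 - 4, u0 - u1), the iteration converges to (sqrt(2), sqrt(2)) in a handful of Newton steps.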
Parallel Multiphysics Algorithms and Software for Computational Nuclear Engineering
D. Gaston; G. Hansen; S. Kadioglu; D. A. Knoll; C. Newman; H. Park; C. Permann; W. Taitano
2009-08-01
There is a growing trend in nuclear reactor simulation to consider multiphysics problems. This can be seen in reactor analysis where analysts are interested in coupled flow, heat transfer and neutronics, and in fuel performance simulation where analysts are interested in thermomechanics with contact coupled to species transport and chemistry. These more ambitious simulations usually motivate some level of parallel computing. Many of the coupling efforts to date utilize simple 'code coupling' or first-order operator splitting, often referred to as loose coupling. While these approaches can produce answers, they usually leave questions of accuracy and stability unanswered. Additionally, the different physics often reside on separate grids which are coupled via simple interpolation, again leaving open questions of stability and accuracy. Utilizing state-of-the-art mathematics and software development techniques we are deploying next generation tools for nuclear engineering applications. The Jacobian-free Newton-Krylov (JFNK) method combined with physics-based preconditioning provides the underlying mathematical structure for our tools. JFNK is understood to be a modern multiphysics algorithm, but we are also utilizing its unique properties as a scale bridging algorithm. To facilitate rapid development of multiphysics applications we have developed the Multiphysics Object-Oriented Simulation Environment (MOOSE). Examples will be presented from two MOOSE-based applications: PRONGHORN, our multiphysics gas-cooled reactor simulation tool, and BISON, our multiphysics, multiscale fuel performance simulation tool.
Estep, Donald
2015-11-30
This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.
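For linear problems, the adjoint-based error estimation described above reduces to the dual-weighted residual identity J(u) - J(u_h) = phi . r, where phi solves the adjoint problem A^T phi = g and r is the residual of the approximate solution. A minimal sketch follows; the specific matrices and functional are illustrative, not taken from the project.

```python
import numpy as np

def adjoint_error_estimate(A, b, g, u_h):
    """Dual-weighted residual estimate of the error in J(u) = g.u
    for the linear problem A u = b, given an approximation u_h.

    For linear problems the estimate is exact:
        J(u) - J(u_h) = phi . r,  with  A^T phi = g,  r = b - A u_h.
    """
    phi = np.linalg.solve(A.T, g)  # adjoint (dual) solution
    r = b - A @ u_h                # residual of the approximate solution
    return phi @ r
```

In an adaptive setting, the elementwise contributions phi_i * r_i of this sum are what guide refinement toward the regions that most affect the functional of interest.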
Matthew Ellis; Derek Gaston; Benoit Forget; Kord Smith
2011-07-01
In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that, for a simplified model, the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
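A functional expansion tally represents a tallied distribution by its coefficients in an orthogonal basis, which the receiving code can then evaluate at arbitrary mesh points, which is how a Monte Carlo pin power can land on an unstructured finite element mesh. A minimal one-dimensional Legendre sketch follows; the function name and the assumption that the coefficients are already normalized are illustrative, not OpenMC's actual API.

```python
import numpy as np
from numpy.polynomial import legendre

def fet_reconstruct(coeffs, z, z_min, z_max):
    """Evaluate a 1-D Legendre functional-expansion tally at points z.

    coeffs[n] is assumed to be the tallied coefficient of P_n on
    [z_min, z_max], already scaled by the (2n+1)/2 normalization.
    """
    # Map physical coordinate onto the Legendre reference interval [-1, 1].
    x = 2.0 * (z - z_min) / (z_max - z_min) - 1.0
    return legendre.legval(x, coeffs)
```

Because the expansion is a smooth function rather than a histogram, the same coefficient set can be evaluated at quadrature points of any target mesh without a remapping step.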
Optimization of coupled multiphysics methodology for safety analysis of pebble bed modular reactor
NASA Astrophysics Data System (ADS)
Mkhabela, Peter Tshepo
The research conducted within the framework of this PhD thesis is devoted to the high-fidelity multi-physics (based on neutronics/thermal-hydraulics coupling) analysis of the Pebble Bed Modular Reactor (PBMR), which is a High Temperature Reactor (HTR). The Next Generation Nuclear Plant (NGNP) will be an HTR design. The core design and safety analysis methods are considerably less developed and mature for HTR analysis than those currently used for Light Water Reactors (LWRs). Compared to LWRs, HTR transient analysis is more demanding since it requires proper treatment of both slow, long transients (time scales of hours and days) and fast, short transients (time scales of minutes and seconds). There is limited operational and experimental data available for HTRs for validation of coupled multi-physics methodologies. This PhD work developed and verified reliable high-fidelity coupled multi-physics models, subsequently implemented in robust, efficient, and accurate computational tools, to analyse the neutronics and thermal-hydraulic behaviour for design optimization and safety evaluation of the PBMR concept. The study provided a contribution to a greater accuracy of neutronics calculations by including the feedback from the thermal-hydraulics-driven temperature calculation and various multi-physics effects that can influence it. Consideration of the feedback due to the influence of leakage was taken into account by development and implementation of improved buckling feedback models. Modifications were made in the calculation procedure to ensure that the xenon depletion models were accurate for proper interpolation from cross section tables. To achieve this, the NEM/THERMIX coupled code system was developed to create a system that is efficient and stable over the duration of transient calculations that last several tens of hours. Another achievement of the PhD thesis was the development and demonstration of full-physics, three-dimensional safety analysis
Tightly Coupled Multiphysics Algorithm for Pebble Bed Reactors
HyeongKae Park; Dana Knoll; Derek Gaston; Richard Martineau
2010-10-01
We have developed a tightly coupled multiphysics simulation tool for the pebble-bed reactor (PBR) concept, a type of Very High-Temperature gas-cooled Reactor (VHTR). The simulation tool, PRONGHORN, takes advantage of the Multiphysics Object-Oriented Simulation Environment library, and is capable of solving multidimensional thermal-fluid and neutronics problems implicitly with a Newton-based approach. Expensive Jacobian matrix formation is alleviated via the Jacobian-free Newton-Krylov method, and physics-based preconditioning is applied to minimize Krylov iterations. Motivation for the work is provided via analysis and numerical experiments on simpler multiphysics reactor models. We then provide detail of the physical models and numerical methods in PRONGHORN. Finally, PRONGHORN's algorithmic capability is demonstrated on a number of PBR test cases.
COMSOL Multiphysics Model for HLW Canister Filling
Kesterson, M. R.
2016-04-11
The U.S. Department of Energy (DOE) is building a Tank Waste Treatment and Immobilization Plant (WTP) at the Hanford Site in Washington to remediate 55 million gallons of radioactive waste that is being temporarily stored in 177 underground tanks. Efforts are being made to increase the loading of Hanford tank wastes in glass while meeting melter lifetime expectancies and process, regulatory, and product quality requirements. Wastes containing high concentrations of Al_{2}O_{3} and Na_{2}O can contribute to nepheline (generally NaAlSiO_{4}) crystallization, which can sharply reduce the chemical durability of high level waste (HLW) glass. Nepheline crystallization can occur during slow cooling of the glass within the stainless steel canister. The purpose of this work was to develop a model that can be used to predict temperatures of the glass in a WTP HLW canister during filling and cooling. The intent of the model is to support scoping work in the laboratory. It is not intended to provide precise predictions of temperature profiles, but rather to provide a simplified representation of glass cooling profiles within a full scale WTP HLW canister under various glass pouring rates. These data will be used to support laboratory studies for an improved understanding of the mechanisms of nepheline crystallization. The model was created using COMSOL Multiphysics, a commercially available software package. The model results were compared to available experimental data, TRR-PLT-080, and were found to yield results sufficient for the scoping nature of the study. The simulated temperatures were within 60 °C for the centerline, 0.0762 m (3 inch) from centerline, and 0.2286 m (9 inch) from centerline thermocouples once the thermocouples were covered with glass. The temperature difference between the experimental and simulated values reduced to 40 °C four hours after the thermocouple was covered, and down to 20 °C six hours after the thermocouple was covered.
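As a much-reduced analogue of such a canister cooling model, a one-dimensional explicit finite-difference conduction solve reproduces the qualitative behavior the abstract describes: a hot centerline that lags the cooled walls. The material values below are generic placeholders, not WTP-qualified glass properties.

```python
import numpy as np

def cool_slab(T0, T_wall, k=1.0, rho=2650.0, cp=900.0,
              L=0.3, nx=31, t_end=3600.0):
    """Explicit (FTCS) 1-D transient conduction sketch for a glass slab
    held between cooled walls. Illustrative properties only."""
    alpha = k / (rho * cp)            # thermal diffusivity [m^2/s]
    dx = L / (nx - 1)
    dt = 0.4 * dx * dx / alpha        # stable FTCS step (< 0.5 dx^2/alpha)
    T = np.full(nx, float(T0))
    T[0] = T[-1] = T_wall             # fixed wall temperatures
    t = 0.0
    while t < t_end:
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        t += dt
    return T
```

After an hour of simulated cooling from 1100 °C, the thermal penetration depth is only a few centimeters, so the centerline remains near the pour temperature while points near the wall have dropped sharply, which is the trend the thermocouple comparison above reflects.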
Final Report: Quantifying Prediction Fidelity in Multiscale Multiphysics Simulations
Long, Kevin
2014-09-30
We have developed algorithms and software in support of uncertainty quantification in nonlinear multiphysics simulations. This work includes high-level, high-performance software for large-scale, matrix-free linear algebra and a new algorithm for fast computation of transcendental functions of stochastic variables.
A theory manual for multi-physics code coupling in LIME.
Belcourt, Noel; Bartlett, Roscoe Ainsworth; Pawlowski, Roger Patrick; Schmidt, Rodney Cannon; Hooper, Russell Warren
2011-03-01
The Lightweight Integrating Multi-physics Environment (LIME) is a software package for creating multi-physics simulation codes. Its primary application space is when computer codes are currently available to solve different parts of a multi-physics problem and now need to be coupled with other such codes. In this report we define a common domain language for discussing multi-physics coupling and describe the basic theory associated with multiphysics coupling algorithms that are to be supported in LIME. We provide an assessment of coupling techniques for both steady-state and time dependent coupled systems. Example couplings are also demonstrated.
Solid Rocket Motor Combustion Instability Modeling in COMSOL Multiphysics
NASA Technical Reports Server (NTRS)
Fischbach, Sean R.
2015-01-01
Combustion instability modeling of Solid Rocket Motors (SRM) remains a topic of active research. Many rockets display violent fluctuations in pressure, velocity, and temperature originating from the complex interactions between the combustion process, acoustics, and steady-state gas dynamics. Recent advances in defining the energy transport of disturbances within steady flow-fields have been applied by combustion stability modelers to improve the analysis framework [1, 2, 3]. Employing this more accurate global energy balance requires a higher fidelity model of the SRM flow-field and acoustic mode shapes. The current industry standard analysis tool utilizes a one dimensional analysis of the time dependent fluid dynamics along with a quasi-three dimensional propellant grain regression model to determine the SRM ballistics. The code then couples with another application that calculates the eigenvalues of the one dimensional homogenous wave equation. The mean flow parameters and acoustic normal modes are coupled to evaluate the stability theory developed and popularized by Culick [4, 5]. The assumption of a linear, non-dissipative wave in a quiescent fluid remains valid while acoustic amplitudes are small and local gas velocities stay below Mach 0.2. The current study employs the COMSOL Multiphysics finite element framework to model the steady flow-field parameters and acoustic normal modes of a generic SRM. The study requires one way coupling of the CFD High Mach Number Flow (HMNF) and mathematics module. The HMNF module evaluates the gas flow inside of a SRM using St. Robert's law to model the solid propellant burn rate, no slip boundary conditions, and the hybrid outflow condition. Results from the HMNF model are verified by comparing the pertinent ballistics parameters with the industry standard code outputs (e.g., pressure drop, thrust, etc.). These results are then used by the coefficient form of the mathematics module to determine the complex eigenvalues of the
Evaluation of HFIR LEU Fuel Using the COMSOL Multiphysics Platform
Primm, Trent; Ruggles, Arthur; Freels, James D
2009-03-01
A finite element computational approach to simulation of the High Flux Isotope Reactor (HFIR) core thermal-fluid behavior is developed. These models were developed to facilitate design of a low enriched core for the HFIR, which will have different axial and radial flux profiles from the current HEU core and thus will require fuel and poison load optimization. This report outlines a stepwise implementation of this modeling approach using the commercial finite element code COMSOL, with initial assessment of fuel, poison, and clad conduction modeling capability, followed by assessment of mating the fuel conduction models to a one-dimensional fluid model typical of legacy simulation techniques for the HFIR core. The model is then extended to fully couple two-dimensional conduction in the fuel to a two-dimensional thermo-fluid model of the coolant for a HFIR core cooling sub-channel, with additional assessment of simulation outcomes. Finally, three-dimensional simulations of a fuel plate and cooling channel are presented.
Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis
Wilson, Paul; Evans, Thomas; Tautges, Tim
2012-12-24
This project will accumulate high-precision fluxes throughout reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well
Mathematical and Computational Modeling of Multiphysics Couplings in the Geosciences
NASA Astrophysics Data System (ADS)
Wheeler, M. F.
2004-12-01
Multiphysics couplings can happen in different ways. One may have different physical processes (e.g. flow, transport, reactions) occurring within the same physical domain, or one may have different physical regimes (e.g., surface/subsurface environments, fluid/structure interactions) interacting through interfaces. We will discuss both of these types of multiphysics couplings during this presentation. Of particular interest will be the development of interpolation/projection algorithms for projecting physical quantities from one space/time grid to another, the investigation of mortar and mortar-free methods for coupling multiple physical domains, and the coupling of non-conforming and conforming finite element methods.
Unsteady Cascade Aerodynamic Response Using a Multiphysics Simulation Code
NASA Technical Reports Server (NTRS)
Lawrence, C.; Reddy, T. S. R.; Spyropoulos, E.
2000-01-01
The multiphysics code Spectrum(TM) is applied to calculate the unsteady aerodynamic pressures of an oscillating cascade of airfoils representing a blade row of a turbomachinery component. Multiphysics simulation is based on a single computational framework for the modeling of multiple interacting physical phenomena, in the present case being between fluids and structures. Interaction constraints are enforced in a fully coupled manner using the augmented-Lagrangian method. The arbitrary Lagrangian-Eulerian method is utilized to account for deformable fluid domains resulting from blade motions. Unsteady pressures are calculated for a cascade designated as the tenth standard, undergoing plunging and pitching oscillations. The predicted unsteady pressures are compared with those obtained from an unsteady Euler code referred to in the literature. The Spectrum(TM) code predictions showed good correlation for the cases considered.
Multiphysics modeling and uncertainty quantification for an active composite reflector
NASA Astrophysics Data System (ADS)
Peterson, Lee D.; Bradford, S. C.; Schiermeier, John E.; Agnes, Gregory S.; Basinger, Scott A.
2013-09-01
A multiphysics, high resolution simulation of an actively controlled, composite reflector panel is developed to extrapolate from ground test results to flight performance. The subject test article has previously demonstrated sub-micron corrected shape in a controlled laboratory thermal load. This paper develops a model of the on-orbit performance of the panel under realistic thermal loads, with an active heater control system, and performs an uncertainty quantification of the predicted response. The primary contribution of this paper is the first reported application of the Sandia-developed Sierra mechanics simulation tools to a spacecraft multiphysics simulation of a closed-loop system, including uncertainty quantification. The simulation was developed so as to have sufficient resolution to capture the residual panel shape error that remains after the thermal and mechanical control loops are closed. An uncertainty quantification analysis was performed to assess the predicted tolerance in the closed-loop wavefront error. Key tools used for the uncertainty quantification are also described.
COMSOL MULTIPHYSICS MODEL FOR DWPF CANISTER FILLING, REVISION 1
Kesterson, M.
2011-09-08
This revision is an extension of the COMSOL Multiphysics model previously developed and documented to simulate the temperatures of the glass during pouring of a Defense Waste Processing Facility (DWPF) canister. In that report the COMSOL Multiphysics model used a lumped heat loss term derived from experimental thermocouple data based on a nominal pour rate of 228 lbs./hr. As such, the model developed using the lumped heat loss term had limited application without additional experimental data. Therefore, the COMSOL Multiphysics model was modified to simulate glass pouring and subsequent heat input, which replaced the heat loss term in the initial model. This new model allowed for changes in flow geometry based on pour rate as well as the ability to increase and decrease flow and stop and restart flow to simulate varying process conditions. A revised COMSOL Multiphysics model was developed to predict temperatures of the glass within DWPF canisters during filling and cooldown. The model simulations and experimental data were in good agreement. The largest temperature deviations were approximately 40 °C for the 87 inch thermocouple location at 3000 minutes and during the initial cooldown at the 51 inch location occurring at approximately 600 minutes. Additionally, the model described in this report predicts the general temperature trends during filling and cooling as observed experimentally. The revised model incorporates a heat flow region corresponding to the glass pouring down the centerline of the canister. The geometry of this region is dependent on the flow rate of the glass and can therefore be used to see temperature variations for various pour rates. The equations used for this model were developed by comparing simulation output to experimental data from a single pour rate. Use of the model will predict temperature profiles for other pour rates, but the accuracy of the simulations is unknown because only a single flow rate was available for comparison.
A MULTIDIMENSIONAL AND MULTIPHYSICS APPROACH TO NUCLEAR FUEL BEHAVIOR SIMULATION
R. L. Williamson; J. D. Hales; S. R. Novascone; M. R. Tonks; D. R. Gaston; C. J. Permann; D. Andrs; R. C. Martineau
2012-04-01
Important aspects of fuel rod behavior, for example pellet-clad mechanical interaction (PCMI), fuel fracture, oxide formation, non-axisymmetric cooling, and response to fuel manufacturing defects, are inherently multidimensional in addition to being complicated multiphysics problems. Many current modeling tools are strictly 2D axisymmetric or even 1.5D. This paper outlines the capabilities of a new fuel modeling tool able to analyze either 2D axisymmetric or fully 3D models. These capabilities include temperature-dependent thermal conductivity of fuel; swelling and densification; fuel creep; pellet fracture; fission gas release; cladding creep; irradiation growth; and gap mechanics (contact and gap heat transfer). The need for multiphysics, multidimensional modeling is then demonstrated through a discussion of results for a set of example problems. The first, a 10-pellet rodlet, demonstrates the viability of the solution method employed. This example highlights the effect of our smeared cracking model and also shows the multidimensional nature of discrete fuel pellet modeling. The second example relies on our multidimensional, multiphysics approach to analyze a missing pellet surface problem. As a final example, we show a lower-length-scale simulation coupled to a continuum-scale simulation.
Mathematical and algorithmic issues in multiphysics coupling.
Gai, Xiuli; Stone, Charles Michael; Wheeler, Mary Fanett
2004-06-01
The modeling of fluid/structure interaction is of growing importance in both energy and environmental applications. Because of the inherent complexity, these problems must be simulated on parallel machines in order to achieve high resolution. The purpose of this research was to investigate techniques for coupling flow and geomechanics in porous media that are suitable for parallel computation. In particular, our main objective was to develop an iterative technique which can be as accurate as a fully coupled model but which allows for robust and efficient coupling of existing complex models (software). A parallel linear elastic module was developed which was coupled to a three-phase, three-component black oil model in IPARS (Integrated Parallel Accurate Reservoir Simulator). An iterative de-coupling technique was introduced at each time step. The resulting nonlinear iteration involved solving for displacements and flow sequentially. Rock compressibility was used in the flow model to account for the effect of deformation on the pore volume. Convergence was achieved when the mass balance for each component satisfied a given tolerance. This approach was validated by comparison with a fully coupled approach implemented in the British Petroleum-Amoco ACRES simulator. Another objective of this work was to develop an efficient parallel solver for the elasticity equations. A preconditioned conjugate gradient solver was implemented to solve the algebraic system arising from tensor product linear Galerkin approximations for the displacements. Three preconditioners were developed: LSOR (line successive over-relaxation), block Jacobi, and agglomeration multi-grid. The latter approach involved coarsening the 3D system to 2D and using LSOR as a smoother that is followed by applying geometric multi-grid with SOR (successive over-relaxation) as a smoother. Preliminary tests on a 64-node Beowulf cluster at CSM indicate that the agglomeration multi-grid approach is robust and efficient.
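Of the preconditioners mentioned, the Jacobi family is the simplest to sketch: precondition conjugate gradients with the inverse of the matrix diagonal (the block variant uses diagonal blocks instead). The following pointwise-Jacobi PCG for a generic symmetric positive-definite system is illustrative only, not the IPARS implementation.

```python
import numpy as np

def pcg_jacobi(A, b, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients with a Jacobi (diagonal)
    preconditioner: M = diag(A), applied as multiplication by 1/diag(A)."""
    M_inv = 1.0 / np.diag(A)     # Jacobi preconditioner action
    x = np.zeros_like(b)
    r = b - A @ x                # initial residual
    z = M_inv * r                # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Jacobi is cheap and embarrassingly parallel, which is why it appears as a baseline next to LSOR and multigrid in the comparison above; its weakness is that convergence degrades on strongly anisotropic or ill-conditioned elasticity systems.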
Numerical Simulations of Single Flow Element in a Nuclear Thermal Thrust Chamber
NASA Technical Reports Server (NTRS)
Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See
2007-01-01
The objective of this effort is to develop an efficient and accurate computational methodology to predict both detailed and global thermo-fluid environments of a single flow element in a hypothetical solid-core nuclear thermal thrust chamber assembly. Several numerical and multi-physics thermo-fluid models, such as chemical reactions, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver. The numerical simulations of a single flow element provide a detailed thermo-fluid environment for thermal stress estimation and insight into the possible occurrence of mid-section corrosion. In addition, detailed conjugate heat transfer simulations were employed to develop the porosity models for efficient pressure drop and thermal load calculations.
Lithium-Ion Battery Safety Study Using Multi-Physics Internal Short-Circuit Model (Presentation)
Kim, G-.H.; Smith, K.; Pesaran, A.
2009-06-01
This presentation outlines NREL's multi-physics simulation study to characterize an internal short by linking and integrating electrochemical cell, electro-thermal, and abuse reaction kinetics models.
Solid Rocket Motor Combustion Instability Modeling in COMSOL Multiphysics
NASA Technical Reports Server (NTRS)
Fischbach, S. R.
2015-01-01
Combustion instability modeling of Solid Rocket Motors (SRM) remains a topic of active research. Many rockets display violent fluctuations in pressure, velocity, and temperature originating from the complex interactions between the combustion process, acoustics, and steady-state gas dynamics. Recent advances in defining the energy transport of disturbances within steady flow-fields have been applied by combustion stability modelers to improve the analysis framework. Employing this more accurate global energy balance requires a higher fidelity model of the SRM flow-field and acoustic mode shapes. The current industry standard analysis tool utilizes a one dimensional analysis of the time dependent fluid dynamics along with a quasi-three dimensional propellant grain regression model to determine the SRM ballistics. The code then couples with another application that calculates the eigenvalues of the one dimensional homogenous wave equation. The mean flow parameters and acoustic normal modes are coupled to evaluate the stability theory developed and popularized by Culick. The assumption of a linear, non-dissipative wave in a quiescent fluid remains valid while acoustic amplitudes are small and local gas velocities stay below Mach 0.2. The current study employs the COMSOL Multiphysics finite element framework to model the steady flow-field parameters and acoustic normal modes of a generic SRM. This work builds upon previous efforts to verify the use of the acoustic velocity potential equation (AVPE) laid out by Campos. The acoustic velocity potential (ψ) describing the acoustic wave motion in the presence of an inhomogeneous steady high-speed flow is defined by ∇²ψ − (λ/c)²ψ − M·[M·∇(∇ψ)] − 2(λM/c + M·∇M)·∇ψ − 2λψ[M·∇(1/c)] = 0, with M as the Mach vector, c as the speed of sound, and λ as the complex eigenvalue. The study requires one way coupling of the CFD High Mach Number Flow (HMNF
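In the quiescent limit (M = 0, uniform sound speed) the AVPE quoted above reduces to the Helmholtz equation, and its one-dimensional version can be solved by finite differences as a toy stand-in for the chamber eigenproblem. For an open-open tube of length L with ψ = 0 at both ends, the exact modes are ω_k = kπc/L; this discrete sketch should recover them to within discretization error.

```python
import numpy as np

def tube_modes(num_modes=3, L=1.0, c=343.0, n=200):
    """Acoustic modes of an open-open tube from the M = 0 limit of the
    AVPE, i.e. psi'' + (omega/c)^2 psi = 0 with psi(0) = psi(L) = 0,
    discretized by second-order finite differences on n interior points."""
    dx = L / (n + 1)
    # Dense -psi'' operator (tridiagonal) on the interior points.
    lap = (np.diag(np.full(n, 2.0))
           - np.diag(np.ones(n - 1), 1)
           - np.diag(np.ones(n - 1), -1)) / dx**2
    eigs = np.sort(np.linalg.eigvalsh(lap))[:num_modes]
    return c * np.sqrt(eigs)   # angular frequencies omega_k [rad/s]
```

The real SRM problem replaces this symmetric eigenproblem with a complex, flow-coupled one, which is why the abstract's study needs the CFD mean flow as an input to the eigenvalue module.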
Multi-Physics Analysis of the Fermilab Booster RF Cavity
Awida, M.; Reid, J.; Yakovlev, V.; Lebedev, V.; Khabiboulline, T.; Champion, M.; /Fermilab
2012-05-14
After about 40 years of operation, the RF accelerating cavities in the Fermilab Booster need an upgrade to improve their reliability and to increase the repetition rate in order to support a future experimental program. An increase in the repetition rate from 7 to 15 Hz entails increasing the power dissipation in the RF cavities, their ferrite-loaded tuners, and HOM dampers. The increased duty factor requires careful modelling of the RF heating effects in the cavity. A multi-physics analysis investigating both the RF and thermal properties of the Booster cavity under various operating conditions is presented in this paper.
Advanced multiphysics coupling for LWR fuel performance analysis
Hales, J. D.; Tonks, M. R.; Gleicher, F. N.; Spencer, B. W.; Novascone, S. R.; Williamson, R. L.; Pastore, G.; Perez, D. M.
2015-10-01
Even the most basic nuclear fuel analysis is a multiphysics undertaking, as a credible simulation must consider at a minimum coupled heat conduction and mechanical deformation. The need for more realistic fuel modeling under a variety of conditions invariably leads to a desire to include coupling between a more complete set of the physical phenomena influencing fuel behavior, including neutronics, thermal hydraulics, and mechanisms occurring at lower length scales. This paper covers current efforts toward coupled multiphysics LWR fuel modeling in three main areas. The first area covered in this paper concerns thermomechanical coupling. The interaction of these two physics, particularly related to the feedback effect associated with heat transfer and mechanical contact at the fuel/clad gap, provides numerous computational challenges. An outline is provided of an effective approach used to manage the nonlinearities associated with an evolving gap in BISON, a nuclear fuel performance application. A second type of multiphysics coupling described here is that of coupling neutronics with thermomechanical LWR fuel performance. DeCART, a high-fidelity core analysis program based on the method of characteristics, has been coupled to BISON. DeCART provides sub-pin level resolution of the multigroup neutron flux, with resonance treatment, during a depletion or a fast transient simulation. Two-way coupling between these codes was achieved by mapping fission rate density and fast neutron flux fields from DeCART to BISON and the temperature field from BISON to DeCART while employing a Picard iterative algorithm. Finally, the need for multiscale coupling is considered. Fission gas production and evolution significantly impact fuel performance by causing swelling, a reduction in the thermal conductivity, and fission gas release. The mechanisms involved occur at the atomistic and grain scale and are therefore not the domain of a fuel performance code. However, it is possible to use
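The two-way coupling with a Picard iterative algorithm described above can be sketched with toy stand-ins for the two codes: a "neutronics" step whose power falls with fuel temperature (Doppler-like feedback) and a "thermal" step whose temperature rises with power. The feedback laws and constants are illustrative, not BISON/DeCART physics.

```python
# Toy field exchange: temperature -> power (with negative feedback),
# power -> temperature. Constants are illustrative assumptions.
def neutronics(T, P0=200.0, T0=300.0, alpha=1e-3):
    return P0 / (1.0 + alpha * (T - T0))   # linear power, falls as fuel heats

def thermal(P, T0=300.0, R=2.0):
    return T0 + R * P                      # temperature rise proportional to power

def picard_couple(tol=1e-8, max_it=100):
    T = 300.0
    for it in range(1, max_it + 1):
        P = neutronics(T)          # map temperature field to fission power
        T_new = thermal(P)         # map power back to temperature
        if abs(T_new - T) < tol:   # converged when exchanged field stops changing
            return T_new, P, it
        T = T_new
    raise RuntimeError("Picard iteration did not converge")
```

Because the negative temperature feedback makes the composite map a contraction, the fixed-point loop converges without any Jacobian information, which is the appeal of Picard coupling between existing codes.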
IMPETUS - Interactive MultiPhysics Environment for Unified Simulations.
Ha, Vi Q; Lykotrafitis, George
2016-12-08
We introduce IMPETUS - Interactive MultiPhysics Environment for Unified Simulations, an object-oriented, easy-to-use, high-performance C++ program for three-dimensional simulations of complex physical systems that can benefit a large variety of research areas, especially cell mechanics. The program implements cross-communication between locally interacting particles and continuum models residing in the same physical space, while a network facilitates long-range particle interactions. The Message Passing Interface is used for inter-processor communication in all simulations.
Modeling of the thermal comfort in vehicles using COMSOL multiphysics
NASA Astrophysics Data System (ADS)
Gavrila, Camelia; Vartires, Andreea
2016-12-01
The environmental quality in vehicles is an important aspect of vehicle design, and evaluating the thermal comfort inside the car is essential for ensuring a safe trip. The aim of this paper is to model and simulate the thermal comfort inside vehicles, using the COMSOL Multiphysics program, for different ventilation grilles. The objective is to implement innovative air-diffusion grilles in a prototype vehicle. The idea behind this goal is to introduce air diffusers with a special geometry that improves mixing between the hot or cold conditioned air introduced into the cockpit and the ambient air.
High-Fidelity Space-Time Adaptive Multiphysics Simulations in Nuclear Engineering
Solin, Pavel; Ragusa, Jean
2014-03-09
We delivered a series of fundamentally new computational technologies that have the potential to significantly advance the state-of-the-art of computer simulations of transient multiphysics nuclear reactor processes. These methods were implemented in the form of a C++ library, and applied to a number of multiphysics coupled problems relevant to nuclear reactor simulations.
Plasma Simulation in the Multiphysics Object Oriented Simulation Environment MOOSE
NASA Astrophysics Data System (ADS)
Shannon, Steven; Lindsay, Alex; Graves, David; Icenhour, Casey; Peterson, David; White, Scott
2016-09-01
MOOSE is an open-source multiphysics solver developed by Idaho National Laboratory that is primarily used for the simulation of fission reactor systems; the framework is also well suited for the simulation of plasma systems, given the development of appropriate modules not currently in the framework, such as electromagnetic solvers and Boltzmann solvers. It is structured for user development of application-specific modules and is intended for both workstation-level and high-performance massively parallel environments. We have begun the development of plasma modules in the MOOSE environment and carried out preliminary simulations of the plasma/liquid interface to elucidate coupling mechanisms between these states using a fully coupled multiphysics model; these results agree well with PIC simulations of the same system and show a strong response of plasma parameters to electron reflection at the liquid surface. These results will be presented along with an overview of MOOSE and ongoing module development to extend capabilities to a broader set of research challenges in low-temperature plasmas, with particular focus on RF and pulsed-RF driven systems.
Multiphysics modeling of the steel continuous casting process
NASA Astrophysics Data System (ADS)
Hibbeler, Lance C.
This work develops a macroscale, multiphysics model of the continuous casting of steel. The complete model accounts for the turbulent flow and nonuniform distribution of superheat in the molten steel, the elastic-viscoplastic thermal shrinkage of the solidifying shell, the heat transfer through the shell-mold interface with variable gap size, and the thermal distortion of the mold. These models are coupled together with carefully constructed boundary conditions with the aid of reduced-order models into a single tool to investigate behavior in the mold region, for practical applications such as predicting ideal tapers for a beam-blank mold. The thermal and mechanical behaviors of the mold are explored as part of the overall modeling effort, for funnel molds and for beam-blank molds. These models include high geometric detail and reveal temperature variations on the mold-shell interface that may be responsible for cracks in the shell. Specifically, the funnel mold has a column of mold bolts in the middle of the inside-curve region of the funnel that disturbs the uniformity of the hot face temperatures, which combined with the bending effect of the mold on the shell, can lead to longitudinal facial cracks. The shoulder region of the beam-blank mold shows a local hot spot that can be reduced with additional cooling in this region. The distorted shape of the funnel mold narrow face is validated with recent inclinometer measurements from an operating caster. The calculated hot face temperatures and distorted shapes of the mold are transferred into the multiphysics model of the solidifying shell. The boundary conditions for the first iteration of the multiphysics model come from reduced-order models of the process; one such model is derived in this work for mold heat transfer. The reduced-order model relies on the physics of the solution to the one-dimensional heat-conduction equation to maintain the relationships between inputs and outputs of the model. The geometric
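The reduced-order mold heat-transfer model mentioned above builds on the one-dimensional heat-conduction equation. A minimal sketch of that kind of relation, with illustrative copper-mold property values rather than the paper's data: steady conduction through a wall of thickness d links the shell-mold interfacial heat flux q to the cooling-water temperature.

```python
# 1D steady reduced-order mold heat transfer (illustrative values):
# q [W/m^2] interfacial heat flux, d [m] wall thickness, k [W/m-K] copper
# conductivity, h_w [W/m^2-K] water-channel convection coefficient,
# T_w [K] cooling-water temperature.
def mold_face_temps(q=2.0e6, d=0.025, k=350.0, h_w=30e3, T_w=310.0):
    T_cold = T_w + q / h_w        # cold face: convective resistance to water
    T_hot = T_cold + q * d / k    # hot face: linear conduction drop across wall
    return T_hot, T_cold
```

Such an algebraic input-output relation is cheap enough to supply boundary conditions for the first iteration of a full multiphysics model, which is the role the reduced-order model plays in the work above.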
Solid Oxide Fuel Cell - Multi-Physics and GUI
2013-10-10
SOFC-MP is a simulation tool developed at PNNL to evaluate the tightly coupled multi-physical phenomena in SOFCs. The purpose of the tool is to allow SOFC manufacturers to numerically test changes in planar stack design to meet DOE technical targets. The SOFC-MP 2D module is designed for computational efficiency to enable rapid engineering evaluations for the operation of tall symmetric stacks. It can quickly compute distributions of the current density, voltage, temperature, and species composition in tall stacks with co-flow or counter-flow orientations. The 3D module computes distributions over the entire 3D domain and handles all planar configurations: co-flow, counter-flow, and cross-flow. The detailed data from a 3D simulation can be used as input for structural analysis. The SOFC-MP GUI integrates both the 2D and 3D modules, and it provides user-friendly pre-processing and post-processing capabilities.
Multiscale Multiphysics Developments for Accident Tolerant Fuel Concepts
Gamble, K. A.; Hales, J. D.; Yu, J.; Zhang, Y.; Bai, X.; Andersson, D.; Patra, A.; Wen, W.; Tome, C.; Baskes, M.; Martinez, E.; Stanek, C. R.; Miao, Y.; Ye, B.; Hofman, G. L.; Yacout, A. M.; Liu, W.
2015-09-01
U_{3}Si_{2} and iron-chromium-aluminum (Fe-Cr-Al) alloys are two of many proposed accident-tolerant fuel concepts for the fuel and cladding, respectively. The behavior of these materials under normal operating and accident reactor conditions is not well known. As part of the Department of Energy’s Accident Tolerant Fuel High Impact Problem program significant work has been conducted to investigate the U_{3}Si_{2} and FeCrAl behavior under reactor conditions. This report presents the multiscale and multiphysics effort completed in fiscal year 2015. The report is split into four major categories including Density Functional Theory Developments, Molecular Dynamics Developments, Mesoscale Developments, and Engineering Scale Developments. The work shown here is a compilation of a collaborative effort between Idaho National Laboratory, Los Alamos National Laboratory, Argonne National Laboratory and Anatech Corp.
Multilingual interfaces for parallel coupling in multiphysics and multiscale systems.
Ong, E. T.; Larson, J. W.; Norris, B.; Jacob, R. L.; Tobis, M.; Steder, M.; Mathematics and Computer Science; Univ. of Wisconsin; Australian National Univ.; Univ. of Chicago
2007-01-01
Multiphysics and multiscale simulation systems are emerging as a new grand challenge in computational science, largely because of increased computing power provided by the distributed-memory parallel programming model on commodity clusters. These systems often present a parallel coupling problem in their intercomponent data exchanges. Another potential problem in these coupled systems is language interoperability between their various constituent codes. In anticipation of combined parallel coupling/language interoperability challenges, we have created a set of interlanguage bindings for a successful parallel coupling library, the Model Coupling Toolkit (MCT). We describe the method used for automatically generating the bindings using the Babel language interoperability tool, and illustrate with short examples how MCT can be used from the C++ and Python languages. We report preliminary performance results for the MCT interpolation benchmark. We conclude with a discussion of the significance of this work to the rapid prototyping of large parallel coupled systems.
Multiphysics of ionic polymer-metal composite actuator
NASA Astrophysics Data System (ADS)
Zhu, Zicai; Asaka, Kinji; Chang, Longfei; Takagi, Kentaro; Chen, Hualing
2013-08-01
Water-based ionic polymer-metal composites (IPMCs) exhibit complex deformation properties, especially with decreasing water content. Based on our experimental understanding, we developed a systemic actuation mechanism for IPMCs in which the water swelling was taken as the basic cause of deformation. We focused on Nafion-IPMC, and formulated a multiphysical model to describe the complicated deformation properties. The model emphasizes pressure-induced convection fluxes and the significance of the water distribution on deformation. It shows that there are three eigen stresses activated by the migration of ions and water, namely, osmotic pressure, electrostatic stress, and capillary pressure. The model also provides a convenient way of simultaneously handling the internal eigen stresses and the external mechanical load. In this paper, we used a fundamental model, which only considered the hydrostatic pressure in the multiphysical model, to analyze the general transport properties of cations and water by numerical methods. Three effects were investigated: (1) the inter-coupling effects between cations and water, which slow down cation migration and attenuate the back-diffusion of water; (2) the pressure effect, which rarely influences the electric field and the cation distribution, but greatly changes the profile of the water concentration and then the deformation behavior; and (3) the hydration effect, which has a significant impact on the distribution profiles of the cations and the electrical potential. In contrast to the findings of traditional studies, the water concentration displays an almost uniform gradient across the thickness in the bulk, and the cation concentration at the cathode is greatly reduced by the volume effect of the hydrated cations.
A Global Sensitivity Analysis Methodology for Multi-physics Applications
Tong, C H; Graziani, F R
2007-02-02
Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics application, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
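The parameter-screening step (step 2) in the methodology above is commonly done with one-at-a-time elementary effects, which rank inputs by their average absolute effect on the output. A hedged sketch of that idea, with a toy test function standing in for a physics module (this is a generic screening scheme, not PSUADE's implementation):

```python
import numpy as np

# Elementary-effects screening over a box of input ranges: perturb one
# input at a time from random base points and average the absolute
# normalized effect. Larger scores mean more sensitive parameters.
def screen(model, ranges, n_repeats=20, delta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    lo = np.array([r[0] for r in ranges], dtype=float)
    hi = np.array([r[1] for r in ranges], dtype=float)
    d = len(ranges)
    effects = np.zeros(d)
    for _ in range(n_repeats):
        x = lo + rng.random(d) * (hi - lo)       # random base point in the box
        for i in range(d):
            dx = np.zeros(d)
            dx[i] = delta * (hi[i] - lo[i])      # step of `delta` of range i
            effects[i] += abs(model(x + dx) - model(x)) / delta
    return effects / n_repeats

# Toy model: strong linear, weak quadratic, and negligible inputs.
model = lambda x: 5.0 * x[0] + 0.5 * x[1] ** 2 + 0.01 * x[2]
mu = screen(model, [(0.0, 1.0)] * 3)
```

The screening scores separate the influential inputs (here x[0], then x[1]) from the negligible one (x[2]), so the subsequent quantitative sensitivity analysis can run on the reduced parameter set.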
A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems
NASA Astrophysics Data System (ADS)
Taverniers, Søren; Tartakovsky, Daniel M.
2017-02-01
Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton-Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.
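The Jacobian-free Newton-Krylov idea above forms no Jacobian at all: Jacobian-vector products are approximated by finite differences inside a Krylov solver. A minimal sketch using SciPy's `newton_krylov` on a single 1D steady nonlinear diffusion problem, -(k(u) u')' = 0 with k(u) = 1 + u and Dirichlet ends (the discretization and coefficients are our illustrative choices, not the paper's stochastic DD setup):

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 51
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

def residual(u_int):
    # Reattach the Dirichlet boundary values u(0)=0, u(1)=1.
    u = np.concatenate(([0.0], u_int, [1.0]))
    k_half = 0.5 * ((1.0 + u)[1:] + (1.0 + u)[:-1])  # face-centered k(u)
    flux = k_half * np.diff(u) / h                    # conservative flux k u'
    return np.diff(flux) / h                          # interior residual

# newton_krylov never builds the Jacobian; J*v is finite-differenced
# inside the Krylov iterations (the "Jf" in JfNK).
u = np.concatenate(([0.0], newton_krylov(residual, x[1:-1]), [1.0]))
```

The exact solution of this problem satisfies u + u^2/2 = 1.5 x, so the midpoint value sqrt(2.5) - 1 gives a direct accuracy check on the converged JfNK solution.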
Assessment of PCMI Simulation Using the Multidimensional Multiphysics BISON Fuel Performance Code
Stephen R. Novascone; Jason D. Hales; Benjamin W. Spencer; Richard L. Williamson
2012-09-01
irradiation level, while the power at the top of the rod is at about 20% of the base irradiation power level. 2D BISON simulations of the Bump Test GE7 were run using both discrete and smeared pellet geometry. Comparisons between these calculations and experimental measurements are presented for clad diameter and elongation after the base irradiation and clad profile along the length of the test section after the bump test. Preliminary comparisons between calculations and measurements are favorable, supporting the use of BISON as an accurate multiphysics fuel simulation tool.
Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials
NASA Technical Reports Server (NTRS)
Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar
2015-01-01
The objective of this proposed project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials and develop experimental methods to provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, useable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate. Examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These types of additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometries of the final structure, assessing the impact of process parameters and predicting optimized conditions with numerical modeling as an effective prediction tool is necessary. The targets for the processing are multiple and at different spatial scales, and the physical phenomena associated occur in multiphysics and multiscale. In this project, the research work has been developed to model AAM processes in a multiscale and multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes. A sequentially coupled, thermomechanical, finite element model was developed and validated experimentally. The results showed the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with mushy zone, incompressible free surface flow, solute redistribution, and surface tension. Because of excessive computing time needed, a parallel computing approach was also tested. In addition
An introduction to LIME 1.0 and its use in coupling codes for multiphysics simulations.
Belcourt, Noel; Pawlowski, Roger Patrick; Schmidt, Rodney Cannon; Hooper, Russell Warren
2011-11-01
LIME is a small software package for creating multiphysics simulation codes. The name was formed as an acronym denoting 'Lightweight Integrating Multiphysics Environment for coupling codes.' LIME is intended to be especially useful when separate computer codes (which may be written in any standard computer language) already exist to solve different parts of a multiphysics problem. LIME provides the key high-level software (written in C++), a well defined approach (with example templates), and interface requirements to enable the assembly of multiple physics codes into a single coupled-multiphysics simulation code. In this report we introduce important software design characteristics of LIME, describe key components of a typical multiphysics application that might be created using LIME, and provide basic examples of its use - including the customized software that must be written by a user. We also describe the types of modifications that may be needed to individual physics codes in order for them to be incorporated into a LIME-based multiphysics application.
Multiscale Multiphysics and Multidomain Models I: Basic Theory.
Wei, Guo-Wei
2013-12-01
This work extends our earlier two-domain formulation of a differential geometry based multiscale paradigm into a multidomain theory, which endows us the ability to simultaneously accommodate multiphysical descriptions of aqueous chemical, physical and biological systems, such as fuel cells, solar cells, nanofluidics, ion channels, viruses, RNA polymerases, molecular motors and large macromolecular complexes. The essential idea is to make use of the differential geometry theory of surfaces as a natural means to geometrically separate the macroscopic domain of solvent from the microscopic domain of solute, and dynamically couple continuum and discrete descriptions. Our main strategy is to construct energy functionals to put on an equal footing of multiphysics, including polar (i.e., electrostatic) solvation, nonpolar solvation, chemical potential, quantum mechanics, fluid mechanics, molecular mechanics, coarse grained dynamics and elastic dynamics. The variational principle is applied to the energy functionals to derive desirable governing equations, such as multidomain Laplace-Beltrami (LB) equations for macromolecular morphologies, multidomain Poisson-Boltzmann (PB) equation or Poisson equation for electrostatic potential, generalized Nernst-Planck (NP) equations for the dynamics of charged solvent species, generalized Navier-Stokes (NS) equation for fluid dynamics, generalized Newton's equations for molecular dynamics (MD) or coarse-grained dynamics and equation of motion for elastic dynamics. Unlike the classical PB equation, our PB equation is an integral-differential equation due to solvent-solute interactions. To illustrate the proposed formalism, we have explicitly constructed three models, a multidomain solvation model, a multidomain charge transport model and a multidomain chemo-electro-fluid-MD-elastic model. Each solute domain is equipped with distinct surface tension, pressure, dielectric function, and charge density distribution. In addition to long
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High fidelity simulation of nuclear reactors entails large scale applications characterized with high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via the subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL
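The subspace-construction idea above, identifying the few influential degrees of freedom and recasting the analysis on them, can be illustrated with a Karhunen-Loeve (POD) basis extracted from solution snapshots by SVD. The snapshot generator below is a toy stand-in for a coupled-physics solve; the field is built from two modes, so the algorithm should recover a two-dimensional subspace.

```python
import numpy as np

# Snapshots of a parameterized field on a 1D grid: random mixtures of two
# spatial modes (a stand-in for repeated forward solves of a coupled model).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
snaps = np.stack([a * np.sin(np.pi * x) + b * np.sin(2 * np.pi * x)
                  for a, b in rng.random((50, 2))], axis=1)   # shape (200, 50)

# Karhunen-Loeve / POD basis via SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(snaps, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # rank capturing 99.99% energy
basis = U[:, :r]                               # reduced subspace (elite DoF)
```

Forward UQ or inverse analyses then operate on the r reduced coordinates `basis.T @ u` instead of the full field, which is what makes the large-scale analyses above tractable.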
Episodic Tremor and Slip (ETS) as a chaotic multiphysics spring
NASA Astrophysics Data System (ADS)
Veveakis, E.; Alevizos, S.; Poulet, T.
2017-03-01
Episodic Tremor and Slip (ETS) events display a rich behaviour of slow and accelerated slip, with time series ranging from simple oscillatory to complicated and chaotic. It is commonly believed that the fast events appearing as non-volcanic tremors are signatures of deep fluid injection. The fluid source is suggested to be related to the breakdown of hydrous phyllosilicates, mainly the serpentinite-group minerals such as antigorite or lizardite that are widespread at the top of the slab in subduction environments. Similar ETS sequences are recorded in different lithologies in exhumed crustal carbonate-rich thrusts, where the fluid source is suggested to be the more vigorous carbonate decomposition reaction. If indeed both types of events can be understood and modelled by the same generic fluid-release reaction AB(solid) ⇌ A(solid) + B(fluid), the data from ETS sequences in subduction zones reveal a geophysically tractable temporal evolution with no access to the fault zone. This work reviews recent advances in modelling ETS events considering the multiphysics instabilities triggered by the fluid-release reaction, and develops a thermal-hydraulic-mechanical-chemical oscillator (THMC spring) model for such mineral reactions (like dehydration and decomposition) in megathrusts. We describe advanced computational methods for THMC instabilities and discuss spectral element and finite element solutions. We apply the presented numerical methods to field examples of this important mechanism and reproduce the temporal signature of the Cascadia and Hikurangi trenches with a serpentinite oscillator.
Multiphysics methods development for high temperature gas reactor analysis
NASA Astrophysics Data System (ADS)
Seker, Volkan
Multiphysics computational methods were developed to perform design and safety analysis of next-generation pebble bed high-temperature gas-cooled reactors. A suite of code modules was developed to solve the coupled thermal-hydraulics and neutronics field equations. The thermal-hydraulics module is based on the three-dimensional solution of the mass, momentum and energy equations in cylindrical coordinates within the framework of the porous media method. The neutronics module is part of the PARCS (Purdue Advanced Reactor Core Simulator) code and provides a fine-mesh finite difference solution of the neutron diffusion equation in three-dimensional cylindrical coordinates. Coupling of the two modules was performed by mapping the solution variables from one module to the other. Mapping is performed automatically in the code system through the use of a common material mesh in both modules. The standalone validation of the thermal-hydraulics module was performed with several cases of the SANA experiment and the standalone thermal-hydraulics exercise of the PBMR-400 benchmark problem. The standalone neutronics module was validated by performing the relevant exercises of the PBMR-268 and PBMR-400 benchmark problems. Additionally, the validation of the coupled code system was performed by analyzing several steady-state and transient cases of the OECD/NEA PBMR-400 benchmark problem.
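The coupling pattern described (solve each field, map variables across a shared mesh, iterate to consistency) can be sketched as a Picard iteration. The feedback law and constants below are a caricature for illustration, not PARCS physics; since both toy modules use the same cells, the mesh-to-mesh map is simply the identity.

```python
import math

N = 10
x = [(i + 0.5) / N for i in range(N)]        # common material mesh (cell centres)
T0, P0, alpha, k = 300.0, 100.0, 0.01, 1.0   # illustrative constants

def neutronics(T):
    # power shape with a Doppler-like negative temperature feedback (invented)
    return [P0 * math.sin(math.pi * xi) / (1.0 + alpha * (Ti - T0))
            for xi, Ti in zip(x, T)]

def thermal(P):
    # caricature local heat balance: temperature rise proportional to power
    return [T0 + k * Pi for Pi in P]

T = [T0] * N
for it in range(200):
    P = neutronics(T)       # temperature mapped to the neutronics module
    T_new = thermal(P)      # power mapped back to the thermal module (identity map)
    change = max(abs(a - b) for a, b in zip(T_new, T))
    T = T_new
    if change < 1e-10:
        break
# converged fields satisfy both modules simultaneously
residual = max(abs(Ti - Ri) for Ti, Ri in zip(T, thermal(neutronics(T))))
print(f"Picard iterations: {it + 1}, coupled residual: {residual:.2e}")
```

The negative feedback makes the fixed-point map contractive here, so plain Picard iteration converges; stiffer feedback is one reason production codes move to tighter (Newton-based) coupling.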
A General Framework for Multiphysics Modeling Based on Numerical Averaging
NASA Astrophysics Data System (ADS)
Lunati, I.; Tomin, P.
2014-12-01
In recent years, multiphysics (hybrid) modeling has attracted increasing attention as a tool to bridge the gap between pore-scale processes and a continuum description at the meter (laboratory) scale. This approach is particularly appealing for complex nonlinear processes, such as multiphase flow, reactive transport, density-driven instabilities, and geomechanical coupling. We present a general framework that can be applied to all these classes of problems. The method is based on ideas from the Multiscale Finite-Volume method (MsFV), which was originally developed for Darcy-scale applications. Recently, we have reformulated MsFV starting with a local-global splitting, which allows us to retain the original degree of coupling for the local problems and to use spatiotemporal adaptive strategies. The new framework is based on the simple idea that different characteristic temporal scales are inherited from different spatial scales, so the global and local problems are solved with different temporal resolutions. The global (coarse-scale) problem is constructed based on a numerical volume-averaging paradigm, and a continuum (Darcy-scale) description is obtained by introducing additional simplifications (e.g., by assuming that pressure is the only independent variable at the coarse scale, we recover an extended Darcy's law). We demonstrate that it is possible to adaptively and dynamically couple the Darcy-scale and pore-scale descriptions of multiphase flow in a single conceptual and computational framework. Pore-scale problems are solved only in the active front region where the fluid distribution changes with time; in the rest of the domain, only a coarse description is employed. This framework can be applied to other important problems such as reactive transport and crack propagation. As it is based on a numerical upscaling paradigm, our method can be used to explore the limits of validity of macroscopic models and to illuminate the meaning of
Modelling transport phenomena in a multi-physics context
NASA Astrophysics Data System (ADS)
Marra, Francesco
2015-01-01
Innovative heating research on cooking, pasteurization/sterilization, defrosting, thawing and drying often focuses on the assessment of processing time, evaluation of heating uniformity, the impact on quality attributes of the final product, and the energy efficiency of these heating processes. During the last twenty years, so-called electro-heating processes (radio-frequency - RF, microwave - MW and ohmic - OH) gained wide interest in industrial food processing, and many applications using these technologies have been developed with the aim of reducing processing time, improving process efficiency and, in many cases, heating uniformity. In the area of innovative heating, electro-heating accounts for a considerable portion of both the scientific literature and commercial applications. It can be subdivided into direct electro-heating (as in OH heating), where electrical current is applied directly to the food, and indirect electro-heating (e.g. MW and RF heating), where the electrical energy is first converted to electromagnetic radiation that subsequently generates heat within the product. New software packages, which ease the solution of PDE-based mathematical models, and new computers, with larger RAM and more efficient CPUs, have enabled growing interest in modelling transport phenomena in systems and processes - such as those encountered in food processing - that can be complex in terms of geometry, composition and boundary conditions, but also - as in electro-heating-assisted applications - in terms of interaction with other physical phenomena such as the displacement of electric or magnetic fields. This paper describes approaches used in modelling transport phenomena in a multi-physics context such as RF, MW and OH assisted heating.
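The simplest PDE model of the direct electro-heating case is transient conduction with a volumetric source: the dissipated electrical power appears as a source term in the heat equation. The sketch below solves a dimensionless 1D version with an explicit FTCS scheme; the uniform source standing in for ohmic dissipation and all parameter values are invented for illustration.

```python
# 1D sample of unit length held at the reference temperature at both ends,
# heated by a uniform volumetric source (ohmic-style direct electro-heating)
N = 21                       # grid points
dx = 1.0 / (N - 1)
alpha, S = 1.0, 100.0        # thermal diffusivity, volumetric source (scaled)
dt = 0.4 * dx * dx / alpha   # respects the FTCS stability limit r <= 0.5

T = [0.0] * N                # temperature rise above the reference
for _ in range(2000):        # march to (near) steady state
    Tn = T[:]
    for i in range(1, N - 1):
        Tn[i] = T[i] + dt * (alpha * (T[i-1] - 2*T[i] + T[i+1]) / dx**2 + S)
    T = Tn

# steady state is the parabola S/(2*alpha) * x * (1 - x), i.e. 12.5 at the centre
print(f"centre temperature rise: {T[N // 2]:.2f}")
```

For MW or RF heating the source term would instead come from a coupled electromagnetic solve, which is exactly the multi-physics interaction the paper discusses.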
Derek Gaston; Luanjing Guo; Glen Hansen; Hai Huang; Richard Johnson; Dana Knoll; Chris Newman; Hyeong Kae Park; Robert Podgorney; Michael Tonks; Richard Williamson
2012-09-01
This paper is the second part of a two part sequence on multiphysics algorithms and software. The first [1] focused on the algorithms; this part treats the multiphysics software framework and applications based on it. Tight coupling is typically designed into the analysis application at inception, as such an application is strongly tied to a composite nonlinear solver that arrives at the final solution by treating all equations simultaneously. The application must also take care to minimize both time and space error between the physics, particularly if more than one mesh representation is needed in the solution process. This paper presents an application framework that was specifically designed to support tightly coupled multiphysics analysis. The Multiphysics Object Oriented Simulation Environment (MOOSE) is based on the Jacobian-free Newton-Krylov (JFNK) method combined with physics-based preconditioning to provide the underlying mathematical structure for applications. The paper concludes with the presentation of a host of nuclear, energy, and environmental applications that demonstrate the efficacy of the approach and the utility of a well-designed multiphysics framework.
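The key JFNK ingredient is that Jacobian-vector products are approximated by finite differences of the residual, so the Jacobian is never assembled. The sketch below applies this to a toy two-equation "coupled physics" system; in MOOSE the linear solve is a preconditioned Krylov iteration, whereas here, for a 2x2 system, we simply apply the matrix-free product to the basis vectors and solve directly. The system and all tolerances are invented for illustration.

```python
def F(u):
    # toy coupled nonlinear residual with root (u0, u1) = (1, 2)
    return [u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0]

def jv(F, u, v, eps=1e-7):
    # finite-difference Jacobian-vector product J(u) @ v -- the core JFNK trick:
    # J v ~ (F(u + eps v) - F(u)) / eps, no Jacobian assembly required
    Fu = F(u)
    up = [ui + eps * vi for ui, vi in zip(u, v)]
    return [(a - b) / eps for a, b in zip(F(up), Fu)]

u = [1.5, 1.5]
for _ in range(20):
    # recover J's columns by acting on basis vectors (stand-in for a Krylov solver)
    c0 = jv(F, u, [1.0, 0.0])
    c1 = jv(F, u, [0.0, 1.0])
    det = c0[0] * c1[1] - c1[0] * c0[1]
    r = F(u)
    # solve J du = -r by Cramer's rule, then take the Newton update
    du0 = (-r[0] * c1[1] + r[1] * c1[0]) / det
    du1 = (-c0[0] * r[1] + c0[1] * r[0]) / det
    u = [u[0] + du0, u[1] + du1]

res = max(abs(x) for x in F(u))
print(f"solution: ({u[0]:.6f}, {u[1]:.6f}), residual: {res:.2e}")
```

Because each physics only needs to expose its residual evaluation, new coupled applications can reuse the same nonlinear solver machinery, which is the framework-design point the paper makes.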
Numerical methods for multiphysics, multiphase, and multicomponent models for fuel cells
NASA Astrophysics Data System (ADS)
Xue, Guangri
In this dissertation, we design and analyze efficient numerical methods for obtaining accurate solutions to model problems arising in fuel cells. A basic fuel cell model consists of five principles of conservation, namely, mass, momentum, species, charges (electrons and ions), and thermal energy. Overall, transport equations couple with electrochemical processes through source terms to describe reaction kinetics and electro-osmotic drag in the polymer electrolyte. To model multiphase species transport in the porous media and the gas channel of fuel cells, we consider a multiphase mixture model framework. The diffusivity of the two-phase mixture water conservation equation in this model is nonlinear, discontinuous, and degenerate. To handle this difficulty, we developed efficient and fast nonlinear iterative solvers based on the Kirchhoff transformation and nonlinear Dirichlet-Neumann domain decomposition methods. To model the coupling between the multiphase flow in the porous media and the viscous flow in the gas channel of fuel cells, we consider the Darcy-Stokes-Brinkman model, which treats both the Darcy equation and the Stokes equation in a single form of partial differential equation (PDE) but with strongly discontinuous viscosity and permeability coefficients. For this model, we develop robust finite element methods that are uniformly stable with respect to the highly discontinuous coefficients and their jumps. Finally, we develop new numerical methods for the full steady-state 3D multi-physics simulation of liquid-feed direct methanol fuel cells (DMFC), consisting of five fundamental conservation equations: mass, momentum, species, charges, and thermal energy. Fast convergence of nonlinear iteration is achieved in our method.
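The Kirchhoff transformation mentioned above can be shown on a minimal 1D problem: for d/dx( D(u) du/dx ) = 0, the transformed variable w = K(u) = ∫ D(s) ds satisfies a linear problem, so w is linear in x and u is recovered by inverting K. Here D(u) = u (degenerate at u = 0) is a simple stand-in for the discontinuous, degenerate two-phase diffusivity of the fuel-cell model; boundary values are invented.

```python
def D(u):
    return u                 # degenerate diffusivity (illustrative stand-in)

def K(u, n=1000):
    # trapezoidal quadrature of the Kirchhoff integral K(u) = int_0^u D(s) ds
    h = u / n
    return h * (0.5 * D(0.0) + sum(D(i * h) for i in range(1, n)) + 0.5 * D(u))

def K_inv(w, lo=0.0, hi=1.0):
    # invert the monotone map K by bisection
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if K(mid) < w:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# BVP: u(0) = 0, u(1) = 1; in Kirchhoff variables w'' = 0, so w is linear in x
wL, wR = K(0.0), K(1.0)
u_at = lambda xx: K_inv(wL + xx * (wR - wL))
print(f"u(0.5) = {u_at(0.5):.4f}  (exact sqrt(0.5) = 0.7071)")
```

The nonlinearity is confined to the scalar inversion of K, which is the property the dissertation's fast iterative solvers exploit in far more general settings.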
Jean C. Ragusa; Vijay Mahadevan; Vincent A. Mousseau
2009-05-01
High-fidelity modeling of nuclear reactors requires the solution of a nonlinear coupled multi-physics stiff problem with widely varying time and length scales that need to be resolved correctly. A numerical method that converges the implicit nonlinear terms to a small tolerance is often referred to as nonlinearly consistent (or tightly coupled). This nonlinear consistency is still lacking in the vast majority of coupling techniques today. We present a tightly coupled multiphysics framework that tackles this issue and present code-verification and convergence analyses in space and time for several models of nonlinear coupled physics.
Saad, Tony; Sutherland, James C.
2016-05-04
To address the coding and software challenges of modern hybrid architectures, we propose an approach to multiphysics code development for high-performance computing. This approach is based on using a Domain Specific Language (DSL) in tandem with a directed acyclic graph (DAG) representation of the problem to be solved that allows runtime algorithm generation. When coupled with a large-scale parallel framework, the result is a portable development framework capable of executing on hybrid platforms and handling the challenges of multiphysics applications. In addition, we share our experience developing a code in such an environment – an effort that spans an interdisciplinary team of engineers and computer scientists.
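The DAG-driven "runtime algorithm generation" idea can be sketched in a few lines: each task declares the fields it consumes and produces, and a topological sort (Kahn's algorithm) then derives a valid execution order automatically instead of the programmer hard-coding it. The task and field names below are invented for illustration; a real DSL would also generate the per-task kernels.

```python
from collections import deque

# task name -> (fields consumed, field produced); an invented CFD-like graph
tasks = {
    "density":    ([],               "rho"),
    "velocity":   (["rho"],          "u"),
    "convection": (["rho", "u"],     "conv"),
    "diffusion":  (["u"],            "diff"),
    "rhs":        (["conv", "diff"], "rhs"),
}
producer = {out: name for name, (_, out) in tasks.items()}
deps = {name: [producer[f] for f in needs] for name, (needs, _) in tasks.items()}

# Kahn's algorithm: repeatedly schedule tasks whose dependencies are satisfied
indeg = {n: len(d) for n, d in deps.items()}
users = {n: [] for n in tasks}
for n, d in deps.items():
    for p in d:
        users[p].append(n)
order, ready = [], deque(n for n, k in indeg.items() if k == 0)
while ready:
    n = ready.popleft()
    order.append(n)
    for m in users[n]:
        indeg[m] -= 1
        if indeg[m] == 0:
            ready.append(m)
print("execution order:", order)
```

Because the order is derived from data dependencies at runtime, adding or removing a physics model changes the generated algorithm without touching the scheduling code, which is the portability argument the authors make.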
Harrison, Cyrus; Larsen, Matt; Brugger, Eric
2016-12-05
Strawman is a system designed to explore the in situ visualization and analysis needs of simulation code teams running multi-physics calculations on many-core HPC architectures. It provides rendering pipelines that can leverage both many-core CPUs and GPUs to render images of simulation meshes.
Efficient High-Fidelity, Geometrically Exact, Multiphysics Structural Models
2011-10-14
…through-thickness analysis is implemented using a 1D finite element discretization in the computer program VAPAS, which has direct connection with the… …potential and feasibility of these new concepts, the designers must be equipped with a versatile computational design framework to accurately analyze… …systematically obtain an effective plate model unifying a homogenization process and a dimensional reduction process. This approach is implemented in the computer…
Jonkman, Jason; Annoni, Jennifer; Hayman, Greg; Jonkman, Bonnie; Purkayastha, Avi
2017-01-01
This paper presents the development of FAST.Farm, a new multiphysics tool applicable to engineering problems in research and industry involving wind farm performance and cost optimization, which is needed to address the underperformance, failures, and expenses plaguing the wind industry. Achieving wind cost-of-energy targets - which requires improvements in wind farm performance and reliability, together with reduced uncertainty and expenditure - has been impeded by the complicated nature of the wind farm design problem, especially the complex interaction between atmospheric phenomena, wake dynamics and array effects. FAST.Farm aims to balance the need for accurate modeling of the relevant physics for predicting power performance and loads with the need for low computational cost, so as to support a highly iterative and probabilistic design process and system-wide optimization. FAST.Farm makes use of FAST to model the aero-hydro-servo-elastics of distinct turbines in the wind farm, and it is based on some of the principles of the Dynamic Wake Meandering (DWM) model, but avoids many of the limitations of existing DWM implementations.
Gasmi, A.; Sprague, M. A.; Jonkman, J. M.; Jones, W. B.
2013-02-01
In this paper we examine the stability and accuracy of numerical algorithms for coupling time-dependent multi-physics modules relevant to computer-aided engineering (CAE) of wind turbines. This work is motivated by an in-progress major revision of FAST, the National Renewable Energy Laboratory's (NREL's) premier aero-elastic CAE simulation tool. We employ two simple examples as test systems, while algorithm descriptions are kept general. Coupled-system governing equations are framed in monolithic and partitioned representations as differential-algebraic equations. Explicit and implicit loose partition coupling is examined. In explicit coupling, partitions are advanced in time from known information. In implicit coupling, there is dependence on other-partition data at the next time step; coupling is accomplished through a predictor-corrector (PC) approach. Numerical time integration of coupled ordinary-differential equations (ODEs) is accomplished with one of three, fourth-order fixed-time-increment methods: Runge-Kutta (RK), Adams-Bashforth (AB), and Adams-Bashforth-Moulton (ABM). Through numerical experiments it is shown that explicit coupling can be dramatically less stable and less accurate than simulations performed with the monolithic system. However, PC implicit coupling restored stability and fourth-order accuracy for ABM; only second-order accuracy was achieved with RK integration. For systems without constraints, explicit time integration with AB and explicit loose coupling exhibited desired accuracy and stability.
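A stripped-down caricature of the coupling experiment makes the stability result tangible. The "two partitions" are u' = v and v' = -u (exact solution u = cos t). Explicit loose coupling freezes the other partition's state at the old time level; the predictor-corrector (PC) variant re-evaluates using predicted other-partition values. Note that with frozen coupling data the per-partition integrators degenerate, so the schemes below are low-order stand-ins for illustration, not the paper's RK4/AB/ABM.

```python
import math

dt, t_end = 0.01, 10.0
n = int(round(t_end / dt))

# explicit loose coupling: each partition advanced from known (lagged) data
u, v = 1.0, 0.0
for _ in range(n):
    u, v = u + dt * v, v - dt * u
err_explicit = abs(u - math.cos(t_end))

# predictor-corrector loose coupling: predict with lagged data, then correct
# using the predicted other-partition values (one PC pass)
u, v = 1.0, 0.0
for _ in range(n):
    up, vp = u + dt * v, v - dt * u                          # predictor
    u, v = u + 0.5 * dt * (v + vp), v - 0.5 * dt * (u + up)  # corrector
err_pc = abs(u - math.cos(t_end))

print(f"explicit error {err_explicit:.2e}, predictor-corrector error {err_pc:.2e}")
```

Even one corrector pass recovers an order of accuracy and damps the spurious energy growth of the explicit scheme, mirroring (in miniature) the paper's finding that PC implicit coupling restores the stability and accuracy lost by explicit partitioning.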
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
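The particle-communication pattern described above can be emulated serially: the slab is split into subdomains, each subdomain processes its own particle queue, and particles that cross a subdomain boundary are appended to the neighbor's queue, standing in for message passing between processors. The random-walk physics (Gaussian steps, constant per-step absorption probability) and all numbers are invented for illustration.

```python
import random

random.seed(42)
N, L, mid = 1000, 2.0, 1.0       # particles, slab length, subdomain boundary
absorb_p = 0.1                   # per-step absorption probability

queues = {0: [random.uniform(0.0, mid) for _ in range(N)], 1: []}
absorbed = leaked = 0
# sweep subdomains until all queues drain; boundary crossers are "messaged"
# to the neighbor's queue, mimicking spatially parallel Monte Carlo transport
while queues[0] or queues[1]:
    for dom in (0, 1):
        buf, queues[dom] = queues[dom], []
        for x in buf:
            while True:
                if random.random() < absorb_p:
                    absorbed += 1
                    break
                x += random.gauss(0.0, 0.1)          # random-walk step
                if x < 0.0 or x > L:
                    leaked += 1                       # escaped the slab
                    break
                new_dom = 0 if x < mid else 1
                if new_dom != dom:
                    queues[new_dom].append(x)         # non-deterministic "message"
                    break
print(f"absorbed {absorbed}, leaked {leaked} of {N}")
```

The number and timing of crossings depend on the random histories, which is exactly the non-deterministic communication pattern the abstract identifies as the parallel-efficiency challenge.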
NASA Astrophysics Data System (ADS)
Di Luca, Alejandro; Flaounas, Emmanouil; Drobinski, Philippe; Brossier, Cindy Lebeaupin
2014-11-01
The use of high-resolution atmosphere-ocean coupled regional climate models to study possible future climate changes in the Mediterranean Sea requires an accurate simulation of the atmospheric component of the water budget (i.e., evaporation, precipitation and runoff). A specific configuration of version 3.1 of the weather research and forecasting (WRF) regional climate model was shown to systematically overestimate the Mediterranean Sea water budget, mainly due to an excess of evaporation (~1,450 mm yr-1) compared with observational estimates (~1,150 mm yr-1). In this article, a 70-member multi-physics ensemble is used to understand the relative importance of various sub-grid-scale processes in the Mediterranean Sea water budget and to evaluate its representation by comparing simulated results with observation-based estimates. The physics ensemble was constructed by performing 70 one-year-long simulations using version 3.3 of the WRF model, combining six cumulus, four surface/planetary boundary layer and three radiation schemes. Results show that evaporation variability across the multi-physics ensemble (~10% of the mean evaporation) is dominated by the choice of the surface layer scheme, which explains more than ~70% of the total variance, and that the overestimation of evaporation in WRF simulations is generally related to an overestimation of surface exchange coefficients due to too-large values of the surface roughness parameter and/or the simulation of too-unstable surface conditions. Although the influence of radiation schemes on evaporation variability is small (~13% of the total variance), radiation schemes strongly influence exchange coefficients and vertical humidity gradients near the surface through modifications of temperature lapse rates. The precipitation variability across the physics ensemble (~35% of the mean precipitation) is dominated by the choice of both cumulus (~55% of the total variance) and planetary boundary layer (~32% of
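The variance attribution used in such multi-physics ensembles can be mimicked on synthetic data: build a factorial ensemble over scheme choices, then compute, for each factor, the fraction of total variance explained by the between-group variance of its group means. For simplicity the sketch uses the full 6 x 4 x 3 = 72-member factorial (the paper's ensemble has 70 members), and all effect sizes are invented, merely arranged so that the surface scheme dominates as in the paper.

```python
import itertools, random

random.seed(1)
cumulus, surface, radiation = range(6), range(4), range(3)
surf_eff = [0.0, 80.0, 160.0, 240.0]                    # dominant factor (invented)
cum_eff = [random.uniform(-15.0, 15.0) for _ in cumulus]
rad_eff = [random.uniform(-10.0, 10.0) for _ in radiation]

# synthetic annual evaporation (mm/yr) for every scheme combination
runs = []
for c, s, r in itertools.product(cumulus, surface, radiation):
    evap = 1450.0 + cum_eff[c] + surf_eff[s] + rad_eff[r] + random.gauss(0.0, 5.0)
    runs.append(((c, s, r), evap))

vals = [e for _, e in runs]
mean = sum(vals) / len(vals)
total_var = sum((e - mean) ** 2 for e in vals) / len(vals)

def explained(idx):
    # between-group variance fraction for the factor in position idx of the key
    groups = {}
    for key, e in runs:
        groups.setdefault(key[idx], []).append(e)
    between = sum(len(g) * (sum(g) / len(g) - mean) ** 2 for g in groups.values())
    return between / len(vals) / total_var

frac = {name: explained(i) for i, name in enumerate(["cumulus", "surface", "radiation"])}
print({k: round(v, 3) for k, v in frac.items()})
```

With a balanced factorial design the between-group fractions of the main effects are directly comparable, which is what makes statements like "the surface layer scheme explains more than ~70% of the variance" well defined.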
Joda, Akram; Jin, Zhongmin; Haverich, Axel; Summers, Jon; Korossis, Sotirios
2016-08-16
This study developed a realistic 3D FSI computational model of the aortic valve using the fixed-grid method, which was eventually employed to investigate the effect of the leaflet thickness inhomogeneity and leaflet mechanical nonlinearity and anisotropy on the simulation results. The leaflet anisotropy and thickness inhomogeneity were found to significantly affect the valve stress-strain distribution. However, their effect on valve dynamics and fluid flow through the valve were minor. Comparison of the simulation results against in-vivo and in-vitro data indicated good agreement between the computational models and experimental data. The study highlighted the importance of simulating multi-physics phenomena (such as fluid flow and structural deformation), regional leaflet thickness inhomogeneity and anisotropic nonlinear mechanical properties, to accurately predict the stress-strain distribution on the natural aortic valve.
COMSOL-based Multiphysics Simulations to Support HFIR s Conversion to LEU Fuel
Jain, Prashant K; Freels, James D; Cook, David Howard
2011-01-01
In this paper, the development of at least one form of the COMSOL-based modeling framework for the HFIR is presented, key simulation steps are identified and several milestones achieved towards a coupled multi-physics capability are highlighted. The COMSOL-based multi-physics simulation capability is able to answer the need for predictive 3D simulations of the HFIR's involute plates and channels. Step-by-step development and analysis of the COMSOL models for single and multiple channels will lead towards the desired full-core simulation capability for the HFIR. With very few experiments planned to support the conversion process, these 3D simulations will become the basis for the nuclear safety analysis of the HFIR's LEU fuel core.
Multiphysics design optimization for aerospace applications: Case study on helicopter loading hanger
NASA Astrophysics Data System (ADS)
Xue, Hui; Khawaja, H.; Moatamedi, M.
2014-12-01
This paper presents the Multiphysics technique applied in the design optimization of a loading hanger for an aerial crane. In this study, design optimization is applied on the geometric modelling of a part being used in an aerial crane operation. A set of dimensional and loading requirements are provided. Various geometric models are built using SolidWorks® Computer Aided Design (CAD) Package. In addition, Finite Element Method (FEM) is applied to study these geometric models using ANSYS® Multiphysics package. Appropriate material is chosen based on the strength to weight ratio. Efforts are made to optimize the geometry to reduce the weight of the part. Based on the achieved results, conclusions are drawn.
NASA Astrophysics Data System (ADS)
Rendon-Hernandez, Adrian; Basrour, Skandar
2016-11-01
This paper deals with the coupled multiphysics finite element modeling and experimental testing of a thermo-magnetically triggered piezoelectric generator. The model presented here, which has been developed in ANSYS software and experimentally validated, promotes a better understanding of the dynamic behavior of the proposed generator. Special attention was paid to the coupled multiphysics interactions, for instance, the thermally dependent demagnetization of the soft magnetic material, the piezoelectric transduction and the output power. In order to characterize the power generator, many finite element simulations were conducted, including modal and transient analyses. To verify the effectiveness of the model, a prototype was built and tested, and the findings thus obtained were compared with simulation results. The results describe for the first time a fully coupled model of an innovative approach to thermomagnetic energy harvesting. Moreover, the total volume of our harvester (length × width × height: 20 × 4 × 2 mm) is 85 times smaller than that of a previous experimental harvester.
Awida, M. H.; Gonin, I.; Passarelli, D.; Sukanov, A.; Khabiboulline, T.; Yakovlev, V.
2016-01-22
Multiphysics analyses of superconducting cavities are essential in the course of cavity design to meet stringent requirements on cavity frequency detuning. Superconducting RF cavities are the core accelerating elements in modern particle accelerators, whether proton or electron machines, as they offer extremely high quality factors, thus reducing the RF losses per cavity. However, the superior quality factor comes with the challenge of controlling the resonance frequency of the cavity within a bandwidth of a few tens of hertz. In this paper, we investigate how multiphysics analysis plays a major role in proactively minimizing sources of frequency detuning, specifically microphonics and Lorentz Force Detuning (LFD), at the stage of the RF design of the cavity and the mechanical design of the niobium shell and the helium vessel.
Verification of a Multiphysics Toolkit against the Magnetized Target Fusion Concept
NASA Technical Reports Server (NTRS)
Thomas, Scott; Perrell, Eric; Liron, Caroline; Chiroux, Robert; Cassibry, Jason; Adams, Robert B.
2005-01-01
In the spring of 2004 the Advanced Concepts team at MSFC embarked on an ambitious project to develop a suite of modeling routines that would interact with one another. The tools would each numerically model a portion of any advanced propulsion system. The tools were divided by physics categories, hence the name multiphysics toolset. Currently, most of the anticipated modeling tools have been created and integrated. Results are given in this paper for both a quarter nozzle with chemically reacting flow and the interaction of two plasma jets representative of a Magnetized Target Fusion device. The results have not yet been calibrated against real data, but this paper demonstrates the current capability of the multiphysics tool and planned future enhancements.
Applications of ANSYS/Multiphysics at NASA/Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Loughlin, Jim
2007-01-01
This viewgraph presentation reviews some of the uses of the ANSYS/Multiphysics system at the NASA Goddard Space Flight Center. These include MEMS structural analysis of the micro-mirror array for the James Webb Space Telescope (JWST), the micro-shutter array for JWST, a MEMS FP tunable filter, and the Astro-E2 micro-calorimeter. Various views of these projects are shown in this presentation.
NREL Multiphysics Modeling Tools and ISC Device for Designing Safer Li-Ion Batteries
Pesaran, Ahmad A.; Yang, Chuanbo
2016-03-24
The National Renewable Energy Laboratory has developed a portfolio of multiphysics modeling tools to help battery designers better understand the response of lithium-ion batteries to abusive conditions. We will discuss this portfolio, which includes coupled electrical, thermal, chemical, electrochemical, and mechanical modeling. These models can simulate the response of a cell to overheating, overcharge, mechanical deformation, nail penetration, and internal short circuit. Cell-to-cell thermal propagation modeling will also be discussed.
A multi-physical model of actuation response in dielectric gels
NASA Astrophysics Data System (ADS)
Li, Bo; Chang, LongFei; Asaka, Kinji; Chen, Hualing; Li, Dichen
2016-12-01
The actuation deformation of a dielectric gel is attributed to solvent diffusion, electrical polarization and material hyperelasticity. A multi-physical model coupling electrical and mechanical quantities is established based on thermodynamics. A set of constitutive relations is derived as an equation of state for characterization. The model is applied to specific cases as validation. Physical and chemical parameters affect the performance of the gel, showing nonlinear deformation and instability. This model offers guidance for engineering applications.
Experimental validation of opto-thermo-elastic modeling in OOFELIE multiphysics
NASA Astrophysics Data System (ADS)
Mazzoli, Alexandra; Saint-Georges, Philippe; Orban, Anne; Ruess, Jean-Sébastien; Loicq, Jérôme; Barbier, Christian; Stockman, Yvan; Georges, Marc; Nachtergaele, Philippe; Paquay, Stéphane; De Vincenzo, Pascal
2011-10-01
The objective of this work is to demonstrate the correlation between a simple laboratory test bench case and the predictions of the OOFELIE Multiphysics software in order to deduce modeling guidelines and improvements. For that purpose two optical systems have been analyzed. The first one is a spherical lens fixed in an aluminium barrel, which is the simplest structure found in an opto-mechanical system. In this study, material characteristics are assumed to be well known: BK7 and aluminium have been retained. Temperature variations between 0 and +60°C from ambient have been applied to the samples. The second system is a YAG laser bar heated by means of a dedicated oven. For the two test benches thermo-elastic distortions have been measured using a Fizeau interferometer. This sensor measures wavefront error in the range of 20 nm to 1 μm without physical contact with the opto-mechanical system. For the YAG bar, birefringence and polarization measurements have also been performed using a polarimetric bench. The tests results have been compared to the predictions obtained by OOFELIE Multiphysics which is a simulation software dedicated to multiphysics coupled problems involving optics, mechanics, thermal physics, electricity, electromagnetism, acoustics and hydrodynamics. From this comparison modeling guidelines have been issued with the aim of improving the accuracy of computed thermo-elastic distortions and their impact on the optical performances.
NASA Astrophysics Data System (ADS)
Rutqvist, Jonny; Tsang, Chin-Fu
2012-09-01
The site investigations at Yucca Mountain, Nevada, have provided us with an outstanding data set, one that has significantly advanced our knowledge of multiphysics processes in partially saturated fractured geological media. Such advancement was made possible, foremost, by substantial investments in multiyear field experiments that enabled the study of thermally driven multiphysics and testing of numerical models at a large spatial scale. The development of coupled-process models within the project have resulted in a number of new, advanced multiphysics numerical models that are today applied over a wide range of geoscientific research and geoengineering applications. Using such models, the potential impact of thermal-hydrological-mechanical (THM) multiphysics processes over the long-term (e.g., 10,000 years) could be predicted and bounded with some degree of confidence. The fact that the rock mass at Yucca Mountain is intensively fractured enabled continuum models to be used, although discontinuum models were also applied and are better suited for analyzing some issues, especially those related to predictions of rockfall within open excavations. The work showed that in situ tests (rather than small-scale laboratory experiments alone) are essential for determining appropriate input parameters for multiphysics models of fractured rocks, especially related to parameters defining how permeability might evolve under changing stress and temperature. A significant laboratory test program at Yucca Mountain also made important contributions to the field of rock mechanics, showing a unique relation between porosity and mechanical properties, a time dependency of strength that is significant for long-term excavation stability, a decreasing rock strength with sample size using very large core experiments, and a strong temperature dependency of the thermal expansion coefficient for temperatures up to 200°C. The analysis of in situ heater experiments showed that fracture
Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP
Downar, Thomas; Seker, Volkan
2013-04-30
Radioactive gaseous fission products are released out of the fuel element at a significantly higher rate when the fuel temperature exceeds 1600°C in high-temperature gas-cooled reactors (HTGRs). Therefore, it is of paramount importance to accurately predict the peak fuel temperature during all operational and design-basis accident conditions. The current methods used to predict the peak fuel temperature in HTGRs, such as the Next-Generation Nuclear Plant (NGNP), estimate the average fuel temperature in a computational mesh modeling hundreds of fuel pebbles or a fuel assembly in a pebble-bed reactor (PBR) or prismatic block type reactor (PMR), respectively. Experiments conducted in operating HTGRs indicate considerable uncertainty in the current methods and correlations used to predict actual temperatures. The objective of this project is to improve the accuracy in the prediction of local "hot" spots by developing multi-scale, multi-physics methods and implementing them within the framework of established codes used for NGNP analysis. The multi-scale approach that this project will implement begins with defining suitable scales for a physical and mathematical model and then deriving and applying the appropriate boundary conditions between scales. The macro scale is the largest scale and describes the entire reactor, whereas the meso scale models only a fuel block in a prismatic reactor or tens to hundreds of pebbles in a pebble bed reactor. The smallest scale is the micro scale--the level of a fuel kernel of the pebble in a PBR and fuel compact in a PMR--which needs to be resolved in order to calculate the peak temperature in a fuel kernel.
Progress on the Multiphysics Capabilities of the Parallel Electromagnetic ACE3P Simulation Suite
Kononenko, Oleksiy
2015-03-26
ACE3P is a 3D parallel simulation suite that is being developed at SLAC National Accelerator Laboratory. Effectively utilizing supercomputer resources, ACE3P has become a key tool for the coupled electromagnetic, thermal, and mechanical research and design of particle accelerators. Based on the existing finite-element infrastructure, a massively parallel eigensolver has been developed for the modal analysis of mechanical structures. It complements the set of multiphysics tools in ACE3P and, in particular, can be used for comprehensive studies of microphonics in accelerating cavities, helping ensure the operational reliability of a particle accelerator.
Parallel adaptive Cartesian upwind methods for shock-driven multiphysics simulation
Deiterding, Ralf
2011-01-01
The multiphysics fluid-structure interaction simulation of shock-loaded thin-walled structures requires the dynamic coupling of a shock-capturing flow solver to a solid mechanics solver for large deformations. By combining a Cartesian embedded boundary approach with dynamic mesh adaptation, a generic software framework for such flow solvers has been constructed that allows easy exchange of the specific hydrodynamic finite volume upwind scheme and coupling to various explicit finite element solid dynamics solvers. The paper gives an overview of the computational approach and presents first simulations that couple the software to the general purpose solid dynamics code DYNA3D.
Specification of the Advanced Burner Test Reactor Multi-Physics Coupling Demonstration Problem
Shemon, E. R.; Grudzinski, J. J.; Lee, C. H.; Thomas, J. W.; Yu, Y. Q.
2015-12-21
This document specifies the multi-physics nuclear reactor demonstration problem using the SHARP software package developed by NEAMS. The SHARP toolset simulates the key coupled physics phenomena inside a nuclear reactor. The PROTEUS neutronics code models the neutron transport within the system, the Nek5000 computational fluid dynamics code models the fluid flow and heat transfer, and the DIABLO structural mechanics code models structural and mechanical deformation. The three codes are coupled to the MOAB mesh framework, which allows feedback among neutronics, fluid mechanics, and mechanical deformation to be exchanged in a compatible format.
Design and multiphysics analysis of a 176 MHz continuous-wave radio-frequency quadrupole
NASA Astrophysics Data System (ADS)
Kutsaev, S. V.; Mustapha, B.; Ostroumov, P. N.; Barcikowski, A.; Schrage, D.; Rodnizki, J.; Berkovits, D.
2014-07-01
We have developed a new design for a 176 MHz cw radio-frequency quadrupole (RFQ) for the SARAF upgrade project. At this frequency, the proposed design is a conventional four-vane structure. The main design goals are to provide the highest possible shunt impedance while limiting the required rf power to about 120 kW for reliable cw operation, and the length to about 4 meters. If built as designed, the proposed RFQ will be the first four-vane cw RFQ built as a single cavity (no resonant coupling required) that does not require π-mode stabilizing loops or dipole rods. For this, we rely on very detailed 3D simulations of all aspects of the structure and the level of machining precision achieved on the recently developed ATLAS upgrade RFQ. A full 3D model of the structure including vane modulation was developed. The design was optimized using electromagnetic and multiphysics simulations. Following the choice of the vane type and geometry, the vane undercuts were optimized to produce a flat field along the structure. The final design has good mode separation and should not need dipole rods if built as designed, but their effect was studied in the case of manufacturing errors. The tuners were also designed and optimized to tune the main mode without affecting the field flatness. Following the electromagnetic (EM) design optimization, a multiphysics engineering analysis of the structure was performed. The multiphysics analysis is a coupled electromagnetic, thermal and mechanical analysis. The cooling channels, including their paths and sizes, were optimized based on the limiting temperature and deformation requirements. The frequency sensitivity to the RFQ body and vane cooling water temperatures was carefully studied in order to use it for frequency fine-tuning. Finally, an inductive rf power coupler design based on the ATLAS RFQ coupler was developed and simulated. The EM design optimization was performed using CST Microwave Studio and the results were verified using
Analysis of Material Sample Heated by Impinging Hot Hydrogen Jet in a Non-Nuclear Tester
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Foote, John; Litchford, Ron
2006-01-01
A computational conjugate heat transfer methodology was developed and anchored with data obtained from a hot-hydrogen-jet heated, non-nuclear materials tester, as a first step towards developing an efficient and accurate multiphysics, thermo-fluid computational methodology to predict environments for a hypothetical solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on a multidimensional, finite-volume, turbulent, chemically reacting, thermally radiating, unstructured-grid, and pressure-based formulation. The multiphysics invoked in this study include hydrogen dissociation kinetics and thermodynamics, turbulent flow, and convective, thermal radiative, and conjugate heat transfer. Predicted hot-hydrogen jet and material surface temperatures were compared with measurements. Predicted solid temperatures were compared with those obtained with a standard heat transfer code. The interrogation of the physics revealed that hydrogen dissociation and recombination reactions are highly correlated with local temperature and are necessary for accurate prediction of the hot-hydrogen jet temperature.
Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.
2014-11-23
This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis.
High frequency electromagnetism, heat transfer and fluid flow coupling in ANSYS multiphysics.
Sabliov, Cristina M; Salvi, Deepti A; Boldor, Dorin
2007-01-01
The goal of this study was to numerically predict the temperature of a liquid product heated in a continuous-flow focused microwave system by coupling high frequency electromagnetism, heat transfer, and fluid flow in ANSYS Multiphysics. The developed model was used to determine the temperature change in water processed in a 915 MHz microwave unit, under steady-state conditions. The influence of the flow rates on the temperature distribution in the liquid was assessed. Results showed that the average temperature of water increased from 25 degrees C to 34 degrees C at 2 l/min, and to 42 degrees C at 1 l/min. The highest temperature regions were found in the liquid near the center of the tube, followed by progressively lower temperature regions as the radial distance from the center increased, and finally by a slightly higher temperature region near the tube's wall, corresponding to the energy distribution given by the Mathieu function. The energy distribution resulted in a similar temperature pattern, with the highest temperatures close to the center of the tube and lower temperatures at the walls. The presented ANSYS Multiphysics model can easily be improved to account for complex boundary conditions, phase change, temperature-dependent properties, and non-Newtonian flows, which is an objective of future studies.
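The flow-rate trend reported in this abstract can be cross-checked with a bulk energy balance. The sketch below assumes an absorbed power of about 1.2 kW and nominal water properties; these values are illustrative assumptions, not parameters taken from the paper.

```python
# Bulk energy balance for continuous-flow heating: delta_T = P / (m_dot * c_p).
RHO = 1000.0  # water density, kg/m^3 (assumed)
CP = 4186.0   # specific heat of water, J/(kg K) (assumed)

def exit_temperature(t_in_c, p_absorbed_w, flow_l_per_min):
    """Mixed-mean exit temperature for a given absorbed microwave power."""
    m_dot = RHO * (flow_l_per_min / 1000.0) / 60.0  # mass flow rate, kg/s
    return t_in_c + p_absorbed_w / (m_dot * CP)

# With ~1.2 kW absorbed, halving the flow rate roughly doubles the bulk
# temperature rise, consistent with the 34 C vs 42 C trend reported above.
print(exit_temperature(25.0, 1200.0, 2.0))  # ~33.6 C
print(exit_temperature(25.0, 1200.0, 1.0))  # ~42.2 C
```

The near-inverse scaling of the temperature rise with flow rate is exactly what the reported 9 C and 17 C rises show.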
Case studies on optimization problems in MATLAB and COMSOL multiphysics by means of the livelink
NASA Astrophysics Data System (ADS)
Ozana, Stepan; Pies, Martin; Docekal, Tomas
2016-06-01
LiveLink for COMSOL is a tool that integrates COMSOL Multiphysics with MATLAB to extend one's modeling with scripting programming in the MATLAB environment. It allows the user to utilize the full power of MATLAB and its toolboxes in preprocessing, model manipulation, and post-processing. First, the head script launches COMSOL with MATLAB, defines the initial values of all parameters, refers to the objective function J, and creates and runs the defined optimization task. Once the task is launched, the COMSOL model is called in the iteration loop (from the MATLAB environment via the API interface), changing the defined optimization parameters so that the objective function is minimized, using the fmincon function to find a local or global minimum of a constrained linear or nonlinear multivariable function. Once the minimum is found, it returns an exit flag, terminates the optimization, and returns the optimized values of the parameters. The cooperation with MATLAB via LiveLink enhances a powerful computational environment with complex multiphysics simulations. The paper introduces the use of LiveLink for COMSOL in chosen case studies in the fields of technical cybernetics and bioengineering.
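The optimizer-calls-simulation loop this abstract describes can be sketched in a few lines. Here, as a hedged stand-in, plain Python with a finite-difference gradient descent replaces fmincon, and a cheap analytic function replaces the COMSOL model evaluation; both are illustrative assumptions, not the authors' setup.

```python
def run_model(params):
    """Placeholder for the COMSOL model call; returns the objective J."""
    x, y = params
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

def minimize_fd(f, x0, lr=0.1, eps=1e-6, iters=500):
    """Treat f as a black box: estimate the gradient by finite differences
    and descend, mimicking the optimizer-around-a-simulation loop."""
    x = list(x0)
    for _ in range(iters):
        grad = []
        for i in range(len(x)):
            xp = x[:]
            xp[i] += eps
            grad.append((f(xp) - f(x)) / eps)  # one model call per parameter
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x

best = minimize_fd(run_model, [0.0, 0.0])
print(best)  # converges near [1.0, -2.0], the minimum of J
```

In the real workflow each `run_model` call is a full COMSOL solve, which is why derivative-free or gradient-approximating optimizers such as fmincon are the natural fit.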
Conductance Thin Film Model of Flexible Organic Thin Film Device using COMSOL Multiphysics
NASA Astrophysics Data System (ADS)
Carradero-Santiago, Carolyn; Vedrine-Pauléus, Josee
We developed a virtual model to analyze the electrical conductivity of multilayered thin films placed above a graphene conducting layer and a flexible polyethylene terephthalate (PET) substrate. The organic layers comprise poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) as a hole-conducting layer, poly(3-hexylthiophene-2,5-diyl) (P3HT) as a p-type material, and phenyl-C61-butyric acid methyl ester (PCBM) as an n-type material, with aluminum as a top conductor. COMSOL Multiphysics was the software we used to develop the virtual model to analyze potential variations and conductivity through the thin-film layers. COMSOL Multiphysics software allows simulation and modeling of physical phenomena represented by differential equations, such as heat transfer, fluid flow, electromagnetism, and structural mechanics. In this work, using the AC/DC electric currents module, we defined the geometry of the model and the properties for each of the six layers: PET/graphene/PEDOT:PSS/P3HT/PCBM/aluminum. We analyzed the model with varying thicknesses of the graphene and active layers (P3HT/PCBM). This simulation allowed us to analyze the electrical conductivity and visualize the model with varying voltage potential, or bias, across the plates, useful for applications in solar cell devices.
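A zeroth-order way to reason about the potential variation through such a stack is a series-resistance estimate per unit area; the layer thicknesses and conductivities below are illustrative assumptions, not values from this study.

```python
# Per-unit-area series resistance of a layered film: R = sum(t_i / sigma_i).
# The potential drop across each layer is proportional to its share of R.
layers = {                        # (thickness m, conductivity S/m) -- assumed
    "graphene":  (1e-9, 1e6),
    "PEDOT:PSS": (50e-9, 1e3),
    "P3HT:PCBM": (100e-9, 1e-3),  # low-conductivity active layer
}

total_r = sum(t / sigma for t, sigma in layers.values())
for name, (t, sigma) in layers.items():
    share = (t / sigma) / total_r
    print(f"{name}: {100 * share:.4f}% of the potential drop")
```

With these assumed numbers, essentially the entire potential drop appears across the least-conductive active layer, which is why varying the active-layer thickness dominates the simulated potential distribution.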
Advanced computations of multi-physics, multi-scale effects in beam dynamics
Amundson, J.F.; Macridin, A.; Spentzouris, P.; Stern, E.G.; /Fermilab
2009-01-01
Current state-of-the-art beam dynamics simulations include multiple physical effects and multiple physical length and/or time scales. We present recent developments in Synergia2, an accelerator modeling framework designed for multi-physics, multi-scale simulations. We summarize several recent results in multi-physics beam dynamics, including simulations of three Fermilab accelerators: the Tevatron, the Main Injector, and the Debuncher. Early accelerator simulations focused on single-particle dynamics. To a first approximation, the forces on the particles in an accelerator beam are dominated by the external fields due to magnets, RF cavities, etc., so the single-particle dynamics are the leading physical effects. Detailed simulations of accelerators must include collective effects such as the space-charge repulsion of the beam particles, the effects of wake fields in the beam pipe walls, and beam-beam interactions in colliders. These simulations require the sort of massively parallel computers that have only become available in recent times. We give an overview of the accelerator framework Synergia2, which was designed to take advantage of the capabilities of modern computational resources and enable simulations of multiple physical effects. We also summarize some recent results utilizing Synergia2 and BeamBeam3d, a tool specialized for beam-beam simulations.
The Integrated Plasma Simulator: A Flexible Python Framework for Coupled Multiphysics Simulation
Foley, Samantha S; Elwasif, Wael R; Bernholdt, David E
2011-11-01
High-fidelity coupled multiphysics simulations are an increasingly important aspect of computational science. In many domains, however, there has been very limited experience with simulations of this sort; therefore, research in coupled multiphysics often requires computational frameworks with significant flexibility to respond to the changing directions of the physics and mathematics. This paper presents the Integrated Plasma Simulator (IPS), a framework designed for loosely coupled simulations of fusion plasmas. The IPS provides users with a simple component architecture into which a wide range of existing plasma physics codes can be inserted as components. Simulations can take advantage of multiple levels of parallelism supported in the IPS, and can be controlled by a high-level "driver" component, or by other coordination mechanisms, such as an asynchronous event service. We describe the requirements and design of the framework, and how they were implemented in the Python language. We also illustrate the flexibility of the framework by providing examples of different types of simulations that utilize various features of the IPS.
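The component-plus-driver architecture described here can be illustrated with a minimal Python sketch; the two toy components below are stand-ins for real physics codes, not actual IPS plasma components, and the "physics" is deliberately trivial.

```python
class Heating:
    """Toy component; a real component would wrap an external physics code."""
    def step(self, state, t):
        state["temperature"] += 0.5   # placeholder source term

class Transport:
    def step(self, state, t):
        state["temperature"] *= 0.95  # placeholder loss term

class Driver:
    """High-level driver that advances each component in turn every time
    step -- the loose-coupling pattern the framework generalizes."""
    def __init__(self, components):
        self.components = components

    def run(self, state, n_steps):
        for t in range(n_steps):
            for comp in self.components:
                comp.step(state, t)
        return state

final = Driver([Heating(), Transport()]).run({"temperature": 1.0}, 10)
print(round(final["temperature"], 3))  # relaxes toward the 9.5 balance point
```

Because components interact only through the shared state and a common `step` interface, any component can be swapped out (or run in parallel) without touching the driver, which is the flexibility the abstract emphasizes.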
Advanced Multiphysics Thermal-Hydraulics Models for the High Flux Isotope Reactor
Jain, Prashant K; Freels, James D
2015-01-01
Engineering design studies to determine the feasibility of converting the High Flux Isotope Reactor (HFIR) from using highly enriched uranium (HEU) to low-enriched uranium (LEU) fuel are ongoing at Oak Ridge National Laboratory (ORNL). This work is part of an effort sponsored by the US Department of Energy (DOE) Reactor Conversion Program. HFIR is a very high flux pressurized light-water-cooled and moderated flux-trap type research reactor. HFIR's current missions are to support neutron scattering experiments, isotope production, and materials irradiation, including neutron activation analysis. Advanced three-dimensional multiphysics models of HFIR fuel were developed in COMSOL software for safety basis (worst case) operating conditions. Several types of physics including multilayer heat conduction, conjugate heat transfer, turbulent flows (RANS model) and structural mechanics were combined and solved for HFIR's inner and outer fuel elements. Alternate design features of the new LEU fuel were evaluated using these multiphysics models. This work led to a new, preliminary reference LEU design that combines a permanent absorber in the lower unfueled region of all of the fuel plates, a burnable absorber in the inner element side plates, and a relocated and reshaped (but still radially contoured) fuel zone. Preliminary results of estimated thermal safety margins are presented. Fuel design studies and model enhancement continue.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods with the same order of accuracy in both space and time, with examples up to eleventh order, and all have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
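As a hedged illustration of how stencil order shows up in practice (this does not reproduce the paper's schemes), a standard fourth-order central difference exhibits the expected sixteen-fold error reduction when the grid spacing is halved:

```python
import math

def d1_fourth_order(f, x, h):
    """Fourth-order-accurate first derivative of f at x with spacing h,
    using the classic five-point central stencil."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

exact = math.cos(1.0)  # d/dx sin(x) at x = 1
for h in (0.1, 0.05):
    err = abs(d1_fourth_order(math.sin, 1.0, h) - exact)
    print(h, err)  # halving h cuts the error by roughly 2**4 = 16
```

High-order stencils like this one are what allow accurate wave propagation over many periods with only a few grid points per wavelength.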
Accurate monotone cubic interpolation
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1991-01-01
Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
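The role the median function plays in monotone interpolation can be sketched with a simple median-based slope limiter; this is a hedged illustration of the general idea, not the paper's exact algorithm.

```python
def median(a, b, c):
    """Middle value of three numbers -- the key building block."""
    return max(min(a, b), min(c, max(a, b)))

def limited_slope(d_left, d_right):
    """Limit a centered slope estimate, built from the divided differences
    on either side of a node, so the interpolant stays monotone."""
    centered = 0.5 * (d_left + d_right)
    return median(0.0, centered, 3.0 * median(0.0, d_left, d_right))

print(limited_slope(1.0, 1.0))   # smooth data: centered slope 1.0 is kept
print(limited_slope(1.0, -1.0))  # local extremum: slope limited to 0.0
```

Clamping the slope to zero at extrema is precisely what costs an order of accuracy there, and relaxing that clamp in a controlled, median-based way is the geometric idea the abstract refers to.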
Development of an Efficient CFD Model for Nuclear Thermal Thrust Chamber Assembly Design
NASA Technical Reports Server (NTRS)
Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See
2007-01-01
The objective of this effort is to develop an efficient and accurate computational methodology to predict both the detailed thermo-fluid environments and the global characteristics of the internal ballistics for a hypothetical solid-core nuclear thermal thrust chamber assembly (NTTCA). Several numerical and multi-physics thermo-fluid models, such as real fluid, chemically reacting, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver as the underlying computational methodology. The numerical simulations of the detailed thermo-fluid environment of a single flow element provide a mechanism to estimate the thermal stress and the possible occurrence of mid-section corrosion of the solid core. In addition, the numerical results of the detailed simulation were employed to fine-tune the porosity model to mimic the pressure drop and thermal load of the coolant flow through a single flow element. The use of the tuned porosity model enables an efficient simulation of the entire NTTCA system and an evaluation of its performance during the design cycle.
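The tuning step described above, adjusting a porous-medium resistance until it reproduces the pressure drop of the detailed solution, can be sketched with a one-parameter Darcy-type fit; the data points and the linear loss model are illustrative assumptions, not the paper's numbers.

```python
# Fit K in dp = K * u (Darcy-type linear resistance) to "detailed" results,
# then use the calibrated coefficient in the fast porous-medium surrogate.
detailed = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.1)]  # (velocity u, pressure drop dp)

# Least-squares slope through the origin: K = sum(u*dp) / sum(u*u)
K = sum(u * dp for u, dp in detailed) / sum(u * u for u, _ in detailed)

def porous_model_dp(u):
    """Calibrated surrogate for the full single-flow-element simulation."""
    return K * u

print(round(K, 3))           # ~2.0, recovering the underlying trend
print(porous_model_dp(2.5))  # predicted pressure drop at a new velocity
```

A real calibration would also match the thermal load and likely include a quadratic (Forchheimer) term, but the workflow is the same: fit once against the expensive model, then reuse cheaply at system level.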
A High Fidelity Multiphysics Framework for Modeling CRUD Deposition on PWR Fuel Rods
NASA Astrophysics Data System (ADS)
Walter, Daniel John
Corrosion products on the fuel cladding surfaces within pressurized water reactor fuel assemblies have had a significant impact on reactor operation. These types of deposits are referred to as CRUD and can lead to power shifts, as a consequence of the accumulation of solid boron phases on the fuel rod surfaces. Corrosion deposits can also lead to fuel failure resulting from localized corrosion, where the increased thermal resistance of the deposit leads to higher cladding temperatures. The prediction of these occurrences requires a comprehensive model of local thermal hydraulic and chemical processes occurring in close proximity to the cladding surface, as well as their driving factors. Such factors include the rod power distribution, coolant corrosion product concentration, as well as the feedbacks between heat transfer, fluid dynamics, chemistry, and neutronics. To correctly capture the coupled physics and corresponding feedbacks, a high fidelity framework is developed that predicts three-dimensional CRUD deposition on a rod-by-rod basis. Multiphysics boundary conditions resulting from the coupling of heat transfer, fluid dynamics, coolant chemistry, CRUD deposition, neutron transport, and nuclide transmutation inform the CRUD deposition solver. Through systematic parametric sensitivity studies of the CRUD property inputs, coupled boundary conditions, and multiphysics feedback mechanisms, the most important variables of multiphysics CRUD modeling are identified. Moreover, the modeling framework is challenged with a blind comparison of plant data to predictions by a simulation of a sub-assembly within the Seabrook nuclear plant that experienced CRUD induced fuel failures. The physics within the computational framework are loosely coupled via an operator-splitting technique. A control theory approach is adopted to determine the temporal discretization at which to execute a data transfer from one physics to another. The coupled stepsize selection is viewed as a
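The control-theory flavor of the coupled step-size selection mentioned above can be sketched with an elementary proportional controller; this illustrates the general idea only, not the framework's actual scheme, and the gains are assumed values.

```python
def select_stepsize(dt, err, tol, kp=0.5, dt_min=1e-3, dt_max=10.0):
    """Grow the coupling step when the inter-physics error is below
    tolerance, shrink it when above, and clip to safe bounds."""
    dt_new = dt * (tol / max(err, 1e-30)) ** kp
    return min(max(dt_new, dt_min), dt_max)

print(select_stepsize(1.0, err=0.25, tol=1.0))  # error small -> dt grows to 2.0
print(select_stepsize(1.0, err=4.0, tol=1.0))   # error large -> dt shrinks to 0.5
```

In an operator-split multiphysics loop, `err` would be an estimate of the splitting error between two physics transfers, so the controller automatically takes small steps during fast transients and large steps near steady state.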
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
NASA Technical Reports Server (NTRS)
Kilbane, J.; Polzin, K. A.
2014-01-01
An annular linear induction pump (ALIP) that could be used for circulating liquid-metal coolant in a fission surface power reactor system is modeled in the present work using the computational COMSOL Multiphysics package. The pump is modeled using a two-dimensional, axisymmetric geometry and solved under conditions similar to those used during experimental pump testing. Real, nonlinear, temperature-dependent material properties can be incorporated into the model for both the electrically-conducting working fluid in the pump (NaK-78) and structural components of the pump. The intricate three-phase coil configuration of the pump is implemented in the model to produce an axially-traveling magnetic wave that is qualitatively similar to the measured magnetic wave. The model qualitatively captures the expected feature of a peak in efficiency as a function of flow rate.
Modelling in conventional electroporation for model cell with organelles using COMSOL Multiphysics
NASA Astrophysics Data System (ADS)
Sulaeman, M. Y.; Widita, R.
2016-03-01
Conventional electroporation is the formation of pores in the cell membrane due to an external electric field applied to the cell. The purposes of creating pores in the cell using conventional electroporation are to increase the effectiveness of chemotherapy (electrochemotherapy) and to kill cancer tissue using irreversible electroporation. The electroporation phenomenon was modeled on a model cell using COMSOL Multiphysics 4.3b, with an applied external electric field of 1.1 kV/cm, to find the transmembrane voltage and pore density. From the results for the potential distribution and transmembrane voltage, it can be concluded that pore formation occurs only in the cell membrane; the field cannot penetrate into the interior of the model cell, so no pores form in its organelles.
Module-based Hybrid Uncertainty Quantification for Multi-physics Applications: Theory and Software
Tong, Charles; Chen, Xiao; Iaccarino, Gianluca; Mittal, Akshay
2013-10-08
In this project we proposed to develop an innovative uncertainty quantification methodology that captures the best of the two competing approaches in UQ, namely, intrusive and non-intrusive approaches. The idea is to develop the mathematics and the associated computational framework and algorithms to facilitate the use of intrusive or non-intrusive UQ methods in different modules of a multi-physics, multi-module simulation model, in a way that physics code developers for different modules are shielded (as much as possible) from the chores of accounting for the uncertainties introduced by the other modules. As a result of our research and development, we have produced a number of publications, conference presentations, and a software product.
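The non-intrusive half of the trade-off described above can be sketched in a few lines: the physics module is treated as a black box and simply sampled, so its developers need no UQ-specific changes. The module below is a stand-in, not one of the project's codes.

```python
import random

def physics_module(x):
    """Black-box model; in practice this is an unmodified physics code."""
    return x * x

def monte_carlo_mean(model, sampler, n):
    """Non-intrusive UQ: propagate input uncertainty by repeated sampling."""
    return sum(model(sampler()) for _ in range(n)) / n

random.seed(42)
mean = monte_carlo_mean(physics_module, lambda: random.gauss(0.0, 1.0), 20000)
print(mean)  # E[x^2] for a standard normal input is 1.0; the estimate is close
```

An intrusive method would instead rewrite `physics_module` to carry, say, polynomial chaos coefficients through its internals; the hybrid methodology aims to let each module use whichever treatment suits it.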
Complementary single-technique and multi-physics modeling tools for NDE challenges
NASA Astrophysics Data System (ADS)
Le Lostec, Nechtan; Budyn, Nicolas; Sartre, Bernard; Glass, S. W.
2014-02-01
The challenges of modeling and simulation for Non-Destructive Examination (NDE) research and development at the AREVA NDE Solutions Technical Center (NETEC) are presented. In particular, the choice of a relevant software suite covering different applications and techniques, and the process/scripting tools required for simulation and modeling, are discussed. The software portfolio currently in use is then presented along with the limitations of the different software: CIVA for ultrasound (UT) methods, PZFlex for UT probes, Flux for eddy current (ET) probes and methods, plus Abaqus for multiphysics modeling. The finite element code Abaqus is also considered as the future direction for many of our NDE modeling and simulation tasks. Some application examples are given on the modeling of a piezoelectric acoustic phased-array transducer and preliminary thermography configurations.
Multiphysics Model of Palladium Hydride Isotope Exchange Accounting for Higher Dimensionality
Gharagozloo, Patricia E.; Eliassi, Mehdi; Bon, Bradley Luis
2015-03-01
This report summarizes computational model development and simulation results for a series of isotope exchange dynamics experiments, including long and thin isothermal beds similar to the Foltz and Melius beds and a larger non-isothermal experiment on the NENG7 test bed. The multiphysics 2D axisymmetric model simulates the temperature- and pressure-dependent exchange reaction kinetics, pressure- and isotope-dependent stoichiometry, heat generation from the reaction, reacting gas flow through porous media, and non-uniformities in the bed permeability. The new model is now able to replicate the curved reaction front and asymmetry of the exit gas mass fractions over time. The improved understanding of the exchange process and its dependence on the non-uniform bed properties and temperatures in these larger systems is critical to the future design of such systems.
NASA Astrophysics Data System (ADS)
Lee, Yu-Ming; Lee, Shuo-Jen; Lee, Chi-Yuan; Chang, Dar-Yuan
In this study, the flow channels of a PEM fuel cell are fabricated by the EMM process. The parametric effects of the process are studied by both numerical simulation and experimental tests. For the numerical simulation, a multiphysics model, consisting of electric field, convection, and diffusion phenomena, is applied using COMSOL software. COMSOL is used to predict the effects of process parameters on channel fabrication accuracy, such as pulse rate, pulse duty cycle, inter-electrode gap, and electrolyte inflow velocity. The proper experimental parameters and the relationship between the parameters and the distribution of metal removal are established from the simulated results. The experimental fabrication tests showed that a shorter pulse rate and a higher pulse current improved the fabrication accuracy, which is consistent with the numerical simulation results. The proposed simulation model could be employed as a predictive tool to provide optimal parameters for better machining accuracy and process stability of the EMM process.
Multiphysics Simulations of Hot-Spot Initiation in Shocked Insensitive High-Explosive
NASA Astrophysics Data System (ADS)
Najjar, Fady; Howard, W. M.; Fried, L. E.
2010-11-01
Solid plastic-bonded high-explosive materials consist of crystals with embedded micron-sized pores. Under mechanical or thermal insults, these voids increase the ease of shock initiation by generating high-temperature regions during their collapse that might lead to ignition. Understanding the mechanisms of hot-spot initiation is of significant research interest for safety, reliability, and the development of new insensitive munitions. Multi-dimensional, high-resolution meso-scale simulations are performed using the multiphysics software ALE3D to understand hot-spot initiation. The Cheetah code is coupled to ALE3D, creating multi-dimensional sparse tables for the HE properties. The reaction rates were obtained from quantum molecular dynamics computations. Our current predictions showcase several interesting features of hot-spot dynamics, including the formation of a "secondary" jet. We will discuss the results obtained with hydro-thermo-chemical processes leading to ignition growth for various pore sizes and different shock pressures.
A novel approach to simulate Hodgkin-Huxley-like excitation with COMSOL Multiphysics.
Martinek, Johannes; Stickler, Yvonne; Reichel, Martin; Mayr, Winfried; Rattay, Frank
2008-08-01
A proof of concept for the evaluation of external nerve and muscle fiber excitation with the finite element software COMSOL Multiphysics, formerly known as FEMLAB, is presented. This software allows the simultaneous solution of fiber excitation by 1D models of the Hodgkin-Huxley type, which are embedded in a volume conductor where the electric field is mainly dominated by the electrode currents. In this way, the presented bidomain model includes the interaction between electrode currents and transmembrane currents during the excitation process. Especially for direct muscle fiber stimulation (cardiac muscle, denervated muscle), the effects of secondary currents from large populations of excited fibers seem to be significant. The method has many applications; for example, the relation between stimulus parameters and fiber recruitment can be analyzed.
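The 1D fiber models referenced above build on the point Hodgkin-Huxley equations. As a hedged, standalone sketch (classic textbook squid-axon parameters, forward Euler, no volume conductor or electrode coupling), a suprathreshold current step elicits an action potential:

```python
import math

# Point Hodgkin-Huxley membrane, classic parameters (mV, ms, uF/cm^2, mS/cm^2).
C, G_NA, G_K, G_L = 1.0, 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.387

def rates(v):
    """Voltage-dependent gating rate constants (1/ms)."""
    a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def simulate(i_stim=10.0, dt=0.01, t_end=50.0):
    """Forward-Euler integration; returns the peak membrane voltage (mV)."""
    v, m, h, n = -65.0, 0.0529, 0.5961, 0.3177  # resting state
    v_peak = v
    for _ in range(int(t_end / dt)):
        a_m, b_m, a_h, b_h, a_n, b_n = rates(v)
        i_ion = (G_NA * m**3 * h * (v - E_NA)
                 + G_K * n**4 * (v - E_K) + G_L * (v - E_L))
        v += dt * (i_stim - i_ion) / C
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        v_peak = max(v_peak, v)
    return v_peak

print(simulate())  # peak well above 0 mV: the fiber spikes
```

In the bidomain setting described above, the stimulus term `i_stim` would come from the finite-element volume-conductor solution at each node of the 1D fiber rather than being a constant.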
Mechanical behavior simulation of MEMS-based cantilever beam using COMSOL multiphysics
Acheli, A.; Serhane, R.
2015-03-30
This paper presents a study of the mechanical behavior of a MEMS cantilever beam made of polysilicon, using the coupling of three application modes (plane strain, electrostatics, and moving mesh) of the COMSOL Multiphysics software. Cantilevers play a key role in Micro-Electro-Mechanical Systems (MEMS) devices (switches, resonators, etc.) operating under potential shock, which is why they require actuation under predetermined conditions, such as an electrostatic or inertial force. In this paper, we present the mechanical behavior of a cantilever actuated by an electrostatic force. To simplify the calculations, the weight of the cantilever was not taken into account. Parameters such as beam displacement, electrostatic force, and stress over the beam were calculated by the finite element method after defining the geometry and material of the cantilever model, which is fixed at one end and free to move otherwise, and its operational space.
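Electrostatic cantilever actuation of this kind is often introduced through the lumped parallel-plate approximation, in which a linear spring force k·x balances the electrostatic attraction eps0·A·V²/(2(g-x)²) and stable equilibria exist only up to the pull-in voltage V_PI = sqrt(8 k g³ / (27 eps0 A)). The sketch below uses this approximation with hypothetical stiffness, area, and gap values; it is not the paper's plane-strain/moving-mesh FEM model.

```python
# Lumped parallel-plate sketch of electrostatic cantilever actuation.
# k, A, and g are hypothetical illustration values.
EPS0 = 8.854e-12          # vacuum permittivity, F/m
k = 1.0                   # effective spring stiffness, N/m
A = 100e-6 * 20e-6        # electrode area, m^2
g = 2e-6                  # initial gap, m

def equilibrium_deflection(V, n=100000):
    """Smallest root of k*x = eps0*A*V^2 / (2*(g-x)^2), or None past pull-in."""
    for i in range(1, n):
        x = (g / 3.0) * i / n                # stable branch lies below g/3
        f = k * x - EPS0 * A * V**2 / (2.0 * (g - x) ** 2)
        if f >= 0.0:                         # restoring force catches up
            return x
    return None                              # no equilibrium: pull-in

V_pullin = (8.0 * k * g**3 / (27.0 * EPS0 * A)) ** 0.5

x_2V = equilibrium_deflection(2.0)           # small static deflection
x_5V = equilibrium_deflection(5.0)           # larger deflection
x_past = equilibrium_deflection(2.0 * V_pullin)   # beyond pull-in: None
```

Deflection grows with voltage until pull-in, beyond which no static equilibrium exists and the beam snaps down.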
Multi-Physics Demonstration Problem with the SHARP Reactor Simulation Toolkit
Merzari, E.; Shemon, E. R.; Yu, Y. Q.; Thomas, J. W.; Obabko, A.; Jain, Rajeev; Mahadevan, Vijay; Tautges, Timothy; Solberg, Jerome; Ferencz, Robert Mark; Whitesides, R.
2015-12-21
This report describes the use of SHARP to perform a first-of-a-kind analysis of the core radial expansion phenomenon in a sodium-cooled fast reactor (SFR). This effort required significant advances in the framework used to drive the coupled simulations, manipulate the mesh in response to the deformation of the geometry, and generate the necessary modified mesh files. Furthermore, the model geometry is fairly complex, and consistent mesh generation for the three physics modules required significant effort. Fully integrated simulations of a 7-assembly mini-core test problem have been performed, and the results are presented here. Physics models of a full-core model of the Advanced Burner Test Reactor (ABTR) have also been developed for each of the three physics modules. Standalone results of each of the three physics modules for the ABTR are presented here, demonstrating the feasibility of the fully integrated simulation.
An approach for coupled-code multiphysics core simulations from a common input
Schmidt, Rodney; Belcourt, Kenneth; Hooper, Russell; Pawlowski, Roger P.; Clarno, Kevin T.; Simunovic, Srdjan; Slattery, Stuart R.; Turner, John A.; Palmtag, Scott
2014-12-10
This study describes an approach for coupled-code multiphysics reactor core simulations that is being developed by the Virtual Environment for Reactor Applications (VERA) project in the Consortium for Advanced Simulation of Light-Water Reactors (CASL). In this approach a user creates a single problem description, called the “VERAIn” common input file, to define and set up the desired coupled-code reactor core simulation. A preprocessing step accepts the VERAIn file and generates a set of fully consistent input files for the different physics codes being coupled. The problem is then solved using a single-executable coupled-code simulation tool applicable to the problem, which is built using VERA infrastructure software tools and the set of physics codes required for the problem of interest. The approach is demonstrated by performing an eigenvalue and power distribution calculation of a typical three-dimensional 17 × 17 assembly with thermal–hydraulic and fuel temperature feedback. All neutronics aspects of the problem (cross-section calculation, neutron transport, power release) are solved using the Insilico code suite and are fully coupled to a thermal–hydraulic analysis calculated by the Cobra-TF (CTF) code. The single-executable coupled-code (Insilico-CTF) simulation tool is created using several VERA tools, including LIME (Lightweight Integrating Multiphysics Environment for coupling codes), DTK (Data Transfer Kit), Trilinos, and TriBITS. Parallel calculations are performed on the Titan supercomputer at Oak Ridge National Laboratory using 1156 cores, and a synopsis of the solution results and code performance is presented. Finally, ongoing development of this approach is also briefly described.
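The common-input idea above, with one problem description expanded into per-code input decks, can be illustrated with a toy preprocessor. The real VERAIn format and the Insilico/CTF decks are far richer; every field and deck name below is hypothetical.

```python
# Toy illustration of a common-input preprocessing step: one problem
# description is expanded into consistent per-physics input decks.
# All field names and deck formats are hypothetical, not VERAIn.
common_input = {
    "assembly": {"lattice": "17x17", "fuel": "UO2", "enrichment_pct": 3.1},
    "state": {"power_MW": 17.67, "inlet_temp_K": 565.0, "pressure_MPa": 15.5},
}

def neutronics_deck(ci):
    a, s = ci["assembly"], ci["state"]
    return (f"lattice {a['lattice']}\n"
            f"material {a['fuel']} enr={a['enrichment_pct']}\n"
            f"power {s['power_MW']} MW")

def thermal_hydraulics_deck(ci):
    s = ci["state"]
    return (f"inlet_temperature {s['inlet_temp_K']} K\n"
            f"system_pressure {s['pressure_MPa']} MPa\n"
            f"core_power {s['power_MW']} MW")

# Both decks are generated from the same source, so shared quantities
# (e.g. core power) are consistent by construction.
decks = {"insilico": neutronics_deck(common_input),
         "ctf": thermal_hydraulics_deck(common_input)}
```

Because both decks derive from one description, a single edit to the common input propagates consistently to every coupled code.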
NASA Astrophysics Data System (ADS)
Wilson, Cian R.; Spiegelman, Marc; van Keken, Peter E.
2017-02-01
We introduce and describe a new software infrastructure TerraFERMA, the Transparent Finite Element Rapid Model Assembler, for the rapid and reproducible description and solution of coupled multiphysics problems. The design of TerraFERMA is driven by two computational needs in Earth sciences. The first is the need for increased flexibility in both problem description and solution strategies for coupled problems where small changes in model assumptions can lead to dramatic changes in physical behavior. The second is the need for software and models that are more transparent so that results can be verified, reproduced, and modified in a manner such that the best ideas in computation and Earth science can be more easily shared and reused. TerraFERMA leverages three advanced open-source libraries for scientific computation that provide high-level problem description (FEniCS), composable solvers for coupled multiphysics problems (PETSc), and an options handling system (SPuD) that allows the hierarchical management of all model options. TerraFERMA integrates these libraries into an interface that organizes the scientific and computational choices required in a model into a single options file from which a custom compiled application is generated and run. Because all models share the same infrastructure, models become more reusable and reproducible, while still permitting the individual researcher considerable latitude in model construction. TerraFERMA solves partial differential equations using the finite element method. It is particularly well suited for nonlinear problems with complex coupling between components. TerraFERMA is open-source and available at http://terraferma.github.io, which includes links to documentation and example input files.
Multi-Physics Markov Chain Monte Carlo Methods for Subsurface Flows
NASA Astrophysics Data System (ADS)
Rigelo, J.; Ginting, V.; Rahunanthan, A.; Pereira, F.
2014-12-01
For CO2 sequestration in deep saline aquifers, contaminant transport in the subsurface, and oil or gas recovery, we often need to forecast flow patterns. Subsurface characterization is a critical and challenging step in flow forecasting. To characterize subsurface properties, we establish a statistical description of those properties conditioned to existing dynamic and static data. A Markov chain Monte Carlo (MCMC) algorithm is used in a Bayesian statistical description to reconstruct the spatial distribution of rock permeability and porosity. The MCMC algorithm requires repeatedly solving a set of nonlinear partial differential equations describing displacement of fluids in porous media for different values of permeability and porosity. The time needed to generate a reliable MCMC chain can be too long to be practical for flow forecasting. In this work we develop fast and effective computational methods for generating MCMC chains in the Bayesian framework for subsurface characterization. Our strategy consists of constructing a family of computationally inexpensive preconditioners based on simpler physics as well as on surrogate models, such that the number of fine-grid simulations is drastically reduced in the generated MCMC chains. In particular, we introduce a huff-puff technique as a screening step in a three-stage multi-physics MCMC algorithm to reduce the number of expensive final-stage simulations. The huff-puff technique enables a better characterization of the subsurface near wells. We assess the quality of the proposed multi-physics MCMC methods by considering Monte Carlo simulations for forecasting oil production in an oil reservoir.
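The screening idea above is closely related to two-stage (delayed-acceptance) MCMC, in which a cheap surrogate posterior filters proposals and only the survivors are evaluated on the expensive model, with a correction factor preserving detailed balance. A minimal sketch with toy 1D Gaussian targets (not subsurface flow models):

```python
import math, random

random.seed(0)

def log_post_fine(x):      # stand-in for an expensive fine-grid simulation
    return -0.5 * x * x

def log_post_coarse(x):    # cheap surrogate posterior (deliberately imperfect)
    return -0.5 * (x - 0.1) ** 2

def two_stage_mcmc(n=20000, step=1.0):
    x, chain, fine_calls = 0.0, [], 0
    for _ in range(n):
        y = x + random.gauss(0.0, step)
        # Stage 1: screen the proposal with the surrogate only.
        a1 = min(1.0, math.exp(log_post_coarse(y) - log_post_coarse(x)))
        if random.random() >= a1:
            chain.append(x)
            continue
        # Stage 2: evaluate the expensive posterior; the delayed-acceptance
        # ratio corrects for the surrogate so the fine target is preserved.
        fine_calls += 1
        a2 = min(1.0, math.exp(log_post_fine(y) - log_post_fine(x)
                               + log_post_coarse(x) - log_post_coarse(y)))
        if random.random() < a2:
            x = y
        chain.append(x)
    return chain, fine_calls

chain, fine_calls = two_stage_mcmc()
mean = sum(chain) / len(chain)   # should be near 0 for the N(0,1) target
```

The surrogate rejects many proposals cheaply, so the expensive model is called far fewer times than the chain length while the chain still samples the fine posterior.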
NASA Astrophysics Data System (ADS)
Jokisaari, Andrea M.
Hydride precipitation in zirconium is a significant factor limiting the lifetime of nuclear fuel cladding, because hydride microstructures play a key role in the degradation of fuel cladding. However, the behavior of hydrogen in zirconium has typically been modeled using mean-field approaches, which do not consider microstructural evolution. This thesis describes a quantitative microstructural evolution model for the alpha-zirconium/delta-hydride system and the associated numerical methods and algorithms that were developed. The multiphysics, phase-field-based model incorporates CALPHAD free energy descriptions, linear elastic solid mechanics, and classical nucleation theory. A flexible simulation software implementing the model, Hyrax, is built on the Multiphysics Object Oriented Simulation Environment (MOOSE) finite element framework. Hyrax is open-source and freely available; moreover, the numerical methods and algorithms that have been developed are generalizable to other systems. The algorithms are described in detail, and verification studies for each are discussed. In addition, analyses of the sensitivity of the simulation results to the choice of numerical parameters are presented. For example, threshold values for the CALPHAD free energy algorithm and the use of mesh and time adaptivity when employing the nucleation algorithm are studied. Furthermore, preliminary insights into the nucleation behavior of delta-hydrides are described. These include a) the sensitivities of the nucleation rate to temperature, interfacial energy, composition, and elastic energy, b) the spatial variation of the nucleation rate around a single precipitate, and c) the effect of interfacial energy and nucleation rate on the precipitate microstructure. Finally, several avenues for future work are discussed. Topics encompass the terminal solid solubility hysteresis of hydrogen in zirconium and the effects of the alpha/delta interfacial energy, as well as thermodiffusion and plasticity.
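The classical nucleation theory ingredient above can be sketched with the standard homogeneous-nucleation rate J = J0 exp(-dG*/kB T), where for a spherical nucleus dG* = 16*pi*gamma^3 / (3*dg^2) and the chemical driving force is reduced by an opposing elastic strain energy. All numerical values below (J0, gamma, driving forces) are hypothetical illustrations, not the thesis's calibrated parameters; the sketch only shows the qualitative sensitivities the abstract describes.

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K

def nucleation_rate(gamma, dg_v, dg_el=0.0, T=600.0, J0=1e30):
    """Classical homogeneous nucleation rate, J = J0 * exp(-dG*/kB T).

    gamma : interfacial energy, J/m^2
    dg_v  : chemical driving force per unit volume, J/m^3 (positive)
    dg_el : elastic strain energy per unit volume opposing nucleation, J/m^3
    """
    dg_eff = dg_v - dg_el
    if dg_eff <= 0.0:
        return 0.0   # elastic penalty kills the driving force entirely
    dG_star = 16.0 * math.pi * gamma**3 / (3.0 * dg_eff**2)
    return J0 * math.exp(-dG_star / (KB * T))

J_base = nucleation_rate(gamma=0.1, dg_v=1e8)
J_elastic = nucleation_rate(gamma=0.1, dg_v=1e8, dg_el=3e7)  # elastic penalty
J_high_gamma = nucleation_rate(gamma=0.12, dg_v=1e8)         # stiffer interface
```

Because gamma enters cubed and the driving force squared in the exponent, even modest changes in interfacial or elastic energy shift the rate by many orders of magnitude, which is exactly why the thesis studies these sensitivities.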
BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities ranging from 33% to 63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. The tests were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test”, and the “Waterfowl Physiologically Based Extraction Test”. All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%).
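The regressions described above are ordinary least-squares fits of relative bioavailability on in vitro bioaccessibility, judged by slope and coefficient of determination. A minimal sketch with hypothetical data points (the abstract reports only that all slopes were positive):

```python
# Ordinary least-squares fit of bioavailability on bioaccessibility.
# The five data points are hypothetical, for illustration only.
def ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot   # slope, intercept, R^2

bioaccessibility = [20.0, 35.0, 48.0, 60.0, 75.0]   # % (hypothetical)
bioavailability = [33.0, 41.0, 50.0, 55.0, 63.0]    # % (hypothetical)
slope, intercept, r2 = ols(bioaccessibility, bioavailability)
```

A test "performs very well" in the abstract's sense when the fitted slope is clearly positive and R² is high, as in this toy fit.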
Accurate spectral color measurements
NASA Astrophysics Data System (ADS)
Hiltunen, Jouni; Jaeaeskelaeinen, Timo; Parkkinen, Jussi P. S.
1999-08-01
Surface color measurement is important in a very wide range of industrial applications, including paint, paper, printing, photography, textiles, plastics, and so on. For demanding color measurements, a spectral approach is often needed. One can measure a color spectrum with a spectrophotometer using calibrated standard samples as a reference. Because it is impossible to define absolute color values of a sample, we always work with approximations. The human eye can perceive color differences as small as 0.5 CIELAB units and can thus distinguish millions of colors. This 0.5-unit difference should be the goal for precise color measurements. This limit is not a problem if we only want to measure the color difference between two samples, but if we also want exact color coordinate values, accuracy problems arise: the values from two instruments can be astonishingly different. The accuracy of the instrument used in color measurement may depend on various errors such as photometric non-linearity, wavelength error, integrating-sphere dark-level error, and integrating-sphere error in both specular-included and specular-excluded modes. Correction formulas should therefore be used to obtain more accurate results. Another question is how many channels, i.e., wavelengths, we use to measure a spectrum. It is obvious that the sampling interval should be short to get more precise results. Furthermore, the result we get is always a compromise between measuring time, conditions, and cost. Sometimes we have to use a portable system, or the shape and size of the samples make it impossible to use sensitive equipment. In this study, a small set of calibrated color tiles measured with the Perkin Elmer Lambda 18 and the Minolta CM-2002 spectrophotometers are compared. In the paper we explain the typical error sources of spectral color measurements and show what accuracy demands a good colorimeter should meet.
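The 0.5-unit threshold above refers to the CIE76 color difference in CIELAB space, dE*ab = sqrt(dL² + da² + db²). A minimal sketch comparing two hypothetical instrument readings of the same tile against that threshold:

```python
import math

# CIE76 color difference between two CIELAB coordinates (L*, a*, b*).
def delta_e_ab(lab1, lab2):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

JND = 0.5   # approximate just-noticeable difference quoted in the text

sample = (52.0, 10.0, -6.0)          # reference tile (hypothetical)
instrument_a = (52.1, 10.2, -6.1)    # hypothetical reading, instrument A
instrument_b = (53.0, 11.5, -5.0)    # hypothetical reading, instrument B

d_a = delta_e_ab(sample, instrument_a)   # below JND: visually identical
d_b = delta_e_ab(sample, instrument_b)   # above JND: visibly different
```

Instrument A's reading falls below the perceptibility threshold while instrument B's does not, illustrating how two instruments can disagree by a visually significant margin.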
NASA Astrophysics Data System (ADS)
Morsali, Seyedreza; Daryadel, Soheil; Zhou, Zhong; Behroozfar, Ali; Qian, Dong; Minary-Jolandan, Majid
2017-01-01
Capability to print metals at micro/nanoscale in arbitrary 3D patterns at local points of interest will have applications in nano-electronics and sensors. Meniscus-confined electrodeposition (MCED) is a manufacturing process that enables depositing metals from an electrolyte containing nozzle (pipette) in arbitrary 3D patterns. In this process, a meniscus (liquid bridge or capillary) between the pipette tip and the substrate governs the localized electrodeposition process. Fabrication of metallic microstructures using this process is a multi-physics process in which electrodeposition, fluid dynamics, and mass and heat transfer physics are simultaneously involved. We utilized multi-physics finite element simulation, guided by experimental data, to understand the effect of water evaporation from the liquid meniscus at the tip of the nozzle for deposition of free-standing copper microwires in MCED process.
Anh Bui; Nam Dinh; Brian Williams
2013-09-01
In addition to the validation data plan, the development of advanced techniques for calibration and validation of complex multiscale, multiphysics nuclear reactor simulation codes is a main objective of the CASL VUQ plan. Advanced modeling of LWR systems normally involves a range of physico-chemical models describing multiple interacting phenomena, such as thermal hydraulics, reactor physics, coolant chemistry, etc., which occur over a wide range of spatial and temporal scales. To a large extent, the accuracy of (and uncertainty in) overall model predictions is determined by the correctness of various sub-models, which are not conservation-law based, but empirically derived from measurement data. Such sub-models normally require extensive calibration before the models can be applied to the analysis of real reactor problems. This work demonstrates a case study of the calibration of a common model of subcooled flow boiling, an important multiscale, multiphysics phenomenon in LWR thermal hydraulics. The calibration process is based on a new strategy of model-data integration, in which all sub-models are simultaneously analyzed and calibrated using multiple sets of data of different types. Specifically, both data on large-scale distributions of void fraction and fluid temperature and data on the small-scale physics of wall evaporation were used simultaneously in this work's calibration. In a departure from the traditional (or common-sense) practice of tuning/calibrating complex models, a modern calibration technique based on statistical modeling and Bayesian inference was employed, which allowed simultaneous calibration of multiple sub-models (and related parameters) using different datasets. The quality of the data (relevancy, scalability, and uncertainty) could be taken into consideration in the calibration process. This work presents a step forward in the development and realization of the “CIPS Validation Data Plan” at the Consortium for Advanced Simulation of LWRs.
NASA Astrophysics Data System (ADS)
Slaughter, A. E.; Permann, C.; Peterson, J. W.; Gaston, D.; Andrs, D.; Miller, J.
2014-12-01
The Idaho National Laboratory (INL)-developed Multiphysics Object Oriented Simulation Environment (MOOSE; www.mooseframework.org) is an open-source, parallel computational framework for enabling the solution of complex, fully implicit multiphysics systems. MOOSE provides a set of computational tools that scientists and engineers can use to create sophisticated multiphysics simulations. Applications built using MOOSE have computed solutions for chemical reaction and transport equations, computational fluid dynamics, solid mechanics, heat conduction, mesoscale materials modeling, geomechanics, and others. To facilitate the coupling of diverse and highly coupled physical systems, MOOSE employs the Jacobian-free Newton-Krylov (JFNK) method when solving the coupled nonlinear systems of equations arising in multiphysics applications. The MOOSE framework is written in C++, and leverages other high-quality, open-source scientific software packages such as LibMesh, Hypre, and PETSc. MOOSE uses a "hybrid parallel" model which combines both shared memory (thread-based) and distributed memory (MPI-based) parallelism to ensure efficient resource utilization on a wide range of computational hardware. MOOSE-based applications are inherently modular, which allows for simulation expansion (via coupling of additional physics modules) and the creation of multi-scale simulations. Any application developed with MOOSE supports running (in parallel) any other MOOSE-based application. Each application can be developed independently, yet easily communicate with other applications (e.g., conductivity in a slope-scale model could be a constant input, or a complete phase-field micro-structure simulation) without additional code being written. This method of development has proven effective at INL and expedites the development of sophisticated, sustainable, and collaborative simulation tools.
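The Jacobian-free Newton-Krylov method mentioned above never assembles the Jacobian matrix: a Krylov solver only needs the action J(u)·v, which is approximated by a finite difference of the residual, J(u)·v ≈ (F(u + eps·v) − F(u)) / eps. A minimal sketch of that matvec on a toy 2x2 nonlinear system (not a MOOSE application):

```python
# Jacobian-free matrix-vector product at the heart of JFNK: J(u)*v is
# approximated from residual evaluations, so J is never formed or stored.
def residual(u):
    """Toy nonlinear residual F(u) for u = (x, y)."""
    x, y = u
    return [x * x + y - 3.0, x + y * y - 5.0]

def jfnk_matvec(F, u, v, eps=1e-7):
    """Approximate J(u)*v by a forward difference of the residual."""
    Fu = F(u)
    Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(Fp, Fu)]

u = [1.0, 2.0]
v = [0.5, -1.0]
Jv = jfnk_matvec(residual, u, v)

# Analytic Jacobian at u = (1, 2) is [[2x, 1], [1, 2y]] = [[2, 1], [1, 4]],
# so the exact product is J*v = (0.0, -3.5).
Jv_exact = [2.0 * 0.5 + 1.0 * (-1.0), 1.0 * 0.5 + 4.0 * (-1.0)]
```

Inside a JFNK solver this matvec is handed to a Krylov method (e.g. GMRES) at each Newton iteration, which is what lets frameworks like MOOSE couple physics modules without writing coupled Jacobian terms.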
Multiphysics Modeling and Simulations of Mil A46100 Armor-Grade Martensitic Steel Gas Metal Arc Welding Process
Grujicic, M.; Ramaswami, S.; J.S. ...
2013-05-23
The model consists of five distinct modules, each covering a specific aspect of the GMAW process, i.e., (a) dynamics ... (the fusion zone, FZ, and the adjacent heat-affected zone, HAZ) of a prototypical high-hardness armor-grade martensitic steel, MIL A46100 (Ref 1).
Becker, R; McElfresh, M; Lee, C; Balhorn, R; White, D
2003-12-01
In this white paper, a road map is presented to establish a multiphysics simulation capability for the design and optimization of sensor systems that incorporate nanomaterials and technologies. The Engineering Directorate's solid/fluid mechanics and electromagnetic computer codes will play an important role in both multiscale modeling and the integration of required physics to achieve a baseline simulation capability. Molecular dynamics simulations, performed primarily in the BBRP, CMS, and PAT directorates, will provide information for the construction of multiscale models. All of the theoretical developments will require closely coupled experimental work to develop material models and validate simulations. The plan is synergistic and complementary with the Laboratory's emerging core competency of multiscale modeling. The first application of the multiphysics computer code is the simulation of a "simple" biological system (protein recognition utilizing synthesized ligands) that has a broad range of applications, including detection of biological threats, presymptomatic detection of illnesses, and drug therapy. While the overall goal is to establish a simulation capability, the near-term work is mainly focused on (1) multiscale modeling, i.e., the development of "continuum" representations of nanostructures based on information from molecular dynamics simulations, and (2) experiments for model development and validation. A list of LDRDER proposals and ongoing projects that could be coordinated to achieve these near-term objectives and demonstrate the feasibility and utility of a multiphysics simulation capability is given.
Two-Step Multi-Physics Analysis of an Annular Linear Induction Pump for Fission Power Systems
NASA Technical Reports Server (NTRS)
Geng, Steven M.; Reid, Terry V.
2016-01-01
One of the key technologies associated with fission power systems (FPS) is the annular linear induction pump (ALIP). ALIPs are used to circulate liquid-metal fluid for transporting thermal energy from the nuclear reactor to the power conversion device. ALIPs designed and built to date for FPS project applications have not performed up to expectations. A unique, two-step approach was taken toward the multi-physics examination of an ALIP using ANSYS Maxwell 3D and Fluent. This multi-physics approach was developed so that engineers could investigate design variations that might improve pump performance. Of particular interest was whether simple geometric modifications could be made to the ALIP components to increase the Lorentz forces acting on the liquid-metal fluid, which in turn would increase pumping capacity. The multi-physics model first calculates the Lorentz forces acting on the liquid-metal fluid in the ALIP annulus. These forces are then used in a computational fluid dynamics simulation as (a) internal boundary conditions and (b) source functions in the momentum equations within the Navier-Stokes equations. The end result of the two-step analysis is a predicted pump pressure rise that can be compared with experimental data.
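The first step of the two-step coupling supplies a Lorentz body-force density f = J × B that the fluid step consumes as a momentum source. As a back-of-the-envelope sketch (with hypothetical current density, flux density, and pump length, not the paper's ALIP values), the loss-free pressure rise is simply the axial force density times the active length:

```python
# Lorentz body-force density f = J x B (N/m^3) and the ideal, loss-free
# pressure rise dp = f_axial * L over the active pump length.
# J, B, and L below are hypothetical illustration values.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

J = (2.0e5, 0.0, 0.0)      # induced current density (azimuthal), A/m^2
B = (0.0, 0.3, 0.0)        # magnetic flux density (radial), T
f = cross(J, B)            # Lorentz force density, purely axial here
L = 0.5                    # active pump length, m
dp_ideal = f[2] * L        # ideal pressure rise, Pa
```

In the actual analysis these forces enter the Navier-Stokes momentum equations as source terms, and viscous and end-effect losses reduce the delivered pressure rise below this ideal estimate.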
Data-driven prognosis: a multi-physics approach verified via balloon burst experiment
Chandra, Abhijit; Kar, Oliva
2015-01-01
A multi-physics formulation for data-driven prognosis (DDP) is developed. Unlike traditional predictive strategies that require controlled offline measurements or ‘training’ for determination of constitutive parameters to derive the transitional statistics, the proposed DDP algorithm relies solely on in situ measurements. It uses a deterministic mechanics framework, but the stochastic nature of the solution arises naturally from the underlying assumptions regarding the order of the conservation potential as well as the number of dimensions involved. The proposed DDP scheme is capable of predicting onset of instabilities. Because the need for offline testing (or training) is obviated, it can be easily implemented for systems where such a priori testing is difficult or even impossible to conduct. The prognosis capability is demonstrated here via a balloon burst experiment where the instability is predicted using only online visual observations. The DDP scheme never failed to predict the incipient failure, and no false-positives were issued. The DDP algorithm is applicable to other types of datasets. Time horizons of DDP predictions can be adjusted by using memory over different time windows. Thus, a big dataset can be parsed in time to make a range of predictions over varying time horizons. PMID:27547071
A two-phase multi-physics model for simulating plasma discharge in liquids
NASA Astrophysics Data System (ADS)
Charchi, Ali; Farouk, Tanvir
2014-10-01
Plasma discharge in liquids has been a topic of interest in recent years, both in terms of fundamental science and practical applications. Even though a large amount of experimental work has been reported in the literature, modeling and simulation studies of plasma discharges in liquids are limited. To obtain a more detailed model of plasma discharge in the liquid phase, a two-phase multiphysics model has been developed. The model resolves both the liquid and gas phases and solves the mass and momentum conservation of the averaged species in both phases. The fluid motion equation considers surface tension, the electric field force, and the gravitational force. To calculate the electric force, the charge conservation equations for positive and negative ions and for electrons are solved. The Poisson equation is solved at each time step to obtain a self-consistent electric field. The resulting electric field and charge distribution are used to calculate the electric body force exerted on the fluid. Simulations show that the coupled effect of plasma, surface, and gravitational forces results in a time-evolving bubble shape. The influence of different plasma parameters on the bubble dynamics is studied.
Fovargue, Daniel E.; Mitran, Sorin; Smith, Nathan B.; Sankin, Georgy N.; Simmons, Walter N.; Zhong, Pei
2013-01-01
A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work, the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations, and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework, which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone, the linear elasticity equations incorporate a simple damage model. PMID:23927200
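The Tait equation of state mentioned above closes the Euler equations for the water phase; in a common form, (p + B)/(p0 + B) = (rho/rho0)^gamma. The sketch below uses frequently quoted constants for water (assumed here, not taken from the paper) and recovers the familiar sound speed of roughly 1.5 km/s at reference conditions.

```python
# Tait equation of state for water: (p + B)/(p0 + B) = (rho/rho0)^gamma.
# Constants are commonly quoted values for water, assumed for illustration.
RHO0 = 1000.0      # reference density, kg/m^3
P0 = 101325.0      # reference pressure, Pa
GAMMA = 7.15       # Tait exponent
B = 3.31e8         # Tait stiffness, Pa

def tait_pressure(rho):
    return (P0 + B) * (rho / RHO0) ** GAMMA - B

def sound_speed(rho):
    """c^2 = gamma * (p + B) / rho for the Tait EOS."""
    return (GAMMA * (tait_pressure(rho) + B) / rho) ** 0.5

p_ref = tait_pressure(RHO0)            # recovers p0 at reference density
c_ref = sound_speed(RHO0)              # ~1.5 km/s in water
p_compressed = tait_pressure(1050.0)   # 5% compression -> >100 MPa
```

The steep pressure response to small density changes is what makes shock formation in water so sensitive to the focused pulse amplitude.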
A multiphysics and multiscale model for low frequency electromagnetic direct-chill casting
NASA Astrophysics Data System (ADS)
Košnik, N.; Guštin, A. Z.; Mavrič, B.; Šarler, B.
2016-03-01
Simulation and control of macrosegregation, deformation, and grain size in low frequency electromagnetic (EM) direct-chill casting (LFEMC) is important for downstream processing. Accordingly, a multiphysics and multiscale model is developed for the solution of the Lorentz force, temperature, velocity, concentration, deformation, and grain structure of LFEMC-processed aluminum alloys, with a focus on axisymmetric billets. The mixture equations with the lever rule, a linearized phase diagram, and a stationary thermoelastic solid phase are assumed, together with the EM induction equation for the field imposed by the coil. An explicit diffuse approximate meshless solution procedure [1] is used for solving the EM field, and the explicit local radial basis function collocation method [2] is used for solving the coupled transport phenomena and thermomechanics fields. Pressure-velocity coupling is performed by the fractional step method [3]. The point automata method with a modified KGT model is used to estimate the grain structure [4] in a post-processing mode. Thermal, mechanical, EM, and grain structure outcomes of the model are demonstrated. The model enables systematic study of the complex influences of the process parameters, including the intensity and frequency of the electromagnetic field. The meshless solution framework, currently implementing the simplest physical models, will be further extended by including more sophisticated microsegregation and grain structure models, as well as a more realistic solid and solid-liquid phase rheology.
A self-taught artificial agent for multi-physics computational model personalization.
Neumann, Dominik; Mansi, Tommaso; Itu, Lucian; Georgescu, Bogdan; Kayvanpour, Elham; Sedaghat-Hamedani, Farbod; Amr, Ali; Haas, Jan; Katus, Hugo; Meder, Benjamin; Steidl, Stefan; Hornegger, Joachim; Comaniciu, Dorin
2016-12-01
Personalization is the process of fitting a model to patient data, a critical step towards the application of multi-physics computational models in clinical practice. Designing robust personalization algorithms is often a tedious, time-consuming, model- and data-specific process. We propose to use artificial intelligence concepts to learn this task, inspired by how human experts manually perform it. The problem is reformulated in terms of reinforcement learning. In an off-line phase, Vito, our self-taught artificial agent, learns a representative decision process model through exploration of the computational model: it learns how the model behaves under change of parameters. The agent then automatically learns an optimal strategy for on-line personalization. The algorithm is model-independent; applying it to a new model requires only adjusting a few hyper-parameters of the agent and defining the observations to match. Full knowledge of the model itself is not required. Vito was tested in a synthetic scenario, showing that it could learn how to optimize cost functions generically. Vito was then applied to the inverse problem of cardiac electrophysiology and the personalization of a whole-body circulation model. The results suggested that Vito could achieve equivalent, if not better, goodness of fit than standard methods, while being more robust (up to 11% higher success rates) and converging faster (up to seven times). Our artificial intelligence approach could thus make personalization algorithms generalizable and self-adaptable to any patient and any model.
NASA Astrophysics Data System (ADS)
Varghese, Julian
This research work has contributed in various ways to help develop a better understanding of textile composites and materials with complex microstructures in general. An instrumental part of this work was the development of an object-oriented framework that made it convenient to perform multiscale/multiphysics analyses of advanced materials with complex microstructures such as textile composites. In addition to the studies conducted in this work, this framework lays the groundwork for continued research of these materials. This framework enabled a detailed multiscale stress analysis of a woven DCB specimen that revealed the effect of the complex microstructure on the stress and strain energy release rate distribution along the crack front. In addition to implementing an oxidation model, the framework was also used to implement strategies that expedited the simulation of oxidation in textile composites so that it would take only a few hours. The simulation showed that the tow architecture played a significant role in the oxidation behavior in textile composites. Finally, a coupled diffusion/oxidation and damage progression analysis was implemented that was used to study the mechanical behavior of textile composites under mechanical loading as well as oxidation. A parametric study was performed to determine the effect of material properties and the number of plies in the laminate on its mechanical behavior. The analyses indicated a significant effect of the tow architecture and other parameters on the damage progression in the laminates.
Multiphysics modeling of two-phase film boiling within porous corrosion deposits
NASA Astrophysics Data System (ADS)
Jin, Miaomiao; Short, Michael
2016-07-01
Porous corrosion deposits on nuclear fuel cladding, known as CRUD, can cause multiple operational problems in light water reactors (LWRs). CRUD can accelerate corrosion of the fuel cladding, increase radiation fields (and hence worker exposure risk) once activated, and induce a downward axial power shift that unbalances the core power distribution. In order to facilitate a better understanding of CRUD's effects, such as localized high cladding surface temperatures related to accelerated corrosion rates, we describe an improved, fully-coupled, multiphysics model to simulate heat transfer, chemical reactions and transport, and two-phase fluid flow within these deposits. Our new model features a reformed assumption of 2D, two-phase film boiling within the CRUD, correcting earlier models' assumptions of single-phase coolant flow with wick boiling under high heat fluxes. This model helps to better explain observed experimental values of the effective CRUD thermal conductivity. Finally, we propose a more complete set of boiling regimes, or a more detailed mechanism, to explain recent CRUD deposition experiments by suggesting the new concept of double dryout specifically in thick porous media with boiling chimneys.
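A back-of-the-envelope check shows why the effective thermal conductivity of the deposit matters so much here (all numbers below are assumed round values, not the paper's): under pure conduction the superheat across a CRUD layer is dT = q'' * t / k_eff, so the cladding surface temperature is very sensitive to how boiling inside the porous deposit changes k_eff.

```python
q_flux = 1.0e6       # cladding heat flux, W/m^2 (assumed PWR-like value)
thickness = 50e-6    # CRUD layer thickness, m (assumed)
T_cool = 618.0       # coolant temperature at the wall, K (assumed)

for k_eff in (0.5, 1.0, 2.0):   # candidate effective conductivities, W/(m K)
    dT = q_flux * thickness / k_eff   # conduction-only superheat across the layer
    print(f"k_eff = {k_eff:3.1f} W/(m K): surface T ~ {T_cool + dT:5.1f} K")
```

A factor-of-four spread in k_eff moves the predicted surface temperature by tens of kelvin, which is the scale on which accelerated corrosion rates are decided.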
A novel medical image data-based multi-physics simulation platform for computational life sciences.
Neufeld, Esra; Szczerba, Dominik; Chavannes, Nicolas; Kuster, Niels
2013-04-06
Simulating and modelling complex biological systems in computational life sciences requires specialized software tools that can perform medical image data-based modelling, jointly visualize the data and computational results, and handle large, complex, realistic and often noisy anatomical models. The required novel solvers must provide the power to model the physics, biology and physiology of living tissue within the full complexity of the human anatomy (e.g. neuronal activity, perfusion and ultrasound propagation). A multi-physics simulation platform satisfying these requirements has been developed for applications including device development and optimization, safety assessment, basic research, and treatment planning. This simulation platform consists of detailed, parametrized anatomical models, a segmentation and meshing tool, a wide range of solvers and optimizers, a framework for the rapid development of specialized and parallelized finite element method solvers, a visualization toolkit-based visualization engine, a Python scripting interface for customized applications, a coupling framework, and more. Core components are cross-platform compatible and use open formats. Several examples of applications are presented: hyperthermia cancer treatment planning, tumour growth modelling, evaluating the magneto-haemodynamic effect as a biomarker and physics-based morphing of anatomical models.
NASA Astrophysics Data System (ADS)
Hagmeyer, Britta; Schütte, Julia; Böttger, Jan; Gebhardt, Rolf; Stelzle, Martin
2013-03-01
Replacing animal testing with in vitro cocultures of human cells is a long-term goal in pre-clinical drug tests used to gain reliable insight into drug-induced cell toxicity. However, current state-of-the-art 2D or 3D cell cultures aiming at mimicking human organs in vitro still lack organ-like morphology and perfusion and thus organ-like functions. To this end, microfluidic systems enable construction of cell culture devices which can be designed to more closely resemble the smallest functional unit of organs. Multiphysics simulations represent a powerful tool to study the various relevant physical phenomena and their impact on functionality inside microfluidic structures. This is particularly useful as it allows for assessment of system functions already during the design stage prior to actual chip fabrication. In the HepaChip®, dielectrophoretic forces are used to assemble human hepatocytes and human endothelial cells in liver sinusoid-like structures. Numerical simulations of flow distribution, shear stress, electrical fields and heat dissipation inside the cell assembly chambers as well as surface wetting and surface tension effects during filling of the microchannel network supported the design of this human-liver-on-chip microfluidic system for cell culture applications. Based on the device design resulting thereof, a prototype chip was injection-moulded in COP (cyclic olefin polymer). Functional hepatocyte and endothelial cell cocultures were established inside the HepaChip® showing excellent metabolic and secretory performance.
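The dielectrophoretic (DEP) assembly the chamber relies on is governed by textbook relations that are easy to sandbox before a full multiphysics run. In this sketch (cell and buffer properties are assumptions, not HepaChip parameters), the time-averaged DEP force on a spherical cell is F = 2*pi*r^3 * eps_m * Re[K(w)] * grad|E_rms|^2, so the sign of the Clausius-Mossotti factor K, built from the complex permittivities eps* = eps*eps0 - i*sigma/w, decides whether cells are pulled toward or pushed away from field maxima.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cm_factor(freq, eps_p, sig_p, eps_m, sig_m):
    """Real part of the Clausius-Mossotti factor at frequency freq (Hz)."""
    w = 2 * np.pi * freq
    ep = eps_p * EPS0 - 1j * sig_p / w   # complex permittivity of the particle
    em = eps_m * EPS0 - 1j * sig_m / w   # complex permittivity of the medium
    return ((ep - em) / (ep + 2 * em)).real

# Assumed values: a cell-like particle in a low-conductivity assembly buffer.
for f in (1e4, 1e6, 1e8, 1e9):
    k = cm_factor(f, eps_p=60, sig_p=0.5, eps_m=78, sig_m=0.01)
    print(f"{f:8.0e} Hz: Re[K] = {k:+.3f}")
```

At low frequency the conductivity contrast dominates and Re[K] is positive (trapping), while at sufficiently high frequency the permittivity contrast takes over and the sign flips; the operating frequency of a device is chosen with exactly this crossover in mind.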
Fovargue, Daniel E; Mitran, Sorin; Smith, Nathan B; Sankin, Georgy N; Simmons, Walter N; Zhong, Pei
2013-08-01
A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model.
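The Tait closure mentioned above can be made concrete. A minimal sketch with standard Tait constants for water (the solver's exact values may differ): pressure and sound speed follow from p(rho) = B*((rho/rho0)^n - 1) + p0 and c^2 = dp/drho = (n*B/rho0)*(rho/rho0)^(n-1), which is what makes water so stiff that the focused pulse steepens into a shock.

```python
B = 3.045e8                     # Pa, Tait stiffness for water
n = 7.15                        # Tait exponent
rho0, p0 = 998.0, 1.01325e5     # reference density (kg/m^3) and pressure (Pa)

def tait_pressure(rho):
    """Tait equation of state for water."""
    return B * ((rho / rho0) ** n - 1.0) + p0

def sound_speed(rho):
    """c = sqrt(dp/drho) for the Tait EOS."""
    return (n * B / rho0 * (rho / rho0) ** (n - 1)) ** 0.5

print(f"c(rho0)       = {sound_speed(rho0):.0f} m/s")
print(f"p at 2% comp. = {tait_pressure(1.02 * rho0)/1e6:.1f} MPa")
```

A mere 2% compression already produces tens of megapascals, the pressure scale relevant to lithotripter shock fronts.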
Multiphysics simulation of a microfluidic perfusion chamber for brain slice physiology.
Caicedo, Hector H; Hernandez, Maximiliano; Fall, Christopher P; Eddington, David T
2010-10-01
Understanding and optimizing fluid flows through in vitro microfluidic perfusion systems is essential in mimicking in vivo conditions for biological research. In a previous study a microfluidic brain slice device (microBSD) was developed for microscale electrophysiology investigations. The device consisted of a standard perfusion chamber bonded to a polydimethylsiloxane (PDMS) microchannel substrate. Our objective in this study is to characterize the flows through the microBSD by using multiphysics simulations of injections into a porous matrix to identify the optimal spacing of ports. Three-dimensional computational fluid dynamics (CFD) simulations are performed with CFD-ACE+ software to model, simulate, and assess the transport of soluble factors through the perfusion bath, the microchannels, and a material that mimics the porosity, permeability and tortuosity of brain tissue. Additionally, experimental soluble-factor transport through a brain slice is predicted by, and compared to, simulated fluid flow in a volume that represents a porous matrix material. The computational results are validated with fluorescent dye experiments.
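The "porous matrix that mimics brain tissue" enters such simulations mainly through a reduced effective diffusivity. A quick sketch using one common convention, D_eff = (eps/tau) * D_free (the porosity and tortuosity values are typical literature numbers, assumptions rather than the microBSD's measured properties):

```python
D_free = 7.6e-10   # m^2/s, free diffusivity of a small dye molecule (assumed)
eps = 0.2          # extracellular volume fraction typical of brain tissue (assumed)
tau = 1.6          # tortuosity (assumed)

D_eff = (eps / tau) * D_free          # hindered diffusivity inside the matrix
print(f"D_eff = {D_eff:.2e} m^2/s")

# Characteristic diffusion time over a 150-micrometre half-thickness of slice:
L = 150e-6
print(f"t ~ L^2 / D_eff = {L**2 / D_eff:.0f} s")
```

Diffusion times of minutes over sub-millimetre distances are precisely why the spacing of the delivery ports, and not only the channel flow rates, controls what the cells in the slice actually experience.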
Muley, Pranjali D; Boldor, Dorin
2012-01-01
The use of advanced microwave technology for biodiesel production from vegetable oil is relatively new. Microwave dielectric heating increases process efficiency and reduces reaction time. Microwave heating depends on various factors such as material properties (dielectric and thermo-physical), frequency of operation and system design. Although lab-scale results are promising, it is important to study these parameters and optimize the process before scaling up. A numerical modeling approach can be applied to predict heating and temperature profiles, including at larger scales, so that the process can be studied and optimized without actually performing the experiments, reducing the amount of experimental work required. A basic numerical model of continuous electromagnetic heating of biodiesel precursors was developed. A finite element model was built in COMSOL Multiphysics 4.2 by coupling the electromagnetic problem with the fluid flow and heat transfer problems. The chemical reaction was not taken into account. Material dielectric properties were obtained experimentally, while the thermal properties were obtained from the literature (all properties were temperature dependent). The model was tested at two power levels, 4000 W and 4700 W, at a constant flow rate of 840 mL/min. The electric field, electromagnetic power density and temperature profiles were studied. The resulting temperature profiles were validated by comparison with temperatures measured at specific locations in the experiment. The results obtained were in good agreement with the experimental data.
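Two quick order-of-magnitude checks are worth doing before building the full coupled model (all property values below are assumptions, not the paper's measurements). First, the volumetric dielectric heating rate, Q = 2*pi*f*eps0*eps''*|E|^2, fixes how strongly a given field heats the oil; second, a bulk energy balance, dT = eta*P/(mdot*cp), bounds the temperature rise achievable at the stated flow rate.

```python
import math

EPS0 = 8.854e-12
f = 915e6            # Hz, a common industrial microwave frequency (assumed)
eps_loss = 0.35      # dielectric loss factor of the oil blend (assumed)
E = 2.0e4            # V/m, representative local field magnitude (assumed)
Q = 2 * math.pi * f * EPS0 * eps_loss * E**2     # volumetric heating, W/m^3
print(f"Q  = {Q/1e6:.2f} MW/m^3")

P, eta = 4000.0, 0.8           # applied power (W) and absorbed fraction (assumed)
mdot = 840e-6 / 60 * 920.0     # 840 mL/min of oil at ~920 kg/m^3 -> kg/s
cp = 2000.0                    # J/(kg K), vegetable oil (assumed)
dT = eta * P / (mdot * cp)     # bulk temperature rise of the stream
print(f"dT = {dT:.0f} K")
```

If the balance predicted a rise far from the target reaction temperature, no amount of cavity redesign in the finite element model would fix it; these scalars bracket the design space the simulation then refines.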
A liquid metal-based structurally embedded vascular antenna: I. Concept and multiphysical modeling
NASA Astrophysics Data System (ADS)
Hartl, D. J.; Frank, G. J.; Huff, G. H.; Baur, J. W.
2017-02-01
This work proposes a new concept for a reconfigurable structurally embedded vascular antenna (SEVA). The work builds on ongoing research of structurally embedded microvascular systems in laminated structures for thermal transport and self-healing and on studies of non-toxic liquid metals for reconfigurable electronics. In the example design, liquid metal-filled channels in a laminated composite act as radiating elements for a high-power planar zig-zag wire log periodic dipole antenna. Flow of liquid metal through the channels is used to limit the temperature of the composite in which the antenna is embedded. A multiphysics engineering model of the transmitting antenna is formulated that couples the electromagnetic, fluid, thermal, and mechanical responses. In part 1 of this two-part work, it is shown that the liquid metal antenna is highly reconfigurable in terms of its electromagnetic response and that dissipated thermal energy generated during high power operation can be offset by the action of circulating or cyclically replacing the liquid metal such that heat is continuously removed from the system. In fact, the SEVA can potentially outperform traditional copper-based antennas in high-power operational configurations. The coupled engineering model is implemented in an automated framework and a design of experiment study is performed to quantify first-order design trade-offs in this multifunctional structure. More rigorous design optimization is addressed in part 2.
The Data Transfer Kit: A geometric rendezvous-based tool for multiphysics data transfer
Slattery, S. R.; Wilson, P. P. H.; Pawlowski, R. P.
2013-07-01
The Data Transfer Kit (DTK) is a software library designed to provide parallel data transfer services for arbitrary physics components based on the concept of geometric rendezvous. The rendezvous algorithm provides a means to geometrically correlate two geometric domains that may be arbitrarily decomposed in a parallel simulation. By repartitioning both domains such that they have the same geometric domain on each parallel process, efficient and load-balanced search operations and data transfer can be performed at a desirable algorithmic time complexity, with low communication overhead relative to other types of mapping algorithms. With the increased development efforts in multiphysics simulation and other multiple-mesh and geometry problems, generating parallel topology maps for transferring fields and other data between geometric domains is a common operation. The algorithms used to generate parallel topology maps based on the concept of geometric rendezvous, as implemented in DTK, are described with an example using a conjugate heat transfer calculation and thermal coupling with a neutronics code. In addition, we provide the results of initial scaling studies performed on the Jaguar Cray XK6 system at Oak Ridge National Laboratory for a worst-case-scenario problem in terms of algorithmic complexity, which show good scaling on O(1 × 10^4) cores for topology map generation and excellent scaling on O(1 × 10^5) cores for the data transfer operation, with meshes of O(1 × 10^9) elements.
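The rendezvous idea can be illustrated serially with a toy (this is not the DTK API): two codes hold differently decomposed point sets over the same domain, and binning both into a common spatial partition lets each bin pair its targets with nearby sources locally, instead of every target searching every source.

```python
from collections import defaultdict

def rendezvous_bin(points, nbins, lo=0.0, hi=1.0):
    """Assign each (id, x) point to the rendezvous partition owning x."""
    bins = defaultdict(list)
    for pid, x in points:
        b = min(int((x - lo) / (hi - lo) * nbins), nbins - 1)
        bins[b].append((pid, x))
    return bins

# Source nodes (code A) and target nodes (code B), arbitrary decompositions.
source = [("s0", 0.05), ("s1", 0.40), ("s2", 0.55), ("s3", 0.90)]
target = [("t0", 0.10), ("t1", 0.52), ("t2", 0.95)]

src_bins, tgt_bins = rendezvous_bin(source, 4), rendezvous_bin(target, 4)

# Inside each rendezvous bin, map every target to its nearest local source.
topology_map = {}
for b, tgts in tgt_bins.items():
    candidates = src_bins.get(b, source)   # fall back to a global search if empty
    for tid, tx in tgts:
        topology_map[tid] = min(candidates, key=lambda s: abs(s[1] - tx))[0]
print(topology_map)
```

In the parallel setting each "bin" is a rendezvous process, so both the search and the resulting communication pattern stay local and load balanced, which is the property behind the scaling results quoted above.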
An Object-Oriented Finite Element Framework for Multiphysics Phase Field Simulations
Michael R Tonks; Derek R Gaston; Paul C Millett; David Andrs; Paul Talbot
2012-01-01
The phase field approach is a powerful and popular method for modeling microstructure evolution. In this work, advanced numerical tools are used to create a phase field framework that facilitates rapid model development. This framework, called MARMOT, is based on Idaho National Laboratory's finite element Multiphysics Object-Oriented Simulation Environment. In MARMOT, the system of phase field partial differential equations (PDEs) is solved simultaneously with PDEs describing additional physics, such as solid mechanics and heat conduction, using the Jacobian-Free Newton-Krylov method. An object-oriented architecture is created by taking advantage of commonalities in phase field models to facilitate the development of new models with very little written code. In addition, MARMOT provides access to mesh and time step adaptivity, reducing the cost of performing simulations with large disparities in both spatial and temporal scales. In this work, phase separation simulations are used to show the numerical performance of MARMOT. Deformation-induced grain growth and void growth simulations are included to demonstrate the multiphysics capability.
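To make the kind of PDE concrete, here is a minimal explicit Allen-Cahn step in 1D (MARMOT itself solves coupled systems implicitly with Jacobian-Free Newton-Krylov; this forward-Euler toy only shows the equation's structure): d(eta)/dt = -L*(dF/deta - kappa*laplacian(eta)) with the double-well free energy F = eta^2 * (1 - eta)^2.

```python
import numpy as np

n, dx, dt = 100, 1.0, 0.05
L_mob, kappa = 1.0, 5.0            # mobility and gradient energy coefficient
x = np.arange(n) * dx
eta = (x > n * dx / 2).astype(float)   # sharp initial interface (periodic domain)

for _ in range(2000):
    lap = (np.roll(eta, 1) - 2 * eta + np.roll(eta, -1)) / dx**2
    dfdeta = 2 * eta * (1 - eta) * (1 - 2 * eta)      # derivative of the double well
    eta += dt * (-L_mob) * (dfdeta - kappa * lap)

# The sharp step relaxes toward the diffuse tanh-like equilibrium profile.
width = np.sum((eta > 0.1) & (eta < 0.9)) * dx
print(f"total diffuse interface width ~ {width:.0f} grid units")
```

Even this scalar toy hints at the stiffness issue the abstract addresses: resolving the thin diffuse interface forces fine meshes and small explicit steps, which is why adaptivity and implicit JFNK solves pay off.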
A multi-physical model for charge and mass transport in a flexible ionic polymer sensor
NASA Astrophysics Data System (ADS)
Zhu, Zicai; Asaka, Kinji; Takagi, Kentaro; Aabloo, Alvo; Horiuchi, Tetsuya
2016-04-01
An ionic polymer material can generate an electrical potential and function as a bio-sensor under non-uniform deformation. Ionic polymer-metal composite (IPMC) is a typical flexible ionic polymer sensor material. A multi-physical sensing model is first presented, based on the same physical equations as the IPMC actuator model we obtained previously. Under an applied bending deformation, water and cations immediately migrate toward the outer electrode. The redistribution of cations causes an electrical potential difference between the two electrodes, which in turn strongly restrains further cation migration. The migrated cations then move back toward the inner electrode under concentration diffusion, leading to a relaxation of the electrical potential. Throughout the sensing process, the transport and redistribution of charge and mass along the thickness direction are revealed by numerical analysis. The sensing process is the reverse of the physical process of actuation; however, its transport properties are quite different from those of the latter. Moreover, the effective dielectric constant of IPMC, which is related to the morphology of the electrode-ionic polymer interface, is shown to have little influence on the sensing amplitude. These conclusions are significant for the design of ionic polymer sensing materials.
A multiphysics microstructure-resolved model for silicon anode lithium-ion batteries
NASA Astrophysics Data System (ADS)
Wang, Miao; Xiao, Xinran; Huang, Xiaosong
2017-04-01
Silicon (Si) is one of the most promising next generation anode materials for lithium-ion batteries (LIBs), but the use of Si in LIBs has been rather limited. The main challenge is its large volume change (up to 300%) during battery cycling. This can lead to the fracture of Si, failure at the interfaces between electrode components, and large dimensional change on the cell level. To optimize the Si electrode/battery design, a model that considers the interactions of different cell components is needed. This paper presents the development of a multiphysics microstructure-resolved model (MRM) for LIB cells with a-Si anode. The model considered the electrochemical reactions, Li transports in electrolyte and electrodes, dimensional changes and stresses, property evolution with the structure, and the coupling relationships. Important model parameters, such as the diffusivity, reaction rate constant, and apparent transfer coefficient, were determined by correlating the simulation results to experiments. The model was validated with experimental results in the literature. The use of this model was demonstrated in a parameter study of Si nanowall|Li cells. The specific and volumetric capacities of the cell as a function of the size, length/size ratio, spacing of the nanostructure, and Li+ concentration in electrolyte were investigated.
Partitioned coupling strategies for multi-physically coupled radiative heat transfer problems
Wendt, Gunnar; Erbts, Patrick; Düster, Alexander
2015-11-01
This article aims to propose new aspects concerning a partitioned solution strategy for multi-physically coupled fields, including the physics of thermal radiation. In particular, we focus on the partitioned treatment of electro-thermo-mechanical problems with an additional, fourth thermal radiation field. One of the main goals is to take advantage of the flexibility of the partitioned approach to enable combinations of different simulation software and solvers. Within the frame of this article, we limit ourselves to the case of nonlinear thermoelasticity at finite strains, using temperature-dependent material parameters. For the thermal radiation field, diffuse radiating surfaces and gray participating media are assumed. Moreover, we present a robust and fast partitioned coupling strategy for the four-field problem. The stability and efficiency of the implicit coupling algorithm are improved by drawing on several methods to stabilize and accelerate the convergence. To review the effectiveness and advantages of the additional thermal radiation field, several numerical examples are considered to study the proposed algorithm. In particular, we focus on an industrial application, namely the electro-thermo-mechanical modeling of the field-assisted sintering technology.
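One classic stabilization used in implicit partitioned coupling loops, and a plausible member of the "several methods" the abstract mentions, is Aitken dynamic relaxation: the fixed-point iteration x = g(x) between two field solvers is under-relaxed, and the relaxation factor is adapted from successive interface residuals. A minimal scalar sketch (illustrative, not the paper's algorithm):

```python
def solve_partitioned(g, x0, omega0=0.5, tol=1e-10, itmax=100):
    """Fixed-point iteration x = g(x) with Aitken dynamic relaxation."""
    x, omega, r_old = x0, omega0, None
    for it in range(itmax):
        r = g(x) - x                          # interface residual
        if abs(r) < tol:
            return x, it
        if r_old is not None:                 # Aitken update of the factor
            omega = -omega * r_old / (r - r_old)
        x, r_old = x + omega * r, r
    return x, itmax

# Toy "two-field" fixed point whose plain iteration diverges (|g'| = 1.8 > 1).
g = lambda x: 1.0 - 1.8 * (x - 1.0)           # fixed point at x = 1
x, iters = solve_partitioned(g, x0=0.0)
print(x, iters)
```

The un-relaxed iteration diverges for this contraction-violating g, while the Aitken-accelerated loop converges in a couple of iterations, which is exactly the robustness-plus-speed trade the partitioned strategy needs.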
DAG Software Architectures for Multi-Scale Multi-Physics Problems at Petascale and Beyond
NASA Astrophysics Data System (ADS)
Berzins, Martin
2015-03-01
The challenge of computations at Petascale and beyond is to enable efficient calculations on hundreds of thousands of cores or on large numbers of GPUs or Intel Xeon Phis. An important methodology for achieving this is at present thought to be asynchronous task-based parallelism. The success of this approach will be demonstrated using the Uintah software framework for the solution of coupled fluid-structure interaction problems with chemical reactions. The layered approach of this software makes it possible for the user to specify the physical problems without writing parallel code, and for that specification to be translated into a parallel set of tasks. These tasks are executed by a runtime system that schedules them asynchronously and sometimes out of order. The scalability and portability of this approach will be demonstrated using examples from large-scale combustion problems, industrial detonations and multi-scale, multi-physics models. The challenges of scaling such calculations to the next generations of leadership-class computers (with more than a hundred petaflops) will be discussed. Thanks to NSF, XSEDE, DOE NNSA, DOE NETL, DOE ALCC and DOE INCITE.
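The task-graph execution model behind Uintah-style runtimes can be sketched serially (a toy illustration, not Uintah code): tasks declare only their data dependencies, and any task whose inputs are satisfied may run in whatever order it becomes ready, with no global bulk-synchronous phase.

```python
from collections import deque

def run_dag(tasks, deps):
    """tasks: name -> callable; deps: name -> set of prerequisite names."""
    remaining = {t: set(d) for t, d in deps.items()}
    dependents = {t: [u for u, d in deps.items() if t in d] for t in tasks}
    ready = deque(t for t, d in remaining.items() if not d)
    order = []
    while ready:
        t = ready.popleft()        # a real runtime pops from many worker threads
        tasks[t]()
        order.append(t)
        for u in dependents[t]:    # releasing t may make dependents ready
            remaining[u].discard(t)
            if not remaining[u]:
                ready.append(u)
    return order

log = []
tasks = {name: (lambda n=name: log.append(n))
         for name in ("advect", "react", "pressure_solve", "update")}
deps = {"advect": set(), "react": set(),
        "pressure_solve": {"advect"}, "update": {"pressure_solve", "react"}}
order = run_dag(tasks, deps)
print(order)
```

Because "advect" and "react" share no dependency, a threaded runtime is free to run them concurrently or out of order; the user-level physics specification never mentions threads, MPI ranks, or GPUs, which is the layering the abstract describes.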
Fully-Implicit Reconstructed Discontinuous Galerkin Method for Stiff Multiphysics Problems
NASA Astrophysics Data System (ADS)
Nourgaliev, Robert
2015-11-01
A new reconstructed Discontinuous Galerkin (rDG) method, based on orthogonal basis/test functions, is developed for fluid flows on unstructured meshes. Orthogonality of the basis functions is essential for enabling robust and efficient fully-implicit Newton-Krylov-based time integration. The method is designed for generic partial differential equations, including transient, hyperbolic, parabolic or elliptic operators, which arise in many multiphysics problems. We demonstrate the method's capabilities for solving compressible fluid-solid systems (in the low Mach number limit) with phase change (melting/solidification), as motivated by applications in additive manufacturing. We focus on the method's accuracy (in both space and time), as well as on robustness and the solvability of the systems of linear equations involved in the linearization steps of Newton-based methods. The performance of the developed method is investigated for highly stiff problems with melting/solidification, emphasizing the advantages of the tight coupling of the mass, momentum and energy conservation equations, as well as of the orthogonality of the basis functions, which leads to better conditioning of the underlying (approximate) Jacobian matrices and rapid convergence of the Krylov-based linear solver. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, and funded by the LDRD at LLNL under project tracking code 13-SI-002.
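The conditioning benefit of an orthogonal basis is easy to demonstrate in isolation (an illustration of the general principle, not the paper's code): with Legendre polynomials on [-1, 1], the element mass matrix M_ij = integral(P_i * P_j) is diagonal and perfectly behaved, whereas a monomial basis produces a dense, Hilbert-like, ill-conditioned matrix.

```python
import numpy as np

deg = 4
xq, wq = np.polynomial.legendre.leggauss(10)     # quadrature exact for these products

def mass_matrix(basis_vals):
    """M_ij = sum_q w_q * b_i(x_q) * b_j(x_q)."""
    return basis_vals @ np.diag(wq) @ basis_vals.T

legendre = np.array([np.polynomial.legendre.Legendre.basis(i)(xq)
                     for i in range(deg + 1)])
monomial = np.array([xq**i for i in range(deg + 1)])

M_leg, M_mono = mass_matrix(legendre), mass_matrix(monomial)
off_diag = np.max(np.abs(M_leg - np.diag(np.diag(M_leg))))
print(f"Legendre off-diagonal max: {off_diag:.1e}")
print(f"cond(Legendre) = {np.linalg.cond(M_leg):.1f}, "
      f"cond(monomial) = {np.linalg.cond(M_mono):.1f}")
```

In an implicit Newton-Krylov solve this difference propagates into the (approximate) Jacobian, so the orthogonal choice directly buys Krylov iterations, as the abstract argues.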
NASA Astrophysics Data System (ADS)
Gallien, B.; Albaric, M.; Duffar, T.; Kakimoto, K.; M'Hamdi, M.
2017-01-01
The elaboration of silicon ingots for photovoltaic applications in a directional solidification furnace leads to the formation of dislocations, mainly due to thermoelastic stresses, which impact the photovoltaic conversion rate. Several research teams have created numerical simulation models using home-made software in order to study dislocation multiplication and to predict the dislocation density and residual stresses inside ingots after elaboration. In this study, the commercial software Comsol-Multiphysics® is used to calculate the evolution of the dislocation density during ingot solidification and cooling. The thermo-elastic stress, due to the temperature field inside the ingot during elaboration, is linked to the evolution of the dislocation density by the Alexander and Haasen model (A&H model). The purpose of this study is to show the relevance of commercial software for predicting dislocation density in ingots. In a first approach, the A&H physical model is introduced for a 2D axisymmetric geometry. After a short introduction, the modification of the Comsol® software to include the A&H equations is presented. The numerical model calculates the dislocation density and plastic stress continuously during ingot solidification and cooling. The results of this model are then compared to home-made simulations created by the teams at Kyushu University and NTNU, as well as to the characterization of a silicon ingot elaborated in a gradient freeze furnace. Both comparisons show the relevance of using a commercial code such as Comsol® to predict dislocation multiplication in a silicon ingot during elaboration.
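The structure of the Alexander-Haasen (A&H) coupling can be sketched at a single material point (all constants below are illustrative placeholders, not calibrated silicon values): the dislocation density N multiplies at a rate set by the thermally activated dislocation velocity and the effective stress, which strain hardening reduces as N grows:
v = k0 * tau_eff^m * exp(-Q/(kB*T)), tau_eff = max(tau - A*sqrt(N), 0), dN/dt = K * tau_eff * v * N.

```python
import math

kB = 8.617e-5                     # Boltzmann constant, eV/K
k0, m, Q = 1.0e-4, 1.0, 2.2       # velocity prefactor, stress exponent, barrier (eV) -- assumed
K, A = 3.1e-4, 2.0                # multiplication and hardening constants -- assumed
T, tau = 1300.0, 10.0e6           # temperature (K) and applied stress (Pa) -- assumed

N, dt = 1.0e6, 1.0                # initial dislocation density (m^-2), time step (s)
for _ in range(600):              # explicit integration over 600 s at fixed T, tau
    tau_eff = max(tau - A * math.sqrt(N), 0.0)
    v = k0 * tau_eff**m * math.exp(-Q / (kB * T))
    N += dt * K * tau_eff * v * N
print(f"N after 600 s: {N:.3e} m^-2")
```

While hardening is still weak the growth is near-exponential; in a full simulation N feeds back into the plastic strain rate, and the temperature field supplied by the solidification model drives the Arrhenius factor, which is exactly the coupling implemented in the Comsol® model above.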
Multiscale Multiphysics Caprock Seal Analysis: A Case Study of the Farnsworth Unit, Texas, USA
NASA Astrophysics Data System (ADS)
Heath, J. E.; Dewers, T. A.; Mozley, P.
2015-12-01
Caprock sealing behavior depends on coupled processes that operate over a variety of length and time scales. Capillary sealing behavior depends on nanoscale pore throats and interfacial fluid properties. Larger-scale sedimentary architecture, fractures, and faults may govern properties of potential "seal-bypass" systems. We present the multiscale multiphysics investigation of sealing integrity of the caprock system that overlies the Morrow Sandstone reservoir, Farnsworth Unit, Texas. The Morrow Sandstone is the target injection unit for an on-going combined enhanced oil recovery-CO2 storage project by the Southwest Regional Partnership on Carbon Sequestration (SWP). Methods include small-to-large scale measurement techniques, including: focused ion beam-scanning electron microscopy; laser scanning confocal microscopy; electron and optical petrography; core examinations of sedimentary architecture and fractures; geomechanical testing; and a noble gas profile through sealing lithologies into the reservoir, as preserved from fresh core. The combined data set is used as part of a performance assessment methodology. The authors gratefully acknowledge the U.S. Department of Energy's (DOE) National Energy Technology Laboratory for sponsoring this project through the SWP under Award No. DE-FC26-05NT42591. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
NASA Astrophysics Data System (ADS)
Yang, Xiaobin; Li, Xiuhong; He, Yafeng; Wang, Xiaojun; Xu, Bo
2017-04-01
A multiphysics model for the numerical computation of the stresses, trapped field and temperature distribution of an infinitely long superconducting cylinder is proposed, based on which the stresses, including the thermal stresses and the mechanical stresses due to the Lorentz force, and the trapped fields in a superconductor subjected to pulsed magnetic fields are analyzed. By comparing the results under pulsed magnetic fields with different pulse durations, it is found that both the mechanical stress due to the electromagnetic force and the thermal stress due to the temperature gradient contribute to the total stress level in the superconductor. For pulsed magnetic fields with short durations, the thermal stress is the dominant contribution to the total stress, because the heat generated by AC loss builds up a significant temperature gradient over such short durations. For a pulsed field with a long duration, however, the gradients of temperature and flux, as well as the maximal tensile stress, are much smaller. These results are meaningful for the design and manufacture of superconducting permanent magnets.
NASA Astrophysics Data System (ADS)
Ma, Z.; Hou, Z.; Zang, X.
2015-09-01
As a large-scale flexible inflatable structure with an inner lifting-gas volume of several hundred thousand cubic meters, a stratospheric airship has structural performance that depends strongly on the thermal behavior of its inner gas. During floating flight, the day-night variation of the combined thermal conditions leads to fluctuations of the flow field inside the airship, which remarkably affect the pressure acting on the skin and the structural safety of the stratospheric airship. According to the multi-physics coupling mechanism described above, a numerical procedure for the structural safety analysis of stratospheric airships is developed, integrating a thermal model, a CFD model, a finite element code and a criterion of structural strength. Based on these computational models, the distributions of the deformations and stresses of the skin are calculated as day-night time varies. The effects of load conditions and structural configurations on the structural safety of stratospheric airships in the floating condition are evaluated. The numerical results can serve as a reference for the structural design of stratospheric airships.
Development of high-fidelity multiphysics system for light water reactor analysis
NASA Astrophysics Data System (ADS)
Magedanz, Jeffrey W.
There has been a tendency in recent years toward greater heterogeneity in reactor cores, due to the use of mixed-oxide (MOX) fuel, burnable absorbers, and longer cycles with consequently higher fuel burnup. The resulting asymmetry of the neutron flux and energy spectrum between regions with different compositions causes a need to account for the directional dependence of the neutron flux, instead of the traditional diffusion approximation. Furthermore, the presence of both MOX and high-burnup fuel in the core increases the complexity of the heat conduction. The heat transfer properties of the fuel pellet change with irradiation, and the thermal and mechanical expansion of the pellet and cladding strongly affect the size of the gap between them, and its consequent thermal resistance. These operational tendencies require higher fidelity multi-physics modeling capabilities, and this need is addressed by the developments performed within this PhD research. The dissertation describes the development of a High-Fidelity Multi-Physics System for Light Water Reactor Analysis. It consists of three coupled codes -- CTF for Thermal Hydraulics, TORT-TD for Neutron Kinetics, and FRAPTRAN for Fuel Performance. It is meant to address these modeling challenges in three ways: (1) by resolving the state of the system at the level of each fuel pin, rather than homogenizing entire fuel assemblies, (2) by using the multi-group Discrete Ordinates method to account for the directional dependence of the neutron flux, and (3) by using a fuel-performance code, rather than a Thermal Hydraulics code's simplified fuel model, to account for the material behavior of the fuel and its feedback to the hydraulic and neutronic behavior of the system. While the first two are improvements, the third, the use of a fuel-performance code for feedback, constitutes an innovation in this PhD project. Also important to this work is the manner in which such coupling is written. While coupling involves combining
Towards a multi-physics modelling framework for thrombolysis under the influence of blood flow.
Piebalgs, Andris; Xu, X Yun
2015-12-06
Thrombolytic therapy is an effective means of treating thromboembolic diseases but can also give rise to life-threatening side effects. The infusion of a high drug concentration can provoke internal bleeding, while an insufficient dose can lead to artery reocclusion. It is hoped that mathematical modelling of the process of clot lysis can lead to a better understanding and improvement of thrombolytic therapy. To this end, a multi-physics continuum model has been developed to simulate the dissolution of a clot over time upon the addition of tissue plasminogen activator (tPA). The transport of tPA and other lytic proteins is modelled by a set of reaction-diffusion-convection equations, while blood flow is described by volume-averaged continuity and momentum equations. The clot is modelled as a fibrous porous medium, with its properties determined as a function of the fibrin fibre radius and the voidage of the clot. A unique feature of the model is that it is capable of simulating the entire lytic process, from the initial phase of lysis of an occlusive thrombus (diffusion-limited transport), through the process of recanalization, to post-canalization thrombolysis under the influence of convective blood flow. The model has been used to examine the dissolution of a fully occluding clot in a simplified artery at different pressure drops. Our predicted lytic front velocities during the initial stage of lysis agree well with experimental and computational results reported by others. Following canalization, clot lysis patterns are strongly influenced by local flow patterns, which are symmetric at low pressure drops but asymmetric at higher pressure drops, the latter giving rise to larger recirculation regions and extended areas of intense drug accumulation.
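The reaction-diffusion-convection transport invoked above can be sketched as a one-dimensional explicit finite-difference update for a single lumped species. This is a schematic only: the coefficients and boundary values are invented, and the paper's full model couples several lytic proteins to volume-averaged blood flow.

```python
def advance(c, dx, dt, D, u, k):
    """One explicit step of dc/dt = D*d2c/dx2 - u*dc/dx - k*c
    (central diffusion, first-order upwind convection for u > 0)."""
    new = c[:]
    for i in range(1, len(c) - 1):
        diffusion = D * (c[i + 1] - 2.0 * c[i] + c[i - 1]) / dx ** 2
        convection = -u * (c[i] - c[i - 1]) / dx  # upwind difference
        reaction = -k * c[i]                      # consumption of the agent
        new[i] = c[i] + dt * (diffusion + convection + reaction)
    new[0] = 1.0          # fixed drug concentration at the inlet (Dirichlet)
    new[-1] = new[-2]     # zero-gradient outflow boundary
    return new

# drug front advancing into an initially drug-free domain
c = [0.0] * 50
c[0] = 1.0
for _ in range(400):
    c = advance(c, dx=0.1, dt=0.05, D=0.01, u=0.1, k=0.05)
```

The time step is chosen so that the explicit scheme stays stable and monotone (D*dt/dx² and u*dt/dx both well below their limits), which mirrors the diffusion-limited early phase the abstract describes.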
Multiphysics modeling of two-phase film boiling within porous corrosion deposits
Jin, Miaomiao; Short, Michael
2016-07-01
Porous corrosion deposits on nuclear fuel cladding, known as CRUD, can cause multiple operational problems in light water reactors (LWRs). CRUD can cause accelerated corrosion of the fuel cladding, increase radiation fields (and hence exposure risk to plant workers) once activated, and induce a downward axial power shift causing an imbalance in the core power distribution. To facilitate a better understanding of CRUD's effects, such as localized high cladding surface temperatures related to accelerated corrosion rates, we describe an improved, fully coupled, multiphysics model simulating heat transfer, chemical reactions and transport, and two-phase fluid flow within these deposits. Our new model features a reformed assumption of 2D, two-phase film boiling within the CRUD, correcting earlier models' assumptions of single-phase coolant flow with wick boiling under high heat fluxes. This model helps to better explain observed experimental values of the effective CRUD thermal conductivity. Finally, we propose a more complete set of boiling regimes, or a more detailed mechanism, to explain recent CRUD deposition experiments, by suggesting the new concept of double dryout, specifically in thick porous media with boiling chimneys.
Highlights:
• A two-phase model of CRUD's effects on fuel cladding is developed and improved.
• The model eliminates the formerly erroneous assumption of wick boiling.
• Higher fuel cladding temperatures are predicted when accounting for two-phase flow.
• Double peaks in thermal conductivity vs. heat flux in experiments are explained.
• A “double dryout” mechanism in CRUD is proposed based on the model and experiments.
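A quick sanity check on effective-conductivity values like those discussed above is that any porous composite must fall between the classical Wiener (series/parallel) mixing bounds. This is a generic bound, not the paper's boiling model, and the porosity and phase conductivities below are illustrative assumptions.

```python
def wiener_bounds(porosity, k_solid, k_fluid):
    """Series and parallel volume-fraction mixing bounds on the effective
    thermal conductivity of a two-phase porous medium."""
    k_parallel = porosity * k_fluid + (1.0 - porosity) * k_solid
    k_series = 1.0 / (porosity / k_fluid + (1.0 - porosity) / k_solid)
    return k_series, k_parallel  # any effective value lies between these

# illustrative values for a water-filled oxide deposit (assumed, W/m-K)
k_lo, k_hi = wiener_bounds(porosity=0.6, k_solid=8.6, k_fluid=0.68)
```

Measured effective conductivities outside such bounds signal that extra physics (e.g., the boiling chimneys the abstract invokes) is carrying heat, which is one reason simple conduction models fail for CRUD.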
Multiphysical Modeling of Transport Phenomena During Laser Welding of Dissimilar Steels
NASA Astrophysics Data System (ADS)
Métais, A.; Matteï, S.; Tomashchuk, I.; Gaied, S.
The success of new high-strength steels allows attaining equivalent performance at lower thicknesses, with significant weight reduction. Welding new combinations of steel grades requires development and control of joining processes. Thanks to its high precision and good flexibility, laser welding has become one of the most widely used processes for joining dissimilar welded blanks. Predicting the local chemical composition in the weld formed between dissimilar steels as a function of the welding parameters is essential, because the dilution rate and the distribution of alloying elements in the melted zone determine the final tensile strength of the weld. The goal of the present study is to create and validate a multiphysical numerical model of the mixing of dissimilar steels in the laser weld pool. A 3D model of heat transfer, turbulent flow, and transport of species provides a better understanding of diffusion and convective mixing in the laser weld pool. The present model predicts the weld geometry and element distribution. The model is based on a steady keyhole approximation and solved in quasi-stationary form to reduce the computation time. A turbulent flow formulation was applied to calculate the velocity field, and Fick's law for dilute species was used to simulate the transport of alloying elements in the weld pool. To validate the model, a number of experiments were performed: tests using pure 100 μm thick Ni foils as a tracer, and welds between manganese-rich and manganese-poor steels. SEM-EDX analysis of chemical composition was carried out to obtain quantitative maps of the Ni and Mn distributions in the melted zone. The simulation results are in good agreement with the experimental data.
NASA Astrophysics Data System (ADS)
Prabhakar, Sanjay; Melnik, Roderick; Bonilla, Luis L.
2013-06-01
The new contribution of this paper is to develop a cylindrical representation of an already known multiphysics model for embedded nanowire superlattices (NWSLs) of wurtzite structure, which includes a coupled, strain-dependent 8-band k·p Hamiltonian in cylindrical coordinates, and to investigate the influence of coupled piezo-electromechanical effects on the barrier localization and critical radius in such NWSLs. The coupled piezo-electromechanical model for semiconductor materials takes into account strain, piezoelectric effects, and spontaneous polarization. Based on the developed 3D model, the band structures of electrons (holes) obtained from modeling in Cartesian coordinates are in good agreement with the values obtained from our earlier 2D model in cylindrical coordinates. Several parameters, such as the lattice mismatch, piezoelectric fields, and valence and conduction band offsets at the heterojunction of the AlxGa1-xN/GaN superlattice, can be varied as functions of the Al mole fraction. When the band offsets at the AlxGa1-xN/GaN heterojunction are very small and the influence of the piezo-electromechanical effects is minimized, the barrier material can no longer be treated as an infinite potential well. In this situation, it is possible to visualize the penetration of the Bloch wave function into the barrier material, which provides an estimate of the critical radii of NWSLs. In this case, the NWSLs can act as inversion layers. Finally, we investigate the influence of the symmetry of square and cylindrical NWSLs on the band structures of electrons in the conduction band.
NASA Astrophysics Data System (ADS)
Slattery, Stuart R.
2016-02-01
In this paper we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. These scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
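The compactly supported radial-basis-function spline interpolation compared above can be sketched in one dimension. This is a schematic under assumed data, using the standard Wendland C2 kernel and a small dense solve; the paper's production algorithms are parallel, sparse, and operate on 3-D point clouds.

```python
def wendland_c2(r, h):
    """Wendland C2 compactly supported kernel: nonzero only for r < h."""
    q = r / h
    return (1.0 - q) ** 4 * (4.0 * q + 1.0) if q < 1.0 else 0.0

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_interpolant(xs, ys, h):
    """Spline-interpolation variant: solve Phi w = y, return an evaluator."""
    A = [[wendland_c2(abs(xi - xj), h) for xj in xs] for xi in xs]
    w = solve(A, ys)
    return lambda x: sum(wi * wendland_c2(abs(x - xi), h) for wi, xi in zip(w, xs))

# sample a smooth field at source points and transfer it to target points
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [x * x for x in xs]
f = rbf_interpolant(xs, ys, h=2.5)
```

Because the Wendland kernel is positive definite, the interpolation matrix is invertible and the interpolant reproduces the source data exactly at the nodes; the support radius h controls sparsity in the large-scale parallel setting.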
Multiphysics Modeling of Microwave Heating of a Frozen Heterogeneous Meal Rotating on a Turntable.
Pitchai, Krishnamoorthy; Chen, Jiajia; Birla, Sohan; Jones, David; Gonzalez, Ric; Subbiah, Jeyamkondan
2015-12-01
A 3-dimensional (3-D) multiphysics model was developed to understand the microwave heating process of a real heterogeneous food, a multilayered frozen lasagna. Near-perfect 3-D geometries of the food package and microwave oven were used. A multiphase porous-media model, combining the electromagnetic heat source with heat and mass transfer and incorporating the phase changes of melting and evaporation, was included in the finite element model. Discrete rotation of the food on the turntable was incorporated. The model simulated 6 min of microwave cooking of a 450 g frozen lasagna kept at the center of the rotating turntable in a 1200 W domestic oven. Temperature-dependent dielectric and thermal properties of the lasagna ingredients were measured and provided as inputs to the model. Simulated temperature profiles were compared with experimental temperature profiles obtained using a thermal imaging camera and fiber-optic sensors. The total moisture loss in the lasagna was predicted and compared with the experimental moisture loss during cooking. The simulated spatial temperature patterns predicted at the top layer were in good agreement with the corresponding patterns observed in thermal images. Predicted point temperature profiles at 6 different locations within the meal were compared with experimental temperature profiles, with root mean square error (RMSE) values ranging from 6.6 to 20.0 °C. The predicted total moisture loss matched well, with an RMSE value of 0.54 g. Different layers of food components showed considerably different heating performance. Food product developers can use this model to design food products by understanding the effects of the thickness, order, and material properties of each layer, as well as the packaging shape, on cooking performance.
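The RMSE metric used above to compare simulated and measured temperature traces is straightforward to compute; the temperature values below are hypothetical, not data from the study.

```python
import math

def rmse(predicted, observed):
    """Root mean square error between model output and measurements."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(observed))

# hypothetical point-temperature traces [deg C] at one sensor location
sim = [25.0, 40.0, 55.0, 68.0]
exp = [27.0, 38.0, 58.0, 70.0]
err = rmse(sim, exp)
```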
Jahan, Sharmin; Zhang, Qiang; Pratush, Amit; Xie, Haiyang; Xiao, Hua; Fan, Liuyin; Cao, Chengxi
2016-11-01
Presented herein is a novel headspace single-drop microextraction (HS-SDME) method based on a temperature gradient (TG) for on-site preconcentration of volatile and semivolatile samples. First, an inner vial cap was designed as a cooling device for the acceptor droplet in the HS-SDME unit to achieve fast and efficient microextraction. Second, for the first time, an in-vial TG was generated between the donor phase in a sample vial at 80 °C and the acceptor droplet under the inner vial cap, containing cooling liquid at -20 °C, for TG-HS-SDME. Third, a simple mathematical model and numerical simulations were developed using heat transfer in fluids, the Navier-Stokes equations, and mass-balance equations for condition optimization and dynamic illustration of the proposed extraction, based on COMSOL Multiphysics. Five chlorophenols (CPs) were selected as model analytes to validate the proposed method. The comparisons revealed that the simulated results were in good agreement with the quantitative experiments, verifying the design of TG-HS-SDME via the numerical simulation. Under optimum conditions, the extraction enrichments were improved from 302- to 388-fold within only 2 min, providing 3.5 to 4 times higher enrichment factors than a typical HS-SDME. The simulation indicated that these improvements in the extraction kinetics could be attributed to the applied temperature gap between the sample matrix and the acceptor droplet within the small volume of headspace. Additionally, the experiments demonstrated good linearity (0.03-100 μg/L, R² > 0.9986), low limits of detection (7-10 ng/L), and fair repeatability (<5.9% RSD, n = 6). All of the simulation and experimental results indicated the robustness, precision, and usefulness of TG-HS-SDME for trace analyses in a wide variety of environmental, pharmaceutical, food safety, and forensic samples.
A Multiphysics Framework to Learn and Predict in Presence of Multiple Scales
NASA Astrophysics Data System (ADS)
Tomin, P.; Lunati, I.
2015-12-01
Modeling complex phenomena in the subsurface remains challenging due to the presence of multiple interacting scales, which can make it impossible to focus on purely macroscopic phenomena (relevant in most applications) and neglect the processes at the micro-scale. We present and discuss a general framework that allows us to deal with situations in which the lack of scale separation requires the combined use of different descriptions at different scales (for instance, a pore-scale description at the micro-scale and a Darcy-like description at the macro-scale) [1,2]. The method is based on conservation principles and constructs the macro-scale problem by numerical averaging of micro-scale balance equations. By employing spatiotemporal adaptive strategies, this approach can efficiently solve large-scale problems [2,3]. In addition, being based on a numerical volume-averaging paradigm, it offers a tool to illuminate how macroscopic equations emerge from microscopic processes, to better understand the meaning of microscopic quantities, and to investigate the validity of the assumptions routinely used to construct the macro-scale problems. [1] Tomin, P., and I. Lunati, A Hybrid Multiscale Method for Two-Phase Flow in Porous Media, Journal of Computational Physics, 250, 293-307, 2013. [2] Tomin, P., and I. Lunati, Local-global splitting and spatiotemporal-adaptive Multiscale Finite Volume Method, Journal of Computational Physics, 280, 214-231, 2015. [3] Tomin, P., and I. Lunati, Spatiotemporal adaptive multiphysics simulations of drainage-imbibition cycles, Computational Geosciences, 2015 (under review).
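The numerical volume-averaging step at the heart of the framework can be sketched as a volume-weighted mean of micro-scale quantities over one macro control volume; the saturation and volume values below are invented for illustration.

```python
def volume_average(values, volumes):
    """Macro-scale quantity as the volume-weighted average of the
    micro-scale values inside one macro control volume."""
    total = sum(volumes)
    return sum(v * w for v, w in zip(values, volumes)) / total

# micro-cells of one macro cell: pore-scale saturations and cell volumes
sat = [1.0, 0.8, 0.2, 0.0]
vol = [1.0, 1.0, 1.0, 1.0]
macro_sat = volume_average(sat, vol)  # = 0.5 for these equal-volume cells
```

In the hybrid method, the macro-scale balance equations are assembled from such averages rather than from an a-priori closure, which is what lets the approach test the validity of standard upscaling assumptions.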
NASA Astrophysics Data System (ADS)
Regenauer-Lieb, Klaus; Veveakis, Manolis; Poulet, Thomas; Paesold, Martin; Rosenbaum, Gideon; Weinberg, Roberto F.; Karrech, Ali
2015-10-01
We propose a new multi-physics, multi-scale Integrated Computational Materials Engineering framework for 'predictive' geodynamic simulations. A first multiscale application is presented that links our existing advanced material characterization methods, from the nanoscale through laboratory, field, and geodynamic scales, into a new rock simulation framework. The outcome of our example simulation is that the diachronous Australian intraplate orogenic events are found to be caused by one and the same process: the non-linear progression of a fundamental buckling instability of the Australian intraplate lithosphere subject to long-term compressive forces. We identify four major stages of the instability: (1) a long-wavelength elasto-visco-plastic flexure of the lithosphere without localized failure (first 50 Myrs of loading); (2) an incipient thrust on the central hinge of the model (50-90 Myrs); (3) a secondary and a tertiary thrust (90-100 Myrs), 200 km away on either side of the central thrust; (4) a progression of subsidiary thrusts advancing towards the central thrust (? Myrs). The model is corroborated by multiscale observations: nano-micro CT analysis of deformed samples in the central thrust, giving evidence of cavitation and creep fractures; mm-cm size veins of melt (pseudotachylite), evidence of intermittent shear heating events in the thrust; and the 1-10 km width of the thrust, known as the mylonitic Redbank shear zone, corresponding to the width of the steady-state solution, where shear heating on the thrust exactly balances heat diffusion.
A hierarchical multi-physics model for design of high toughness steels
NASA Astrophysics Data System (ADS)
Hao, Su; Moran, Brian; Kam Liu, Wing; Olson, Gregory B.
2003-05-01
In support of the computational design of high-toughness steels as hierarchically structured materials, a multiscale, multiphysics methodology is developed for a 'ductile fracture simulator.' At the nanometer scale, the method unites continuum mechanics with quantum physics, using first-principles calculations to predict the force-distance laws for interfacial separation with both normal and plastic sliding components. The predicted adhesion behavior is applied to the description of interfacial decohesion for both micron-scale primary inclusions governing primary void formation and submicron-scale secondary particles governing microvoid-based shear localization that accelerates primary void coalescence. Fine-scale deformation is described by a 'Particle Dynamics' method that extends the framework of molecular dynamics to multi-atom aggregates. This is combined with other meshfree and finite-element methods in two-level cell modeling to provide a hierarchical constitutive model for crack advance, combining conventional plasticity, microstructural damage, strain-gradient effects, and transformation plasticity from dispersed metastable austenite. Detailed results of a parallel experimental study of a commercial steel are used to calibrate the model at multiple scales. An initial application provides a Toughness-Strength-Adhesion diagram defining the relation among alloy strength, inclusion adhesion energy, and fracture toughness as an aid to microstructural design. The analysis introduces an approach to creative steel design that can be stated as the exploration of the effective connections among five key components: element selection, process design, micro/nanostructure optimization, desirable properties, and industrial performance, by virtue of innovations and inventions.
NASA Astrophysics Data System (ADS)
Yang, H.
2015-12-01
In coastal Southern California, variation in solar energy production is predominantly due to the presence of stratocumulus clouds (Sc), as they greatly attenuate surface solar irradiance and cover most distributed photovoltaic systems on summer mornings. Correct prediction of the spatial coverage and lifetime of coastal Sc is therefore vital to the accuracy of solar energy forecasts in California. In Weather Research and Forecasting (WRF) model simulations, underprediction of Sc inherent in the initial conditions directly leads to an underprediction of Sc in the resulting forecasts. Hence, preprocessing methods were developed to create initial conditions more consistent with observational data and to reduce spin-up time requirements. Mathiesen et al. (2014) previously developed a cloud data assimilation system to force WRF initial conditions to contain cloud liquid water based on CIMSS GOES Sounder cloud cover. The Well-mixed Preprocessor and Cloud Data Assimilation (WEMPPDA) package merges an initial guess of cloud liquid water content obtained from mixed-layer theory with assimilated CIMSS GOES Sounder cloud cover to more accurately represent the spatial coverage of Sc at initialization. The extent of Sc inland penetration is often constrained topographically; therefore, the low inversion base height (IBH) bias in NAM initial conditions decreases Sc inland penetration. The IBH package perturbs the initial IBH by the difference between the model IBH and the 12Z radiosonde measurement. The performance of these multi-initial-condition configurations was evaluated over June 2013 against SolarAnywhere satellite-derived surface irradiance data. Four configurations were run: 1) NAM initial conditions, 2) RAP initial conditions, 3) WEMPPDA applied to NAM, and 4) IBH applied to NAM. Both preprocessing methods showed significant improvement in the prediction of both the spatial coverage and lifetime of coastal Sc. The best performing configuration was then
Sadek, Khaled; Lueke, Jonathan; Moussa, Walied
2009-01-01
In this paper, the reliability of capacitive shunt RF MEMS switches is investigated using three-dimensional (3D) coupled multiphysics finite element (FE) analysis. The coupled-field analysis involved three consecutive multiphysics interactions. The first is a two-way sequential electromagnetic (EM)-thermal field coupling; the second is a one-way sequential thermal-structural field coupling; and the third is a two-way sequential structural-electrostatic field coupling. An automated substructuring algorithm was utilized to reduce the computational cost of the complicated coupled multiphysics FE analysis. The results of the substructured FE model with coupled-field analysis are shown to be in good agreement with the outcomes of previously published experimental and numerical studies. The current numerical results indicate that the pull-in voltage and the buckling temperature of the RF switch are functions of the microfabrication residual stress state, the switch operational frequency, and the surrounding packaging temperature. Furthermore, the results point out that by introducing proper mechanical approaches, such as corrugated switches and through-holes in the switch membrane, it is possible to achieve reliable pull-in voltages at various operating temperatures. The analysis also shows that by controlling the mean and gradient residual stresses generated during microfabrication, in conjunction with the proposed mechanical approaches, the power handling capability of RF MEMS switches can be increased over a wide range of operational frequencies. These design features of RF MEMS switches are of particular importance in applications where high RF power (frequencies above 10 GHz) and large temperature variations are expected, such as in satellites and airplane condition monitoring. PMID:22408490
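The pull-in voltage discussed above can be estimated, for an idealized parallel-plate electrostatic actuator, with the textbook relation V = sqrt(8*k*g^3 / (27*eps0*A)). This is the standard lumped approximation, not the paper's coupled FE model, and the spring constant, gap, and electrode area below are assumed values.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def pull_in_voltage(spring_k, gap, area):
    """Textbook parallel-plate pull-in estimate.

    spring_k: effective membrane stiffness [N/m]
    gap:      initial electrode gap [m]
    area:     electrode overlap area [m^2]
    """
    return math.sqrt(8.0 * spring_k * gap ** 3 / (27.0 * EPS0 * area))

# assumed values for a shunt-switch membrane: 10 N/m, 3 um gap, 100x100 um pad
v_pi = pull_in_voltage(spring_k=10.0, gap=3e-6, area=1e-8)  # roughly 30 V
```

Residual stress shifts the effective stiffness spring_k, which is the lumped-model view of why the abstract finds the pull-in voltage to be a function of the fabrication stress state.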
NASA Astrophysics Data System (ADS)
Cocheteau, N.; Maurel-Pantel, A.; Lebon, F.; Rosu, I.; Ait-Zaid, S.; Savin de Larclause, I.; Salaun, Y.
2014-06-01
Direct bonding is a well-known process. However, in order to use this process in space instrument fabrication, the mechanical strength of the bond needs to be quantified precisely. To improve bond strength, optimal process parameters are identified by studying the influence of annealing time, temperature, and roughness using three experimental methods: double shear, cleavage, and wedge tests. These parameters are chosen on the basis of an observed time-temperature equivalence. All results motivated the implementation of a multi-physics model to predict the mechanical behavior of the direct bonding interface.
Accurate Evaluation of Quantum Integrals
NASA Technical Reports Server (NTRS)
Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)
1995-01-01
Combining an appropriate finite difference method with Richardson extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Two important results are that error estimates are provided, and that one can extrapolate expectation values, rather than the wavefunctions, to obtain highly accurate values. We discuss the eigenvalues and the error growth in repeated Richardson extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to high accuracy.
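The Richardson extrapolation idea can be illustrated on a central-difference derivative, whose error expands in even powers of h, so one extrapolation step cancels the h² term and leaves O(h⁴) error. This is a generic sketch of the technique, not the authors' Schrödinger solver.

```python
import math

def central_diff(f, x, h):
    """Second-order central difference; error expands in even powers of h."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson(f, x, h):
    """One Richardson step: combine h and h/2 results to cancel the h^2
    error term, leaving an O(h^4) approximation."""
    d_h = central_diff(f, x, h)
    d_h2 = central_diff(f, x, h / 2.0)
    return (4.0 * d_h2 - d_h) / 3.0

# derivative of exp at 0 is exactly 1, so the errors are easy to see
h = 0.1
plain = central_diff(math.exp, 0.0, h)   # error ~ h^2/6, about 1.7e-3
extrap = richardson(math.exp, 0.0, h)    # error ~ h^4 scale, about 2e-7
```

The same "compute on two meshes, combine to kill the leading error term" pattern applies to eigenvalues and expectation values computed on a crude mesh, as the abstract describes.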
Design and Analysis of a New Hair Sensor for Multi-Physical Signal Measurement
Yang, Bo; Hu, Di; Wu, Lei
2016-01-01
A new hair sensor for multi-physical signal measurements, including acceleration, angular velocity, and air flow, is presented in this paper. The entire structure consists of a hair post, a torsional frame, and a resonant signal transducer. The hair post is utilized to sense and deliver the physical signals of the acceleration and the air flow rate. These physical signals are converted into frequency signals by the resonant transducer. The structure is optimized through finite element analysis. The simulation results demonstrate that the hair sensor has a frequency of 240 Hz in the first mode for acceleration or air flow sensing, 3115 Hz in the third and fourth modes for the resonant conversion, and 3467 Hz in the fifth and sixth modes for the angular velocity transformation. All of these frequencies exhibit a reasonable modal distribution and are separated from interference modes. The input-output analysis of the new hair sensor demonstrates that the scale factor of the acceleration is 12.35 Hz/g, the scale factor of the angular velocity is 0.404 nm/deg/s, and the sensitivity of the air flow is 1.075 Hz/(m/s)², which verifies the multifunction sensing characteristics of the hair sensor. In addition, structural optimization of the hair post is used to improve the sensitivity to the air flow rate and the acceleration. The analysis results illustrate that a hollow circular hair post can increase the sensitivity to air flow, and a II-shaped hair post can increase the sensitivity to acceleration. Moreover, the thermal analysis confirms that the frequency-difference scheme for the resonant transducer effectively eliminates temperature influences on the measurement accuracy. The air flow analysis indicates that increasing the surface area of the hair post significantly improves the efficiency of signal transmission. In summary, the structure of the new hair sensor is proved to be feasible by comprehensive
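The reported acceleration scale factor can be used directly to convert a measured frequency shift back into acceleration. The 12.35 Hz/g value comes from the abstract; the 24.7 Hz shift below is a hypothetical reading.

```python
SCALE_FACTOR_HZ_PER_G = 12.35  # acceleration scale factor from the abstract

def acceleration_g(freq_shift_hz):
    """Invert the linear scale factor: frequency shift [Hz] -> acceleration [g]."""
    return freq_shift_hz / SCALE_FACTOR_HZ_PER_G

a = acceleration_g(24.7)  # 24.7 Hz shift corresponds to 2.0 g
```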
NASA Astrophysics Data System (ADS)
Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.
2015-12-01
scalabilities, showing almost linear speedup against the number of processors up to over ten thousand cores. Generally, this allows us to perform coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million-cell grids in a practical time (e.g., less than a second per time step).
Design and Analysis of a New Hair Sensor for Multi-Physical Signal Measurement.
Yang, Bo; Hu, Di; Wu, Lei
2016-07-08
A new hair sensor for multi-physical signal measurements, including acceleration, angular velocity and air flow, is presented in this paper. The entire structure consists of a hair post, a torsional frame and a resonant signal transducer. The hair post is utilized to sense and deliver the physical signals of acceleration and air flow rate. The physical signals are converted into frequency signals by the resonant transducer. The structure is optimized through finite element analysis. The simulation results demonstrate that the hair sensor has a frequency of 240 Hz in the first mode for acceleration or air flow sensing, 3115 Hz in the third and fourth modes for the resonant conversion, and 3467 Hz in the fifth and sixth modes for the angular velocity transformation. All of these frequencies fall in a reasonable modal distribution and are well separated from interference modes. The input-output analysis of the new hair sensor demonstrates that the scale factor for acceleration is 12.35 Hz/g, the scale factor for angular velocity is 0.404 nm/deg/s and the sensitivity to air flow is 1.075 Hz/(m/s)², which verifies the multifunction sensing characteristics of the hair sensor. In addition, structural optimization of the hair post is used to improve the sensitivity to air flow rate and acceleration. The analysis results illustrate that a hollow circular hair post can increase the sensitivity to air flow and a II-shaped hair post can increase the sensitivity to acceleration. Moreover, the thermal analysis confirms that the frequency-difference scheme for the resonant transducer can largely eliminate the influence of temperature on measurement accuracy. The air flow analysis indicates that increasing the surface area of the hair post significantly improves the efficiency of signal transmission. In summary, the structure of the new hair sensor is proved to be feasible by comprehensive
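The quoted scale factors imply a simple read-out model. The following sketch (not the authors' code; the linear acceleration response and quadratic air-flow response are assumptions taken from the stated units, and the function names are ours) inverts a measured frequency shift back to the physical input:

```python
# Illustrative read-out sketch using the scale factors quoted in the abstract.
ACC_SCALE_HZ_PER_G = 12.35       # acceleration scale factor, Hz/g
FLOW_SCALE_HZ_PER_MPS2 = 1.075   # air-flow sensitivity, Hz/(m/s)^2

def acceleration_from_shift(df_hz):
    """Acceleration (g) from a resonator frequency shift (Hz), assuming linearity."""
    return df_hz / ACC_SCALE_HZ_PER_G

def airflow_from_shift(df_hz):
    """Air-flow speed (m/s); the stated sensitivity is quadratic in flow
    speed, so the inversion takes a square root."""
    return (df_hz / FLOW_SCALE_HZ_PER_MPS2) ** 0.5
```

A 24.7 Hz shift in the first mode would thus correspond to about 2 g, and a 4.3 Hz shift attributed to air flow to about 2 m/s.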
NASA Astrophysics Data System (ADS)
Poulet, Thomas; Paesold, Martin; Veveakis, Manolis
2017-03-01
Faults play a major role in many economically and environmentally important geological systems, ranging from impermeable seals in petroleum reservoirs to fluid pathways in ore-forming hydrothermal systems. Their behavior is therefore widely studied and fault mechanics is particularly focused on the mechanisms explaining their transient evolution. Single faults can change in time from seals to open channels as they become seismically active and various models have recently been presented to explain the driving forces responsible for such transitions. A model of particular interest is the multi-physics oscillator of Alevizos et al. (J Geophys Res Solid Earth 119(6), 4558-4582, 2014) which extends the traditional rate and state friction approach to rate and temperature-dependent ductile rocks, and has been successfully applied to explain spatial features of exposed thrusts as well as temporal evolutions of current subduction zones. In this contribution we implement that model in REDBACK, a parallel open-source multi-physics simulator developed to solve such geological instabilities in three dimensions. The resolution of the underlying system of equations in a tightly coupled manner allows REDBACK to capture appropriately the various theoretical regimes of the system, including the periodic and non-periodic instabilities. REDBACK can then be used to simulate the drastic permeability evolution in time of such systems, where nominally impermeable faults can sporadically become fluid pathways, with permeability increases of several orders of magnitude.
NASA Astrophysics Data System (ADS)
Zheng, Jiajia; Li, Yancheng; Li, Zhaochun; Wang, Jiong
2015-10-01
This paper presents multi-physics modeling of an MR absorber considering the magnetic hysteresis to capture the nonlinear relationship between the applied current and the generated force under impact loading. The magnetic field, temperature field, and fluid dynamics are represented by the Maxwell equations, conjugate heat transfer equations, and Navier-Stokes equations. These fields are coupled through the apparent viscosity and the magnetic force, both of which in turn depend on the magnetic flux density and the temperature. Based on a parametric study, an inverse Jiles-Atherton hysteresis model is used and implemented for the magnetic field simulation. The temperature rise of the MR fluid in the annular gap caused by core loss (i.e. eddy current loss and hysteresis loss) and fluid motion is computed to investigate the current-force behavior. A group of impulsive tests was performed for the manufactured MR absorber with step exciting currents. The numerical and experimental results showed good agreement, which validates the effectiveness of the proposed multi-physics FEA model.
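The inverse Jiles-Atherton model used above has a compact forward counterpart. As a hedged sketch (not the paper's implementation), the following integrates the classic scalar forward Jiles-Atherton equations with explicit Euler to trace a hysteresis loop; all parameter values are illustrative placeholders, not the paper's identified values, and the usual turning-point corrections are omitted for brevity:

```python
import math

def langevin(x):
    """Langevin function coth(x) - 1/x, with a series guard near zero."""
    if abs(x) < 1e-4:
        return x / 3.0
    return 1.0 / math.tanh(x) - 1.0 / x

def ja_loop(h_peak=5000.0, n=2000, Ms=1.6e6, a=1100.0,
            alpha=1.0e-4, k=400.0, c=0.2):
    """Trace H: 0 -> +Hp -> -Hp -> +Hp and return (H, M) samples."""
    path = ([h_peak * i / n for i in range(n)] +
            [h_peak * (1.0 - 2.0 * i / n) for i in range(n)] +
            [-h_peak * (1.0 - 2.0 * i / n) for i in range(n)])
    m = m_irr = h_prev = 0.0
    hs, ms = [], []
    for h in path:
        dh = h - h_prev
        delta = 1.0 if dh >= 0.0 else -1.0         # sign of dH/dt
        m_an = Ms * langevin((h + alpha * m) / a)  # anhysteretic magnetization
        m_irr += (m_an - m_irr) / (delta * k - alpha * (m_an - m_irr)) * dh
        m = c * m_an + (1.0 - c) * m_irr           # reversible + irreversible parts
        h_prev = h
        hs.append(h)
        ms.append(m)
    return hs, ms
```

The loop saturates at both field extremes and retains a positive remanence at H = 0 on the descending branch, which is the hysteretic behavior the FEA model must capture.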
NASA Astrophysics Data System (ADS)
Nardi, Albert; Idiart, Andrés; Trinchero, Paolo; de Vries, Luis Manuel; Molinero, Jorge
2014-08-01
This paper presents the development, verification and application of an efficient interface, denoted as iCP, which couples two standalone simulation programs: the general purpose Finite Element framework COMSOL Multiphysics® and the geochemical simulator PHREEQC. The main goal of the interface is to maximize the synergies between the aforementioned codes, providing a numerical platform that can efficiently simulate a wide number of multiphysics problems coupled with geochemistry. iCP is written in Java and uses the IPhreeqc C++ dynamic library and the COMSOL Java-API. Given the large computational requirements of the aforementioned coupled models, special emphasis has been placed on numerical robustness and efficiency. To this end, the geochemical reactions are solved in parallel by balancing the computational load over multiple threads. First, a benchmark exercise is used to test the reliability of iCP regarding flow and reactive transport. Then, a large scale thermo-hydro-chemical (THC) problem is solved to show the code capabilities. The results of the verification exercise are successfully compared with those obtained using PHREEQC and the application case demonstrates the scalability of a large scale model, at least up to 32 threads.
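The thread-level parallelization described above follows the standard operator-splitting pattern: after each transport step, every grid cell's chemistry is an independent problem, so cells can be distributed over a worker pool. A minimal sketch (the `react_cell` stand-in below replaces the actual IPhreeqc call; both function names are ours):

```python
from concurrent.futures import ThreadPoolExecutor

def react_cell(c):
    """Stand-in for a per-cell geochemical solve (an IPhreeqc call in iCP);
    here just a first-order consumption step for illustration."""
    return 0.9 * c

def chemistry_step(concentrations, n_threads=4):
    """Solve every cell's chemistry independently on a thread pool, as in
    operator-splitting reactive transport; results keep cell order."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(react_cell, concentrations))
```

Because `Executor.map` preserves input order, the updated concentration field can be handed straight back to the transport solver for the next step.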
Multi-physics design and analyses of long life reactors for lunar outposts
NASA Astrophysics Data System (ADS)
Schriener, Timothy M.
event of a launch abort accident. Increasing the amount of fuel in the reactor core, and hence its operational life, would be possible by launching the reactor unfueled and fueling it on the Moon. Such a reactor would, thus, not be subject to launch criticality safety requirements. However, loading the reactor with fuel on the Moon presents a challenge, requiring special designs of the core and the fuel elements, which lend themselves to fueling on the lunar surface. This research investigates examples of both a solid core reactor that would be fueled at launch as well as an advanced concept which could be fueled on the Moon. Increasing the operational life of a reactor fueled at launch is exercised for the NaK-78 cooled Sectored Compact Reactor (SCoRe). A multi-physics design and analyses methodology is developed which iteratively couples together detailed Monte Carlo neutronics simulations with 3-D Computational Fluid Dynamics (CFD) and thermal-hydraulics analyses. Using this methodology the operational life of this compact, fast spectrum reactor is increased by reconfiguring the core geometry to reduce neutron leakage and parasitic absorption, for the same amount of HEU in the core, and meeting launch safety requirements. The multi-physics analyses determine the impacts of the various design changes on the reactor's neutronics and thermal-hydraulics performance. The option of increasing the operational life of a reactor by loading it on the Moon is exercised for the Pellet Bed Reactor (PeBR). The PeBR uses spherical fuel pellets and is cooled by He-Xe gas, allowing the reactor core to be loaded with fuel pellets and charged with working fluid on the lunar surface. The performed neutronics analyses ensure the PeBR design achieves a long operational life, and develops safe launch canister designs to transport the spherical fuel pellets to the lunar surface. The research also investigates loading the PeBR core with fuel pellets on the Moon using a transient Discrete
NASA Astrophysics Data System (ADS)
Bharatish, A.; Narasimha Murthy, H. N.; Aditya, G.; Anand, B.; Satyanarayana, B. S.; Krishna, M.
2015-07-01
This paper presents an evaluation of thermal residual stresses in the heat affected zone of laser-drilled alumina ceramic using micro-Raman spectroscopy. The residual stresses were evaluated for holes corresponding to the optimal parameters of laser power, scanning speed, frequency and hole diameter; three such cases were considered. Residual stresses were obtained as a function of the Raman shifts. The nature and magnitude of the residual stresses were indicative of the extent of damage caused in the heat affected zone. In cases where the initial tensile residual stresses exceeded the tensile strength of alumina, cracks were initiated. Laser drilling with higher laser power and lower scanning speed induced initially high compressive and cyclic thermal stresses, causing greater damage to the hole. Transient thermal analysis was performed using COMSOL Multiphysics to predict residual thermal stresses and to validate the micro-Raman results. Scanning electron microscopy was used to confirm the damage caused in the heat affected zone.
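Piezo-spectroscopic stress evaluation of this kind is, to first order, a linear conversion from band shift to stress. A sketch with a placeholder calibration coefficient (the paper's actual coefficient and sign convention are not given here, so both the value and the function name are hypothetical):

```python
# Hypothetical piezo-spectroscopic conversion: sigma = d_omega / Pi.
PI_CM1_PER_GPA = 2.5   # placeholder calibration coefficient, cm^-1 per GPa

def residual_stress_gpa(peak_cm1, stress_free_cm1):
    """Residual stress (GPa) from a measured Raman peak position and the
    stress-free reference position; the sign convention (compressive vs.
    tensile for an upshift) is material- and band-dependent."""
    return (peak_cm1 - stress_free_cm1) / PI_CM1_PER_GPA
```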
NASA Astrophysics Data System (ADS)
Sanchez, M. J.; Gens, A.; Jarecki, Z.; Olivella, S.
2012-12-01
This work presents a coupled Thermo-Hydro-Mechanical (THM) formulation developed to handle multiphysics problems in porous media with two dominant void levels. The proposed framework assumes the presence of two porous media linked through a mass transfer term between them. In many cases, the use of a double porosity formulation is more realistic because it is possible to explicitly take into account the different physical phenomena that take place in each void level, and also their mutual interactions. The formulation is especially suitable for cases in which the material exhibits a strong coupling between the mechanical and the hydraulic problem in both media. The problem is approached using a multi-phase, multi-species formulation that expresses mathematically the main coupled thermo-hydro-mechanical phenomena in terms of balance equations, constitutive equations and equilibrium restrictions. In its more general form, the proposed approach allows the consideration of multiphase flow in the two pore levels coupled with the mechanical problem. The formulation presented is quite open and general, and able to incorporate different constitutive laws for each basic structural level considered, for the mechanical, hydraulic and thermal problems. The double structure formulation has been implemented in the finite element program CODE_BRIGHT and has been used to analyze a variety of engineering problems associated with the design of radioactive waste disposal in deep geological media and with petroleum engineering. This work presents two case studies; one is related to oil production in a heterogeneous reservoir, and the other focuses on the analysis of a repository for nuclear waste in a clay formation. Both cases show the potential of the proposed formulation to tackle coupled multiphysics problems in porous media.
Bodey, Isaac T.; Curtis, Franklin G.; Arimilli, Rao V.; Ekici, Kivanc; Freels, James D.
2015-11-01
The findings presented in this report are results of a five year effort led by the RRD Division of the ORNL, which is focused on research and development toward the conversion of the High Flux Isotope Reactor (HFIR) fuel from high-enriched uranium (HEU) to low-enriched uranium (LEU). This report focuses on the tasks accomplished by the University of Tennessee Knoxville (UTK) team from the Department of Mechanical, Aerospace, and Biomedical Engineering (MABE), which provided expert support in multiphysics modeling of complex problems associated with the LEU conversion of the HFIR reactor. The COMSOL software was used as the main computational modeling tool, whereas Solidworks was also used in support of computer-aided-design (CAD) modeling of the proposed LEU fuel design. The UTK research has been governed by a statement of work (SOW), which was updated annually to clearly define the specific tasks reported herein. Ph.D. student Isaac T. Bodey has focused on heat transfer and fluid flow modeling issues and has been aided by his major professor Dr. Rao V. Arimilli. Ph.D. student Franklin G. Curtis has been focusing on modeling the fluid-structure interaction (FSI) phenomena caused by the mechanical forces acting on the fuel plates, which in turn affect the fluid flow between the fuel plates and, ultimately, the heat transfer. Franklin Curtis has been aided by his major professor Dr. Kivanc Ekici. M.Sc. student Adam R. Travis has focused on two major areas of research: (1) accurate CAD modeling of the proposed LEU plate design, and (2) reduction of the model complexity and dimensionality through interdimensional coupling of the fluid flow and heat transfer for the HFIR plate geometry. Adam Travis is also aided by his major professor, Dr. Kivanc Ekici. We must note that the UTK team, and particularly the graduate students, have been in very close collaboration with Dr. James D. Freels (ORNL technical monitor and mentor) and have
On numerically accurate finite element
NASA Technical Reports Server (NTRS)
Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.
1974-01-01
A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double-edge-cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are discussed.
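The mesh criterion in question is essentially a count of kinematic degrees of freedom against incompressibility constraints: in the fully plastic (incompressible) limit, each quadrature point of each element imposes one volumetric constraint, and a mesh locks when constraints outnumber the degrees of freedom. A small counting sketch for an n × n mesh of 4-node plane-strain quadrilaterals (our illustration of the idea, not the paper's formal criterion):

```python
def dof_per_constraint(n, gauss_points_per_elem):
    """Ratio of displacement degrees of freedom to incompressibility
    constraints for an n x n mesh of 4-node plane-strain quads.
    Ratios below 1 indicate over-constraint (locking) in the fully
    plastic limit."""
    dof = 2 * (n + 1) ** 2                 # two displacement dof per node
    constraints = gauss_points_per_elem * n ** 2
    return dof / constraints
```

For a large mesh, fully integrated quads (2 × 2 Gauss) give a ratio approaching 0.5 and lock, while one-point quadrature gives a ratio approaching 2 and remains usable.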
Accurate ab Initio Spin Densities.
Boguslawski, Katharina; Marti, Konrad H; Legeza, Ors; Reiher, Markus
2012-06-12
We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput. 2011, 7, 2740].
3-D model of thermo-fluid and electrochemical for planar SOFC
NASA Astrophysics Data System (ADS)
Wang, Guilan; Yang, Yunzhen; Zhang, Haiou; Xia, Weisheng
A numerical simulation tool for modelling planar solid oxide fuel cells is described. The finite volume method was employed for the simulation, based on the fundamental conservation laws of mass, momentum, energy and electrical charge. Temperature distributions, molar concentrations of gaseous species, current density and overpotential were calculated using a single-cell unit model with double channels for the co-flow and counter-flow cases. The influences of operating conditions and anode structure on the performance of the SOFC were also discussed. Simulation results show that the co-flow case has more uniform temperature and current density distributions and smaller temperature gradients, and thus offers thermostructural advantages over the counter-flow case. Moreover, in the co-flow case, the average PEN temperature, current density and activation potential rise with increasing fuel delivery rate, fuel temperature and hydrogen mass fraction, whereas the average PEN temperature decreases with increasing air delivery rate. In particular, it is effective to improve the output voltage by reducing the thickness of the anode or increasing its porosity.
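The electrochemical side of such a model reduces, at open circuit, to the Nernst equation for the H2/O2 couple. A hedged sketch (the standard EMF `e0` below is a placeholder for the operating temperature, not a value from the paper):

```python
import math

R = 8.314     # gas constant, J/(mol K)
F = 96485.0   # Faraday constant, C/mol

def nernst_voltage(T, p_h2, p_o2, p_h2o, e0=0.97):
    """Open-circuit (Nernst) voltage in volts of an H2-fueled SOFC at
    temperature T (K) with partial pressures in atm; two electrons are
    transferred per H2 molecule."""
    return e0 + (R * T) / (2.0 * F) * math.log(p_h2 * math.sqrt(p_o2) / p_h2o)
```

At 1073 K with humidified hydrogen fuel (p_h2 = 0.97, p_h2o = 0.03) and air (p_o2 = 0.21), this gives an open-circuit voltage a little above 1 V, the expected order of magnitude for the cell models discussed above.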
Human heart conjugate cooling simulation: Unsteady thermo-fluid-stress analysis
Abdoli, Abas; Dulikravich, George S.; Bajaj, Chandrajit; Stowe, David F.; Jahania, M. Salik
2015-01-01
The main objective of this work was to demonstrate computationally that realistic human hearts can be cooled much faster by performing conjugate heat transfer consisting of pumping a cold liquid through the cardiac chambers and major veins while keeping the heart submerged in cold gelatin filling a cooling container. The human heart geometry used for simulations was obtained from three-dimensional, high resolution MRI scans. Two fluid flow domains for the right (pulmonic) and left (systemic) heart circulations, and two solid domains for the heart tissue and gelatin solution were defined for multi-domain numerical simulation. Detailed unsteady temperature fields within the heart tissue were calculated during the conjugate cooling process. A linear thermoelasticity analysis was performed to assess the stresses applied on the heart due to the coolant fluid shear and normal forces and to examine the thermal stress caused by temperature variation inside the heart. It was demonstrated that a conjugate cooling effort with coolant temperature at +4°C is capable of reducing the average heart temperature from +37°C to +8°C in 25 minutes for cases in which the coolant was steadily pumped only through major heart inlet veins and cavities. PMID:25045006
Human heart conjugate cooling simulation: unsteady thermo-fluid-stress analysis.
Abdoli, Abas; Dulikravich, George S; Bajaj, Chandrajit; Stowe, David F; Jahania, M Salik
2014-11-01
The main objective of this work was to demonstrate computationally that realistic human hearts can be cooled much faster by performing conjugate heat transfer consisting of pumping a cold liquid through the cardiac chambers and major veins while keeping the heart submerged in cold gelatin filling a cooling container. The human heart geometry used for simulations was obtained from three-dimensional, high resolution CT-angio scans. Two fluid flow domains for the right (pulmonic) and left (systemic) heart circulations, and two solid domains for the heart tissue and gelatin solution were defined for multi-domain numerical simulation. Detailed unsteady temperature fields within the heart tissue were calculated during the conjugate cooling process. A linear thermoelasticity analysis was performed to assess the stresses applied on the heart due to the coolant fluid shear and normal forces and to examine the thermal stress caused by temperature variation inside the heart. It was demonstrated that a conjugate cooling effort with coolant temperature at +4°C is capable of reducing the average heart temperature from +37°C to +8°C in 25 minutes for cases in which the coolant was steadily pumped only through major heart inlet veins and cavities.
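As a sanity check on the reported cooling time, a lumped-capacitance (Newton cooling) estimate, T(t) = Tc + (T0 − Tc)·exp(−t/τ), can be inverted for the time to reach a target temperature. This is our back-of-envelope model, not the paper's conjugate CFD analysis, and the effective time constant below is a hypothetical value chosen to be consistent with the 25-minute result:

```python
import math

def time_to_reach(T0=37.0, Tc=4.0, T_target=8.0, tau_min=11.9):
    """Minutes for a lumped body at T0 (deg C) to reach T_target when
    immersed in coolant at Tc, with effective time constant tau_min
    (a hypothetical value, not derived from the paper)."""
    return tau_min * math.log((T0 - Tc) / (T_target - Tc))
```

With these numbers the estimate lands at roughly 25 minutes, matching the scale of the full conjugate simulation, which is reassuring even though the lumped model ignores the internal temperature gradients the paper resolves.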
Transient Thermo-fluid Model of Meniscus Behavior and Slag Consumption in Steel Continuous Casting
NASA Astrophysics Data System (ADS)
Jonayat, A. S. M.; Thomas, Brian G.
2014-10-01
The behavior of the slag layer between the oscillating mold wall, the slag rim, the slag/liquid steel interface, and the solidifying steel shell, is of immense importance for the surface quality of continuous-cast steel. A computational model of the meniscus region has been developed, that includes transient heat transfer, multi-phase fluid flow, solidification of the slag, and movement of the mold during an oscillation cycle. First, the model is applied to a lab experiment done with a "mold simulator" to verify the transient temperature-field predictions. Next, the model is verified by matching with available literature and plant measurements of slag consumption. A reasonable agreement has been observed for both temperature and flow-field. The predictions show that transient temperature behavior depends on the location of the thermocouple during the oscillation relative to the meniscus. During an oscillation cycle, heat transfer variations in a laboratory frame of reference are more severe than experienced by the moving mold thermocouples, and the local heat transfer rate is increased greatly when steel overflows the meniscus. Finally, the model is applied to conduct a parametric study on the effect of casting speed, stroke, frequency, and modification ratio on slag consumption. Slag consumption per unit area increases with increase of stroke and modification ratio, and decreases with increase of casting speed while the relation with frequency is not straightforward. The match between model predictions and literature trends suggests that this methodology can be used for further investigations.
Modeling of plasma and thermo-fluid transport in hybrid welding
NASA Astrophysics Data System (ADS)
Ribic, Brandon D.
Hybrid welding combines a laser beam and electrical arc in order to join metals within a single pass at welding speeds on the order of 1 m min⁻¹. Neither autonomous laser nor arc welding can achieve the weld geometry obtained from hybrid welding for the same process parameters. Depending upon the process parameters, hybrid weld depth and width can each be on the order of 5 mm. The ability to produce a wide weld bead increases gap tolerance for square joints which can reduce machining costs and joint fitting difficulty. The weld geometry and fast welding speed of hybrid welding make it a good choice for application in ship, pipeline, and aerospace welding. Heat transfer and fluid flow influence weld metal mixing, cooling rates, and weld bead geometry. Cooling rate affects weld microstructure and subsequent weld mechanical properties. Fluid flow and heat transfer in the liquid weld pool are affected by laser and arc energy absorption. The laser and arc generate plasmas which can influence arc and laser energy absorption. Metal vapors introduced from the keyhole, a vapor filled cavity formed near the laser focal point, influence arc plasma light emission and energy absorption. However, hybrid welding plasma properties near the opening of the keyhole are not known nor is the influence of arc power and heat source separation understood. A sound understanding of these processes is important to consistently achieving sound weldments. By varying process parameters during welding, it is possible to better understand their influence on temperature profiles, weld metal mixing, cooling rates, and plasma properties. The current literature has shown that important process parameters for hybrid welding include: arc power, laser power, and heat source separation distance. However, their influence on weld temperatures, fluid flow, cooling rates, and plasma properties are not well understood.
Modeling has been shown to be a successful means of better understanding the influence of process parameters on heat transfer, fluid flow, and plasma characteristics for arc and laser welding. However, numerical modeling of laser/GTA hybrid welding is just beginning. Arc and laser welding plasmas have been previously analyzed successfully using optical emission spectroscopy in order to better understand arc and laser plasma properties as a function of plasma radius. Variation of hybrid welding plasma properties with radial distance is not known. Since plasma properties can affect arc and laser energy absorption and weld integrity, a better understanding of the change in hybrid welding plasma properties as a function of plasma radius is important and necessary. Material composition influences welding plasma properties, arc and laser energy absorption, heat transfer, and fluid flow. The presence of surface active elements such as oxygen and sulfur can affect weld pool fluid flow and bead geometry depending upon the significance of heat transfer by convection. Easily vaporized and ionized alloying elements can influence arc plasma characteristics and arc energy absorption. The effects of surface active elements on heat transfer and fluid flow are well understood in the case of arc and conduction mode laser welding. However, the influence of surface active elements on heat transfer and fluid flow during keyhole mode laser welding and laser/arc hybrid welding is not well known. Modeling has been used to successfully analyze the influence of surface active elements during arc and conduction mode laser welding in the past and offers promise in the case of laser/arc hybrid welding. A critical review of the literature revealed several important areas for further research and unanswered questions. (1) The understanding of heat transfer and fluid flow during hybrid welding is still beginning and further research is necessary.
(2) Why hybrid welding weld bead width is greater than that of laser or arc welding is not well understood. (3) The influence of arc power and heat source separation distance on cooling rates during hybrid welding are not known. (4) Convection during hybrid welding is not well understood despite its importance to weld integrity. (5) The influence of surface active elements on weld geometry, weld pool temperatures, and fluid flow during high power density laser and laser/arc hybrid welding are not known. (6) Although the arc power and heat source separation distance have been experimentally shown to influence arc stability and plasma light emission during hybrid welding, the influence of these parameters on plasma properties is unknown. (7) The electrical conductivity of hybrid welding plasmas is not known, despite its importance to arc stability and weld integrity. In this study, heat transfer and fluid flow are analyzed for laser, gas tungsten arc (GTA), and laser/GTA hybrid welding using an experimentally validated three dimensional phenomenological model. By evaluating arc and laser welding using similar process parameters, a better understanding of the hybrid welding process is expected. The role of arc power and heat source separation distance on weld depth, weld pool centerline cooling rates, and fluid flow profiles during CO2 laser/GTA hybrid welding of 321 stainless steel are analyzed. Laser power is varied for a constant heat source separation distance to evaluate its influence on weld temperatures, weld geometry, and fluid flow during Nd:YAG laser/GTA hybrid welding of A131 structural steel. The influence of oxygen and sulfur on keyhole and weld bead geometry, weld temperatures, and fluid flow are analyzed for high power density Yb doped fiber laser welding of (0.16 %C, 1.46 %Mn) mild steel. Optical emission spectroscopy was performed on GTA, Nd:YAG laser, and Nd:YAG laser/GTA hybrid welding plasmas for welding of 304L stainless steel. 
Emission spectroscopy provides a means of determining plasma temperatures and species densities using deconvoluted measured spectral intensities, which can then be used to calculate plasma electrical conductivity. In this study, hybrid welding plasma temperatures, species densities, and electrical conductivities were determined for various heat source separation distances and arc currents using an analytical method coupled with calculated plasma compositions. As a result of these studies, heat transfer by convection was determined to be dominant during hybrid welding of steels. The primary driving forces affecting hybrid welding fluid flow are the surface tension gradient and the electromagnetic force. Fiber laser weld depth showed a negligible change when increasing the (0.16 %C, 1.46 %Mn) mild steel sulfur concentration from 0.006 wt% to 0.15 wt%. Increasing the dissolved oxygen content in the weld pool from 0.0038 wt% to 0.0257 wt% increased the experimental weld depth from 9.3 mm to 10.8 mm. The calculated partial pressure of carbon monoxide increased from 0.1 atm to 0.75 atm with the 0.0219 wt% increase in dissolved oxygen in the weld metal and may explain the increase in weld depth. Nd:YAG laser/GTA hybrid welding plasma temperatures were calculated to be between approximately 7927 K and 9357 K. Increasing the Nd:YAG laser/GTA hybrid welding heat source separation distance from 4 mm to 6 mm reduced plasma temperatures by between 500 K and 900 K. Hybrid welding plasma total electron densities and electrical conductivities were on the order of 1 × 10²² m⁻³ and 3000 S m⁻¹, respectively.
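The temperature determination from deconvoluted line intensities is typically done with a Boltzmann plot: ln(Iλ/(gA)) is linear in the upper-level energy with slope −1/(kT). A self-contained sketch with a hand-rolled least-squares slope (a generic illustration of the technique, not this study's code; input data would come from measured line intensities and tabulated gA values):

```python
import math

K_B_EV = 8.617e-5   # Boltzmann constant, eV/K

def boltzmann_plot_temperature(lines):
    """Estimate excitation temperature (K) from a Boltzmann plot.
    `lines` holds (intensity, wavelength_nm, g_upper, A_per_s, E_upper_eV)
    tuples; the least-squares slope of ln(I*lambda/(g*A)) versus E_upper
    equals -1/(k*T)."""
    xs = [e for (_, _, _, _, e) in lines]
    ys = [math.log(i * lam / (g * a)) for (i, lam, g, a, _) in lines]
    n = len(lines)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -1.0 / (slope * K_B_EV)
```

Fed with synthetic intensities generated at 9000 K, the routine recovers that temperature, which sits inside the 7927-9357 K range reported above.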
Modelling and Simulation of 4D GeoPET Measurements with COMSOL Multiphysics 4.2a
NASA Astrophysics Data System (ADS)
Schikora, J.; Kulenkampff, J.; Gründig, M.; Lippmann-Pipke, J.
2012-04-01
Our GeoPET method allows the 4D monitoring of (reactive) transport processes in geological material on the laboratory scale (Gründig et al., 2007; Kulenkampff et al., 2008; Richter et al., 2005) by quantitative imaging of tracer concentrations. Recently we have conducted a long-term ²²Na⁺ in-diffusion experiment in an Opalinus clay drill core over a period of 7 months. We modelled this experiment with COMSOL Multiphysics® 4.2a (3D convection-diffusion equation, PDE mode, PARDISO solver) to reproduce the observed spatiotemporal concentration distribution data, with the following underlying equation for this anisotropic diffusion and adsorption: ε ∂c_i/∂t = ∇·(D_e ∇c_i) − ρ ∂q/∂t, where ε [-] is the porosity, c_i [mol/m³] the ²²Na⁺ concentration, D_e [m²/s] the tensor of effective diffusion coefficients for ²²Na⁺ in Opalinus clay, ρ [kg/m³] the bulk density, and ∂q/∂t a sink term accounting for sorption. By importing GeoPET images from various time steps and applying the Optimization Module (least-squares fit with the Levenberg-Marquardt algorithm) to these images, we efficiently determined best-fit values, e.g. of the diffusion tensor. Combined with the parameter sweep operation, the sensitivity analysis is performed in parallel and covers the range of literature values for porosity and for Kd values of ²²Na⁺ sorption on Opalinus clay. The experimental data could be reproduced quite well, but the obtained parameter values for diffusion parallel and normal to the bedding are slightly larger than reported in Gimmi and Kosakowski (2011). This is coherent with our observation of an emerging gas bubble in the central borehole tracer reservoir: soil moisture tension in the partly unsaturated clay must have significantly influenced the transport regime by an additional advective component. We suggest COMSOL Multiphysics® is a powerful tool for the inverse modelling of time-dependent, multidimensional experimental data as obtained by GeoPET.
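For a linear sorption isotherm q = Kd·c, the sorption sink in the abstract's mass balance folds into a retardation factor, giving (ε + ρ·Kd)·∂c/∂t = ∇·(De ∇c). A 1-D explicit finite-difference sketch of this retarded diffusion (our illustration; the parameter values are round numbers, not the fitted Opalinus values):

```python
def diffuse_1d(c, de, eps, rho, kd, dx, dt, steps):
    """March the retarded 1-D diffusion equation with explicit Euler.
    Left boundary: fixed concentration (tracer reservoir); right
    boundary: zero flux. Linear sorption q = Kd*c enters only through
    the retardation factor eps + rho*kd."""
    alpha = de * dt / (dx * dx * (eps + rho * kd))
    assert alpha < 0.5, "explicit scheme stability limit violated"
    c = list(c)
    for _ in range(steps):
        new = c[:]                        # c[0] stays fixed (reservoir)
        for i in range(1, len(c) - 1):
            new[i] = c[i] + alpha * (c[i + 1] - 2.0 * c[i] + c[i - 1])
        new[-1] = new[-2]                 # zero-flux right boundary
        c = new
    return c
```

Starting from a unit-concentration reservoir cell, the in-diffusing front stays monotone, the qualitative behavior the tomographic images resolve in 4D.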
William Martin
2012-11-16
A new method to obtain Doppler broadened cross sections has been implemented into MCNP, removing the need to generate cross sections for isotopes at problem temperatures. Previous work had established the scientific feasibility of obtaining Doppler-broadened cross sections "on-the-fly" (OTF) during the random walk of the neutron. Thus, when a neutron of energy E enters a material region that is at some temperature T, the cross sections for that material at the exact temperature T are immediately obtained by interpolation using a high order functional expansion for the temperature dependence of the Doppler-broadened cross section for that isotope at the neutron energy E. A standalone Fortran code has been developed that generates the OTF library for any isotope that can be processed by NJOY. The OTF cross sections agree with the NJOY-based cross sections for all neutron energies and all temperatures in the range specified by the user, e.g., 250K - 3200K. The OTF methodology has been successfully implemented into the MCNP Monte Carlo code and has been tested on several test problems by comparing MCNP with conventional ACE cross sections versus MCNP with OTF cross sections. The test problems include the Doppler defect reactivity benchmark suite and two full-core VHTR configurations, including one with multiphysics coupling using RELAP5-3D/ATHENA for the thermal-hydraulic analysis. The comparison has been excellent, verifying that the OTF libraries can be used in place of the conventional ACE libraries generated at problem temperatures. In addition, it has been found that using OTF cross sections greatly reduces the complexity of the input for MCNP, especially for full-core temperature feedback calculations with many temperature regions. This results in an order of magnitude decrease in the number of input lines for full-core configurations, thus simplifying input preparation and reducing the potential for input errors. Finally, for full-core problems with multiphysics
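The on-the-fly interpolation idea, representing the temperature dependence of a Doppler-broadened cross section at fixed neutron energy by a functional expansion evaluated at the exact problem temperature, can be illustrated with a small sketch. The 1/v-like σ(T) toy model, temperature grid, and basis below are assumptions for illustration; a real OTF library would fit NJOY-generated data using the expansion form chosen by the authors.

```python
import numpy as np

# Tabulated Doppler-broadened cross sections at one neutron energy over
# a user-specified temperature range (toy 1/v-like model, in barns)
T_grid = np.linspace(250.0, 3200.0, 30)          # K
sigma_grid = 5.0 + 120.0 / np.sqrt(T_grid)

# Least-squares fit of an expansion in half-powers of T,
# sigma(T) ~ sum_k a_k * T^(p_k), done once per isotope/energy point
exponents = [-0.5, 0.0, 0.5, 1.0]
A = np.column_stack([T_grid ** p for p in exponents])
coef, *_ = np.linalg.lstsq(A, sigma_grid, rcond=None)

def sigma_otf(T):
    """Evaluate the fitted expansion at an arbitrary temperature T (K),
    as a neutron entering a region at temperature T would require."""
    return sum(c * T ** p for p, c in zip(exponents, coef))

T = 973.0   # an arbitrary problem temperature inside the fitted range
print(sigma_otf(T), 5.0 + 120.0 / np.sqrt(T))
```

Because only the expansion coefficients are stored, the problem input no longer needs one cross-section library per temperature region, which is the input simplification the abstract describes.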
Zhai, Y.; Loesser, G.; Smith, M.; Udintsev, V.; Giacomin, T.; Khodak, A.; Johnson, D.; Feder, R.
2015-07-01
ITER diagnostic first walls (DFWs) and diagnostic shield modules (DSMs) inside the port plugs (PPs) are designed to protect diagnostic instruments and components from the harsh plasma environment and to provide structural support while allowing diagnostic access to the plasma. The design of the DFWs and DSMs is driven by 1) plasma radiation and nuclear heating during normal operation and 2) electromagnetic loads during plasma events and the associated component structural responses. A multi-physics engineering analysis protocol for the design has been established at Princeton Plasma Physics Laboratory and was used for the design of the ITER DFWs and DSMs. The analyses were performed to address challenging design issues based on the resultant stresses and deflections of the DFW-DSM-PP assembly for the main load cases. The ITER Structural Design Criteria for In-Vessel Components (SDC-IC) required for design by analysis, and three major issues driving the mechanical design of the ITER DFWs, are discussed. General guidelines for the DSM design have been established as a result of design parametric studies.
NASA Astrophysics Data System (ADS)
Soleimani, Meisam; Wriggers, Peter; Rath, Henryke; Stiesch, Meike
2016-10-01
In this paper, a 3D computational model has been developed to investigate biofilms in a multi-physics framework using smoothed particle hydrodynamics (SPH) based on a continuum approach. Biofilm formation is a complex process in the sense that several physical phenomena are coupled and consequently different time scales are involved. On the one hand, biofilm growth is driven by biological reaction and nutrient diffusion; on the other hand, it is influenced by fluid flow causing biofilm deformation and interface erosion in the context of interaction between the fluid and the deformable solid. The geometrical and numerical complexity arising from these phenomena poses serious complications and challenges for grid-based techniques such as the finite element method. Here the solution is based on SPH, a powerful meshless method. SPH-based computational modeling is quite new in the biological community, and the method is uniquely robust in capturing interface-related processes of biofilm formation such as erosion. The obtained results show good agreement with experimental and published data, which demonstrates that the model is capable of simulating and predicting the overall spatial and temporal evolution of biofilm.
A multi-physics study of Li-ion battery material Li1+xTi2O4
NASA Astrophysics Data System (ADS)
Jiang, Tonghu; Falk, Michael; Siva Shankar Rudraraju, Krishna; Garikipati, Krishna; van der Ven, Anton
2013-03-01
Recently, lithium ion batteries have been the subject of intense scientific study due to growing demand arising from their use in portable electronics, electric vehicles and other applications. Most cathode materials in lithium ion batteries involve a two-phase process during charging and discharging, and the rate of these processes is typically limited by slow interface mobility. We have modeled how lithium diffusion in the interface region affects the motion of the phase boundary, developing a multi-physics computational method suitable for predicting the time evolution of the driven interface. In this method, we calculate formation energies and migration energy barriers by ab initio methods, which are then approximated by cluster expansions. Monte Carlo calculations are further employed to obtain thermodynamic and kinetic information, e.g., anisotropic interfacial energies and mobilities, which are used to parameterize continuum modeling of the charging and discharging processes. We test this methodology on spinel Li1+xTi2O4. Elastic effects are incorporated into the calculations to determine the effect of variations in modulus and strain on stress concentrations and failure modes within the material. We acknowledge support by the National Science Foundation Cyber Discovery and Innovation Program under Award No. 1027765.
Richard, Joshua; Galloway, Jack; Fensin, Michael; Trellue, Holly
2015-04-04
A novel object-oriented modular mapping methodology for externally coupled neutronics–thermal hydraulics multiphysics simulations was developed. The Simulator using MCNP with Integrated Thermal-Hydraulics for Exploratory Reactor Studies (SMITHERS) code performs on-the-fly mapping of material-wise power distribution tallies implemented by MCNP-based neutron transport/depletion solvers for use in estimating coolant temperature and density distributions with a separate thermal-hydraulic solver. The key development of SMITHERS is that it reconstructs the hierarchical geometry structure of the material-wise power generation tallies from the depletion solver automatically, with only a modicum of additional information required from the user. In addition, it performs the basis mapping from the combinatorial geometry of the depletion solver to the required geometry of the thermal-hydraulic solver in a generalizable manner, such that it can transparently accommodate varying levels of thermal-hydraulic solver geometric fidelity, from the nodal geometry of multi-channel analysis solvers to the pin-cell level of discretization for sub-channel analysis solvers.
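The kind of hierarchical mapping SMITHERS automates, collapsing material-wise power tallies onto a thermal-hydraulic mesh at a chosen fidelity level, can be sketched as follows. The dictionary keys and power values here are invented for illustration; the real code reconstructs the tally hierarchy automatically from the depletion solver's combinatorial geometry.

```python
from collections import defaultdict

# (assembly, pin, axial_node) -> tallied power in W, as if read from an
# MCNP-based depletion solver; values are purely illustrative
tallies = {
    ("A1", 1, 0): 1200.0, ("A1", 1, 1): 1500.0,
    ("A1", 2, 0): 1100.0, ("A2", 1, 0): 900.0,
}

def map_to_channels(tallies, level="assembly"):
    """Collapse pin-level power tallies to the geometric fidelity of the
    thermal-hydraulic solver: assembly-wise nodes for multi-channel
    analysis, or pin-wise cells for sub-channel analysis."""
    agg = defaultdict(float)
    for (asm, pin, ax), p in tallies.items():
        key = (asm, ax) if level == "assembly" else (asm, pin, ax)
        agg[key] += p
    return dict(agg)

print(map_to_channels(tallies))                 # nodal (multi-channel) mesh
print(map_to_channels(tallies, level="pin"))    # sub-channel mesh
```

The point of the design is that the same tally data feeds either fidelity level transparently, which mirrors the generalizable basis mapping described above.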
Goos-Hänchen shifts at a resonance angle of a two-prism structure using COMSOL multiphysics
NASA Astrophysics Data System (ADS)
Zhang, Wenjing; Zhang, Zhiwei; Yang, Peng; Zhu, Xiang; Dai, Yifan
2016-10-01
We simulated and analyzed Goos-Hänchen (GH) shifts of 633 nm polarized light through a two-prism structure, consisting of a right triangle prism and an isosceles triangle prism in the Kretschmann-Raether configuration, by comparing results from the COMSOL Multiphysics (CM) simulation software with those of a stationary-phase analysis (SPA). For this two-prism structure with a 45-nm-thick gold film, the maximum positive GH shift obtained using SPA at the resonance angle of 44.1° was 354 μm. Using CM at an incident angle of 43.8°, we found a maximum positive GH shift of 9.45 μm. The results obtained using CM agree with those obtained by SPA around the resonance angle, although the enhancement effect from CM is much smaller than that of SPA. This is because SPA depends on the derivative of the phase shift with respect to the incident angle, and the phase shift changes drastically at the resonance angle. These results are useful for designing high-sensitivity SPR sensors based on GH shift measurement and for application in waveguide-type SPR devices with sizes on the order of micrometers to millimeters.
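The stationary-phase analysis mentioned above relates the GH shift to the angular derivative of the reflection phase (Artmann's formula). A minimal numerical sketch for a prism/gold/air Kretschmann stack at 633 nm is given below; the prism index and gold permittivity are assumed round values, not necessarily those used in the paper.

```python
import numpy as np

lam = 633e-9
k0 = 2 * np.pi / lam
# prism index, gold permittivity (assumed), air index, film thickness
n1, eps2, n3, d = 1.515, -11.7 + 1.2j, 1.0, 45e-9

def r_p(theta):
    """p-polarized reflection coefficient of the three-layer stack
    (standard Fresnel/Airy formula)."""
    kx = n1 * k0 * np.sin(theta)
    kz = [np.sqrt(eps * k0**2 - kx**2 + 0j) for eps in (n1**2, eps2, n3**2)]
    r12 = (eps2 * kz[0] - n1**2 * kz[1]) / (eps2 * kz[0] + n1**2 * kz[1])
    r23 = (n3**2 * kz[1] - eps2 * kz[2]) / (n3**2 * kz[1] + eps2 * kz[2])
    ph = np.exp(2j * kz[1] * d)
    return (r12 + r23 * ph) / (1 + r12 * r23 * ph)

theta = np.radians(np.linspace(40, 50, 2001))
phi = np.unwrap(np.angle(r_p(theta)))
# Artmann / stationary-phase estimate: GH shift ~ -(1/(n1*k0)) dphi/dtheta
gh = -np.gradient(phi, theta) / (n1 * k0)
R = np.abs(r_p(theta))**2
i_res = np.argmin(R)                       # SPR reflectance dip
print(f"resonance ~ {np.degrees(theta[i_res]):.1f} deg, "
      f"peak |GH| ~ {np.max(np.abs(gh))*1e6:.1f} um")
```

The sketch reproduces the qualitative picture in the abstract: the phase varies steeply across the SPR dip, so the SPA shift is strongly enhanced near the resonance angle.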
NASA Astrophysics Data System (ADS)
Kuroda, Shinjiro; Suzuki, Naoya; Tanigawa, Hiroshi; Suzuki, Kenichiro
2013-06-01
In this paper, we present and demonstrate the principle of variable resonance frequency selection by using a fishbone-shaped microelectromechanical system (MEMS) resonator. To analyze resonator displacement caused by an electrostatic force, a multi-physics simulation, which links the applied voltage load to the mechanical domain, is carried out. The simulation clearly shows that resonators are operated by three kinds of electrostatic force exerted on the beam. A new frequency selection algorithm that selects only one among various resonant modes is also presented. The conversion matrix that transforms the voltages applied to each driving electrode into the resonant beam displacement at each resonant mode is first derived by experimental measurements. Following this, the matrix is used to calculate a set of voltages for maximizing the rejection ratio in each resonant mode. This frequency selection method is applied in a fishbone-shaped MEMS resonator with five driving electrodes and the frequency selection among the 1st resonant mode to the 5th resonant mode is successfully demonstrated. From a fine adjustment of the voltage set, a 42 dB rejection ratio is obtained.
Powell, Adam; Pati, Soobhankar
2012-03-11
Solid Oxide Membrane (SOM) Electrolysis is a new energy-efficient zero-emissions process for producing high-purity magnesium and high-purity oxygen directly from industrial-grade MgO. SOM Recycling combines SOM electrolysis with electrorefining, continuously and efficiently producing high-purity magnesium from low-purity partially oxidized scrap. In both processes, electrolysis and/or electrorefining take place in the crucible, where raw material is continuously fed into the molten salt electrolyte, producing magnesium vapor at the cathode and oxygen at the inert anode inside the SOM. This paper describes a three-dimensional multi-physics finite-element model of ionic current, fluid flow driven by argon bubbling and thermal buoyancy, and heat and mass transport in the crucible. The model predicts the effects of stirring on the anode boundary layer and its time scale of formation, and the effect of natural convection at the outer wall. MOxST has developed this model as a tool for scale-up design of these closely-related processes.
Applicability extent of 2-D heat equation for numerical analysis of a multiphysics problem
NASA Astrophysics Data System (ADS)
Khawaja, H.
2017-01-01
This work focuses on thermal problems solvable using the heat equation. The fundamental question answered here is: what are the limits of the dimensions that allow a 3-D thermal problem to be accurately modelled using a 2-D heat equation? The presented work solves the 2-D and 3-D heat equations using the Finite Difference Method, also known as the Forward-Time Central-Space (FTCS) method, in MATLAB®. For this study, a cuboidal domain with a square cross-section is assumed. The boundary conditions are set such that there is a constant temperature at its center and outside its boundaries. The 2-D and 3-D heat equations are solved in time until a steady-state temperature profile develops. The method is tested for stability using the Courant-Friedrichs-Lewy (CFL) criterion. The results are compared while varying the thickness of the 3-D domain. The maximum error is calculated, and recommendations are given on the applicability of the 2-D heat equation.
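The FTCS scheme described above can be sketched for the 2-D case: explicit time stepping of the discrete Laplacian with a fixed-temperature centre node and boundary, with the time step chosen to satisfy the 2-D CFL stability bound dt <= dx^2/(4*alpha). Grid size and material values below are illustrative choices, not those of the study.

```python
import numpy as np

n, alpha, dx = 21, 1.0e-4, 1.0e-2     # nodes per side, m^2/s, m
dt = 0.2 * dx**2 / alpha              # CFL in 2-D: dt <= dx^2 / (4*alpha)
T_hot, T_cold = 100.0, 0.0
T = np.zeros((n, n))
c = n // 2                            # centre node index

for _ in range(20000):                # march to (near) steady state
    # five-point discrete Laplacian (edge wrap is overwritten by the
    # Dirichlet boundary reset below, so it does no harm)
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0)
           + np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T)
    T = T + alpha * dt / dx**2 * lap
    # boundary conditions: constant temperature at centre and boundary
    T[c, c] = T_hot
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = T_cold

print(f"steady-state T three nodes from centre: {T[c, c + 3]:.2f}")
```

With the stability factor alpha*dt/dx^2 = 0.2 below the 2-D limit of 0.25, the iteration converges to the steady temperature field between the hot centre and the cold boundary; the 3-D version adds the third Laplacian term and tightens the bound to dx^2/(6*alpha).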
Multiphysics Computational Analysis of a Solid-Core Nuclear Thermal Engine Thrust Chamber
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Canabal, Francisco; Cheng, Gary; Chen, Yen-Sen
2007-01-01
The objective of this effort is to develop an efficient and accurate computational heat transfer methodology to predict thermal, fluid, and hydrogen environments for a hypothetical solid-core, nuclear thermal engine - the Small Engine. In addition, the effects of power profile and hydrogen conversion on heat transfer efficiency and thrust performance were also investigated. The computational methodology is based on an unstructured-grid, pressure-based, all speeds, chemically reacting, computational fluid dynamics platform, while formulations of conjugate heat transfer were implemented to describe the heat transfer from solid to hydrogen inside the solid-core reactor. The computational domain covers the entire thrust chamber so that the afore-mentioned heat transfer effects impact the thrust performance directly. The result shows that the computed core-exit gas temperature, specific impulse, and core pressure drop agree well with those of design data for the Small Engine. Finite-rate chemistry is very important in predicting the proper energy balance as naturally occurring hydrogen decomposition is endothermic. Locally strong hydrogen conversion associated with centralized power profile gives poor heat transfer efficiency and lower thrust performance. On the other hand, uniform hydrogen conversion associated with a more uniform radial power profile achieves higher heat transfer efficiency, and higher thrust performance.
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
Development of Adaptive Model Refinement (AMoR) for Multiphysics and Multifidelity Problems
Turinsky, Paul
2015-02-09
This project investigated the development and utilization of Adaptive Model Refinement (AMoR) for nuclear systems simulation applications. AMoR refers to the utilization of several models of physical phenomena which differ in prediction fidelity. If the highest-fidelity model is judged to always provide or exceed the desired fidelity, and if one can determine the difference in a Quantity of Interest (QoI) between the highest-fidelity model and the lower-fidelity models, one can utilize the lowest-fidelity model that still delivers the QoI to the desired accuracy. Assuming lower-fidelity models require fewer computational resources, computational efficiency can be realized in this manner, provided the QoI value can be accurately and efficiently evaluated. This work utilized Generalized Perturbation Theory (GPT) to evaluate the QoI, by convoluting the GPT solution with the residual of the highest-fidelity model determined using the solution from lower-fidelity models. Specifically, a reactor core neutronics problem and a thermal-hydraulics problem were studied to develop and utilize AMoR. The highest-fidelity neutronics model was based upon the 3D space-time, two-group, nodal diffusion equations as solved in the NESTLE computer code. Added to the NESTLE code was the ability to determine the time-dependent GPT neutron flux. The lower-fidelity neutronics model was based upon the point kinetics equations along with utilization of a prolongation operator to determine the 3D space-time, two-group flux. The highest-fidelity thermal-hydraulics model was based upon the space-time equations governing fluid flow in a closed channel around a heat-generating fuel rod. The Homogeneous Equilibrium Mixture (HEM) model was used for the fluid, and the Finite Difference Method was applied to both the coolant and fuel pin energy conservation equations. The lower-fidelity thermal-hydraulic model was based upon the same equations as used for the highest fidelity model but now with coarse spatial
Pouran, Behdad; Arbabi, Vahid; Weinans, Harrie; Zadpoor, Amir A
2016-11-01
Transport of solutes helps to regulate normal physiology and proper function of cartilage in diarthrodial joints. Multiple studies have shown the effects of characteristic parameters such as concentration of proteoglycans and collagens and the orientation of collagen fibrils on the diffusion process. However, not much quantitative information and accurate models are available to help understand how the characteristics of the fluid surrounding articular cartilage influence the diffusion process. In this study, we used a combination of micro-computed tomography experiments and biphasic-solute finite element models to study the effects of three parameters of the overlying bath on the diffusion of neutral solutes across cartilage zones. Those parameters include bath size, degree of stirring of the bath, and the size and concentration of the stagnant layer that forms at the interface of cartilage and bath. Parametric studies determined the minimum of the finite bath size for which the diffusion behavior reduces to that of an infinite bath. Stirring of the bath proved to remarkably influence neutral solute transport across cartilage zones. The well-stirred condition was achieved only when the ratio of the diffusivity of bath to that of cartilage was greater than ≈1000. While the thickness of the stagnant layer at the cartilage-bath interface did not significantly influence the diffusion behavior, increase in its concentration substantially elevated solute concentration in cartilage. Sufficient stirring attenuated the effects of the stagnant layer. Our findings could be used for efficient design of experimental protocols aimed at understanding the transport of molecules across articular cartilage.
Trujillo, Francisco J; Eberhardt, Sebastian; Möller, Dirk; Dual, Jurg; Knoerzer, Kai
2013-03-01
A model was developed to determine the local changes in the concentration of particles and the formation of bands induced by a standing acoustic wave field subjected to a sawtooth frequency-ramping pattern. The mass transport equation was modified to incorporate the effect of acoustic forces on the concentration of particles. This was achieved by balancing the forces acting on the particles. The frequency ramping was implemented as a parametric sweep for the time-harmonic frequency response in time steps of 0.1 s. The physical phenomena of piezoelectricity, acoustic fields and diffusion of particles were coupled and solved in COMSOL Multiphysics™ (COMSOL AB, Stockholm, Sweden) following a three-step approach. The first step solves the governing partial differential equations describing the acoustic field by assuming that the pressure field achieves a pseudo-steady state. In the second step, the acoustic radiation force is calculated from the pressure field. The final step calculates the locally changing concentration of particles as a function of time by solving the modified equation of particle transport. The diffusivity was calculated as a function of concentration following the Garg and Ruthven equation, which describes the steep increase of diffusivity when the concentration approaches saturation. However, it was found that this steep increase creates numerical instabilities at high voltages (in the piezoelectricity equations) and high initial particle concentrations. The model was simplified to a pseudo one-dimensional case due to computational power limitations. The particle distribution predicted by the model is in good agreement with the experimental data, as it accurately follows the movement of the bands in the centre of the chamber.
Freels, James D; Jain, Prashant K
2011-01-01
A research and development project is ongoing to convert the currently operating High Flux Isotope Reactor (HFIR) of Oak Ridge National Laboratory (ORNL) from highly-enriched Uranium (HEU U3O8) fuel to low-enriched Uranium (LEU U-10Mo) fuel. Because LEU HFIR-specific testing and experiments will be limited, COMSOL is chosen to provide the needed multiphysics simulation capability to validate against the HEU design data and calculations, and predict the performance of the LEU fuel for design and safety analyses. The focus of this paper is on the unique issues associated with COMSOL modeling of the 3D geometry, meshing, and solution of the HFIR fuel plate and assembled fuel elements. Two parallel paths of 3D model development are underway. The first path follows the traditional route through examination of all flow and heat transfer details using the low-Reynolds-number k-ε turbulence model provided by COMSOL v4.2. The second path simplifies the fluid channel modeling by taking advantage of the wealth of knowledge provided by decades of design and safety analyses, data from experiments and tests, and HFIR operation. By simplifying the fluid channel, a significant level of complexity and computer resource requirements are reduced, while also expanding the level and type of analysis that can be performed with COMSOL. Comparison and confirmation of validity of the first (detailed) and second (simplified) 3D modeling paths with each other, and with available data, will enable an expanded level of analysis. The detailed model will be used to analyze hot-spots and other micro fuel behavior events. The simplified model will be used to analyze events such as routine heat-up and expansion of the entire fuel element, and flow blockage. Preliminary, coarse-mesh model results of the detailed individual fuel plate are presented. Examples of the solution for an entire fuel element consisting of multiple individual fuel plates produced by the simplified model are also presented.
Tang, Dalin; Yang, Chun; Geva, Tal; Rathod, Rahul; Yamauchi, Haruo; Gooty, Vasu; Tang, Alexander; Kural, Mehmet H.; Billiar, Kristen L.; Gaudette, Glenn; del Nido, Pedro J.
2012-01-01
Patients with repaired tetralogy of Fallot account for the majority of cases with late onset right ventricle (RV) failure. A new surgical procedure placing an elastic band in the right ventricle is proposed to improve RV function measured by ejection fraction. A multiphysics modeling approach is developed to combine cardiac magnetic resonance imaging, modeling, tissue engineering and mechanical testing to demonstrate feasibility of the new surgical procedure. Our modeling results indicated that the new surgical procedure has the potential to improve right ventricle ejection fraction by 2–7% which compared favorably with recently published drug trials to treat LV heart failure. PMID:23667272
NASA Astrophysics Data System (ADS)
Shao, Wei; Bogaard, Thom; Bakker, Mark; Berti, Matteo
2014-05-01
The accuracy of using hydrological-slope stability models for rainfall-induced landslide forecasting relies on the identification of realistic landslide triggering mechanisms and the correct mathematical description of these mechanisms. The subsurface hydrological processes in a highly heterogeneous slope are controlled by complex geological conditions. Preferential flow through macropores, fractures and other local high-permeability zones can change the infiltration pattern, resulting in more rapid and deeper water movement. Preferential flow has significant impact on pore water pressure distribution and consequently on slope stability. Increasingly sophisticated theories and models have been developed to simulate preferential flow in various environmental systems. It is necessary to integrate methods of slope stability analysis with preferential flow models, such as dual-permeability models, to investigate the hydrological and soil mechanical response to precipitation in landslide areas. In this study, a systematic modeling approach is developed by using COMSOL Multiphysics to couple a single-permeability model and a dual-permeability model with a soil mechanical model for slope stability analysis. The dual-permeability model is composed of two Richards equations to describe coupled matrix and preferential flow, which can be used to quantify the influence of preferential flow on distribution and timing of pressure head in a slope. The hydrological models are coupled with a plane-strain elastic soil mechanics model and a local factor of safety method. The factor of safety is evaluated by applying the Mohr-Coulomb failure criterion on the effective stress field. The method is applied to the Rocca Pitigliana landslide located roughly 50 km south of Bologna. The landslide material consists of weathered clay with a thickness of 2-4 m overlying clay-shale bedrock. Three years of field data of pore pressure measurements provide a reliable description of the dynamic
NASA Astrophysics Data System (ADS)
Jerez, Sonia; Montavez, Juan P.; Gomez-Navarro, Juan J.; Jimenez-Guerrero, Pedro; Lorente, Raquel; Garcia-Valero, Juan A.; Jimenez, Pedro A.; Gonzalez-Rouco, Jose F.; Zorita, Eduardo
2010-05-01
Regional climate change projections are affected by several sources of uncertainty. Some of them come from Global Circulation Models and scenarios; others come from the downscaling process. In the case of dynamical downscaling, mainly using Regional Climate Models (RCMs), the sources of uncertainty may involve nesting strategies related to the domain position and resolution, soil characterization, internal variability, methods of solving the equations, and the configuration of model physics. Therefore, a probabilistic approach seems recommendable when projecting regional climate change. This problem is usually faced by performing an ensemble of simulations. The aim of this study is to evaluate the range of uncertainty in regional climate projections associated with changing the physical configuration in an RCM (MM5), as well as its capability in reproducing the observed climate. This study is performed over the Iberian Peninsula and focuses on the reproduction of the Probability Density Functions (PDFs) of daily mean temperature. The experiments consist of a multi-physics ensemble of high-resolution climate simulations (30 km over the target region) for the periods 1970-1999 (present) and 2070-2099 (future). Two sets of simulations for the present have been performed using ERA40 (MM5-ERA40) and ECHAM5-3CM run1 (MM5-E5-PR) as boundary conditions. The future experiments are driven by ECHAM5-A2-run1 (MM5-E5-A2). The ensemble has a total of eight members, resulting from combining the schemes for PBL (MRF and ETA), cumulus (Grell and Kain-Fritsch) and microphysics (Simple Ice and Mixed Phase). In a previous work this multi-physics ensemble was analyzed focusing on the seasonal mean values of both temperature and precipitation. The main results indicate that the physics configurations that better reproduce the observed climate project the most dramatic changes for the future (i.e., the largest temperature increase and precipitation decrease). Among the
Mill profiler machines soft materials accurately
NASA Technical Reports Server (NTRS)
Rauschl, J. A.
1966-01-01
Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.
Salko, Robert K; Schmidt, Rodney; Avramova, Maria N
2014-01-01
This paper describes major improvements to the computational infrastructure of the CTF sub-channel code so that full-core sub-channel-resolved simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department Of Energy (DOE) Consortium for Advanced Simulations of Light Water (CASL) Energy Innovation Hub to develop high fidelity multi-physics simulation tools for nuclear energy design and analysis. A set of serial code optimizations--including fixing computational inefficiencies, optimizing the numerical approach, and making smarter data storage choices--are first described and shown to reduce both execution time and memory usage by about a factor of ten. Next, a Single Program Multiple Data (SPMD) parallelization strategy targeting distributed memory Multiple Instruction Multiple Data (MIMD) platforms and utilizing domain-decomposition is presented. In this approach, data communication between processors is accomplished by inserting standard MPI calls at strategic points in the code. The domain decomposition approach implemented assigns one MPI process to each fuel assembly, with each domain being represented by its own CTF input file. The creation of CTF input files, both for serial and parallel runs, is also fully automated through use of a pre-processor utility that takes a greatly reduced set of user input over the traditional CTF input file. To run CTF in parallel, two additional libraries are currently needed; MPI, for inter-processor message passing, and the Parallel Extensible Toolkit for Scientific Computation (PETSc), which is leveraged to solve the global pressure matrix in parallel. Results presented include a set of testing and verification calculations and performance tests assessing parallel scaling characteristics up to a full core, sub-channel-resolved model of Watts Bar Unit 1 under hot full-power conditions (193 17x17
NASA Astrophysics Data System (ADS)
Hébert, Alain
2014-06-01
We present the computer science techniques involved in the integration of the codes DRAGON5 and DONJON5 in the SALOME platform. This integration brings new capabilities in designing multi-physics computational schemes, with the possibility to couple our reactor physics codes with thermal-hydraulics or thermo-mechanics codes from other organizations. A demonstration is presented in which two code components are coupled using the YACS module of SALOME, based on the CORBA protocol. The first component is a full-core 3D steady-state neutronic calculation in a PWR performed using DONJON5. The second component implements a set of 1D thermal-hydraulics calculations, each performed over a single assembly.
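The two-component coupling described above follows a common fixed-point pattern: the full-core neutronics solve supplies assembly powers, the per-assembly 1D thermal-hydraulics calculations return fuel temperatures, and the exchange repeats until convergence. A toy sketch of that pattern with an illustrative feedback model and made-up coefficients (not DONJON5/SALOME code):

```python
def neutronics(temps, alpha=-1e-4, p0=1.0, t_ref=300.0):
    # Toy full-core power model: assembly power falls with fuel temperature.
    return [p0 * (1.0 + alpha * (t - t_ref)) for t in temps]

def thermal_1d(power, t_in=300.0, gain=50.0):
    # Toy single-assembly 1D thermal calculation: temperature rises with power.
    return t_in + gain * power

def couple(n_assemblies=4, tol=1e-10, max_iter=100):
    temps = [300.0] * n_assemblies
    for _ in range(max_iter):
        powers = neutronics(temps)                   # component 1 (full core)
        new_temps = [thermal_1d(p) for p in powers]  # component 2 (per assembly)
        if max(abs(a - b) for a, b in zip(new_temps, temps)) < tol:
            return powers, new_temps
        temps = new_temps
    return powers, temps

powers, temps = couple()
```

With these coefficients the iteration contracts by a factor of 0.005 per pass, so the power/temperature fixed point is reached in a handful of exchanges.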
NASA Astrophysics Data System (ADS)
Jin, Xinfang; Zhao, Xuan; Huang, Kevin
2015-04-01
A high-fidelity two-dimensional axial symmetrical multi-physics model is described in this paper as an effort to simulate the cycle performance of a recently discovered solid oxide metal-air redox battery (SOMARB). The model collectively considers mass transport, charge transfer and chemical redox cycle kinetics occurring across the components of the battery, and is validated by experimental data obtained from independent research. In particular, the redox kinetics at the energy storage unit is well represented by Johnson-Mehl-Avrami-Kolmogorov (JMAK) and Shrinking Core models. The results explicitly show that the reduction of Fe3O4 during the charging cycle limits the overall performance. Distributions of electrode potential, overpotential, Nernst potential, and H2/H2O-concentration across various components of the battery are also systematically investigated.
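For reference, the Johnson-Mehl-Avrami-Kolmogorov (JMAK) rate law mentioned above takes a simple closed form; the rate constant k and exponent n below are placeholders, not fitted SOMARB parameters:

```python
import math

def jmak_fraction(t, k, n):
    """JMAK transformed fraction X(t) = 1 - exp(-k * t^n)."""
    return 1.0 - math.exp(-k * t ** n)

# The converted fraction grows from 0 toward 1 as the redox reaction proceeds.
X = [jmak_fraction(t, k=0.05, n=2.0) for t in (0.0, 5.0, 10.0, 20.0)]
```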
Uchibori, Akihiro; Kurihara, Akikazu; Ohshima, Hiroyuki
2015-12-31
A multiphysics analysis system for sodium-water reaction phenomena in a steam generator of sodium-cooled fast reactors was newly developed. The analysis system consists of the mechanistic numerical analysis codes SERAPHIM, TACT, and RELAP5. The SERAPHIM code calculates the multicomponent multiphase flow and sodium-water chemical reaction caused by discharge of pressurized water vapor. Applicability of the SERAPHIM code was confirmed through analyses of an experiment on water vapor discharge into liquid sodium. The TACT code was developed to calculate heat transfer from the reacting jet to the adjacent tube and to predict tube failure occurrence. The numerical models integrated into the TACT code were verified through several related experiments. The RELAP5 code evaluates the thermal-hydraulic behavior of water inside the tube. The original heat transfer correlations were corrected for a tube rapidly heated by the reacting jet. The developed system enables evaluation of the wastage environment and the possibility of failure propagation.
NASA Astrophysics Data System (ADS)
Alsharif, Sarah; Farhan, Hanaa; Al-Jawhari, Hala
2017-01-01
A 3D model of a p-type Cu2O thin-film transistor (TFT) was simulated for the first time using COMSOL Multiphysics. The main objective of this modeling is to investigate the effect of patterning either the channel or the gate on the performance of Cu2O TFTs. Considering the ideal case, where traps and leakage current are not incorporated, we compared the performance of three different designs: unpatterned, patterned-channel, and patterned-channel-and-gate TFTs. In each case, the transfer curve, output characteristics, current flow, and potential distribution were clarified. The comparison between the main parameters showed that the unpatterned model overestimated the field-effect mobility µFE by 37.4% over the fully patterned TFT; nevertheless, the latter exhibited the highest on/off current ratio and the lowest off-current. A simulation of experimental output characteristics reported for a Cu2O TFT was performed to check the model's viability.
Accurate pointing of tungsten welding electrodes
NASA Technical Reports Server (NTRS)
Ziegelmeier, P.
1971-01-01
Thoriated-tungsten electrodes are pointed accurately and quickly using sodium nitrite. The point produced is smooth, and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces the time and cost of preparing tungsten electrodes.
ERIC Educational Resources Information Center
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-01-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…
Accurate Guitar Tuning by Cochlear Implant Musicians
Lu, Thomas; Huang, Juan; Zeng, Fan-Gang
2014-01-01
Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
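The beat-listening strategy reported above reduces to a temporal cue: two simultaneous tones at f1 and f2 produce amplitude beats at |f1 - f2| Hz, so tuning means driving the beat rate toward zero. A minimal illustration (not the study's stimulus code):

```python
def beat_frequency(f1, f2):
    """Beat rate heard when tones f1 and f2 (Hz) sound together."""
    return abs(f1 - f2)

# A string tuned stepwise toward a 330 Hz reference: the beats slow from
# 2 Hz to well under the 0.5 Hz mismatch the CI musicians could match.
steps = [332.0, 331.0, 330.4, 330.05]
beats = [beat_frequency(f, 330.0) for f in steps]
```

Counting beats is a temporal discrimination, which is why it survives the poor spectral (pitch) resolution of the implant.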
New model accurately predicts reformate composition
Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.
1994-01-31
Although naphtha reforming is a well-known process, the evolution of catalyst formulations, as well as new trends in gasoline specifications, has led to rapid evolution of the process, including reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and the revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.
Accurate colorimetric feedback for RGB LED clusters
NASA Astrophysics Data System (ADS)
Man, Kwong; Ashdown, Ian
2006-08-01
We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
An Accurate, Simplified Model of Intrabeam Scattering
Bane, Karl LF
2002-05-23
Beginning with the general Bjorken-Mtingwa solution for intrabeam scattering (IBS), we derive an accurate, greatly simplified model of IBS, valid for high-energy beams in normal storage ring lattices. In addition, we show that, under the same conditions, a modified version of Piwinski's IBS formulation (where η_{x,y}²/β_{x,y} has been replaced by ℋ_{x,y}) asymptotically approaches the result of Bjorken-Mtingwa.
An accurate registration technique for distorted images
NASA Technical Reports Server (NTRS)
Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis
1990-01-01
Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.
On accurate determination of contact angle
NASA Technical Reports Server (NTRS)
Concus, P.; Finn, R.
1992-01-01
Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.
High Frequency QRS ECG Accurately Detects Cardiomyopathy
NASA Technical Reports Server (NTRS)
Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds
2005-01-01
High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at the optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope-steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification of this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
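The median-function formulation of the monotonicity constraint mentioned above can be written with the standard minmod limiter; this is a generic MUSCL-style building block, not the paper's exact scheme:

```python
def minmod(a, b):
    """Return the smaller-magnitude argument if signs agree, else 0."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def median(a, b, c):
    """Median of three numbers, written as a + minmod(b - a, c - a)."""
    return a + minmod(b - a, c - a)

# Limiting a reconstructed slope against its neighboring bounds this way
# keeps second-order accuracy in smooth regions while preserving
# monotonicity near extrema.
limited = median(0.0, 1.5, 2.0)
```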
Accurate measurement of unsteady state fluid temperature
NASA Astrophysics Data System (ADS)
Jaremkiewicz, Magdalena
2017-03-01
In this paper, two accurate methods for determining transient fluid temperature are presented. Measurements were conducted in boiling water, since its temperature is known. Initially the thermometers are at ambient temperature; they are then suddenly immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer that is widely available commercially. The temperature indicated by the thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with a sheathed thermocouple located at its center. The temperature of the fluid was determined from measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results than measurements using industrial thermometers in conjunction with a simple temperature correction based on a first- or second-order inertia model of the thermometer. By comparing the results, it was demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of rapidly changing fluid temperature is possible thanks to the low-inertia thermometer and the fast space marching method applied to solve the inverse heat conduction problem.
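The first-order inertia correction referred to above follows from the sensor model τ dT/dt + T = T_fluid, so the fluid temperature can be recovered as T + τ dT/dt. A synthetic sketch with illustrative values (the time constant, sampling step, and 20 to 100 deg C step are made up, not the paper's data):

```python
import math

tau = 3.0       # assumed thermometer time constant, s
dt = 0.01       # assumed sampling step, s
t_true = 100.0  # boiling-water temperature, deg C

# Indicated temperature of a first-order sensor after a sudden immersion
# (step change from 20 deg C ambient to saturated water):
T = [t_true + (20.0 - t_true) * math.exp(-k * dt / tau) for k in range(1001)]

# Recover the fluid temperature from the lagging indication using a
# central-difference estimate of dT/dt:
recovered = [T[k] + tau * (T[k + 1] - T[k - 1]) / (2.0 * dt)
             for k in range(1, 1000)]
```

Even while the raw indication is still far below 100 deg C, the corrected signal sits at the true fluid temperature almost immediately, which is the point of the inertia correction.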
The first accurate description of an aurora
NASA Astrophysics Data System (ADS)
Schröder, Wilfried
2006-12-01
As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers interesting insight into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.
Determining accurate distances to nearby galaxies
NASA Astrophysics Data System (ADS)
Bonanos, Alceste Zoe
2005-11-01
Determining accurate distances to nearby or distant galaxies is a conceptually simple, yet in practice complicated, task. Presently, distances to nearby galaxies are only known to an accuracy of 10-15%. The current anchor galaxy of the extragalactic distance scale is the Large Magellanic Cloud, which has large (10-15%) systematic uncertainties associated with it, because of its morphology, its non-uniform reddening, and the unknown metallicity dependence of the Cepheid period-luminosity relation. This work aims to determine accurate distances to some nearby galaxies, and subsequently help reduce the error in the extragalactic distance scale and the Hubble constant H0. In particular, this work presents the first distance determination of the DIRECT Project to M33 with detached eclipsing binaries. DIRECT aims to obtain a new anchor galaxy for the extragalactic distance scale by measuring direct, accurate (to 5%) distances to two Local Group galaxies, M31 and M33, with detached eclipsing binaries. It involves a massive variability survey of these galaxies and subsequent photometric and spectroscopic follow-up of the detached binaries discovered. In this work, I also present a catalog of variable stars discovered in one of the DIRECT fields, M31Y, which includes 41 eclipsing binaries. Additionally, we derive the distance to the Draco Dwarf Spheroidal galaxy, with ~100 RR Lyrae stars found in our first CCD variability study of this galaxy. A "hybrid" method of discovering Cepheids with ground-based telescopes is described next. It involves applying the image subtraction technique to images obtained from ground-based telescopes and then following them up with the Hubble Space Telescope to derive Cepheid period-luminosity distances. By re-analyzing ESO Very Large Telescope data on M83 (NGC 5236), we demonstrate that this method is much more powerful for detecting variability, especially in crowded fields. I finally present photometry for the Wolf-Rayet binary WR 20a
New law requires 'medically accurate' lesson plans.
1999-09-17
The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.
Accurate taxonomic assignment of short pyrosequencing reads.
Clemente, José C; Jansson, Jesper; Valiente, Gabriel
2010-01-01
Ambiguities in the taxonomy dependent assignment of pyrosequencing reads are usually resolved by mapping each read to the lowest common ancestor in a reference taxonomy of all those sequences that match the read. This conservative approach has the drawback of mapping a read to a possibly large clade that may also contain many sequences not matching the read. A more accurate taxonomic assignment of short reads can be made by mapping each read to the node in the reference taxonomy that provides the best precision and recall. We show that given a suffix array for the sequences in the reference taxonomy, a short read can be mapped to the node of the reference taxonomy with the best combined value of precision and recall in time linear in the size of the taxonomy subtree rooted at the lowest common ancestor of the matching sequences. An accurate taxonomic assignment of short reads can thus be made with about the same efficiency as when mapping each read to the lowest common ancestor of all matching sequences in a reference taxonomy. We demonstrate the effectiveness of our approach on several metagenomic datasets of marine and gut microbiota.
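A toy version of the precision/recall mapping described above (the authors' suffix-array machinery is omitted): score each taxonomy node by the F-measure of its leaf set against the read's matching sequences and return the best node instead of the lowest common ancestor.

```python
# Hypothetical miniature taxonomy: node -> set of leaf taxa below it.
taxonomy = {
    "root": {"A", "B", "C", "D"},
    "cladeAB": {"A", "B"},
    "cladeCD": {"C", "D"},
    "A": {"A"}, "B": {"B"}, "C": {"C"}, "D": {"D"},
}

def best_node(matches):
    """Node with the best combined precision/recall (F-measure) for a read
    whose sequence matches the given set of leaf taxa."""
    def f_measure(node):
        leaves = taxonomy[node]
        tp = len(leaves & matches)
        if tp == 0:
            return 0.0
        precision = tp / len(leaves)   # matching leaves / leaves under node
        recall = tp / len(matches)     # matching leaves / all matches
        return 2 * precision * recall / (precision + recall)
    return max(taxonomy, key=f_measure)
```

A read matching A and B maps to cladeAB (its LCA here), while a read matching only A maps to leaf A rather than to a larger clade that would dilute precision.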
Accurate shear measurement with faint sources
Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn
2015-01-01
For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.
Accurate pose estimation for forensic identification
NASA Astrophysics Data System (ADS)
Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk
2010-04-01
In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far-field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.
Sparse and accurate high resolution SAR imaging
NASA Astrophysics Data System (ADS)
Vu, Duc; Zhao, Kexin; Rowe, William; Li, Jian
2012-05-01
We investigate the use of an adaptive method, the Iterative Adaptive Approach (IAA), in combination with a maximum a posteriori (MAP) estimate to reconstruct high-resolution SAR images that are both sparse and accurate. IAA is a nonparametric weighted least squares algorithm that is robust and user parameter-free. IAA has been shown to reconstruct SAR images with excellent sidelobe suppression and high resolution enhancement. We first reconstruct the SAR images using IAA, and then we enforce sparsity by using MAP with a sparsity-inducing prior. By coupling these two methods, we can produce sparse and accurate high-resolution images that are conducive to feature extraction and target classification applications. In addition, we show how IAA can be made computationally efficient without sacrificing accuracy, a desirable property for SAR applications where the size of the problems is quite large. We demonstrate the success of our approach using the Air Force Research Lab's "Gotcha Volumetric SAR Data Set Version 1.0" challenge dataset. Via the widely used FFT, individual vehicles contained in the scene are barely recognizable due to the poor resolution and high sidelobe nature of the FFT. However, with our approach clear edges, boundaries, and textures of the vehicles are obtained.
Accurate basis set truncation for wavefunction embedding
NASA Astrophysics Data System (ADS)
Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.
2013-07-01
Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.
Anderson, Kyle R.; Poland, Michael
2016-01-01
Estimating rates of magma supply to the world's volcanoes remains one of the most fundamental aims of volcanology. Yet, supply rates can be difficult to estimate even at well-monitored volcanoes, in part because observations are noisy and are usually considered independently rather than as part of a holistic system. In this work we demonstrate a technique for probabilistically estimating time-variable rates of magma supply to a volcano through probabilistic constraint on storage and eruption rates. This approach utilizes Bayesian joint inversion of diverse datasets using predictions from a multiphysical volcano model, and independent prior information derived from previous geophysical, geochemical, and geological studies. The solution to the inverse problem takes the form of a probability density function which takes into account uncertainties in observations and prior information, and which we sample using a Markov chain Monte Carlo algorithm. Applying the technique to Kīlauea Volcano, we develop a model which relates magma flow rates with deformation of the volcano's surface, sulfur dioxide emission rates, lava flow field volumes, and composition of the volcano's basaltic magma. This model accounts for effects and processes mostly neglected in previous supply rate estimates at Kīlauea, including magma compressibility, loss of sulfur to the hydrothermal system, and potential magma storage in the volcano's deep rift zones. We jointly invert data and prior information to estimate rates of supply, storage, and eruption during three recent quasi-steady-state periods at the volcano. Results shed new light on the time-variability of magma supply to Kīlauea, which we find to have increased by 35–100% between 2001 and 2006 (from 0.11–0.17 to 0.18–0.28 km3/yr), before subsequently decreasing to 0.08–0.12 km3/yr by 2012. Changes in supply rate directly impact hazard at the volcano, and were largely responsible for an increase in eruption rate of 60–150% between
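The Bayesian machinery described above can be illustrated with a minimal Metropolis (Markov chain Monte Carlo) sampler: here it infers a single constant supply rate from noisy synthetic observations under a broad uniform prior. All numbers are illustrative, not Kīlauea values, and the real model couples many data types rather than one.

```python
import math
import random

random.seed(0)
true_rate = 0.15                     # synthetic "true" supply rate, km^3/yr
obs = [random.gauss(true_rate, 0.02) for _ in range(50)]

def log_posterior(rate):
    if not 0.0 < rate < 1.0:         # broad uniform prior on (0, 1)
        return float("-inf")
    # Gaussian likelihood of the observations given the rate:
    return -sum((o - rate) ** 2 for o in obs) / (2.0 * 0.02 ** 2)

samples, rate = [], 0.5              # deliberately poor starting guess
for _ in range(20000):
    prop = rate + random.gauss(0.0, 0.01)            # random-walk proposal
    diff = log_posterior(prop) - log_posterior(rate)
    if diff >= 0.0 or random.random() < math.exp(diff):
        rate = prop                                  # Metropolis accept
    samples.append(rate)

# Discard burn-in, then summarize the probability density function.
posterior_mean = sum(samples[5000:]) / len(samples[5000:])
```

The retained samples approximate the posterior probability density function, so credible intervals on the supply rate come directly from their spread.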
Apparatus for accurately measuring high temperatures
Smith, Douglas D.
1985-01-01
The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high-pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of airborne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.
LSM: perceptually accurate line segment merging
NASA Astrophysics Data System (ADS)
Hamid, Naila; Khan, Nazar
2016-11-01
Existing line segment detectors tend to break up perceptually distinct line segments into multiple segments. We propose an algorithm for merging such broken segments to recover the original perceptually accurate line segments. The algorithm proceeds by grouping line segments on the basis of angular and spatial proximity. Then those line segment pairs within each group that satisfy unique, adaptive mergeability criteria are successively merged to form a single line segment. This process is repeated until no more line segments can be merged. We also propose a method for quantitative comparison of line segment detection algorithms. Results on the York Urban dataset show that our merged line segments are closer to human-marked ground-truth line segments compared to state-of-the-art line segment detection algorithms.
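The grouping-and-merging criterion (angular plus spatial proximity) can be sketched as follows. The fixed 5-degree and 10-pixel thresholds are illustrative placeholders for the paper's unique, adaptive mergeability criteria:

```python
import math

def angle(seg):
    """Undirected orientation of a segment, folded into [0, pi)."""
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1) % math.pi

def endpoint_gap(a, b):
    """Smallest distance between any endpoint of a and any endpoint of b."""
    return min(math.dist(p, q) for p in a for q in b)

def mergeable(a, b, max_angle=math.radians(5), max_gap=10.0):
    """True if the two segments are nearly collinear and nearly touching."""
    d = abs(angle(a) - angle(b))
    d = min(d, math.pi - d)  # wrap-around for undirected angles
    return d < max_angle and endpoint_gap(a, b) < max_gap

def merge(a, b):
    """Replace two mergeable segments by the one spanning their
    two most distant endpoints."""
    pts = list(a) + list(b)
    return max(((p, q) for p in pts for q in pts),
               key=lambda s: math.dist(*s))
```

Repeating `mergeable`/`merge` over grouped segments until no pair qualifies gives the fixed-point behavior the abstract describes.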
Highly accurate articulated coordinate measuring machine
Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.
2003-12-30
Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.
Practical aspects of spatially high accurate methods
NASA Technical Reports Server (NTRS)
Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.
1992-01-01
The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.
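For readers unfamiliar with limited reconstruction, a minimal second-order MUSCL-style sketch with a minmod limiter (far simpler than the ENO and k-exact operators compared in the paper, but of the same family as the "standard MUSCL differencing" used for comparison) looks like:

```python
def minmod(a, b):
    """Minmod slope limiter: take the smaller-magnitude slope when the
    one-sided slopes agree in sign, zero at extrema."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_interface_states(u):
    """Second-order MUSCL reconstruction of left/right states at each
    cell interface from cell averages u, using limited slopes."""
    n = len(u)
    slopes = [0.0] * n  # boundary cells keep zero slope (first order)
    for i in range(1, n - 1):
        slopes[i] = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
    # interface i+1/2: left state from cell i, right state from cell i+1
    left = [u[i] + 0.5 * slopes[i] for i in range(n - 1)]
    right = [u[i + 1] - 0.5 * slopes[i + 1] for i in range(n - 1)]
    return left, right
```

On smooth data the reconstruction is genuinely second order, while at a jump the limiter zeroes the slopes and prevents the overshoot that an unlimited reconstruction would produce.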
Toward Accurate and Quantitative Comparative Metagenomics
Nayfach, Stephen; Pollard, Katherine S.
2016-01-01
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
Obtaining accurate translations from expressed sequence tags.
Wasmuth, James; Blaxter, Mark
2009-01-01
The genomes of an increasing number of species are being investigated through the generation of expressed sequence tags (ESTs). However, ESTs are prone to sequencing errors and typically define incomplete transcripts, making downstream annotation difficult. Annotation would be greatly improved with robust polypeptide translations. Many current solutions for EST translation require a large number of full-length gene sequences for training purposes, a resource that is not available for the majority of EST projects. As part of our ongoing EST programs investigating these "neglected" genomes, we have developed a polypeptide prediction pipeline, prot4EST. It incorporates freely available software to produce final translations that are more accurate than those derived from any single method. We describe how this integrated approach goes a long way to overcoming the deficit in training data.
Micron Accurate Absolute Ranging System: Range Extension
NASA Technical Reports Server (NTRS)
Smalley, Larry L.; Smith, Kely L.
1999-01-01
The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or greater accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomenon of Fresnel diffraction to micron-accurate measurement. This report discusses past research on the phenomenon and the basis for using Fresnel diffraction in distance metrology. The apparatus used in the recent investigations, the experimental procedures, and preliminary results are discussed in detail. Continued research and the equipment required to extend the effective range of Fresnel diffraction systems are also described.
Accurate radio positions with the Tidbinbilla interferometer
NASA Technical Reports Server (NTRS)
Batty, M. J.; Gulkis, S.; Jauncey, D. L.; Rayner, P. T.
1979-01-01
The Tidbinbilla interferometer (Batty et al., 1977) is designed specifically to provide accurate radio position measurements of compact radio sources in the Southern Hemisphere with high sensitivity. The interferometer uses the 26-m and 64-m antennas of the Deep Space Network at Tidbinbilla, near Canberra. The two antennas are separated by 200 m on a north-south baseline. By utilizing the existing antennas and the low-noise traveling-wave masers at 2.29 GHz, it has been possible to produce a high-sensitivity instrument with a minimum of capital expenditure. The north-south baseline ensures that a good range of UV coverage is obtained, so that sources lying in the declination range between about -80 and +30 deg may be observed with nearly orthogonal projected baselines of no less than about 1000 lambda. The instrument also provides high-accuracy flux density measurements for compact radio sources.
Magnetic ranging tool accurately guides replacement well
Lane, J.B.; Wesson, J.P.
1992-12-21
This paper reports on magnetic ranging surveys and directional drilling technology which accurately guided a replacement well bore to intersect a leaking gas storage well with casing damage. The second well bore was then used to pump cement into the original leaking casing shoe. The repair well bore kicked off from the surface hole, bypassed casing damage in the middle of the well, and intersected the damaged well near the casing shoe. The repair well was subsequently completed in the gas storage zone near the original well bore, salvaging the valuable bottom hole location in the reservoir. This method would prevent the loss of storage gas, and it would prevent a potential underground blowout that could permanently damage the integrity of the storage field.
The high cost of accurate knowledge.
Sutcliffe, Kathleen M; Weber, Klaus
2003-05-01
Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.
NASA Astrophysics Data System (ADS)
Afzal, Bushra; Noor Afzal Team; Bushra Afzal Team
2014-11-01
The momentum and thermal turbulent boundary layers over a continuously moving sheet subjected to a free stream have been analyzed with a two-layer (inner wall and outer wake) theory at large Reynolds number. The present work is based on the open Reynolds equations of momentum and heat transfer, without any closure model such as eddy viscosity or mixing length. The matching of the inner and outer layers has been carried out by the Izakson-Millikan-Kolmogorov hypothesis. Matching the velocity and temperature profiles yields logarithmic laws and power laws in the overlap region of the inner and outer layers, along with friction factor and heat transfer laws. Uniformly valid solutions for velocity, Reynolds shear stress, temperature, and thermal Reynolds heat flux are proposed by introducing outer wake functions for the momentum and thermal boundary layers. Comparisons with experimental data for velocity profile, temperature profile, skin friction, and heat transfer are presented. In the outer nonlinear layers, the lowest-order momentum and thermal boundary layer equations have also been analyzed using an eddy-viscosity closure model, and the results are compared with experimental data.
NASA Astrophysics Data System (ADS)
Golubev, Vladimir S.; Banishev, Alexander F.; Azharonok, V. V.; Zabelin, Alexandre M.
1994-09-01
A qualitative analysis of the role of hydrodynamic flows and instabilities in deep-penetration laser beam-metal interaction is presented. Vapor pressure, melt surface tension, and thermocapillary forces can determine a number of oscillatory and nonstationary phenomena in the keyhole and weld pool. The dynamics of keyhole formation in metal plates has been studied under pulsed laser irradiation (λ = 1.06 μm). Velocities of the keyhole bottom motion have been determined at laser power densities of 0.5×10^5 to 10^6 W/cm². An oscillatory regime of plate breakdown has been found. Small-scale structures with a period d of order λ were found on the frozen cavity walls, which, in our opinion, can contribute significantly to laser beam absorption. A new form of periodic structure on the frozen pattern, a helix-shaped modulation of the keyhole walls and bottom relief, has been revealed. Temperature oscillations related to capillary oscillations in the melt layer were discovered in the cavity. The interaction of a CW CO2 laser beam with matter during beam penetration into a moving metal sample has also been studied. The pulsed and thermodynamic parameters of the surface plasma were investigated by optical and spectroscopic methods. The frequencies of plasma jet pulsations (in the 10 to 10^5 Hz range) are related to possible melt surface instabilities of the keyhole.
NASA Astrophysics Data System (ADS)
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially when it is delayed: travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing capacity, increasing oscillations, and driving the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between two routes is less than BR, the routes have equal probability of being chosen. Bounded rationality helps improve efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
NASA Astrophysics Data System (ADS)
Amin, Abdullah Al; Baig, Tanvir; Deissler, Robert J.; Yao, Zhen; Tomsic, Michael; Doll, David; Akkus, Ozan; Martens, Michael
2016-05-01
Conduction-cooled magnets based on high-temperature superconductors such as MgB2 eliminate the use of liquid helium. With recent advances in the strain sustainability of MgB2, a full-body 1.5 T conduction-cooled magnetic resonance imaging (MRI) magnet shows promise. In this article, a 36-filament MgB2 superconducting wire is considered for a 1.5 T full-body MRI system and is analyzed in terms of strain development. To facilitate the analysis, the composite wire is homogenized, and the orthotropic wire material properties are employed to solve for strain development using a 2D-axisymmetric finite element analysis (FEA) model of the entire MRI magnet. The multiscale, multiphysics analysis spans from the wire to the magnet bundles, addressing winding, cooling, and electromagnetic excitation. The FEA solution is verified against established analytical equations, and acceptable agreement is reported. The results show a maximum mechanical strain of 0.06%, within the failure criteria of -0.6% to 0.4% (-0.3% to 0.2% for design) for the 36-filament MgB2 wire. The study therefore indicates safe operation of the conduction-cooled MgB2-based MRI magnet as far as strain development is concerned.
NASA Technical Reports Server (NTRS)
Graves, R. A., Jr.
1975-01-01
The previously developed second-order-accurate partial-implicitization numerical technique for fluid dynamic problems was modified, with little complication, to achieve fourth-order accuracy. A Von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from Taylor series expansions of the linearized difference equations and was verified by numerical solutions of Burgers' equation. For comparison, results were also obtained for Burgers' equation using the second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.
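Burgers' equation is a standard testbed for such schemes because it combines nonlinear advection with diffusion. As a toy illustration, here is a simple first-order explicit upwind step for the viscous equation u_t + u u_x = ν u_xx; this is deliberately much cruder than the paper's fourth-order partial-implicitization method:

```python
def burgers_step(u, dt, dx, nu):
    """One explicit time step of viscous Burgers' equation on a 1-D grid:
    first-order upwind advection plus central-difference diffusion.
    Boundary values are held fixed."""
    n = len(u)
    new = u[:]
    for i in range(1, n - 1):
        if u[i] > 0:  # upwind direction follows the sign of u
            adv = u[i] * (u[i] - u[i - 1]) / dx
        else:
            adv = u[i] * (u[i + 1] - u[i]) / dx
        diff = nu * (u[i + 1] - 2 * u[i] + u[i - 1]) / dx ** 2
        new[i] = u[i] + dt * (diff - adv)
    return new
```

A uniform state is an exact steady solution (both terms vanish), while an isolated bump diffuses and advects, which makes simple sanity checks easy.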
Does a pneumotach accurately characterize voice function?
NASA Astrophysics Data System (ADS)
Walters, Gage; Krane, Michael
2016-11-01
A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask, worn over the mouth, in order to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Variations of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were performed, with and without the pneumotach in place, and differences noted. Work supported by NIH Grant 2R01DC005642-10A1.
Accurate thermoplasmonic simulation of metallic nanoparticles
NASA Astrophysics Data System (ADS)
Yu, Da-Miao; Liu, Yan-Nan; Tian, Fa-Lin; Pan, Xiao-Min; Sheng, Xin-Qing
2017-01-01
Thermoplasmonics leads to enhanced heat generation due to localized surface plasmon resonances. Measuring heat generation is fundamentally a complicated task, which necessitates the development of theoretical simulation techniques. In this paper, an efficient and accurate numerical scheme is proposed for applications with complex metallic nanostructures. Light absorption and temperature increase are, respectively, obtained by solving the volume integral equation (VIE) and the steady-state heat diffusion equation through the method of moments (MoM). Previously, methods based on surface integral equations (SIEs) were utilized to obtain light absorption. However, computing light absorption from the equivalent current is as expensive as O(NsNv), where Ns and Nv, respectively, denote the number of surface and volumetric unknowns. Our approach reduces the cost to O(Nv) by using the VIE. The accuracy, efficiency, and capability of the proposed scheme are validated by multiple simulations, which show that the proposed method is more efficient than the SIE-based approach at comparable accuracy, especially when many incident excitations are of interest. The simulations also indicate that the temperature profile can be tuned by several factors, such as the geometric configuration of the array, the beam direction, and the light wavelength.
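The steady-state heat diffusion step can be illustrated in miniature. This 1-D Jacobi iteration with a uniform heat source and fixed-temperature boundaries is a hypothetical stand-in for the paper's MoM solution over a 3-D nanostructure:

```python
def steady_temperature(q, t_boundary, h=1.0, k=1.0, iters=5000):
    """Jacobi iteration for the 1-D steady heat equation -k T'' = q
    on a uniform grid of spacing h, with both boundary nodes held at
    t_boundary. q[i] is the heat source density at node i."""
    n = len(q)
    T = [float(t_boundary)] * n
    for _ in range(iters):
        new = T[:]
        for i in range(1, n - 1):
            # discrete balance: (T[i-1] - 2 T[i] + T[i+1]) / h^2 = -q[i]/k
            new[i] = 0.5 * (T[i - 1] + T[i + 1] + q[i] * h * h / k)
        T = new
    return T
```

For a uniform source q = 2 with zero boundaries on five nodes, the converged profile is the parabola T(x) = x(4 - x) sampled at the nodes, which provides a closed-form check.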
Accurate method for computing correlated color temperature.
Li, Changjun; Cui, Guihua; Melgosa, Manuel; Ruan, Xiukai; Zhang, Yaoju; Ma, Long; Xiao, Kaida; Luo, M Ronnier
2016-06-27
For the correlated color temperature (CCT) of a light source to be estimated, a nonlinear optimization problem must be solved. In all previous methods available to compute CCT, the objective function has only been approximated, and their predictions have achieved limited accuracy; for example, different, unacceptable CCT values have been predicted for light sources located on the same isotemperature line. In this paper, we propose to compute CCT using the Newton method, which requires the first and second derivatives of the objective function. Following the current recommendation by the International Commission on Illumination (CIE) for the computation of tristimulus values (summations at 1 nm steps from 360 nm to 830 nm), the objective function and its first and second derivatives are explicitly given and used in our computations. Comprehensive tests demonstrate that the proposed method, together with an initial estimate of CCT from Robertson's method [J. Opt. Soc. Am. 58, 1528-1535 (1968)], gives predictions accurate to within 0.0012 K for light sources with CCTs ranging from 500 K to 10^6 K.
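The Newton iteration at the heart of such a method can be sketched generically. Here the derivatives are taken by central finite differences and the objective is a toy quadratic with a known minimum near a daylight-like temperature; the paper instead uses the analytic first and second derivatives of the CIE tristimulus-based objective:

```python
def newton_minimize(f, t0, h=1e-3, tol=1e-9, max_iter=50):
    """Newton's method for 1-D minimization: iterate t <- t - f'(t)/f''(t).
    Assumes f has positive curvature near the minimum. Central finite
    differences stand in for the analytic derivatives used in the paper."""
    t = t0
    for _ in range(max_iter):
        d1 = (f(t + h) - f(t - h)) / (2 * h)            # f'(t)
        d2 = (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)  # f''(t)
        step = d1 / d2
        t -= step
        if abs(step) < tol:
            break
    return t

# toy "distance to locus" objective with a known minimum at T = 6504 K
t_min = newton_minimize(lambda t: (t - 6504.0) ** 2 + 1.0, t0=5000.0)
```

For a quadratic objective the central differences are exact, so a single Newton step lands on the minimum; a good starting point (such as Robertson's estimate in the paper) plays the role of `t0`.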
Accurate Theoretical Thermochemistry for Fluoroethyl Radicals.
Ganyecz, Ádám; Kállay, Mihály; Csontos, József
2017-02-09
An accurate coupled-cluster (CC) based model chemistry was applied to calculate reliable thermochemical quantities for hydrofluorocarbon derivatives including radicals 1-fluoroethyl (CH3-CHF), 1,1-difluoroethyl (CH3-CF2), 2-fluoroethyl (CH2F-CH2), 1,2-difluoroethyl (CH2F-CHF), 2,2-difluoroethyl (CHF2-CH2), 2,2,2-trifluoroethyl (CF3-CH2), 1,2,2,2-tetrafluoroethyl (CF3-CHF), and pentafluoroethyl (CF3-CF2). The model chemistry used contains iterative triple and perturbative quadruple excitations in CC theory, as well as scalar relativistic and diagonal Born-Oppenheimer corrections. To obtain heat of formation values with better than chemical accuracy perturbative quadruple excitations and scalar relativistic corrections were inevitable. Their contributions to the heats of formation steadily increase with the number of fluorine atoms in the radical reaching 10 kJ/mol for CF3-CF2. When discrepancies were found between the experimental and our values it was always possible to resolve the issue by recalculating the experimental result with currently recommended auxiliary data. For each radical studied here this study delivers the best heat of formation as well as entropy data.
Accurate, reliable prototype earth horizon sensor head
NASA Technical Reports Server (NTRS)
Schwarz, F.; Cohen, H.
1973-01-01
The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection-logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; and finally the performance of the sensor is reported under laboratory conditions, with the sensor installed in a simulator that permits it to scan a blackbody source against backgrounds representing the earth-space interface for various equivalent planet temperatures.
Accurate methods for large molecular systems.
Gordon, Mark S; Mullin, Jonathan M; Pruitt, Spencer R; Roskop, Luke B; Slipchenko, Lyudmila V; Boatz, Jerry A
2009-07-23
Three exciting new methods that address the accurate prediction of processes and properties of large molecular systems are discussed. The systematic fragmentation method (SFM) and the fragment molecular orbital (FMO) method both decompose a large molecular system (e.g., protein, liquid, zeolite) into small subunits (fragments) in very different ways that are designed to both retain the high accuracy of the chosen quantum mechanical level of theory while greatly reducing the demands on computational time and resources. Each of these methods is inherently scalable and is therefore eminently capable of taking advantage of massively parallel computer hardware while retaining the accuracy of the corresponding electronic structure method from which it is derived. The effective fragment potential (EFP) method is a sophisticated approach for the prediction of nonbonded and intermolecular interactions. Therefore, the EFP method provides a way to further reduce the computational effort while retaining accuracy by treating the far-field interactions in place of the full electronic structure method. The performance of the methods is demonstrated using applications to several systems, including benzene dimer, small organic species, pieces of the alpha helix, water, and ionic liquids.
Accurate equilibrium structures for piperidine and cyclohexane.
Demaison, Jean; Craig, Norman C; Groner, Peter; Écija, Patricia; Cocinero, Emilio J; Lesarri, Alberto; Rudolph, Heinz Dieter
2015-03-05
Extended and improved microwave (MW) measurements are reported for the isotopologues of piperidine. New ground state (GS) rotational constants are fitted to MW transitions with quartic centrifugal distortion constants taken from ab initio calculations. Predicate values for the geometric parameters of piperidine and cyclohexane are found from a high level of ab initio theory including adjustments for basis set dependence and for correlation of the core electrons. Equilibrium rotational constants are obtained from GS rotational constants corrected for vibration-rotation interactions and electronic contributions. Equilibrium structures for piperidine and cyclohexane are fitted by the mixed estimation method. In this method, structural parameters are fitted concurrently to predicate parameters (with appropriate uncertainties) and moments of inertia (with uncertainties). The new structures are regarded as being accurate to 0.001 Å and 0.2°. Comparisons are made between bond parameters in equatorial piperidine and cyclohexane. Another interesting result of this study is that a structure determination is an effective way to check the accuracy of the ground state experimental rotational constants.
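The mixed estimation idea, fitting structural parameters concurrently to predicate values and to moments of inertia, each with its own uncertainty, amounts to a weighted least-squares problem in which prior equations are stacked alongside data equations. A minimal sketch with hypothetical weights (the weights act as inverse standard deviations):

```python
def mixed_estimation(A, y, w_data, priors, w_prior):
    """Mixed estimation sketch: find x minimizing
    sum_i (w_data[i] * (A[i].x - y[i]))^2 + sum_j (w_prior[j] * (x[j] - priors[j]))^2
    by stacking weighted data rows and weighted prior (predicate) rows
    into one least-squares system, then solving the normal equations."""
    n = len(priors)
    rows, rhs = [], []
    for a_i, y_i, w in zip(A, y, w_data):
        rows.append([w * a for a in a_i])
        rhs.append(w * y_i)
    for j, (p, w) in enumerate(zip(priors, w_prior)):
        unit = [0.0] * n
        unit[j] = w
        rows.append(unit)       # predicate equation: x_j ~ p with weight w
        rhs.append(w * p)
    # normal equations (R^T R) x = R^T b
    m = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(n)]
    # naive Gaussian elimination, adequate for tiny systems
    for i in range(n):
        piv = m[i][i]
        for j in range(i + 1, n):
            f = m[j][i] / piv
            m[j] = [mj - f * mi for mj, mi in zip(m[j], m[i])]
            b[j] -= f * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x
```

With one parameter, one datum y = 2, and a prior of 0 at equal weight, the estimate splits the difference at x = 1, showing how predicates pull the fit toward prior knowledge.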
Accurate lineshape spectroscopy and the Boltzmann constant
Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.
2015-01-01
Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs, and reveals a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085
Accurate upper body rehabilitation system using kinect.
Sinha, Sanjana; Bhowmick, Brojeshwar; Chakravarty, Kingshuk; Sinha, Aniruddha; Das, Abhijit
2016-08-01
The growing importance of Kinect as a tool for clinical assessment and rehabilitation is due to its portability, low cost, and markerless system for human motion capture. However, the accuracy of Kinect in measuring three-dimensional body joint center locations often fails to meet clinical standards when compared to marker-based motion capture systems such as Vicon. The length of the body segment connecting any two joints, measured as the distance between three-dimensional Kinect skeleton joint coordinates, has been observed to vary with time. The orientation of the line connecting adjoining Kinect skeletal coordinates has also been seen to differ from the actual orientation of the physical body segment. Hence we have proposed an optimization method that utilizes Kinect depth and RGB information to search for the joint center location that satisfies constraints on body segment length as well as orientation. An experimental study has been carried out on ten healthy participants performing upper body range-of-motion exercises. The results show a 72% reduction in body segment length variance and a 2° improvement in range of motion (ROM) angle, enabling more accurate measurements for upper limb exercises.
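One ingredient of such a correction, enforcing a fixed segment length between two joints, can be sketched as a projection of the child joint onto a sphere around its parent. This is an illustrative fragment only, not the paper's full depth/RGB constrained search:

```python
import math

def enforce_segment_length(parent, child, target_len):
    """Project the child joint onto the sphere of radius target_len
    centered at the parent, preserving the segment's direction. A minimal
    stand-in for a length-constrained joint-center correction."""
    v = [c - p for c, p in zip(child, parent)]
    norm = math.sqrt(sum(x * x for x in v))
    return [p + target_len * x / norm for p, x in zip(parent, v)]
```

For example, a child joint estimated 2 units from its parent along the z-axis, with a known segment length of 1, is pulled back to unit distance along the same direction.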
Noninvasive hemoglobin monitoring: how accurate is enough?
Rice, Mark J; Gravenstein, Nikolaus; Morey, Timothy E
2013-10-01
Evaluating the accuracy of medical devices has traditionally been a blend of statistical analyses, at times without contextualizing the clinical application. There have been a number of recent publications on the accuracy of a continuous noninvasive hemoglobin measurement device, the Masimo Radical-7 Pulse Co-oximeter, focusing on the traditional statistical metrics of bias and precision. In this review, which contains material presented at the Innovations and Applications of Monitoring Perfusion, Oxygenation, and Ventilation (IAMPOV) Symposium at Yale University in 2012, we critically investigated these metrics as applied to the new technology, exploring what is required of a noninvasive hemoglobin monitor and whether the conventional statistics adequately answer our questions about clinical accuracy. We discuss the glucose error grid, well known in the glucose monitoring literature, and describe an analogous version for hemoglobin monitoring. This hemoglobin error grid can be used to evaluate the required clinical accuracy (±g/dL) of a hemoglobin measurement device to provide more conclusive evidence on whether to transfuse an individual patient. The important decision to transfuse a patient usually requires both an accurate hemoglobin measurement and a physiologic reason to elect transfusion. It is our opinion that the published accuracy data of the Masimo Radical-7 is not good enough to make the transfusion decision.
Accurate, reproducible measurement of blood pressure.
Campbell, N R; Chockalingam, A; Fodor, J G; McKay, D W
1990-01-01
The diagnosis of mild hypertension and the treatment of hypertension require accurate measurement of blood pressure. Blood pressure readings are altered by various factors that influence the patient, the techniques used and the accuracy of the sphygmomanometer. The variability of readings can be reduced if informed patients prepare in advance by emptying their bladder and bowel, by avoiding over-the-counter vasoactive drugs the day of measurement and by avoiding exposure to cold, caffeine consumption, smoking and physical exertion within half an hour before measurement. The use of standardized techniques to measure blood pressure will help to avoid large systematic errors. Poor technique can account for differences in readings of more than 15 mm Hg and ultimately misdiagnosis. Most of the recommended procedures are simple and, when routinely incorporated into clinical practice, require little additional time. The equipment must be appropriate and in good condition. Physicians should have a suitable selection of cuff sizes readily available; the use of the correct cuff size is essential to minimize systematic errors in blood pressure measurement. Semiannual calibration of aneroid sphygmomanometers and annual inspection of mercury sphygmomanometers and blood pressure cuffs are recommended. We review the methods recommended for measuring blood pressure and discuss the factors known to produce large differences in blood pressure readings. PMID:2192791
Fast and accurate exhaled breath ammonia measurement.
Solga, Steven F; Mudalel, Matthew L; Spacek, Lisa A; Risby, Terence H
2014-06-11
This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic technique known as quartz-enhanced photoacoustic spectroscopy (QEPAS), based on a quantum-cascade laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real-time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations.
Accurate Fission Data for Nuclear Safety
NASA Astrophysics Data System (ADS)
Solders, A.; Gorelov, D.; Jokinen, A.; Kolhinen, V. S.; Lantz, M.; Mattera, A.; Penttilä, H.; Pomp, S.; Rakopoulos, V.; Rinta-Antila, S.
2014-05-01
The Accurate fission data for nuclear safety (AlFONS) project aims at high-precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high-current light-ion cyclotron at the University of Jyväskylä. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron-induced fission yields. Thanks to a series of mass-separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability, a high neutron flux (10^12 neutrons/s) at energies of 1-30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons for benchmarking and to study the energy dependence of fission yields. The scientific program is extensive and is planned to start in 2013 with a measurement of isomeric yield ratios of proton-induced fission in uranium. This will be followed by studies of independent yields of thermal and fast neutron-induced fission of various actinides.
Accurate orbit propagation with planetary close encounters
NASA Astrophysics Data System (ADS)
Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca
2015-08-01
We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging, both in terms of the dynamical stability of the formulation and the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, the formulation and the initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee the numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and of the propagator represented by a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).
Important Nearby Galaxies without Accurate Distances
NASA Astrophysics Data System (ADS)
McQuinn, Kristen
2014-10-01
The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis on which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous, conflicting distance estimates. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.
Accurate paleointensities - the multi-method approach
NASA Astrophysics Data System (ADS)
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and additional tests and criteria to assess Multispecimen results were emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units, an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
Accurate glucose detection in a small etalon
NASA Astrophysics Data System (ADS)
Martini, Joerg; Kuebler, Sebastian; Recht, Michael; Torres, Francisco; Roe, Jeffrey; Kiesel, Peter; Bruce, Richard
2010-02-01
We are developing a continuous glucose monitor for subcutaneous long-term implantation. This detector contains a double-chamber Fabry-Perot etalon that measures the differential refractive index (RI) between a reference and a measurement chamber at 850 nm. The etalon chambers have wavelength-dependent transmission maxima that depend linearly on the RI of their contents. An RI difference of Δn = 1.5×10⁻⁶ changes the spectral position of a transmission maximum by 1 pm in our measurement. By sweeping the wavelength of a single-mode Vertical-Cavity Surface-Emitting Laser (VCSEL) linearly in time and detecting the maximum transmission peaks of the etalon, we are able to measure the RI of a liquid. We have demonstrated an accuracy of Δn = ±3.5×10⁻⁶ over a Δn range of 0 to 1.75×10⁻⁴ and an accuracy of 2% over a Δn range of 1.75×10⁻⁴ to 9.8×10⁻⁴. The accuracy is primarily limited by the reference measurement. The RI difference between the etalon chambers is made specific to glucose by the competitive, reversible release of Concanavalin A (ConA) from an immobilized dextran matrix. The matrix, and the ConA bound to it, are positioned outside the optical detection path. ConA is released from the matrix by reacting with glucose and diffuses into the optical path to change the RI in the etalon. Factors such as temperature affect the RI in the measurement and detection chambers equally but do not affect the differential measurement. A typical standard deviation in RI is ±1.4×10⁻⁶ over the range 32 °C to 42 °C. The detector enables an accurate glucose-specific concentration measurement.
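The quoted sensitivity (a 1 pm peak shift per Δn = 1.5×10⁻⁶) makes the conversion between a measured peak shift and the differential RI a one-line calculation. A minimal sketch using only the figures quoted in the abstract; the function names are illustrative, not from the authors' software:

```python
# Peak-shift <-> differential-RI conversion for the double-chamber etalon,
# using the quoted sensitivity: a 1 pm peak shift per delta-n of 1.5e-6.

PM_PER_DELTA_N = 1.0 / 1.5e-6   # picometres of peak shift per unit delta-n


def delta_n_from_shift(shift_pm: float) -> float:
    """Differential refractive index implied by a transmission-peak shift (pm)."""
    return shift_pm / PM_PER_DELTA_N


# The stated accuracy of +/-3.5e-6 in delta-n then corresponds to resolving
# a peak shift of about 2.3 pm.
shift_resolution_pm = 3.5e-6 * PM_PER_DELTA_N
```

At the upper end of the linear range (Δn = 1.75×10⁻⁴), the same scaling implies a peak shift of roughly 117 pm, which sets the sweep range the VCSEL must cover.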
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Astrophysics Data System (ADS)
Wheeler, K.; Knuth, K.; Castle, P.
2005-12-01
and IKONOS imagery and the 3-D volume estimates. The combination of these then allow for a rapid and hopefully very accurate estimation of biomass.
How flatbed scanners upset accurate film dosimetry
NASA Astrophysics Data System (ADS)
van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.
2016-01-01
Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% change for pixels in the extreme lateral position. Light polarization due to the film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE, and therefore determination of the LSE per color channel and per dose delivered to the film.
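One common way to apply such a per-channel correction is to characterize the lateral response from a uniformly exposed film strip and divide it out of subsequent scans. The sketch below assumes an idealized quadratic lateral response and hypothetical data; it is not the correction procedure or the scanner models characterized in the paper:

```python
import numpy as np


def fit_lse(lateral_mm, response, order=2):
    """Fit a lateral-response curve for one color channel from a uniformly
    irradiated strip, normalised to the scanner centre (lateral position 0).
    Returns a callable giving the relative response at any lateral position."""
    p = np.polyfit(lateral_mm, response, order)
    centre = np.polyval(p, 0.0)
    return lambda x: np.polyval(p, x) / centre


def correct(pixel, lateral_mm, lse):
    """Remove the lateral scan effect by dividing the measured pixel value
    by the fitted relative response at that lateral position."""
    return pixel / lse(lateral_mm)
```

Because the LSE magnitude depends on dose, a full correction would repeat this fit per color channel and per dose level, interpolating between calibration doses.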
Towards Accurate Application Characterization for Exascale (APEX)
Hammond, Simon David
2015-09-01
Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.
NASA Astrophysics Data System (ADS)
Madhavan, Varunkumar
The impacts of porosity variations through porous deposits with chimneys, where wick boiling contributes the bulk of the heat transfer, on thermal performance, solute concentration levels and particle number density distribution are examined through a two-dimensional model. The multi-physics model is described by a coupled system of a thermal model, a momentum transfer model, a solute concentration transport model and a particle transport and absorption model. Appropriate numerical methods were developed to solve each of these models. Porosity variations were found to have significant impacts on peak local solute and particle number concentrations within the deposits. This accentuates neutron flux absorption, and hence reduces the flux in the local vicinity and furthers the axial-offset anomaly effects on power generated within the core. The peak values may also aggravate corrosion locally over the cladding elements. The particle deposition model developed here gives insight into how the particles are generally packed around the chimney and how the local porosity evolves and varies. It was found that local porosity within the deposits tends to be low near the chimney walls and generally increases while moving away from the walls. Local nickel ferrite absorption within the crud is estimated using this model and results obtained from Particle Assembly/Constrained Expansion (PACE) models. The rate of absorption of these particles is strongly affected by the non-uniformity of the porous deposits. The estimated values approach observed values when the porous deposits are treated with locally varying porosities rather than with a generally uniform porosity.
77 FR 3800 - Accurate NDE & Inspection, LLC; Confirmatory Order
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-25
... COMMISSION Accurate NDE & Inspection, LLC; Confirmatory Order In the Matter of Accurate NDE & Docket: 150... request ADR with the NRC in an attempt to resolve issues associated with this matter. In response, on August 9, 2011, Accurate NDE requested ADR to resolve this matter with the NRC. On September 28,...
A predictable and accurate technique with elastomeric impression materials.
Barghi, N; Ontiveros, J C
1999-08-01
A method for obtaining more predictable and accurate final impressions with polyvinylsiloxane impression materials in conjunction with stock trays is proposed and tested. Heavy impression material is used in advance for construction of a modified custom tray, while extra-light material is used for obtaining a more accurate final impression.
Tube dimpling tool assures accurate dip-brazed joints
NASA Technical Reports Server (NTRS)
Beuyukian, C. S.; Heisman, R. M.
1968-01-01
Portable, hand-held dimpling tool assures accurate brazed joints between tubes of different diameters. Prior to brazing, the tool performs precise dimpling and nipple forming and also provides control and accurate measuring of the height of nipples and depth of dimples so formed.
Analysis of a Cylindrical Specimen Heated by an Impinging Hot Hydrogen Jet
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Luong, Van; Foote, John; Litchford, Ron; Chen, Yen-Sen
2006-01-01
A computational conjugate heat transfer methodology was developed, as a first step towards an efficient and accurate multiphysics, thermo-fluid computational methodology to predict environments for a hypothetical solid-core, nuclear thermal engine thrust chamber and components. A solid conduction heat transfer procedure was implemented onto a pressure-based, multidimensional, finite-volume, turbulent, chemically reacting, thermally radiating, and unstructured-grid computational fluid dynamics formulation. The conjugate heat transfer of a cylindrical material specimen heated by an impinging hot hydrogen jet inside an enclosed test fixture was simulated and analyzed. The solid conduction heat transfer procedure was anchored with a standard solid heat transfer code. Transient analyses were then performed with variable thermal conductivities representing three composites of a material utilized as a flow element in a legacy engine test. It was found that material thermal conductivity strongly influences the transient heat conduction characteristics. In addition, it was observed that high thermal gradients occur inside the cylindrical specimen during an impulsive or a 10 s ramp start sequence, but not during steady-state operations.
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A.; Sibille, Laurent
2010-01-01
The technology of direct electrolysis of molten lunar regolith to produce oxygen and molten metal alloys has progressed greatly in the last few years. The development of long-lasting inert anodes and cathode designs, as well as techniques for the removal of molten products from the reactor, has been demonstrated. The containment of chemically aggressive oxide and metal melts is very difficult at operating temperatures of ca. 1600 °C. Containing the molten oxides in a regolith shell can solve this technical issue and can be achieved by designing a self-heating reactor in which the electrolytic currents generate enough Joule heat to create a molten bath. In a first phase, a thermal analysis model was built to study the formation of a melt of lunar basaltic regolith irradiated by a focused solar beam. This mode of heating was selected because it relies on radiative heat transfer, which is the dominant mode of transfer of energy in melts at 1600 °C. Knowing and setting the Gaussian-type heat flux from the concentrated solar beam and the phase- and temperature-dependent thermal properties, the model predicts the dimensions and temperature profile of the melt. A validation of the model is presented in this paper through the experimental formation of a spherical cap melt realized by others. The Orbitec/PSI experimental setup uses a 3.6-cm diameter concentrated solar beam to create a hemispheric melt in a bed of lunar regolith simulant contained in a large pot. Upon cooling, the dimensions of the vitrified melt are measured to validate the thermal model. In a second phase, the model is augmented by multiphysics components to compute the passage of electrical currents between electrodes inserted in the molten regolith. The current through the melt generates Joule heating due to the high resistivity of the medium and this energy is transferred into the melt by conduction, convection and primarily by radiation. The model faces challenges in two major areas, the change of phase as
Problems in publishing accurate color in IEEE journals.
Vrhel, Michael J; Trussell, H J
2002-01-01
To demonstrate the performance of color image processing algorithms, it is desirable to be able to accurately display color images in archival publications. In poster presentations, the authors have substantial control of the printing process, although little control of the illumination. For journal publication, the authors must rely on professional intermediaries (printers) to accurately reproduce their results. Our previous work describes requirements for accurately rendering images using your own equipment. This paper discusses the problems of dealing with intermediaries and offers suggestions for improved communication and rendering.
Fabricating an Accurate Implant Master Cast: A Technique Report.
Balshi, Thomas J; Wolfinger, Glenn J; Alfano, Stephen G; Cacovean, Jeannine N; Balshi, Stephen F
2015-12-01
The technique for fabricating an accurate implant master cast following the 12-week healing period after Teeth in a Day® dental implant surgery is detailed. The clinical, functional, and esthetic details captured during the final master impression are vital to creating an accurate master cast. This technique uses the properties of the all-acrylic resin interim prosthesis to capture these details. This impression captures the relationship between the remodeled soft tissue and the interim prosthesis. This provides the laboratory technician with an accurate orientation of the implant replicas in the master cast with which a passive fitting restoration can be fabricated.
Controlling Hay Fever Symptoms with Accurate Pollen Counts
Pongdee, MD, FAAAAI
Seasonal allergic rhinitis, known as hay fever, is caused by pollen carried in the air.
Digital system accurately controls velocity of electromechanical drive
NASA Technical Reports Server (NTRS)
Nichols, G. B.
1965-01-01
Digital circuit accurately regulates electromechanical drive mechanism velocity. The gain and phase characteristics of digital circuits are relatively unimportant. Control accuracy depends only on the stability of the input signal frequency.
Accurate tracking of high dynamic vehicles with translated GPS
NASA Astrophysics Data System (ADS)
Blankshain, Kenneth M.
The GPS concept and the translator processing system (TPS), which were developed for accurate and cost-effective tracking of various types of high-dynamic expendable vehicles, are described. A technique used by the TPS to accomplish very accurate high-dynamic tracking is presented. Automatic frequency control and fast Fourier transform processes are combined to track 100 g acceleration and 100 g/s jerk with a 1-sigma velocity measurement error of less than 1 ft/sec.
Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations
Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim
2011-03-23
A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.
Accurately measuring dynamic coefficient of friction in ultraform finishing
NASA Astrophysics Data System (ADS)
Briggs, Dennis; Echaves, Samantha; Pidgeon, Brendan; Travis, Nathan; Ellis, Jonathan D.
2013-09-01
UltraForm Finishing (UFF) is a deterministic sub-aperture computer numerically controlled grinding and polishing platform designed by OptiPro Systems. UFF is used to grind and polish a variety of optics from simple spherical to fully freeform, and numerous materials from glasses to optical ceramics. The UFF system consists of an abrasive belt around a compliant wheel that rotates and contacts the part to remove material. This work aims to accurately measure the dynamic coefficient of friction (μ), how it changes as a function of belt wear, and how this ultimately affects material removal rates. The coefficient of friction has been examined in terms of contact mechanics and Preston's equation to determine accurate material removal rates. By accurately predicting changes in μ, polishing iterations can be more accurately predicted, reducing the total number of iterations required to meet specifications. We have established an experimental apparatus that can accurately measure μ by measuring triaxial forces during translating loading conditions or while manufacturing the removal spots used to calculate material removal rates. Using this system, we will demonstrate μ measurements for UFF belts during different states of their lifecycle and assess the material removal function from spot diagrams as a function of wear. Ultimately, we will use this system for qualifying belt-wheel-material combinations to develop a spot-morphing model to better predict instantaneous material removal functions.
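With triaxial force data of the kind described, the dynamic coefficient of friction is the in-plane (tangential) force magnitude divided by the normal load. A minimal sketch of that calculation, not OptiPro's analysis code:

```python
import math


def dynamic_cof(fx: float, fy: float, fz: float) -> float:
    """Dynamic coefficient of friction from triaxial force measurements:
    the magnitude of the in-plane force (fx, fy) over the normal load fz."""
    return math.hypot(fx, fy) / fz
```

In practice, μ would be computed this way at each sample during a translating-load pass and then averaged or tracked against belt wear.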
Nonexposure Accurate Location K-Anonymity Algorithm in LBS
Jia, Jinying; Zhang, Fengli
2014-01-01
This paper tackles location privacy protection in current location-based services (LBS) where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existent cloaking algorithms, need not have all the users reporting their locations all the time, and can generate smaller ASRs. PMID:24605060
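The grid-ID idea can be sketched as follows: each user reports only the ID of the grid cell containing them, and the anonymizer grows a square block of cells around the querying user's cell until it covers at least K reported users. This is a minimal illustration of the general approach, not a reimplementation of the authors' two algorithms:

```python
def cloak(grid_counts, user_cell, k):
    """Grow a square region of grid cells around user_cell until the cells
    it covers contain at least k reported users; return the list of cells.

    grid_counts maps a cell ID (row, col) to the number of users who
    reported that cell. Exact coordinates are never seen by the anonymizer."""
    r0, c0 = user_cell
    radius = 0
    while True:
        cells = [(r, c)
                 for r in range(r0 - radius, r0 + radius + 1)
                 for c in range(c0 - radius, c0 + radius + 1)]
        if sum(grid_counts.get(cell, 0) for cell in cells) >= k:
            return cells
        radius += 1
```

The returned block of cell IDs plays the role of the ASR; a finer grid gives smaller regions at the cost of more expansion rounds.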
Memory conformity affects inaccurate memories more than accurate memories.
Wright, Daniel B; Villalba, Daniella K
2012-01-01
After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to this response participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate. Inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible these memories are to memory distortion.
Accurate Fiber Length Measurement Using Time-of-Flight Technique
NASA Astrophysics Data System (ADS)
Terra, Osama; Hussein, Hatem
2016-06-01
Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDRs). In this paper, accurate length measurement of different fiber lengths using the time-of-flight technique is performed. A setup is proposed to accurately measure lengths from 1 to 40 km at 1550 and 1310 nm using a high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to the meter by definition), by locking the time interval counter to the Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of the United Kingdom (NPL). Finally, a method is proposed for relative correction of the fiber refractive index to allow accurate fiber length measurement.
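For a single pass through the fiber, the length follows from the measured delay as L = c·Δt/n_g, where n_g is the group refractive index. A minimal sketch of this conversion; the group index value used as a default is a typical figure for standard single-mode fiber near 1550 nm, not a value taken from the paper:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s (exact SI value)


def fiber_length_m(delta_t_s: float, group_index: float = 1.468) -> float:
    """One-way time-of-flight fiber length: L = c * dt / n_g.

    delta_t_s is the measured propagation delay in seconds; group_index
    is the fiber's group refractive index at the measurement wavelength
    (1.468 is an assumed typical value for SMF near 1550 nm)."""
    return C * delta_t_s / group_index
```

This is where the proposed refractive-index correction matters: a relative error in n_g maps one-to-one onto a relative error in the reported length.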
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
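The idea can be illustrated with a toy case: the static tip deflection of a cantilever scales as δ ∝ h⁻³ with section height h (since I = bh³/12), so the sensitivity relation dδ/dh = −3δ/h, read as a differential equation, integrates in closed form to δ = δ₀(h₀/h)³, whereas the linear Taylor series keeps only δ₀(1 − 3Δh/h₀). This is an illustrative reconstruction of the DEB idea for one design variable, not the paper's beam model:

```python
def taylor_approx(delta0: float, h0: float, h: float) -> float:
    """First-order Taylor series about h0: delta ~ delta0 * (1 - 3*(h - h0)/h0)."""
    return delta0 * (1.0 - 3.0 * (h - h0) / h0)


def deb_approx(delta0: float, h0: float, h: float) -> float:
    """DEB-style approximation: treat the sensitivity equation
    d(delta)/dh = -3*delta/h as an ODE and integrate it in closed form,
    giving delta = delta0 * (h0/h)**3 (exact for this toy scaling)."""
    return delta0 * (h0 / h) ** 3
```

For a 20% height increase the Taylor estimate drops to 0.4·δ₀ while the closed-form value is δ₀/1.728 ≈ 0.58·δ₀, showing why the DEB form stays accurate for larger perturbations.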
Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.
Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian
2015-09-01
Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography and, more recently, single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing impacts with high rotational velocity and direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations under varying degrees of signal noise. Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to
Accurate stress resultants equations for laminated composite deep thick shells
Qatu, M.S.
1995-11-01
This paper derives accurate equations for the normal and shear force as well as the bending and twisting moment resultants for laminated composite deep, thick shells. The stress resultant equations for laminated composite thick shells are shown to be different from those of plates. This is due to the fact that the stresses over the thickness of the shell have to be integrated on a trapezoidal-like shell element to obtain the stress resultants. Numerical results are obtained and show that accurate stress resultants are needed for laminated composite deep, thick shells, especially if the curvature is not spherical.
Must Kohn-Sham oscillator strengths be accurate at threshold?
Yang Zenghui; Burke, Kieron; Faassen, Meta van
2009-09-21
The exact ground-state Kohn-Sham (KS) potential for the helium atom is known from accurate wave function calculations of the ground-state density. The threshold for photoabsorption from this potential matches the physical system exactly. By carefully studying its absorption spectrum, we show the answer to the title question is no. To address this problem in detail, we generate a highly accurate simple fit of a two-electron spectrum near the threshold, and apply the method to both the experimental spectrum and that of the exact ground-state Kohn-Sham potential.
Accurate torque-speed performance prediction for brushless dc motors
NASA Astrophysics Data System (ADS)
Gipper, Patrick D.
Desirable characteristics of the brushless dc motor (BLDCM) have resulted in its application in electrohydrostatic (EH) and electromechanical (EM) actuation systems. Effectively applying the BLDCM, however, requires accurate prediction of its performance. The minimum necessary performance characteristics are motor torque versus speed, peak and average supply current, and efficiency. BLDCM nonlinear simulation software specifically adapted for torque-speed prediction is presented. The capability of the software to quickly and accurately predict performance has been verified on motors ranging from fractional to integral horsepower sizes. Additionally, the capability of torque-speed prediction with commutation angle advance is demonstrated.
Accurate upwind-monotone (nonoscillatory) methods for conservation laws
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1992-01-01
The well-known MUSCL scheme of Van Leer is constructed using a piecewise linear approximation. The MUSCL scheme is second-order accurate in the smooth part of the solution, except at extrema, where the accuracy degenerates to first order due to the monotonicity constraint. To construct accurate schemes which are free from oscillations, the author introduces the concept of upwind monotonicity. Several classes of schemes, which are upwind monotone and of uniform second- or third-order accuracy, are then presented. Results for advection with constant speed are shown. It is also shown that the new scheme compares favorably with state-of-the-art methods.
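The slope-limiting idea this abstract builds on can be sketched with the classical minmod-limited MUSCL update for constant-speed advection (a minimal illustration of limiting; this is the standard TVD variant, not the upwind-monotone schemes introduced in the paper):

```python
import numpy as np

# Minmod-limited, piecewise-linear (MUSCL-type) update for linear advection
# u_t + u_x = 0 with periodic boundaries.  Shown only to illustrate the
# limiting idea; NOT the paper's upwind-monotone schemes.
def minmod(a, b):
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_step(u, cfl):
    # limited slope in each cell from the two one-sided differences
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    # time-centered upwind interface value u_{i+1/2} (advection speed > 0)
    ul = u + 0.5 * (1.0 - cfl) * s
    # conservative update: difference of interface fluxes
    return u - cfl * (ul - np.roll(ul, 1))

x = (np.arange(200) + 0.5) / 200.0
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)  # square wave: worst case for oscillations
for _ in range(100):
    u = muscl_step(u, 0.5)
# the limiter keeps the advected square wave within its initial bounds [0, 1]
```

Dropping the limiter (taking `s` from a fixed central difference) reintroduces the oscillations near the discontinuities that the monotonicity constraint exists to suppress, at the price of the first-order clipping at extrema the abstract describes.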
In-line sensor for accurate rf power measurements
NASA Astrophysics Data System (ADS)
Gahan, D.; Hopkins, M. B.
2005-10-01
An in-line sensor has been constructed with 50 Ω characteristic impedance to accurately measure rf power dissipated in a matched or unmatched load, with a view to being implemented as an rf discharge diagnostic. The physical construction and calibration technique are presented. The design is a wide-band, hybrid directional coupler/current-voltage sensor suitable for fundamental and harmonic power measurements. A comparison with a standard wattmeter using dummy load impedances shows that this in-line sensor is significantly more accurate under mismatched conditions.
In-line sensor for accurate rf power measurements
Gahan, D.; Hopkins, M.B.
2005-10-15
An in-line sensor has been constructed with 50 Ω characteristic impedance to accurately measure rf power dissipated in a matched or unmatched load, with a view to being implemented as an rf discharge diagnostic. The physical construction and calibration technique are presented. The design is a wide-band, hybrid directional coupler/current-voltage sensor suitable for fundamental and harmonic power measurements. A comparison with a standard wattmeter using dummy load impedances shows that this in-line sensor is significantly more accurate under mismatched conditions.
Time-Accurate Numerical Simulations of Synthetic Jet in Quiescent Air
NASA Technical Reports Server (NTRS)
Rupesh, K-A. B.; Ravi, B. R.; Mittal, R.; Raju, R.; Gallas, Q.; Cattafesta, L.
2007-01-01
The unsteady evolution of a three-dimensional synthetic jet into quiescent air is studied by time-accurate numerical simulations using a second-order accurate mixed explicit-implicit fractional step scheme on Cartesian grids. Both two-dimensional and three-dimensional calculations of the synthetic jet are carried out at a Reynolds number (based on the average velocity during the discharge phase of the cycle, V(sub j), and the jet width, d) of 750 and a Stokes number of 17.02. The results obtained are assessed against PIV and hotwire measurements provided for the NASA LaRC workshop on CFD validation of synthetic jets.
BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...
Device accurately measures and records low gas-flow rates
NASA Technical Reports Server (NTRS)
Branum, L. W.
1966-01-01
Free-floating piston in a vertical column accurately measures and records low gas-flow rates. The system may be calibrated, using an adjustable flow-rate gas supply, a low pressure gage, and a sequence recorder. From the calibration rates, a nomograph may be made for easy reduction. Temperature correction may be added for further accuracy.
Ultrasonic system for accurate distance measurement in the air.
Licznerski, Tomasz J; Jaroński, Jarosław; Kosz, Dariusz
2011-12-01
This paper presents a system that accurately measures the distance travelled by ultrasound waves through the air. The simple design of the system and the accuracy obtained provide a tool for the non-contact distance measurements required in a laser optical system that investigates the surface of the eyeball.
A Self-Instructional Device for Conditioning Accurate Prosody.
ERIC Educational Resources Information Center
Buiten, Roger; Lane, Harlan
1965-01-01
A self-instructional device for conditioning accurate prosody in second-language learning is described in this article. The Speech Auto-Instructional Device (SAID) is electro-mechanical and performs three functions: SAID (1) presents to the student tape-recorded pattern sentences that are considered standards in prosodic performance; (2) processes…
Monitoring circuit accurately measures movement of solenoid valve
NASA Technical Reports Server (NTRS)
Gillett, J. D.
1966-01-01
Solenoid-operated valve in a control system powered by direct current is used to accurately measure the valve travel. This system is currently in operation with a 28-vdc power system used for control of fluids in liquid rocket motor test facilities.
Instrument accurately measures small temperature changes on test surface
NASA Technical Reports Server (NTRS)
Harvey, W. D.; Miller, H. B.
1966-01-01
Calorimeter apparatus accurately measures very small temperature rises on a test surface subjected to aerodynamic heating. A continuous thin sheet of a sensing material is attached to a base support plate through which a series of holes of known diameter have been drilled for attaching thermocouples to the material.
A Simple and Accurate Method for Measuring Enzyme Activity.
ERIC Educational Resources Information Center
Yip, Din-Yan
1997-01-01
Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…
Bioaccessibility tests accurately estimate bioavailability of lead to quail
Technology Transfer Automated Retrieval System (TEKTRAN)
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...
Ellipsoidal-mirror reflectometer accurately measures infrared reflectance of materials
NASA Technical Reports Server (NTRS)
Dunn, S. T.; Richmond, J. C.
1967-01-01
Reflectometer accurately measures the reflectance of specimens in the infrared beyond 2.5 microns and under geometric conditions approximating normal irradiation and hemispherical viewing. It includes an ellipsoidal mirror, a specially coated averaging sphere associated with a detector for minimizing spatial and angular sensitivity, and an incident flux chopper.
Second-order accurate nonoscillatory schemes for scalar conservation laws
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1989-01-01
Explicit finite difference schemes for the computation of weak solutions of nonlinear scalar conservation laws are presented and analyzed. These schemes are uniformly second-order accurate and nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time.
Foresight begins with FMEA. Delivering accurate risk assessments.
Passey, R D
1999-03-01
If sufficient factors are taken into account and two- or three-stage analysis is employed, failure mode and effect analysis represents an excellent technique for delivering accurate risk assessments for products and processes, and for relating them to legal liability. This article describes a format that facilitates easy interpretation.
How Accurate Are Judgments of Intelligence by Strangers?
ERIC Educational Resources Information Center
Borkenau, Peter
Whether judgments made by complete strangers as to the intelligence of subjects are accurate or merely illusory was studied in Germany. Target subjects were 50 female and 50 male adults recruited through a newspaper article. Eighteen judges, who did not know the subjects, were recruited from a university community. Videorecordings of the subjects,…
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
ERIC Educational Resources Information Center
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
Preparing Rapid, Accurate Construction Cost Estimates with a Personal Computer.
ERIC Educational Resources Information Center
Gerstel, Sanford M.
1986-01-01
An inexpensive and rapid method for preparing accurate cost estimates of construction projects in a university setting, using a personal computer, purchased software, and one estimator, is described. The case against defined estimates, the rapid estimating system, and adjusting standard unit costs are discussed. (MLW)
Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions
ERIC Educational Resources Information Center
Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara
2012-01-01
This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…
Ginting, Victor
2014-03-15
It was demonstrated that a posteriori analyses in general, and in particular those using adjoint methods, can accurately and efficiently compute numerical error estimates and sensitivities for critical Quantities of Interest (QoIs) that depend on a large number of parameters. Activities include: analysis and implementation of several time integration techniques for solving systems of ODEs as typically obtained from spatial discretization of PDE systems; multirate integration methods for ordinary differential equations; formulation and analysis of an iterative multi-discretization Galerkin finite element method for multi-scale reaction-diffusion equations; investigation of an inexpensive postprocessing technique to estimate the error of finite element solutions of second-order quasi-linear elliptic problems measured in some global metrics; investigation of an application of residual-based a posteriori error estimates to the symmetric interior penalty discontinuous Galerkin method for solving a class of second-order quasi-linear elliptic problems; a posteriori analysis of explicit time integrations for systems of linear ordinary differential equations; derivation of accurate a posteriori goal-oriented error estimates for a user-defined quantity of interest for two classes of first- and second-order IMEX schemes for advection-diffusion-reaction problems; postprocessing of finite element solutions; and a Bayesian framework for uncertainty quantification of porous media flows.
DNA barcode data accurately assign higher spider taxa.
Coddington, Jonathan A; Agnarsson, Ingi; Cheng, Ren-Chung; Čandek, Klemen; Driskell, Amy; Frick, Holger; Gregorič, Matjaž; Kostanjšek, Rok; Kropf, Christian; Kweskin, Matthew; Lokovšek, Tjaša; Pipan, Miha; Vidergar, Nina; Kuntner, Matjaž
2016-01-01
The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios "barcodes" (whether single or multiple, organelle or nuclear, loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families-taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75-100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However, the quality of the
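The thresholds reported in the abstract reduce to a simple decision rule, sketched below (cutoffs taken directly from the abstract; they apply to this spider CO1 library, not to barcoding in general):

```python
# Decision rule from the reported thresholds: BLAST top-hit percent identity
# (PIdent) > 95 supports a genus-level assignment, >= 91 a family-level one.
# These cutoffs are specific to the spider CO1 library studied.
def assign_rank(pident: float) -> str:
    if pident > 95.0:
        return "genus"
    if pident >= 91.0:
        return "family"
    return "unassigned"

print(assign_rank(97.2), assign_rank(93.0), assign_rank(80.0))
# prints: genus family unassigned
```

In practice the PIdent values would come from BLAST hits of the query sequence against the reference library, and the abstract notes the rule becomes more reliable as the library gains genera per family and species per genus.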
DNA barcode data accurately assign higher spider taxa
Coddington, Jonathan A.; Agnarsson, Ingi; Cheng, Ren-Chung; Čandek, Klemen; Driskell, Amy; Frick, Holger; Gregorič, Matjaž; Kostanjšek, Rok; Kropf, Christian; Kweskin, Matthew; Lokovšek, Tjaša; Pipan, Miha; Vidergar, Nina
2016-01-01
The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios “barcodes” (whether single or multiple, organelle or nuclear, loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However, the quality of
Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2001-01-01
A prerequisite for full utilization of composite materials in aerospace components is accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need for composite design and life prediction tools by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, thus enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.
Accurate adjoint design sensitivities for nano metal optics.
Hansen, Paul; Hesselink, Lambertus
2015-09-07
We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature we obtain highly-accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics.
An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance
Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun
2015-01-01
Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314
Multimodal spatial calibration for accurately registering EEG sensor positions.
Zhang, Jianhua; Chen, Jian; Chen, Shengyong; Xiao, Gang; Li, Xiaoli
2014-01-01
This paper proposes a fast and accurate calibration method for calibrating multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head, and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between the views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance and can be further applied to EEG source localization applications on the human brain.
Accurate measurement of the helical twisting power of chiral dopants
NASA Astrophysics Data System (ADS)
Kosa, Tamas; Bodnar, Volodymyr; Taheri, Bahman; Palffy-Muhoray, Peter
2002-03-01
We propose a method for the accurate determination of the helical twisting power (HTP) of chiral dopants. In the usual Cano-wedge method, the wedge angle is determined from the far-field separation of laser beams reflected from the windows of the test cell. Here we propose to use an optical fiber based spectrometer to accurately measure the cell thickness. Knowing the cell thickness at the positions of the disclination lines allows determination of the HTP. We show that this extension of the Cano-wedge method greatly increases the accuracy with which the HTP is determined. We show the usefulness of this method by determining the HTP of ZLI811 in a variety of hosts with negative dielectric anisotropy.
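The arithmetic this implies can be sketched with the standard Grandjean-Cano relations (the thickness readings and concentration below are hypothetical, not the authors' data): adjacent disclination lines sit where the cell thickness differs by half a pitch, and the HTP follows from the pitch and the dopant concentration.

```python
# Grandjean-Cano wedge arithmetic (standard relations, illustrative numbers):
# adjacent disclination lines occur where the cell thickness differs by p/2,
# so the pitch is twice the mean thickness step between lines; the helical
# twisting power is then HTP = 1 / (p * c) for dopant weight fraction c.
def pitch_um(thicknesses_um):
    """Pitch from cell thicknesses measured at successive disclination lines."""
    steps = [b - a for a, b in zip(thicknesses_um, thicknesses_um[1:])]
    return 2.0 * sum(steps) / len(steps)

def htp_per_um(pitch: float, weight_fraction: float) -> float:
    return 1.0 / (pitch * weight_fraction)

# Hypothetical spectrometer thickness readings at four disclination lines:
p = pitch_um([3.1, 5.6, 8.1, 10.6])   # mean step 2.5 um -> pitch 5.0 um
htp = htp_per_um(p, 0.01)             # HTP for a 1 wt% dopant
```

Measuring the thickness directly with the fiber spectrometer, as the abstract proposes, replaces the far-field beam-separation estimate of the wedge angle, which is where the claimed accuracy gain comes from.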
Accurate van der Waals coefficients from density functional theory
Tao, Jianmin; Perdew, John P.; Ruzsinszky, Adrienn
2012-01-01
The van der Waals interaction is a weak, long-range correlation, arising from quantum electronic charge fluctuations. This interaction affects many properties of materials. A simple and yet accurate estimate of this effect will facilitate computer simulation of complex molecular materials and drug design. Here we develop a fast approach for accurate evaluation of dynamic multipole polarizabilities and van der Waals (vdW) coefficients of all orders from the electron density and static multipole polarizabilities of each atom or other spherical object, without empirical fitting. Our dynamic polarizabilities (dipole, quadrupole, octupole, etc.) are exact in the zero- and high-frequency limits, and exact at all frequencies for a metallic sphere of uniform density. Our theory predicts dynamic multipole polarizabilities in excellent agreement with more expensive many-body methods, and yields therefrom vdW coefficients C6, C8, C10 for atom pairs with a mean absolute relative error of only 3%. PMID:22205765
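The C6 coefficient in this framework comes from the standard Casimir-Polder integral over imaginary-frequency polarizabilities. A sketch with a one-oscillator (London) polarizability model, for which the integral has a known closed form, is given below; this is illustrative textbook theory, not the authors' density-based construction:

```python
import math

# Casimir-Polder integral for the C6 dispersion coefficient:
#   C6 = (3/pi) * Integral_0^inf alpha_A(i*w) * alpha_B(i*w) dw
# evaluated for a one-oscillator "London" polarizability model.
def alpha_iw(alpha0, w0, w):
    """One-oscillator imaginary-frequency polarizability."""
    return alpha0 * w0 ** 2 / (w0 ** 2 + w ** 2)

def c6_numeric(aA, wA, aB, wB, n=100_000, wmax=100.0):
    """Midpoint-rule evaluation of the Casimir-Polder integral."""
    dw = wmax / n
    total = sum(alpha_iw(aA, wA, (k + 0.5) * dw) * alpha_iw(aB, wB, (k + 0.5) * dw)
                for k in range(n))
    return 3.0 / math.pi * total * dw

def c6_london(aA, wA, aB, wB):
    """Closed form of the same integral for this model: the London formula."""
    return 1.5 * aA * aB * wA * wB / (wA + wB)
```

For two identical unit oscillators the numerical integral reproduces the London value 3/4 to high accuracy (the tiny residue is the truncated tail of the integral); the paper's contribution is, in effect, supplying accurate α(iω) of all multipole orders from the electron density rather than from a fitted oscillator.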
Light Field Imaging Based Accurate Image Specular Highlight Removal
Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo
2016-01-01
Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover the diffuse color information. Experimental evaluation by comparison with existing methods on our light field dataset, together with the Stanford light field archive, verifies the effectiveness of the proposed algorithm. PMID:27253083
Accurate Development of Thermal Neutron Scattering Cross Section Libraries
Hawari, Ayman; Dunn, Michael
2014-06-10
The objective of this project is to develop a holistic (fundamental and accurate) approach for generating thermal neutron scattering cross section libraries for a collection of important neutron moderators and reflectors. The primary components of this approach are the physical accuracy and completeness of the generated data libraries. Consequently, for the first time, thermal neutron scattering cross section data libraries will be generated that are based on accurate theoretical models, that are carefully benchmarked against experimental and computational data, and that contain complete covariance information that can be used in propagating the data uncertainties through the various components of the nuclear design and execution process. To achieve this objective, computational and experimental investigations will be performed on a carefully selected subset of materials that play a key role in all stages of the nuclear fuel cycle.
Library preparation for highly accurate population sequencing of RNA viruses
Acevedo, Ashley; Andino, Raul
2015-01-01
Circular resequencing (CirSeq) is a novel technique for efficient and highly accurate next-generation sequencing (NGS) of RNA virus populations. The foundation of this approach is the circularization of fragmented viral RNAs, which are then redundantly encoded into tandem repeats by ‘rolling-circle’ reverse transcription. When sequenced, the redundant copies within each read are aligned to derive a consensus sequence of their initial RNA template. This process yields sequencing data with error rates far below the variant frequencies observed for RNA viruses, facilitating ultra-rare variant detection and accurate measurement of low-frequency variants. Although library preparation takes ~5 d, the high-quality data generated by CirSeq simplifies downstream data analysis, making this approach substantially more tractable for experimentalists. PMID:24967624
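The consensus step can be pictured with a toy per-position majority vote over the tandem repeat copies within a read (an illustration of the principle only, not the published CirSeq pipeline):

```python
from collections import Counter

# Toy illustration of the CirSeq consensus principle: a read that is a tandem
# repeat of one template is cut into its repeat units, and a per-position
# majority vote recovers the template, outvoting errors confined to one copy.
def consensus_from_repeats(read: str, unit_len: int) -> str:
    units = [read[i:i + unit_len] for i in range(0, len(read), unit_len)]
    units = [u for u in units if len(u) == unit_len]  # drop a trailing partial copy
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*units))

# A single sequencing error (G -> C) in the middle copy is outvoted by the
# other two copies:
print(consensus_from_repeats("ACGTT" + "ACCTT" + "ACGTT", 5))  # prints ACGTT
```

Because a sequencing error must recur at the same position of every copy to survive the vote, the per-consensus error rate falls roughly with the cube of the per-base error rate for three copies, which is what pushes errors below the variant frequencies of the viral population.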
Accurate nuclear radii and binding energies from a chiral interaction
Ekstrom, Jan A.; Jansen, G. R.; Wendt, Kyle A.; ...
2015-05-01
With the goal of developing predictive ab initio capability for light and medium-mass nuclei, two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective Jπ=3- states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.
Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping
NASA Astrophysics Data System (ADS)
Rehak, M.; Skaloud, J.
2015-08-01
In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enable accurate corridor mapping. The design of the platform is based on widely available model components, to which we integrate an open-source autopilot, a customized mass-market camera, and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.
Uniformly high order accurate essentially non-oscillatory schemes 3
NASA Technical Reports Server (NTRS)
Harten, A.; Engquist, B.; Osher, S.; Chakravarthy, S. R.
1986-01-01
In this paper (a third in a series) the construction and the analysis of essentially non-oscillatory shock capturing methods for the approximation of hyperbolic conservation laws are presented. Also presented is a hierarchy of high order accurate schemes which generalizes Godunov's scheme and its second order accurate MUSCL extension to arbitrary order of accuracy. The design involves an essentially non-oscillatory piecewise polynomial reconstruction of the solution from its cell averages, time evolution through an approximate solution of the resulting initial value problem, and averaging of this approximate solution over each cell. The reconstruction algorithm is derived from a new interpolation technique that when applied to piecewise smooth data gives high-order accuracy whenever the function is smooth but avoids a Gibbs phenomenon at discontinuities. Unlike standard finite difference methods this procedure uses an adaptive stencil of grid points and consequently the resulting schemes are highly nonlinear.
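The adaptive-stencil idea can be illustrated with a minimal second-order sketch (not the paper's arbitrary-order reconstruction): at each cell the one-sided slope with the smaller magnitude is selected, so smooth data are reconstructed to second order while a discontinuity produces no Gibbs overshoot. Function and grid names are illustrative.

```python
import numpy as np

def eno2_reconstruct(ubar):
    """Second-order ENO reconstruction of interface values from cell
    averages on a periodic uniform grid (dx = 1). At each cell the
    one-sided slope with the smaller magnitude is chosen (the adaptive
    stencil), which keeps second-order accuracy on smooth data while
    avoiding Gibbs oscillations at discontinuities."""
    u = np.asarray(ubar, dtype=float)
    dl = u - np.roll(u, 1)     # backward difference u_i - u_{i-1}
    dr = np.roll(u, -1) - u    # forward difference  u_{i+1} - u_i
    slope = np.where(np.abs(dl) < np.abs(dr), dl, dr)
    return u + 0.5 * slope     # value at the right interface of each cell

# a step stays monotone: the reconstruction introduces no overshoot
step = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
vals = eno2_reconstruct(step)
```

On linear data the reconstruction is exact, which is the second-order property the adaptive stencil preserves away from shocks.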
Groundtruth approach to accurate quantitation of fluorescence microarrays
Mascio-Kegelmeyer, L; Tomascik-Cheeseman, L; Burnett, M S; van Hummelen, P; Wyrobek, A J
2000-12-01
To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using groundtruth samples comprised of known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results significantly closer to the known ground-truth for these samples.
Accurate determination of the sedimentation flux of concentrated suspensions
NASA Astrophysics Data System (ADS)
Martin, J.; Rakotomalala, N.; Salin, D.
1995-10-01
Flow rate jumps are used to generate propagating concentration variations in a counterflow-stabilized suspension (a liquid fluidized bed). An acoustic technique is used to measure accurately the resulting concentration profiles through the bed. Depending on the experimental conditions, we have observed self-sharpening and/or self-spreading concentration fronts. Our data are analyzed in the framework of Kynch's theory, providing an accurate determination of the sedimentation flux [CU(C); U(C) is the hindered sedimentation velocity of the suspension] and its derivatives in the concentration range 30%-60%. In the vicinity of the packing concentration, controlling the flow rate has allowed us to increase the maximum packing up to 60%.
Accurate nuclear radii and binding energies from a chiral interaction
Ekstrom, Jan A.; Jansen, G. R.; Wendt, Kyle A.; Hagen, Gaute; Papenbrock, Thomas F.; Carlsson, Boris; Forssen, Christian; Hjorth-Jensen, M.; Navratil, Petr; Nazarewicz, Witold
2015-05-01
With the goal of developing predictive ab initio capability for light and medium-mass nuclei, two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLO_{sat}, yield accurate binding energies and radii of nuclei up to ^{40}Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective J^{π}=3^{-} states in ^{16}O and ^{40}Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.
Efficient and accurate computation of the incomplete Airy functions
NASA Technical Reports Server (NTRS)
Constantinides, E. D.; Marhefka, R. J.
1993-01-01
The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high-frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals with such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. In this paper a convergent series solution for the incomplete Airy functions is derived. Asymptotic expansions involving several terms are also developed and serve as large argument approximations. The combination of the series solution with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.
Strategy Guideline. Accurate Heating and Cooling Load Calculations
Burdick, Arlan
2011-06-01
This guide presents the key criteria required to create accurate heating and cooling load calculations and offers examples of the implications when inaccurate adjustments are applied to the HVAC design process. The guide shows, through realistic examples, how various defaults and arbitrary safety factors can lead to significant increases in the load estimate. Emphasis is placed on the risks incurred from inaccurate adjustments or ignoring critical inputs of the load calculation.
Optical Fiber Geometry: Accurate Measurement of Cladding Diameter
Young, Matt; Hale, Paul D.; Mechels, Steven E.
1993-01-01
We have developed three instruments for accurate measurement of optical fiber cladding diameter: a contact micrometer, a scanning confocal microscope, and a white-light interference microscope. Each instrument has an estimated uncertainty (3 standard deviations) of 50 nm or less, but the confocal microscope may display a 20 nm systematic error as well. The micrometer is used to generate Standard Reference Materials that are commercially available. PMID:28053467
Accurate Insertion Loss Measurements of the Juno Patch Array Antennas
NASA Technical Reports Server (NTRS)
Chamberlain, Neil; Chen, Jacqueline; Hodges, Richard; Demas, John
2010-01-01
This paper describes two independent methods for estimating the insertion loss of patch array antennas that were developed for the Juno Microwave Radiometer instrument. One method is based principally on pattern measurements while the other is based solely on network analyzer measurements. The methods are accurate to within 0.1 dB for the measured antennas and show good agreement (to within 0.1 dB) with separate radiometric measurements.
Note: Fast, small, accurate 90° rotator for a polarizer.
Shelton, David P; O'Donnell, William M; Norton, James L
2011-03-01
A permanent magnet stepper motor is modified to hold a dichroic polarizer inside the motor. Rotation of the polarizer by 90° ± 0.04° is accomplished within 80 ms. This device is used for measurements of the intensity ratio for two orthogonal linear polarized components of a light beam. The two selected polarizations can be rapidly alternated to allow for signal drift compensation, and the two selected polarizations are accurately orthogonal.
A robust and accurate formulation of molecular and colloidal electrostatics.
Sun, Qiang; Klaseboer, Evert; Chan, Derek Y C
2016-08-07
This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries with the benefits of the following: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) being able to use quadratic or spline function surface elements to represent the surface more accurately and the variation of the functions within each element is represented to a consistent level of precision by appropriate interpolation functions, (iii) being able to calculate electric fields, even at boundaries, accurately and directly from the potential without having to solve hypersingular integral equations and this imparts high precision in calculating the Maxwell stress tensor and consequently, intermolecular or colloidal forces, (iv) a reliable way to handle geometric configurations in which different parts of the boundary can be very close together without being affected by numerical instabilities, therefore potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) having the simplicity of a formulation that does not require complex algorithms to handle singularities will result in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics.
An All-Fragments Grammar for Simple and Accurate Parsing
2012-03-21
We present a simple but accurate parser which exploits both large tree fragments and symbol refinement. We parse with all fragments of the training set, in contrast to much recent work on tree selection in data-oriented parsing and tree-substitution grammar learning. We require only simple…
Accurate Scientific Visualization in Research and Physics Teaching
NASA Astrophysics Data System (ADS)
Wendler, Tim
2011-10-01
Accurate visualization is key in the expression and comprehension of physical principles. Many 3D animation software packages come with built-in numerical methods for a variety of fundamental classical systems. Scripting languages give access to low-level computational functionality, thereby revealing a virtual physics laboratory for teaching and research. Specific examples will be presented: Galilean relativistic hair, energy conservation in complex systems, scattering from a central force, and energy transfer in bi-molecular reactions.
Multigrid time-accurate integration of Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.
1993-01-01
Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.
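The core device of the abstract, replacing a time-expensive factorized implicit solve with explicit Runge-Kutta pseudo-time iteration, can be sketched on a model ODE. This is a hedged illustration with made-up coefficients and step sizes, omitting the local time stepping, residual smoothing, and multigridding of the actual procedure:

```python
def implicit_step_dual_time(u_n, k, dt, n_pseudo=200, dtau=0.1):
    """One backward-Euler step of du/dt = -k*u, obtained by marching the
    pseudo-time problem dU/dtau = -R(U) to steady state with a four-stage
    Runge-Kutta scheme instead of a direct (factorized) solve."""
    def residual(U):
        # residual of the implicit equation (U - u_n)/dt + k*U = 0
        return (U - u_n) / dt + k * U

    U = u_n
    alphas = (0.25, 1.0 / 3.0, 0.5, 1.0)  # four-stage RK coefficients
    for _ in range(n_pseudo):
        U0 = U
        for a in alphas:
            U = U0 - a * dtau * residual(U)
    return U

u1 = implicit_step_dual_time(1.0, 1.0, 0.5)  # exact answer: 1/(1 + k*dt)
```

When the pseudo-time iteration converges, the result is the fully implicit time step, so the physical time step is not limited by explicit stability restrictions.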
Accurate Method for Determining Adhesion of Cantilever Beams
Michalske, T.A.; de Boer, M.P.
1999-01-08
Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying.
Accurate vessel segmentation with constrained B-snake.
Yuanzhi Cheng; Xin Hu; Ji Wang; Yadong Wang; Tamura, Shinichi
2015-08-01
We describe an active contour framework with accurate shape and size constraints on the vessel cross-sectional planes to produce the vessel segmentation. It starts with a multiscale vessel axis tracing in 3D computed tomography (CT) data, followed by vessel boundary delineation on the cross-sectional planes derived from the extracted axis. The vessel boundary surface is deformed under constrained movements on the cross sections and is voxelized to produce the final vascular segmentation. The novelty of this paper lies in the accurate contour point detection of thin vessels based on the CT scanning model, in the efficient handling of missing contour points in the problematic regions, and in the active contour model with accurate shape and size constraints. The main advantage of our framework is that it avoids disconnected and incomplete segmentation of the vessels in the problematic regions that contain touching vessels (vessels in close proximity to each other), diseased portions (pathologic structures attached to a vessel), and thin vessels. It is particularly suitable for accurate segmentation of thin and low-contrast vessels. Our method is evaluated and demonstrated on CT data sets from our partner site, and its results are compared with three related methods. Our method is also tested on two publicly available databases and its results are compared with a recently published method. The applicability of the proposed method to some challenging clinical problems, the segmentation of vessels in the problematic regions, is demonstrated with good results in both quantitative and qualitative experiments; our segmentation algorithm can delineate vessel boundaries with a level of variability similar to that obtained manually.
Computational Time-Accurate Body Movement: Methodology, Validation, and Application
1995-10-01
A wing was used that had a leading-edge sweep angle of 45 deg and a NACA 64A010 symmetrical airfoil section; a cross section of the pylon is also symmetrical. [Figure-list residue: "Information Flow for the Time-Accurate Store Trajectory Prediction Process"; "Pitch Rates for NACA-0012 Airfoil".] Comparisons of the computational results to data for a NACA-0012 airfoil following a predefined pitching motion are presented for validation.
Discrete sensors distribution for accurate plantar pressure analyses.
Claverie, Laetitia; Ille, Anne; Moretto, Pierre
2016-12-01
The aim of this study was to determine the distribution of discrete sensors under the footprint for accurate plantar pressure analyses. For this purpose, two different sensor layouts were tested and compared to determine which was the more accurate for monitoring plantar pressure with wireless devices in research and/or clinical practice. Ten healthy volunteers participated in the study (age range: 23-58 years). The barycenter of pressures (BoP) determined from the plantar pressure system (W-inshoe®) was compared to the center of pressures (CoP) determined from a force platform (AMTI) in the medial-lateral (ML) and anterior-posterior (AP) directions. Then, the vertical ground reaction force (vGRF) obtained from both W-inshoe® and the force platform was compared for both layouts for each subject. The BoP and vGRF determined from the plantar pressure system data showed good correlation (SCC) with those determined from the force platform data, notably for the second sensor layout (ML SCC = 0.95; AP SCC = 0.99; vGRF SCC = 0.91). The study demonstrates that an adjusted placement of removable sensors is key to accurate plantar pressure analyses. These results are promising for plantar pressure recording outside clinical or laboratory settings, for long-term monitoring, real-time feedback, or any activity requiring a low-cost system.
Method and apparatus for accurately manipulating an object during microelectrophoresis
Parvin, Bahram A.; Maestre, Marcos F.; Fish, Richard H.; Johnston, William E.
1997-01-01
An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage, having a plurality of tubes positioned on top of it, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage.
Selecting MODFLOW cell sizes for accurate flow fields.
Haitjema, H; Kelson, V; de Lange, W
2001-01-01
Contaminant transport models often use a velocity field derived from a MODFLOW flow field. Consequently, the accuracy of MODFLOW in representing a ground water flow field determines in part the accuracy of the transport predictions, particularly when advective transport is dominant. We compared MODFLOW ground water flow rates and MODPATH particle traces (advective transport) for a variety of conceptual models and different grid spacings to exact or approximate analytic solutions. All of our numerical experiments concerned flow in a single confined or semiconfined aquifer. While MODFLOW appeared robust in terms of both local and global water balance, we found that ground water flow rates, particle traces, and associated ground water travel times are accurate only when sufficiently small cells are used. For instance, a minimum of four or five cells is required to accurately model total ground water inflow in tributaries or other narrow surface water bodies that end inside the model domain. Also, about 50 cells are needed to represent zones of differing transmissivities; otherwise, an incorrect flow field and (locally) inaccurate ground water travel times may result. Finally, to adequately represent leakage through aquitards or through the bottom of surface water bodies, it was found that the maximum allowable cell dimension should not exceed a characteristic leakage length lambda, defined as the square root of the aquifer transmissivity times the resistance of the aquitard or stream bottom. In some cases a cell size of one-tenth of lambda is necessary to obtain accurate results.
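The cell-size rule from this abstract can be stated directly in code. This is a small helper sketch based on the guideline; the function names, units, and example values are illustrative:

```python
import math

def characteristic_leakage_length(transmissivity, resistance):
    """lambda = sqrt(T * c): aquifer transmissivity T (e.g. m^2/d) times
    the hydraulic resistance c (d) of the aquitard or stream bottom."""
    return math.sqrt(transmissivity * resistance)

def max_cell_size(transmissivity, resistance, fraction=1.0):
    """Upper bound on the MODFLOW cell dimension near a leaky boundary;
    per the guideline, use fraction=0.1 where high accuracy is needed."""
    return fraction * characteristic_leakage_length(transmissivity, resistance)

# e.g. T = 500 m^2/d and c = 2 d give lambda = sqrt(1000) ~ 31.6 m
lam = characteristic_leakage_length(500.0, 2.0)
```

With these example values, cells should be no larger than about 31.6 m, or about 3.2 m where the stricter one-tenth rule applies.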
Accurately measuring volcanic plume velocity with multiple UV spectrometers
Williams-Jones, Glyn; Horton, Keith A.; Elias, Tamar; Garbeil, Harold; Mouginis-Mark, Peter J; Sutton, A. Jeff; Harris, Andrew J. L.
2006-01-01
A fundamental problem with all ground-based remotely sensed measurements of volcanic gas flux is the difficulty in accurately measuring the velocity of the gas plume. Since a representative wind speed and direction are used as proxies for the actual plume velocity, there can be considerable uncertainty in reported gas flux values. Here we present a method that uses at least two time-synchronized simultaneously recording UV spectrometers (FLYSPECs) placed a known distance apart. By analyzing the time varying structure of SO2 concentration signals at each instrument, the plume velocity can accurately be determined. Experiments were conducted on Kīlauea (USA) and Masaya (Nicaragua) volcanoes in March and August 2003 at plume velocities between 1 and 10 m s−1. Concurrent ground-based anemometer measurements differed from FLYSPEC-measured plume speeds by up to 320%. This multi-spectrometer method allows for the accurate remote measurement of plume velocity and can therefore greatly improve the precision of volcanic or industrial gas flux measurements.
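The two-spectrometer principle reduces to estimating the lag between the two concentration time series: the lag maximizing their cross-correlation gives the transit time, and velocity follows from the known separation. A minimal sketch with synthetic data (not the FLYSPEC processing chain):

```python
import numpy as np

def plume_velocity(upwind, downwind, separation_m, dt_s):
    """Estimate plume velocity from two time-synchronized concentration
    signals recorded a known distance apart: the lag maximizing their
    cross-correlation gives the transit time, and v = separation / lag."""
    a = upwind - np.mean(upwind)
    b = downwind - np.mean(downwind)
    corr = np.correlate(b, a, mode="full")
    lag_s = (int(np.argmax(corr)) - (len(a) - 1)) * dt_s
    if lag_s <= 0:
        raise ValueError("downwind signal must lag the upwind one")
    return separation_m / lag_s

# synthetic SO2 pulse seen 5 s later at an instrument 50 m downwind
upwind = np.zeros(100)
upwind[20:25] = 1.0
downwind = np.roll(upwind, 10)  # 10 samples * 0.5 s = 5 s transit time
v = plume_velocity(upwind, downwind, separation_m=50.0, dt_s=0.5)
```

Real signals would call for windowing and sub-sample peak interpolation, but the core estimate is this correlation lag.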
Interacting with image hierarchies for fast and accurate object segmentation
NASA Astrophysics Data System (ADS)
Beard, David V.; Eberly, David H.; Hemminger, Bradley M.; Pizer, Stephen M.; Faith, R. E.; Kurak, Charles; Livingston, Mark
1994-05-01
Object definition is an increasingly important area of medical image research. Accurate and fairly rapid object definition is essential for measuring the size and, perhaps more importantly, the change in size of anatomical objects such as kidneys and tumors. Rapid and fairly accurate object definition is essential for 3D real-time visualization, including both surgery planning and radiation oncology treatment planning. One approach to object definition involves the use of 3D image hierarchies, such as Eberly's Ridge Flow. However, the image hierarchy segmentation approach requires user interaction in selecting regions and subtrees. Further, visualizing and comprehending the anatomy and the selected portions of the hierarchy can be problematic. In this paper we describe the Magic Crayon tool, which allows a user to define various anatomical objects rapidly and accurately by interacting with image hierarchies such as those generated with Eberly's Ridge Flow algorithm as well as other 3D image hierarchies. Preliminary results suggest that fairly complex anatomical objects can be segmented in under a minute with sufficient accuracy for 3D surgery planning, 3D radiation oncology treatment planning, and similar applications. Potential modifications to the approach for improved accuracy are summarized.
On the Accurate Prediction of CME Arrival At the Earth
NASA Astrophysics Data System (ADS)
Zhang, Jie; Hess, Phillip
2016-07-01
We will discuss relevant issues regarding the accurate prediction of CME arrival at the Earth, from both observational and theoretical points of view. In particular, we clarify the importance of separating the study of CME ejecta from the ejecta-driven shock in interplanetary CMEs (ICMEs). For a number of CME-ICME events well observed by SOHO/LASCO, STEREO-A and STEREO-B, we carry out 3-D measurements by superimposing geometries onto the ejecta and sheath separately. These measurements are then used to constrain a Drag-Based Model, which is improved by including a height dependence of the drag coefficient. Combining all these factors allows us to create predictions for both fronts at 1 AU and compare them with actual in-situ observations. We show an ability to predict the sheath arrival with an average error of under 4 hours, with an RMS error of about 1.5 hours. For the CME ejecta, the error is less than two hours with an RMS error within an hour. By using the best observations of CMEs, we show the power of our method in accurately predicting CME arrival times. The limitations and implications of our accurate prediction method will be discussed.
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed-form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
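The DEB idea can be illustrated on a textbook cantilever case (a hedged sketch, not the paper's formulation): for a tip deflection proportional to 1/h^3 in the beam height h, the sensitivity equation d(delta)/dh = -3*delta/h, read as a differential equation, integrates to a closed form that stays accurate for large perturbations, unlike the linear Taylor series:

```python
def deb_vs_taylor(delta0, h0, h):
    """Compare a closed-form approximation obtained by integrating the
    sensitivity equation d(delta)/dh = -3*delta/h (DEB-style) against the
    first-order Taylor series, for tip deflection delta ~ 1/h**3."""
    deb = delta0 * (h0 / h) ** 3                    # exact for this model
    taylor = delta0 * (1.0 - 3.0 * (h - h0) / h0)   # linear Taylor series
    return deb, taylor

# 20% height increase: the Taylor estimate degrades, the DEB form does not
deb, taylor = deb_vs_taylor(1.0, 1.0, 1.2)
```

Both approximations use the same sensitivity information at the baseline design; only the way it is extrapolated differs.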
Accurate thermoelastic tensor and acoustic velocities of NaCl
NASA Astrophysics Data System (ADS)
Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.
2015-12-01
Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.
Dynamical correction of control laws for marine ships' accurate steering
NASA Astrophysics Data System (ADS)
Veremey, Evgeny I.
2014-06-01
The objective of this work is the analytical synthesis problem for marine vehicle autopilot design. Despite numerous known methods for its solution, the problem is complicated by an extensive set of dynamical conditions, requirements and restrictions that must be satisfied by the appropriate choice of a steering control law. The aim of this paper is to simplify the synthesis procedure while providing accurate steering with desirable dynamics of the control system. The approach proposed here is based on a special unified multipurpose control law structure that allows decoupling the synthesis into simpler particular optimization problems. In particular, this structure includes a dynamical corrector to support the desirable features of the vehicle's motion under the action of sea wave disturbances. As a result, a new specialized method for the corrector design is proposed to provide accurate steering or a trade-off between accurate and economical steering of the ship. This method guarantees a certain flexibility of the control law with respect to the actual sailing environment; the corresponding tuning can be realized in real time onboard.
[Spectroscopic techniques for accurate monitoring of ruminant methane emissions].
Shang, Zhan-Huan; Guo, Xu-Sheng; Long, Rui-Jun
2009-03-01
The increase in atmospheric CH4 concentration directly forces climate change through the radiation process and indirectly drives it by altering many atmospheric chemical processes. The rapid growth of atmospheric methane has gained the attention of governments and scientists. Countries worldwide now treat the reduction of greenhouse gas emissions as an important task in addressing global climate change, but monitoring of methane concentrations, in particular precision monitoring, is needed to provide a scientific basis for emission reduction measures. So far, CH4 emissions from different animal production systems have received extensive research attention. The methane emissions of ruminants reported in the literature are only estimates: many factors affect methane production in ruminants, many variables are associated with the techniques for measuring it, and the techniques developed so far cannot accurately resolve the dynamics of methane emission by ruminants; there is therefore an urgent need for an accurate method. Currently, spectroscopy is the relatively more accurate and reliable approach. Spectroscopic techniques such as modified infrared methane-measuring systems and laser and near-infrared sensing systems can determine the dynamic methane emission of both housed and grazing ruminants. Spectroscopy is therefore an important methane measurement technique and contributes to proposing methane reduction methods.
Accurate and simple calibration of DLP projector systems
NASA Astrophysics Data System (ADS)
Wilm, Jakob; Olesen, Oline V.; Larsen, Rasmus
2014-03-01
Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively, and most current methods require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP projector systems based on phase-shifting profilometry projection onto a printed calibration target. In contrast to most current methods, the one presented here does not rely on an initial camera calibration and therefore does not carry that error over into the projector calibration. A radial interpolation scheme is used to convert feature coordinates into projector space, allowing for a very accurate procedure and highly accurate determination of parameters including lens distortion. Our implementation acquires printed planar calibration scenes in less than 1 s, making the method both fast and convenient. We evaluate the method in terms of reprojection errors and structured-light image reconstruction quality.
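The phase-shifting step underlying such profilometry-based calibration can be sketched briefly. Assuming N fringe images with phase steps of 2π/N (a standard N-step algorithm, not necessarily the exact variant used in the paper), the wrapped phase at each pixel follows from an arctangent of sine- and cosine-weighted sums:

```python
import numpy as np

def wrapped_phase(images):
    """Estimate the wrapped phase from N equally phase-shifted fringe
    images, where frame n records I_n = A + B*cos(phase + 2*pi*n/N).

    images: array of shape (N, H, W). Returns phase in (-pi, pi].
    """
    n = len(images)
    shifts = 2 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(s) for img, s in zip(images, shifts))
    den = sum(img * np.cos(s) for img, s in zip(images, shifts))
    # atan2(num, den) evaluates to -phase, so negate
    return -np.arctan2(num, den)
```

Phase unwrapping and the radial interpolation into projector coordinates would follow this step.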
Accurate genome relative abundance estimation based on shotgun metagenomic reads.
Xia, Li C; Cram, Jacob A; Chen, Ting; Fuhrman, Jed A; Sun, Fengzhu
2011-01-01
Accurate estimation of microbial community composition based on metagenomic sequencing data is fundamental for subsequent metagenomics analysis. Prevalent estimation methods are mainly based on directly summarizing alignment results or their variants, which often yields biased and/or unstable estimates. We have developed a unified probabilistic framework (named GRAMMy) that explicitly models read assignment ambiguities, genome size biases and read distributions along the genomes. The maximum likelihood method is employed to compute the Genome Relative Abundance of microbial communities using mixture model theory (GRAMMy). GRAMMy has been demonstrated to give estimates that are accurate and robust across both simulated and real read benchmark datasets. We applied GRAMMy to a collection of 34 metagenomic read sets from four metagenomics projects and identified 99 frequent species (minimally 0.5% abundant in at least 50% of the datasets) in the human gut samples. Our results show substantial improvements over previous studies, such as adjusting the over-estimated abundance of Bacteroides species in human gut samples, and provide a new reference-based strategy for metagenomic sample comparisons. GRAMMy can be used flexibly with many read assignment tools (mapping, alignment or composition-based), even with low-sensitivity mapping results from huge short-read datasets. It will be increasingly useful as an accurate and robust tool for abundance estimation with the growing size of read sets and the expanding database of reference genomes.
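The mixture-model idea can be illustrated with a minimal EM iteration over a read-by-genome likelihood matrix (a deliberately simplified sketch: GRAMMy additionally models genome sizes and read distributions, which are omitted here):

```python
import numpy as np

def em_abundance(like, n_iter=200):
    """EM estimate of genome relative abundances from a matrix of
    per-read, per-genome likelihoods (rows: reads, cols: genomes),
    in the spirit of mixture-model estimators such as GRAMMy."""
    n_reads, n_genomes = like.shape
    a = np.full(n_genomes, 1.0 / n_genomes)      # mixing proportions
    for _ in range(n_iter):
        # E-step: posterior probability of each read's genome of origin
        resp = like * a
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: abundances are the mean responsibilities
        a = resp.mean(axis=0)
    return a
```

Because ambiguously mapped reads are soft-assigned by the E-step, the estimate is less biased than hard counting of best hits.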
Accurate modelling of unsteady flows in collapsible tubes.
Marchandise, Emilie; Flaud, Patrice
2010-01-01
The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers understand physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to flow in collapsible tubes such as veins. The main difference from cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone unless a limiting procedure is introduced. We show that our second-order RK-DG method, equipped with an approximate Roe Riemann solver and a slope-limiting procedure, allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is modelled more accurately than with traditional methods such as finite difference or finite volume methods. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions where available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and a pathological subject. We compare our results with experimental data and discuss the sensitivity of our model to its parameters.
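The slope-limiting idea that keeps such a scheme monotone near elastic jumps can be illustrated with the classic minmod limiter, a common choice in RK-DG codes (the paper's exact limiting procedure may differ):

```python
import numpy as np

def minmod(a, b, c):
    """Minmod slope limiter: returns the smallest-magnitude argument
    when all three share a sign, and zero otherwise. Applied to a
    cell's reconstructed slope and its forward/backward differences,
    it suppresses the oscillations a high-order scheme would otherwise
    produce at a jump."""
    s = np.sign(a)
    same = (np.sign(b) == s) & (np.sign(c) == s)
    mag = np.minimum(np.abs(a), np.minimum(np.abs(b), np.abs(c)))
    return np.where(same, s * mag, 0.0)
```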
An accurate metric for the spacetime around rotating neutron stars.
NASA Astrophysics Data System (ADS)
Pappas, George
2017-01-01
The problem of having an accurate description of the spacetime around rotating neutron stars is of great astrophysical interest. For astrophysical applications, one needs a metric that captures all the properties of the spacetime around a rotating neutron star. Furthermore, an accurate, appropriately parameterised metric, i.e., a metric given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem: inferring the properties of the structure of a neutron star from astrophysical observations. In this work we present such an approximate stationary and axisymmetric metric for the exterior of rotating neutron stars, constructed using the Ernst formalism and parameterised by the relativistic multipole moments of the central object. This metric is given as an expansion in Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical properties of a neutron star spacetime as they are calculated numerically in general relativity. Because the metric is given in terms of an expansion, the expressions are much simpler and easier to implement than in previous approaches. For the parameterisation of the metric in general relativity, the recently discovered universal 3-hair relations are used to produce a 3-parameter metric. Finally, a straightforward extension of this metric is given for scalar-tensor theories with a massless scalar field, which also admit a formulation in terms of an Ernst potential.
Fast and accurate estimation for astrophysical problems in large databases
NASA Astrophysics Data System (ADS)
Richards, Joseph W.
2010-10-01
A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parametrization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems.
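The Markov random-walk parametrization described above is essentially a diffusion-map embedding. A minimal sketch, assuming a Gaussian affinity kernel with a user-chosen bandwidth (the thesis builds considerably more on top of this):

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_coords=2):
    """Low-dimensional coordinates that preserve the connectivity of
    data points under a Markov random walk over the data set.

    X: (n_points, n_features) array. Returns (n_points, n_coords)."""
    # pairwise squared distances -> Gaussian affinities
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    # row-normalize to get the Markov transition matrix of the walk
    P = K / K.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # skip the trivial constant eigenvector (eigenvalue 1)
    idx = order[1:n_coords + 1]
    return vecs.real[:, idx] * vals.real[idx]
```

The leading non-trivial eigenvectors then serve as prototypes, regression eigenfunctions, or classifier features, as outlined in the abstract.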
The FLUKA Code: An Accurate Simulation Tool for Particle Therapy
Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis
2016-01-01
Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of an MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution addresses the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field, as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, with both proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically relevant cases will be presented, in terms of both absorbed dose and biological dose calculations, describing the various available features. PMID:27242956
ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-01-01
The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104
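For the simplest possible network, a single birth-death process, the direct-solution idea reduces to assembling the truncated rate matrix on a finite state buffer and solving for its null vector. This toy sketch shows the state-space truncation only, not the ACME buffer-sizing or macrostate-aggregation machinery:

```python
import numpy as np

def birth_death_steady_state(k, gamma, n_max):
    """Steady-state probabilities of a truncated dCME for the network
    0 -> X at rate k and X -> 0 at rate gamma*n, solved directly on
    a finite buffer of states 0..n_max."""
    n = n_max + 1
    A = np.zeros((n, n))                 # generator: dp/dt = A @ p
    for i in range(n):
        if i + 1 < n:                    # birth i -> i+1 (off at buffer edge)
            A[i + 1, i] += k
            A[i, i] -= k
        if i > 0:                        # death i -> i-1
            A[i - 1, i] += gamma * i
            A[i, i] -= gamma * i
    # solve A @ p = 0 subject to sum(p) = 1 via least squares
    M = np.vstack([A, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    return p
```

For this network the exact answer is a (truncated) Poisson distribution with mean k/gamma, which makes the truncation error directly checkable.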
The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.
Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis
2016-01-01
Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of an MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution addresses the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field, as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, with both proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically relevant cases will be presented, in terms of both absorbed dose and biological dose calculations, describing the various available features.
D-BRAIN: Anatomically Accurate Simulated Diffusion MRI Brain Data.
Perrone, Daniele; Jeurissen, Ben; Aelterman, Jan; Roine, Timo; Sijbers, Jan; Pizurica, Aleksandra; Leemans, Alexander; Philips, Wilfried
2016-01-01
Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume effect, and a limited spatial and angular resolution. The difficulty lies in the lack of a realistic brain phantom on the one hand, and a sufficiently accurate way of modeling the acquisition-related degradation on the other. This paper proposes a software phantom that approximates a human brain to a high degree of realism and that can incorporate complex brain-like structural features. We refer to it as a Diffusion BRAIN (D-BRAIN) phantom. Also, we propose an accurate model of a (DW) MRI acquisition protocol to allow for validation of methods in realistic conditions with data imperfections. The phantom model simulates anatomical and diffusion properties for multiple brain tissue components, and can serve as a ground-truth to evaluate FT algorithms, among others. The simulation of the acquisition process allows one to include noise, partial volume effects, and limited spatial and angular resolution in the images. In this way, the effect of image artifacts on, for instance, fiber tractography can be investigated with great detail. The proposed framework enables reliable and quantitative evaluation of DW-MR image processing and FT algorithms at the level of large-scale WM structures. The effect of noise levels and other data characteristics on cortico-cortical connectivity and tractography-based grey matter parcellation can be investigated as well.
ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS.
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-01-01
The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks.
Development of accurate force fields for the simulation of biomineralization.
Raiteri, Paolo; Demichelis, Raffaella; Gale, Julian D
2013-01-01
The existence of an accurate force field (FF) model that reproduces the free-energy landscape is a key prerequisite for the simulation of biomineralization. Here, the stages in the development of such a model are discussed including the quality of the water model, the thermodynamics of polymorphism, and the free energies of solvation for the relevant species. The reliability of FFs can then be benchmarked against quantities such as the free energy of ion pairing in solution, the solubility product, and the structure of the mineral-water interface.
Using Scaling for accurate stochastic macroweather forecasts (including the "pause")
NASA Astrophysics Data System (ADS)
Lovejoy, Shaun; del Rio Amador, Lenin
2015-04-01
At scales corresponding to the lifetimes of structures of planetary extent (about 5-10 days), atmospheric processes undergo a drastic "dimensional transition" from high-frequency weather to lower-frequency macroweather processes. While conventional GCMs generally reproduce both the transition and the corresponding (scaling) statistics well, due to sensitive dependence on initial conditions the role of the weather-scale processes is to provide random perturbations to the macroweather processes. The main problem with GCMs is thus that their long-term (control run, unforced) statistics converge to the GCM climate, which is somewhat different from the real climate. This motivates building a stochastic model that exploits the empirical scaling properties and past data. It turns out that macroweather intermittency is typically low (the multifractal corrections are small), so the processes can be approximated by fractional Gaussian noise (fGn), whose memory can be enormous. For example, for annual forecasts using the observed global temperature exponent, even 50 years of global temperature data would only allow us to exploit 90% of the available memory (for ocean regions, the figure increases to 600 years). The only complication is that anthropogenic effects dominate the global statistics at time scales beyond about 20 years; however, these are easy to remove using the CO2 forcing as a linear surrogate for all the anthropogenic effects. Using this theoretical framework, we show how to make accurate stochastic macroweather forecasts. We illustrate this on monthly and annual scale series of global and northern hemisphere surface temperatures (including nearly perfect hindcasts of the "pause" in the warming since 1998). We obtain forecast skill nearly as high as the theoretical (scaling) predictability limits allow. These scaling hindcasts - using a single effective climate sensitivity and single scaling exponent are
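A fractional-Gaussian-noise predictor of the kind described can be sketched as a minimum-variance linear forecast built from the fGn autocovariance. This is an illustrative reconstruction under the stated fGn approximation; the authors' operational system additionally handles, e.g., the removal of anthropogenic forcing:

```python
import numpy as np

def fgn_acov(k, H):
    """Autocovariance of unit-variance fractional Gaussian noise at
    (possibly array-valued) integer lag k, for Hurst exponent H."""
    k = np.abs(k).astype(float)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H)
                  + np.abs(k - 1) ** (2 * H))

def fgn_forecast(x, H):
    """Minimum-variance linear one-step forecast of an fGn series x:
    solve the Toeplitz normal equations Gamma w = c and return w @ x,
    exploiting the long memory of the process."""
    n = len(x)
    lags = np.arange(n)
    Gamma = fgn_acov(lags[None, :] - lags[:, None], H)  # covariance of the past
    c = fgn_acov(np.arange(n, 0, -1), H)                # cov(next value, past)
    w = np.linalg.solve(Gamma, c)
    return w @ x
```

For H = 0.5 the process is white noise, the memory vanishes, and the forecast is zero; for H closer to 1 the weights spread far into the past, which is the source of the long-range skill claimed above.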
Accurate Excited State Geometries within Reduced Subspace TDDFT/TDA.
Robinson, David
2014-12-09
A method for the calculation of TDDFT/TDA excited state geometries within a reduced subspace of Kohn-Sham orbitals has been implemented and tested. Accurate geometries are found for all of the fluorophore-like molecules tested, with at most all valence occupied orbitals and half of the virtual orbitals included but for some molecules even fewer orbitals. Efficiency gains of between 15 and 30% are found for essentially the same level of accuracy as a standard TDDFT/TDA excited state geometry optimization calculation.
Pink-Beam, Highly-Accurate Compact Water Cooled Slits
Lyndaker, Aaron; Deyhim, Alex; Jayne, Richard; Waterman, Dave; Caletka, Dave; Steadman, Paul; Dhesi, Sarnjeet
2007-01-19
Advanced Design Consulting, Inc. (ADC) has designed accurate compact slits for applications where high precision is required. The system consists of vertical and horizontal slit mechanisms, a vacuum vessel which houses them, water cooling lines with vacuum guards connected to the individual blades, stepper motors with linear encoders, limit (home position) switches, and electrical connections including internal wiring for a drain current measurement system. The total slit size is adjustable from 0 to 15 mm both vertically and horizontally. Each of the four blades is individually controlled and motorized. In this paper, a summary of the design and a Finite Element Analysis of the system are presented.
Mapping methods for computationally efficient and accurate structural reliability
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1992-01-01
Mapping methods are developed to improve the accuracy and efficiency of probabilistic structural analyses with coarse finite element meshes. The mapping methods consist of: (1) deterministic structural analyses with fine (convergent) finite element meshes, (2) probabilistic structural analyses with coarse finite element meshes, (3) the relationship between the probabilistic structural responses from the coarse and fine finite element meshes, and (4) a probabilistic mapping. The results show that the scatter of the probabilistic structural responses and structural reliability can be accurately predicted using a coarse finite element model with proper mapping methods. Therefore, large structures can be analyzed probabilistically using finite element methods.
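One plausible reading of the coarse-to-fine mapping is a calibration of cheap coarse-mesh Monte Carlo samples by the ratio of the deterministic fine- and coarse-mesh solutions. This is a deliberately simple multiplicative sketch; the paper's probabilistic mapping is more general:

```python
import numpy as np

def mapped_samples(coarse_samples, u_coarse_det, u_fine_det):
    """Scale probabilistic structural responses computed on a coarse
    finite element mesh to the fine-mesh level, using one deterministic
    solution on each mesh to calibrate the mapping."""
    return np.asarray(coarse_samples) * (u_fine_det / u_coarse_det)

# scatter from a coarse-mesh Monte Carlo run, corrected to fine-mesh scale
rng = np.random.default_rng(42)
coarse = rng.normal(loc=9.0, scale=0.9, size=10000)   # coarse-mesh responses
fine = mapped_samples(coarse, u_coarse_det=9.0, u_fine_det=10.0)
```

The point of the construction is that only one (expensive) fine-mesh analysis is deterministic, while the scatter is generated entirely on the cheap coarse mesh.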
Accurate pressure gradient calculations in hydrostatic atmospheric models
NASA Technical Reports Server (NTRS)
Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet
1987-01-01
A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.
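The exactness claim can be checked in the simplest case of the constant-lapse-rate regime: constant potential temperature, for which the hydrostatic integral has a closed form. The constants below are standard dry-air values, and this is an illustrative check rather than the paper's code:

```python
import numpy as np

R, CP = 287.05, 1004.0    # dry-air gas constant, heat capacity (J kg^-1 K^-1)
KAPPA = R / CP
P0 = 1.0e5                # reference pressure (Pa)

def thickness_exact(theta, p_bot, p_top):
    """Geopotential thickness between two pressure levels for constant
    potential temperature theta: the hydrostatic integral
    -integral of R*T d(ln p), with T = theta*(p/P0)**KAPPA, in closed form."""
    return R * theta / KAPPA * ((p_bot / P0) ** KAPPA - (p_top / P0) ** KAPPA)

def thickness_numeric(theta, p_bot, p_top, n=20000):
    """The same integral by brute-force trapezoid quadrature in ln(p)."""
    lnp = np.linspace(np.log(p_bot), np.log(p_top), n)
    y = R * theta * (np.exp(lnp) / P0) ** KAPPA
    return -np.sum(np.diff(lnp) * (y[1:] + y[:-1]) / 2.0)
```

For a 300 K potential temperature the 1000-500 hPa thickness comes out near 54 kJ/kg of geopotential, i.e. roughly 5.5 km, consistent with typical mid-latitude soundings.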
Beam Profile Monitor With Accurate Horizontal And Vertical Beam Profiles
Havener, Charles C [Knoxville, TN; Al-Rejoub, Riad [Oak Ridge, TN
2005-12-26
A widely used scanner device that rotates a single helically shaped wire probe in and out of a particle beam at different beamline positions to give a pair of mutually perpendicular beam profiles is modified by the addition of a second wire probe. As a result, a pair of mutually perpendicular beam profiles is obtained at a first beamline position, and a second pair of mutually perpendicular beam profiles is obtained at a second beamline position. The simple modification not only provides more accurate beam profiles, but also provides a measurement of the beam divergence and quality in a single compact device.
Accurate documentation, correct coding, and compliance: it's your best defense!
Coles, T S; Babb, E F
1999-07-01
This article focuses on the need for physicians to maintain an awareness of regulatory policy and the law impacting the federal government's medical insurance programs, and to internalize and apply this knowledge in their practices. Basic information concerning selected fraud and abuse statutes and the civil monetary penalties and sanctions for noncompliance is discussed. The application of accurate documentation and correct coding principles, as well as the rationale for implementing an effective compliance plan in order to prevent fraud and abuse and/or minimize disciplinary action from government regulatory agencies, is emphasized.
Accurate energy levels for singly ionized platinum (Pt II)
NASA Technical Reports Server (NTRS)
Reader, Joseph; Acquista, Nicolo; Sansonetti, Craig J.; Engleman, Rolf, Jr.
1988-01-01
New observations of the spectrum of Pt II have been made with hollow-cathode lamps. The region from 1032 to 4101 A was observed photographically with a 10.7-m normal-incidence spectrograph. The region from 2245 to 5223 A was observed with a Fourier-transform spectrometer. Wavelength measurements were made for 558 lines. The uncertainties vary from 0.0005 to 0.004 A. From these measurements and three parity-forbidden transitions in the infrared, accurate values were determined for 28 even and 72 odd energy levels of Pt II.
Calibration Techniques for Accurate Measurements by Underwater Camera Systems
Shortis, Mark
2015-01-01
Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172
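The refractive effect that such a calibration must absorb, implicitly (extra distortion terms) or explicitly (ray tracing), can be quantified directly from Snell's law at a flat housing port. Nominal refractive indices are assumed:

```python
import numpy as np

def in_water_angle(theta_air, n_air=1.0, n_water=1.33):
    """Snell's law at a flat port: the ray angle in water for a given
    in-air angle (radians, measured from the port normal).
    n_air * sin(theta_air) = n_water * sin(theta_water)."""
    return np.arcsin(np.sin(theta_air) * n_air / n_water)
```

The compression of ray angles (a 30-degree in-air ray travels at about 22 degrees in water) is why an in-air calibration cannot simply be reused underwater.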
Accurate Energy Transaction Allocation using Path Integration and Interpolation
NASA Astrophysics Data System (ADS)
Bhide, Mandar Mohan
This thesis investigates many of the popular cost allocation methods that are based on actual usage of the transmission network. The Energy Transaction Allocation (ETA) method originally proposed by A. Fradi, S. Brigonne and B. Wollenberg, which offers the unique advantage of accurately allocating transmission network usage, is discussed subsequently. A modified calculation of ETA based on a simple interpolation technique is then proposed. The proposed methodology not only increases the accuracy of the calculation but also reduces the number of calculations to less than half of that required by the original ETA method.
Accurate and fast computation of transmission cross coefficients
NASA Astrophysics Data System (ADS)
Apostol, Štefan; Hurley, Paul; Ionescu, Radu-Cristian
2015-03-01
Precise and fast computation of aerial images is essential. Typical lithographic simulators employ a Köhler illumination system, for which aerial imagery is obtained using a large number of Transmission Cross Coefficients (TCCs). These are generally computed by slow numerical evaluation of a double integral. We review the general framework in which the 2D imaging problem is solved and then propose a fast and accurate method to obtain the TCCs. We derive analytical solutions and thus avoid the complexity-accuracy trade-off encountered with numerical integration. Compared to other analytical integration methods, the one presented is faster, more general and more tractable.
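The integral behind each TCC can be written down and evaluated numerically in a simplified 1-D setting with a binary source and pupil (an illustrative sketch; the paper's point is precisely to replace such slow quadrature with analytical solutions, and real simulators use 2-D pupils):

```python
import numpy as np

def tcc(f, g, sigma=0.5, n=4001):
    """Numerically evaluate a 1-D transmission cross coefficient
    TCC(f, g) = integral of J(r) * P(r + f) * conj(P(r + g)) dr
    for an ideal Koehler source of partial-coherence factor sigma
    and a unit-width binary pupil."""
    r = np.linspace(-2.0, 2.0, n)
    dr = r[1] - r[0]
    J = (np.abs(r) <= sigma).astype(float)          # source intensity
    P = lambda x: (np.abs(x) <= 1.0).astype(float)  # binary pupil
    return np.sum(J * P(r + f) * P(r + g)) * dr
```

Even in 1-D, filling a full table of TCC(f, g) values this way requires one quadrature per frequency pair, which is the cost the analytical approach removes.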
Calibration Techniques for Accurate Measurements by Underwater Camera Systems.
Shortis, Mark
2015-12-07
Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.
NASA Astrophysics Data System (ADS)
Saha, Sujoy Kumar; Dayanidhi, G. L.
2012-12-01
Experimental friction factor and Nusselt number data for laminar flow through a circular duct having integral helical corrugation and fitted with a centre-cleared twisted tape are presented. Predictive friction factor and Nusselt number correlations are also presented, and the thermohydraulic performance has been evaluated. The major finding of this experimental investigation is that centre-cleared twisted tapes in combination with integral helical corrugation perform better than either enhancement technique acting alone for laminar flow through a circular duct, up to a certain amount of twisted-tape centre clearance.
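The abstract does not state which performance evaluation criterion was used; a common choice for comparing such enhancement techniques is the constant-pumping-power ratio R = (Nu/Nu_s)/(f/f_s)^(1/3), sketched below with illustrative numbers:

```python
# Hedged sketch of a standard thermohydraulic performance evaluation criterion
# (illustrative only; the paper may use a different definition): R > 1 means
# the enhanced duct gains more heat transfer than it pays in extra friction
# at equal pumping power, relative to the smooth duct.
def performance_ratio(Nu, Nu_smooth, f, f_smooth):
    return (Nu / Nu_smooth) / (f / f_smooth) ** (1.0 / 3.0)

# Hypothetical laminar-flow values: Nu_smooth = 4.36 (constant heat flux).
R = performance_ratio(Nu=12.0, Nu_smooth=4.36, f=0.2, f_smooth=0.06)
print(round(R, 2))
```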
NASA Technical Reports Server (NTRS)
Kihm, K. D.; Allen, J. S.; Hallinan, K. P.; Pratt, D. M.
2004-01-01
In order to enhance the fundamental understanding of thin film evaporation and thereby improve the critical design concept for two-phase heat transfer devices, microscale heat and mass transport is to be investigated for the transition film region using state-of-the-art optical diagnostic techniques. By utilizing a microgravity environment, the length scales of the transition film region can be extended sufficiently, from submicron to micron, to probe and measure the microscale transport fields which are affected by intermolecular forces. Extension of the thin film dimensions under microgravity will be achieved by using a conical evaporator made of a thin silicon substrate under which concentric and individually controlled micro-heaters are vapor-deposited to maintain either a constant surface temperature or a controlled temperature variation. Local heat transfer rates, required to maintain the desired wall temperature boundary condition, will be measured and recorded by the concentric thermoresistance heaters controlled by a Wheatstone bridge circuit. The proposed experiment employs a novel technique to maintain a constant liquid volume and liquid pressure in the capillary region of the evaporating meniscus so as to maintain quasi-stationary conditions during measurements on the transition film region. Alternating use of Fizeau interferometry via white and monochromatic light sources will measure the thin film slope and thickness variation, respectively. Molecular Fluorescence Tracking Velocimetry (MFTV), utilizing caged fluorophores of approximately 10 nm in size as seeding particles, will be used to measure the velocity profiles in the thin film region. An optical sectioning technique using confocal microscopy will allow submicron depthwise resolution for the velocity measurements within the film for thicknesses on the order of a few microns.
Digital analysis of the fluorescence image-displacement PDFs, as described in the main proposal, can further enhance the depthwise resolution.
Tome, Carlos N; Caro, J A; Lebensohn, R A; Unal, Cetin; Arsenlis, A; Marian, J; Pasamehmetoglu, K
2010-01-01
Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model nuclear fuel systems in order to develop predictive tools is critical. Not only are fabrication and performance models needed to understand specific aspects of the nuclear fuel, but fully coupled fuel simulation codes are also required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding and predictive capability for simulating the phase and microstructural behavior of the nuclear fuel system materials and matrices. In this paper we review the current status of the advanced modeling and simulation of nuclear reactor cladding, with emphasis on what is available and what is to be developed in each scale of the project, how we propose to pass information from one scale to the next, and what experimental information is required for benchmarking and advancing the modeling at each scale level.
Quality metric for accurate overlay control in <20nm nodes
NASA Astrophysics Data System (ADS)
Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki
2013-04-01
The semiconductor industry is moving toward 20nm nodes and below. As the overlay (OVL) budget gets tighter at these advanced nodes, accuracy in each nanometer of OVL error becomes critical. When process owners select OVL targets and methods for their process, they must do so wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going toward the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named `Qmerit' for its imaging-based OVL (IBO) targets, which is obtained on-the-fly for each OVL measurement point in X & Y. This Qmerit score will enable process owners to select compatible targets that provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.
Ultra-accurate collaborative information filtering via directed user similarity
NASA Astrophysics Data System (ADS)
Guo, Q.; Song, W.-J.; Liu, J.-G.
2014-07-01
A key challenge of collaborative filtering (CF) information filtering is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users would be larger than those in the opposite direction, the large-degree users' selections are recommended extensively by traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to suppress the influence of mainstream preferences, we present a directed second-order CF (HDCF) algorithm specifically to address the challenge of accuracy and diversity in CF algorithms. Numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score reaches 0.0767 and 0.0402, an improvement of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also greatly enhanced. Without relying on any context-specific information, tuning the similarity direction of CF algorithms yields accurate and diverse recommendations. This work suggests that user similarity direction is an important factor in improving personalized recommendation performance.
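The degree asymmetry described above can be sketched in a few lines. The normalization below is illustrative, not the paper's exact HDCF definition: dividing the item overlap by the source user's degree only makes the similarity directed, so it is larger from a small-degree user toward a large-degree user than the reverse.

```python
import numpy as np

# Hypothetical directed user similarity: s[i, j] = |items(i) ∩ items(j)| / k_i,
# normalized by the degree of the source user i only, so s[i, j] != s[j, i].
def directed_similarity(A):
    # A: (users x items) binary selection matrix
    overlap = A @ A.T                        # common selected items
    k = A.sum(axis=1, keepdims=True)         # user degrees
    with np.errstate(divide="ignore", invalid="ignore"):
        S = np.where(k > 0, overlap / k, 0.0)
    return S

A = np.array([[1, 1, 0, 0],      # small-degree user (2 items)
              [1, 1, 1, 1],      # large-degree user (4 items)
              [0, 1, 0, 0]], dtype=float)
S = directed_similarity(A)
print(S[0, 1], S[1, 0])   # 1.0 vs 0.5: small-degree -> large-degree is larger
```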
An Accurate Temperature Correction Model for Thermocouple Hygrometers
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
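The two-temperature model described above amounts to interpolating the calibration slope between the two calibration temperatures before converting the measured voltage to water potential. The coefficients below are illustrative, not the paper's values:

```python
# Hedged sketch of the two-temperature correction idea (illustrative numbers;
# the paper derives its models from the thermojunction radius and theoretical
# voltage sensitivity): the calibration slope measured at two temperatures is
# interpolated linearly to the measurement temperature.
def corrected_slope(T, T1, s1, T2, s2):
    # slope in, e.g., microvolts per MPa at temperature T (degrees C)
    return s1 + (s2 - s1) * (T - T1) / (T2 - T1)

def water_potential(microvolts, T, T1, s1, T2, s2):
    return microvolts / corrected_slope(T, T1, s1, T2, s2)

# Example: slopes of 4.0 and 6.0 uV/MPa calibrated at 15 C and 35 C,
# measurement made at 25 C.
s25 = corrected_slope(25.0, 15.0, 4.0, 35.0, 6.0)
print(s25)  # 5.0
```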
Machine Learning of Parameters for Accurate Semiempirical Quantum Chemical Calculations
2015-01-01
We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules. PMID:26146493
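The core mechanism, learning molecule-specific corrections to a parameter as a function of a molecular descriptor, can be sketched with kernel ridge regression on synthetic data. Everything below (descriptor, target corrections, kernel width) is illustrative, not the ML-OM2 setup:

```python
import numpy as np

# Minimal numpy sketch of the ML-SQC idea with synthetic data: fit a kernel
# ridge regression model that maps a scalar molecular descriptor to a
# correction of a semiempirical parameter, then apply it to a new molecule.
def krr_fit(X, y, gamma=0.5, lam=1e-6):
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)  # Gaussian kernel
    return np.linalg.solve(K + lam * np.eye(len(X)), y)  # dual coefficients

def krr_predict(X_train, alpha, x, gamma=0.5):
    k = np.exp(-gamma * (x - X_train) ** 2)
    return k @ alpha

X = np.linspace(0.0, 3.0, 20)        # synthetic descriptor values
y = 0.1 * np.sin(X)                  # synthetic parameter corrections
alpha = krr_fit(X, y)
pred = krr_predict(X, alpha, 1.5)
print(round(float(pred), 4))         # close to 0.1*sin(1.5) ~ 0.0997
```

The design choice mirrors the abstract: the baseline semiempirical parameters stay fixed, and the ML model only supplies a smooth, descriptor-dependent adjustment, which is what preserves transferability.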
Accurate Anharmonic IR Spectra from Integrated Cc/dft Approach
NASA Astrophysics Data System (ADS)
Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Carnimeo, Ivan; Puzzarini, Cristina
2014-06-01
The recent implementation of the computation of infrared (IR) intensities beyond the double harmonic approximation [1] paved the route to routine calculations of infrared spectra for a wide set of molecular systems. Contrary to common beliefs, second-order perturbation theory is able to deliver results of high accuracy provided that anharmonic resonances are properly managed [1,2]. It has been already shown for several small closed- and open shell molecular systems that the differences between coupled cluster (CC) and DFT anharmonic wavenumbers are mainly due to the harmonic terms, paving the route to introduce effective yet accurate hybrid CC/DFT schemes [2]. In this work we present that hybrid CC/DFT models can be applied also to the IR intensities leading to the simulation of highly accurate fully anharmonic IR spectra for medium-size molecules, including ones of atmospheric interest, showing in all cases good agreement with experiment even in the spectral ranges where non-fundamental transitions are predominant[3]. [1] J. Bloino and V. Barone, J. Chem. Phys. 136, 124108 (2012) [2] V. Barone, M. Biczysko, J. Bloino, Phys. Chem. Chem. Phys., 16, 1759-1787 (2014) [3] I. Carnimeo, C. Puzzarini, N. Tasinato, P. Stoppa, A. P. Charmet, M. Biczysko, C. Cappelli and V. Barone, J. Chem. Phys., 139, 074310 (2013)
Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.
Huynh, Linh; Tagkopoulos, Ilias
2015-08-21
In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
Accurate estimation of sigma(exp 0) using AIRSAR data
NASA Technical Reports Server (NTRS)
Holecz, Francesco; Rignot, Eric
1995-01-01
During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data, as well as estimation of geophysical parameters from SAR data, have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
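The geometric part of the correction can be illustrated with the standard first-order area normalization: radar brightness (beta-nought) is converted to sigma-nought using the DEM-derived local incidence angle. This is a generic sketch, not the paper's full per-cell antenna-pattern correction:

```python
import math

# First-order radiometric terrain correction (illustrative): normalize radar
# brightness by the locally illuminated ground area using the local incidence
# angle derived from the DEM, sigma0 = beta0 * sin(theta_local).
def sigma0_from_beta0(beta0, theta_local_deg):
    return beta0 * math.sin(math.radians(theta_local_deg))

# Same brightness, two slopes: a smaller local incidence angle (foreslope)
# spreads the echo over more ground, giving a smaller sigma0.
print(round(sigma0_from_beta0(0.1, 30.0), 3))  # 0.05
print(round(sigma0_from_beta0(0.1, 60.0), 3))
```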
Machine learning of parameters for accurate semiempirical quantum chemical calculations
Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter
2015-04-14
We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.
Accurate colon residue detection algorithm with partial volume segmentation
NASA Astrophysics Data System (ADS)
Li, Xiang; Liang, Zhengrong; Zhang, PengPeng; Kutcher, Gerald J.
2004-05-01
Colon cancer is the second leading cause of cancer-related death in the United States. Earlier detection and removal of polyps can dramatically reduce the chance of developing a malignant tumor. Due to some limitations of optical colonoscopy used in the clinic, many researchers have developed virtual colonoscopy as an alternative technique, in which accurate colon segmentation is crucial. However, the partial volume effect and the existence of residue make it very challenging. The electronic colon cleaning technique proposed by Chen et al. is a very attractive method, which is a kind of hard segmentation method. As mentioned in their paper, some artifacts were produced, which might affect accurate colon reconstruction. In this paper, instead of labeling each voxel with a unique label or tissue type, the percentage of different tissues within each voxel, which we call a mixture, was considered in establishing a maximum a posteriori probability (MAP) image-segmentation framework. A Markov random field (MRF) model was developed to reflect the spatial information for the tissue mixtures. The spatial information based on hard segmentation was used to determine which tissue types are in a specific voxel. Parameters of each tissue class were estimated by the expectation-maximization (EM) algorithm during the MAP tissue-mixture segmentation. Real CT experimental results demonstrated that the partial volume effects between four tissue types have been precisely detected. Meanwhile, the residue has been electronically removed, and a very smooth and clean interface along the colon wall has been obtained.
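The soft-segmentation mechanism, EM-estimated class parameters with fractional per-voxel memberships instead of hard labels, can be sketched on a 1-D intensity histogram. This simplified sketch uses synthetic intensities and omits the MRF spatial prior of the paper's MAP framework:

```python
import numpy as np

# Two-class Gaussian mixture fitted by EM: the E-step responsibilities r[k]
# play the role of fractional tissue content per voxel (soft segmentation),
# and the M-step re-estimates each class's mean, spread, and weight.
def em_two_class(x, mu, sigma, pi, iters=50):
    for _ in range(iters):
        p = np.stack([pi[k] / sigma[k] *
                      np.exp(-(x - mu[k]) ** 2 / (2 * sigma[k] ** 2))
                      for k in range(2)])
        r = p / p.sum(axis=0)                       # E-step: tissue fractions
        for k in range(2):                          # M-step: class parameters
            w = r[k].sum()
            mu[k] = (r[k] * x).sum() / w
            sigma[k] = np.sqrt((r[k] * (x - mu[k]) ** 2).sum() / w)
            pi[k] = w / len(x)
    return mu, sigma, r

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(5, 1, 500)])  # two "tissues"
mu, sigma, r = em_two_class(x, mu=[1.0, 4.0], sigma=[2.0, 2.0], pi=[0.5, 0.5])
print(np.round(np.sort(np.array(mu)), 1))   # class means close to 0 and 5
```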
A Highly Accurate Face Recognition System Using Filtering Correlation
NASA Astrophysics Data System (ADS)
Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko
2007-09-01
The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate even with low-resolution facial images (64 × 64 pixels). An operation speed of less than 10 ms was achieved using a personal computer with a central processing unit (CPU) of 3 GHz and 2 GB memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: 0% false acceptance rate and 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low-resolution images such as web-based images or faces captured by a monitoring camera.
Accurate Runout Measurement for HDD Spinning Motors and Disks
NASA Astrophysics Data System (ADS)
Jiang, Quan; Bi, Chao; Lin, Song
As hard disk drive (HDD) areal density increases, track widths become smaller and smaller, and so does the tolerable non-repeatable runout. The HDD industry needs more accurate and better-resolution runout measurements of spinning spindle motors and media platters in both axial and radial directions. This paper introduces a new system for precisely measuring the runout of HDD spinning disks and motors through synchronous acquisition of the rotor position signal and the displacements in the axial or radial directions. In order to minimize the synchronization error between the rotor position and the displacement signal, a high-resolution counter is adopted instead of the conventional phase-lock loop method. With a laser Doppler vibrometer and proper signal processing, the proposed runout system can precisely measure the runout of HDD spinning disks and motors with 1 nm resolution and 0.2% accuracy at a proper sampling rate. It can provide an effective and accurate means to measure the runout of high areal density HDDs, in particular the next generation of HDDs, such as patterned-media HDDs and HAMR HDDs.
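Once displacement samples are aligned to rotor position, as in the synchronized acquisition described above, separating repeatable from non-repeatable runout (RRO/NRRO) is a synchronous average over revolutions. A minimal sketch with synthetic data:

```python
import numpy as np

# Synchronous averaging: with samples organized as (revolutions x samples per
# revolution), the per-angle mean is the repeatable runout (RRO) and the
# residual per revolution is the non-repeatable runout (NRRO).
def split_runout(disp):
    rro = disp.mean(axis=0)      # repeatable (synchronous) component
    nrro = disp - rro            # non-repeatable residual
    return rro, nrro

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
rng = np.random.default_rng(1)
# Synthetic disk: 100 nm once-per-revolution wobble plus 1 nm random NRRO.
disp = 100 * np.sin(theta) + rng.normal(0, 1, (200, 64))
rro, nrro = split_runout(disp)
print(float(np.abs(rro - 100 * np.sin(theta)).max()) < 1.0)  # RRO recovered
```

Averaging over 200 revolutions shrinks the random component of the estimated RRO by a factor of about sqrt(200), which is why long synchronized records matter for nanometre-level resolution.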
Strategy for accurate liver intervention by an optical tracking system
Lin, Qinyong; Yang, Rongqian; Cai, Ken; Guan, Peifeng; Xiao, Weihu; Wu, Xiaoming
2015-01-01
Image-guided navigation for radiofrequency ablation of liver tumors requires the accurate guidance of needle insertion into a tumor target. The main challenge of image-guided navigation for radiofrequency ablation of liver tumors is the occurrence of liver deformations caused by respiratory motion. This study reports a strategy of real-time automatic registration to track custom fiducial markers glued onto the surface of a patient’s abdomen to find the respiratory phase, in which the static preoperative CT is performed. Custom fiducial markers are designed. Real-time automatic registration method consists of the automatic localization of custom fiducial markers in the patient and image spaces. The fiducial registration error is calculated in real time and indicates if the current respiratory phase corresponds to the phase of the static preoperative CT. To demonstrate the feasibility of the proposed strategy, a liver simulator is constructed and two volunteers are involved in the preliminary experiments. An ex-vivo porcine liver model is employed to further verify the strategy for liver intervention. Experimental results demonstrate that real-time automatic registration method is rapid, accurate, and feasible for capturing the respiratory phase from which the static preoperative CT anatomical model is generated by tracking the movement of the skin-adhered custom fiducial markers. PMID:26417501
Accurate 3D quantification of the bronchial parameters in MDCT
NASA Astrophysics Data System (ADS)
Saragaglia, A.; Fetita, C.; Preteux, F.; Brillet, P. Y.; Grenier, P. A.
2005-08-01
The assessment of bronchial reactivity and wall remodeling in asthma plays a crucial role in better understanding such a disease and evaluating therapeutic responses. Today, multi-detector computed tomography (MDCT) makes it possible to perform an accurate estimation of bronchial parameters (lumen and wall areas) by allowing a quantitative analysis in a cross-section plane orthogonal to the bronchus axis. This paper provides the tools for such an analysis by developing a 3D investigation method which relies on 3D reconstruction of the bronchial lumen and central axis computation. Cross-section images at bronchial locations interactively selected along the central axis are generated at appropriate spatial resolution. An automated approach is then developed for accurately segmenting the inner and outer bronchi contours on the cross-section images. It combines mathematical morphology operators, such as "connection cost", and energy-controlled propagation in order to overcome the difficulties raised by vessel adjacencies and wall irregularities. The segmentation accuracy was validated with respect to a 3D mathematically-modeled phantom of a bronchus-vessel pair which mimics the characteristics of real data in terms of gray-level distribution, caliber and orientation. When applying the developed quantification approach to such a model with calibers ranging from 3 to 10 mm diameter, the lumen area relative errors varied from 3.7% to 0.15%, while the bronchus area was estimated with a relative error less than 5.1%.
A new and accurate continuum description of moving fronts
NASA Astrophysics Data System (ADS)
Johnston, S. T.; Baker, R. E.; Simpson, M. J.
2017-03-01
Processes that involve moving fronts of populations are prevalent in ecology and cell biology. A common approach to describe these processes is a lattice-based random walk model, which can include mechanisms such as crowding, birth, death, movement and agent–agent adhesion. However, these models are generally analytically intractable and it is computationally expensive to perform sufficiently many realisations of the model to obtain an estimate of average behaviour that is not dominated by random fluctuations. To avoid these issues, both mean-field (MF) and corrected mean-field (CMF) continuum descriptions of random walk models have been proposed. However, both continuum descriptions are inaccurate outside of limited parameter regimes, and CMF descriptions cannot be employed to describe moving fronts. Here we present an alternative description in terms of the dynamics of groups of contiguous occupied lattice sites and contiguous vacant lattice sites. Our description provides an accurate prediction of the average random walk behaviour in all parameter regimes. Critically, our description accurately predicts the persistence or extinction of the population in situations where previous continuum descriptions predict the opposite outcome. Furthermore, unlike traditional MF models, our approach provides information about the spatial clustering within the population and, subsequently, the moving front.
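The kind of lattice-based random walk the continuum descriptions approximate can be sketched in a few lines. The parameters below are illustrative (a 1-D exclusion process without birth or death, whereas the paper also treats birth, death, and adhesion):

```python
import numpy as np

# Random sequential update of a 1-D exclusion process: each step makes one
# move attempt per agent; a randomly chosen occupied site tries to hop to a
# random neighbour, and the hop is aborted if the target is occupied (crowding).
def step(lattice, rng, p_move=1.0):
    L = len(lattice)
    for _ in range(int(lattice.sum())):
        i = rng.integers(L)
        if lattice[i] and rng.random() < p_move:
            j = (i + rng.choice([-1, 1])) % L    # periodic boundaries
            if not lattice[j]:
                lattice[i], lattice[j] = 0, 1
    return lattice

rng = np.random.default_rng(2)
lattice = np.zeros(100, dtype=int)
lattice[:20] = 1                     # initial occupied block (a "front")
for _ in range(500):
    step(lattice, rng)
print(lattice.sum())   # 20: agent number is conserved without birth/death
```

Averaging many such realisations to estimate front position is exactly the computational cost the continuum descriptions are designed to avoid.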
Accurate measurement of streamwise vortices using dual-plane PIV
NASA Astrophysics Data System (ADS)
Waldman, Rye M.; Breuer, Kenneth S.
2012-11-01
Low Reynolds number aerodynamic experiments with flapping animals (such as bats and small birds) are of particular interest due to their application to micro air vehicles, which operate in a similar parameter space. Previous PIV wake measurements described the structures left by bats and birds and provided insight into the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions from such measurements. The highly three-dimensional and unsteady nature of the flows associated with flapping flight poses major challenges for accurate measurements. The challenge of animal flight measurements is finding small flow features in a large field of view at high speed with limited laser energy and camera resolution. Cross-stream measurement is further complicated by the predominantly out-of-plane flow, which requires thick laser sheets and short inter-frame times that increase noise and measurement uncertainty. Choosing appropriate experimental parameters requires compromise between the spatial and temporal resolution and the dynamic range of the measurement. To explore these challenges, we present a case study on the wake of a fixed wing. The fixed model simplifies the experiment and allows direct measurement of the aerodynamic forces via load cell. We present a detailed analysis of the wake measurements, discuss the criteria for making accurate measurements, and present a solution for making quantitative aerodynamic load measurements behind free-flyers.
Accurate perception of negative emotions predicts functional capacity in schizophrenia.
Abram, Samantha V; Karpouzian, Tatiana M; Reilly, James L; Derntl, Birgit; Habel, Ute; Smith, Matthew J
2014-04-30
Several studies suggest facial affect perception (FAP) deficits in schizophrenia are linked to poorer social functioning. However, whether reduced functioning is associated with inaccurate perception of specific emotional valence or a global FAP impairment remains unclear. The present study examined whether impairment in the perception of specific emotional valences (positive, negative) and neutrality were uniquely associated with social functioning, using a multimodal social functioning battery. A sample of 59 individuals with schizophrenia and 41 controls completed a computerized FAP task, and measures of functional capacity, social competence, and social attainment. Participants also underwent neuropsychological testing and symptom assessment. Regression analyses revealed that only accurately perceiving negative emotions explained significant variance (7.9%) in functional capacity after accounting for neurocognitive function and symptoms. Partial correlations indicated that accurately perceiving anger, in particular, was positively correlated with functional capacity. FAP for positive, negative, or neutral emotions was not related to social competence or social attainment. Our findings were consistent with prior literature suggesting negative emotions are related to functional capacity in schizophrenia. Furthermore, the observed relationship between perceiving anger and performance of everyday living skills is novel and warrants further exploration.
RTbox: a device for highly accurate response time measurements.
Li, Xiangrui; Liang, Zhen; Kleiner, Mario; Lu, Zhong-Lin
2010-02-01
Although computer keyboards and mice are frequently used in measuring response times (RTs), the accuracy of these measurements is quite low. Specialized RT collection devices must be used to obtain more accurate measurements. However, all the existing devices have some shortcomings. We have developed and implemented a new, commercially available device, the RTbox, for highly accurate RT measurements. The RTbox has its own microprocessor and high-resolution clock. It can record the identities and timing of button events with high accuracy, unaffected by potential timing uncertainty or biases during data transmission and processing in the host computer. It stores button events until the host computer chooses to retrieve them. The asynchronous storage greatly simplifies the design of user programs. The RTbox can also receive and record external signals as triggers and can measure RTs with respect to external events. The internal clock of the RTbox can be synchronized with the computer clock, so the device can be used without external triggers. A simple USB connection is sufficient to integrate the RTbox with any standard computer and operating system.
Novel dispersion tolerant interferometry method for accurate measurements of displacement
NASA Astrophysics Data System (ADS)
Bradu, Adrian; Maria, Michael; Leick, Lasse; Podoleanu, Adrian G.
2015-05-01
We demonstrate that the recently proposed master-slave interferometry method provides truly dispersion-free depth profiles in a spectrometer-based set-up that can be used for accurate displacement measurements in sensing and optical coherence tomography. The technique correlates the channelled spectra produced by the linear camera in the spectrometer with previously recorded masks. Because the technique is not based on Fourier transformation (FT), it requires no resampling of data and is immune to any amount of dispersion left unbalanced in the system. To prove the technique's tolerance to dispersion, different lengths of optical fiber are used in the interferometer to introduce dispersion, and it is demonstrated that neither the sensitivity profile versus optical path difference (OPD) nor the depth resolution is affected. By contrast, it is shown that classical FT-based methods using calibrated data provide less accurate optical path length measurements and exhibit a quicker decay of sensitivity with OPD.
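The correlation step described above can be sketched numerically. The following is an illustrative NumPy toy, not the authors' implementation: the cosine fringe model, the wavenumber axis, and the reference OPD grid are all assumed for demonstration.

```python
import numpy as np

# Wavenumber axis of the spectrometer's linear camera (arbitrary units).
k = np.linspace(1.0, 2.0, 2048)

def channelled_spectrum(opd, k):
    """Idealized interference fringes for a given optical path difference."""
    return np.cos(2 * np.pi * opd * k)

# "Master" stage: record masks at a set of known reference OPDs.
reference_opds = np.arange(5.0, 50.0, 0.5)
masks = np.array([channelled_spectrum(d, k) for d in reference_opds])

def depth_profile(spectrum, masks):
    """'Slave' stage: the amplitude at each depth is the correlation of the
    measured spectrum with each stored mask -- no Fourier transform, hence
    no resampling of a (possibly dispersive) spectral axis."""
    return np.abs(masks @ spectrum)

true_opd = 23.0
profile = depth_profile(channelled_spectrum(true_opd, k), masks)
estimated = reference_opds[np.argmax(profile)]
```

Because the masks can be recorded through the real, dispersive system, any unbalanced dispersion is baked into them and cancels in the correlation, which is the key point of the method.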
CT Scan Method Accurately Assesses Humeral Head Retroversion
Boileau, P.; Mazzoleni, N.; Walch, G.; Urien, J. P.
2008-01-01
Humeral head retroversion is not well described in the literature, which remains controversial regarding both the accuracy of measurement methods and the range of normal values. We therefore determined normal humeral head retroversion and assessed the measurement methods. We measured retroversion in 65 cadaveric humeri, including 52 paired specimens, using four methods: radiographic, computed tomography (CT) scan, computer-assisted, and direct measurement. We also assessed the distance between the humeral head central axis and the bicipital groove. CT scan methods accurately measure humeral head retroversion, while radiographic methods do not. Retroversion was 17.9° with respect to the transepicondylar axis and 21.5° with respect to the trochlear tangent axis. The difference between right and left humeri was 8.9°. The distance between the central axis of the humeral head and the bicipital groove was 7.0 mm and was consistent between right and left humeri. Humeral head retroversion may be most accurately obtained using the patient's own anatomic landmarks or, failing that, using those landmarks on the contralateral side or the bicipital groove. PMID:18264854
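The retroversion angle itself is the signed angle between two axes seen on superimposed axial CT slices. A minimal geometric sketch (the axis vectors and the example angle are hypothetical, not taken from the study's data):

```python
import math

def retroversion_angle(head_axis, epicondylar_axis):
    """Signed angle (degrees) between the humeral head central axis and the
    transepicondylar axis, each given as a 2-D direction vector measured on
    superimposed axial CT slices."""
    ang = math.degrees(math.atan2(head_axis[1], head_axis[0])
                       - math.atan2(epicondylar_axis[1], epicondylar_axis[0]))
    return (ang + 180.0) % 360.0 - 180.0  # wrap into (-180, 180]

# Example: a head axis rotated 17.9 degrees from the epicondylar axis.
theta = math.radians(17.9)
angle = retroversion_angle((math.cos(theta), math.sin(theta)), (1.0, 0.0))
```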
More-Accurate Model of Flows in Rocket Injectors
NASA Technical Reports Server (NTRS)
Hosangadi, Ashvin; Chenoweth, James; Brinckman, Kevin; Dash, Sanford
2011-01-01
An improved computational model for simulating flows in liquid-propellant injectors in rocket engines has been developed. Models like this one are needed for predicting fluxes of heat in, and performances of, the engines. An important part of predicting performance is predicting fluctuations of temperature, fluctuations of concentrations of chemical species, and effects of turbulence on diffusion of heat and chemical species. Customarily, diffusion effects are represented by parameters known in the art as the Prandtl and Schmidt numbers. Prior formulations include ad hoc assumptions of constant values of these parameters, but these assumptions and, hence, the formulations, are inaccurate for complex flows. In the improved model, these parameters are neither constant nor specified in advance: instead, they are variables obtained as part of the solution. Consequently, this model represents the effects of turbulence on diffusion of heat and chemical species more accurately than prior formulations do, and may enable more-accurate prediction of mixing and flows of heat in rocket-engine combustion chambers. The model has been implemented within CRUNCH CFD, a proprietary computational fluid dynamics (CFD) computer program, and has been tested within that program. The model could also be implemented within other CFD programs.
Accurate and precise zinc isotope ratio measurements in urban aerosols.
Gioia, Simone; Weiss, Dominik; Coles, Barry; Arnold, Tim; Babinski, Marly
2008-12-15
We developed an analytical method and constrained procedural boundary conditions that enable accurate and precise Zn isotope ratio measurements in urban aerosols. We also demonstrate the potential of this new isotope system for air pollutant source tracing. The procedural blank is around 5 ng, significantly lower than that of published methods due to a tailored ion chromatographic separation. Accurate mass bias correction using external correction with Cu is limited to a Zn sample content of approximately 50 ng due to the combined effect of the blank contribution of Cu and Zn from the ion exchange procedure and the need to maintain a Cu/Zn ratio of approximately 1. Mass bias is instead corrected for by applying the common analyte internal standardization approach. Comparison with other mass bias correction methods demonstrates the accuracy of the method. The average precision of delta(66)Zn determinations in aerosols is around 0.05 per thousand per atomic mass unit. The method was tested on aerosols collected in Sao Paulo City, Brazil. The measurements reveal significant variations in delta(66)Zn(Imperial), ranging between -0.96 and -0.37 per thousand in coarse and between -1.04 and 0.02 per thousand in fine particulate matter. This variability suggests that Zn isotopic compositions distinguish atmospheric sources. The isotopically light signature suggests traffic as the main source. We present further delta(66)Zn(Imperial) data for the standard reference material NIST SRM 2783 (delta(66)Zn(Imperial) = 0.26 +/- 0.10 per thousand).
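For readers unfamiliar with the notation, the delta value and the exponential-law mass bias correction underlying such measurements can be sketched as below. This is a generic illustration, not the paper's common analyte internal standardization procedure; the isotope masses are approximate and the numerical values are invented.

```python
def delta66zn(r_sample, r_standard):
    """delta-66Zn in per mil: relative deviation of the sample's 66Zn/64Zn
    ratio from the bracketing standard's ratio, times 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

def mass_bias_corrected(r_meas, f, m66=65.9260, m64=63.9291):
    """Exponential-law correction of an instrumentally fractionated ratio:
    r_true = r_meas / (m66/m64)**f, where the fractionation exponent f is
    obtained from an admixed standard (e.g., Cu)."""
    return r_meas / (m66 / m64) ** f
```

For example, a sample ratio 0.096% below the standard's corresponds to delta-66Zn = -0.96 per thousand, the lower end of the coarse-fraction range reported above.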
An Accurate and Efficient Method of Computing Differential Seismograms
NASA Astrophysics Data System (ADS)
Hu, S.; Zhu, L.
2013-12-01
Inversion of seismic waveforms for Earth structure usually requires computing partial derivatives of seismograms with respect to velocity model parameters. We developed an accurate and efficient method to calculate differential seismograms for multi-layered elastic media, based on the Thompson-Haskell propagator matrix technique. We first derived the partial derivatives of the Haskell matrix and its compound matrix with respect to the layer parameters (P-wave velocity, shear-wave velocity, and density). We then derived the partial derivatives of the surface displacement kernels in the frequency-wavenumber domain. The differential seismograms are obtained using the frequency-wavenumber double integration method. The implementation is computationally efficient: the total computing time is proportional to the time of computing the seismogram itself, i.e., independent of the number of layers in the model. We verified the correctness of the results by comparison with differential seismograms computed using the finite-difference method. Our results are more accurate because of the analytical nature of the derived partial derivatives.
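The verification strategy mentioned above, checking an analytic partial derivative against a finite-difference estimate, can be illustrated with a toy kernel. The function below is a stand-in phase term, not the actual Haskell propagator; the numerical values are arbitrary.

```python
import math

# Toy "surface displacement kernel" for one layer: u depends on the shear
# velocity beta through a phase term (a stand-in for the Haskell-matrix
# entries, not the real propagator).
def u(beta, omega=2.0, h=1.0):
    return math.sin(omega * h / beta)

def du_dbeta(beta, omega=2.0, h=1.0):
    """Analytic partial derivative of the kernel above w.r.t. beta."""
    return -omega * h / beta ** 2 * math.cos(omega * h / beta)

# Central finite difference, used only as a correctness check.
beta, d = 3.5, 1e-6
fd = (u(beta + d) - u(beta - d)) / (2 * d)
```

The analytic form is both cheaper (one evaluation versus one perturbed pair per parameter) and free of the step-size truncation/round-off trade-off, which is the efficiency and accuracy argument the abstract makes.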
Accurate optical CD profiler based on specialized finite element method
NASA Astrophysics Data System (ADS)
Carrero, Jesus; Perçin, Gökhan
2012-03-01
As the semiconductor industry moves to very low-k1 patterning solutions, the metrology problems facing process engineers are becoming much more complex. Choosing the right optical critical dimension (OCD) metrology technique is essential for bridging the metrology gap and achieving the required manufacturing volume throughput. The critical dimension scanning electron microscope (CD-SEM) measurement is usually distorted by the high aspect ratio of the photoresist and hard mask layers. CD-SEM measurements cease to correlate with complex three-dimensional profiles, such as those arising in double patterning and FinFETs, thus necessitating sophisticated, accurate, and fast computational methods to bridge the gap. In this work, a suite of computational methods that complement advanced OCD equipment and enable it to operate at higher accuracies is developed. In this article, a novel method for accurately modeling OCD profiles is presented. A finite element formulation in primal form is used to discretize the equations. The implementation uses specialized finite element spaces to solve Maxwell equations in two dimensions.
Accurate vessel width measurement from fundus photographs: a new concept.
Rassam, S M; Patel, V; Brinchmann-Hansen, O; Engvold, O; Kohner, E M
1994-01-01
Accurate measurement of retinal vessel width is important in the study of the haemodynamic changes that accompany various physiological and pathological states. Currently, the width at half height of the transmittance and densitometry profiles is used as a measure of retinal vessel width. A consistent phenomenon of two 'kick points' on the slopes of the transmittance and densitometry profiles near the base has been observed. In this study, mathematical models were formulated to describe the characteristic curves of the transmittance and densitometry profiles. They demonstrate that the kick points coincide with the edges of the blood column; the horizontal distance across the kick points therefore indicates the actual blood column width. To evaluate this hypothesis, blood was infused through two lengths of plastic tubing of known diameters and photographed. In comparison with the known diameters, the half-height method underestimated the blood column width by 7.33% and 6.46%, while the kick point method slightly overestimated it, by 1.40% and 0.34%. These techniques were applied to monochromatic fundus photographs. In comparison with the kick point method, the half-height method underestimated the blood column width by 16.67% in veins and by 15.86% in arteries. The characteristics of the kick points and their practicality are discussed. The kick point method may provide the most accurate measurement of vessel width possible from these profiles. PMID:8110693
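The kick points can be located numerically as the maxima of curvature on the outer slopes of the profile. The sketch below uses an assumed error-function profile (a rectangular blood column blurred by a Gaussian point-spread function), which is one plausible model, not the paper's; for this model the kick points sit a fixed small distance outside the true edges.

```python
import math
import numpy as np

# Synthetic densitometry profile: a blood column of width W blurred by a
# Gaussian imaging point-spread function of scale s (arbitrary units).
W, s = 100.0, 8.0
x = np.linspace(-120.0, 120.0, 4801)
f = np.array([0.5 * (math.erf((xi + W / 2) / s) + math.erf((W / 2 - xi) / s))
              for xi in x])

# Kick points: maxima of positive curvature on the outer slopes, near the
# base of the profile.
d2 = np.gradient(np.gradient(f, x), x)
neg, pos = x < 0, x > 0
left = x[neg][np.argmax(d2[neg])]
right = x[pos][np.argmax(d2[pos])]
kick_width = right - left  # slightly wider than W, as in the abstract
```

For this particular model the curvature maxima fall at +/-(W/2 + s/sqrt(2)), so the kick-point width overestimates the column width by s*sqrt(2), a small error when the optical blur s is small relative to the vessel width, consistent with the small overestimates reported above.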
A more accurate nonequilibrium air radiation code - NEQAIR second generation
NASA Technical Reports Server (NTRS)
Moreau, Stephane; Laux, Christophe O.; Chapman, Dean R.; Maccormack, Robert W.
1992-01-01
Two experiments, one an equilibrium flow in a plasma torch at Stanford, the other a nonequilibrium flow in a SDIO/IST Bow-Shock-Ultra-Violet missile flight, have provided the basis for modifying, enhancing, and testing the well-known radiation code, NEQAIR. The original code, herein termed NEQAIR1, lacked computational efficiency, accurate data for some species and the flexibility to handle a variety of species. The modified code, herein termed NEQAIR2, incorporates recent findings in the spectroscopic and radiation models. It can handle any number of species and radiative bands in a gas whose thermodynamic state can be described by up to four temperatures. It provides a new capability of computing very fine spectra in a reasonable CPU time, while including transport phenomena along the line of sight and the characteristics of instruments that were used in the measurements. Such a new tool should allow more accurate testing and diagnosis of the different physical models used in numerical simulations of radiating, low density, high energy flows.
Accurate interlaminar stress recovery from finite element analysis
NASA Technical Reports Server (NTRS)
Tessler, Alexander; Riggs, H. Ronald
1994-01-01
The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing element discretization, producing superior accuracy strains and their first gradients. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of equilibrium equations to obtain accurate interlaminar shear stresses. The test problem is a simply supported rectangular plate under a doubly sinusoidal load. The problem has an exact analytic solution which serves as a measure of goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.
Accurate three-dimensional documentation of distinct sites
NASA Astrophysics Data System (ADS)
Singh, Mahesh K.; Dutta, Ashish; Subramanian, Venkatesh K.
2017-01-01
One of the most critical aspects of documenting distinct sites is acquiring detailed and accurate range information. Several three-dimensional (3-D) acquisition techniques are available, but each has its own limitations. This paper presents a range data fusion method with the aim to enhance the descriptive contents of the entire 3-D reconstructed model. A kernel function is introduced for supervised classification of the range data using a kernelized support vector machine. The classification method is based on the local saliency features of the acquired range data. The range data acquired from heterogeneous range sensors are transformed into a defined common reference frame. Based on the segmentation criterion, the fusion of range data is performed by integrating finer regions of range data acquired from a laser range scanner with the coarser region of Kinect's range data. After fusion, the Delaunay triangulation algorithm is applied to generate the highly accurate, realistic 3-D model of the scene. Finally, experimental results show the robustness of the proposed approach.
Accurate phylogenetic classification of DNA fragments based onsequence composition
McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore
2006-05-01
Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome data sets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
Mouse models of human AML accurately predict chemotherapy response
Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.
2009-01-01
The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691
Mouse models of human AML accurately predict chemotherapy response.
Zuber, Johannes; Radtke, Ina; Pardee, Timothy S; Zhao, Zhen; Rappaport, Amy R; Luo, Weijun; McCurrach, Mila E; Yang, Miao-Miao; Dolan, M Eileen; Kogan, Scott C; Downing, James R; Lowe, Scott W
2009-04-01
The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients.
Accurate stone analysis: the impact on disease diagnosis and treatment.
Mandel, Neil S; Mandel, Ian C; Kolbach-Mandel, Ann M
2017-02-01
This manuscript reviews the requirements for acceptable compositional analysis of kidney stones using various biophysical methods. High-resolution X-ray powder diffraction crystallography and Fourier transform infrared spectroscopy (FTIR) are the only methods acceptable in our laboratories for kidney stone analysis. The use of well-constructed spectral reference libraries is the basis for accurate and complete stone analysis. The literature reviewed in this manuscript identifies errors in most commercial laboratories and in some academic centers. We offer personal comments on why such errors occur at such high rates; although the workload is rather large, the effort is very worthwhile in providing accurate stone compositions. We also provide the results of our almost 90,000 stone analyses and a breakdown of the number of components we have observed in the various stones. Finally, we offer advice on determining the analysis method used by the various FTIR equipment manufacturers who also provide a stone analysis library, so that FTIR users can be confident in the accuracy of their reported results. Such scrutiny of the individual reference libraries could help reduce their respective error rates.
Accurate phylogenetic classification of variable-length DNA fragments.
McHardy, Alice Carolyn; Martín, Héctor García; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore
2007-01-01
Metagenome studies have retrieved vast amounts of sequence data from a variety of environments, leading to new discoveries and insights into the uncultured microbial world. Except for very simple communities, the encountered diversity has made fragment assembly and the subsequent analysis a challenging problem. A taxonomic characterization of metagenomic fragments is required for a deeper understanding of shotgun-sequenced microbial communities, but success has mostly been limited to sequences containing phylogenetic marker genes. Here we present PhyloPythia, a composition-based classifier that combines higher-level generic clades from a set of 340 completed genomes with sample-derived population models. Extensive analyses on synthetic and real metagenome data sets showed that PhyloPythia allows the accurate classification of most sequence fragments across all considered taxonomic ranks, even for unknown organisms. The method requires no more than 100 kb of training sequence for the creation of accurate models of sample-specific populations and can assign fragments ≥1 kb with high specificity.
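The idea of composition-based classification can be sketched in a few lines. PhyloPythia itself uses support vector machines over oligonucleotide composition; the nearest-centroid toy below is only a minimal stand-in illustrating the k-mer-profile representation, with invented training clades.

```python
from collections import Counter
from itertools import product

def kmer_profile(seq, k=3):
    """Normalized k-mer composition vector of a DNA fragment."""
    kmers = [''.join(p) for p in product('ACGT', repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts[m] for m in kmers), 1)
    return [counts[m] / total for m in kmers]

def classify(fragment, training, k=3):
    """Assign a fragment to the training clade with the closest mean
    composition (a nearest-centroid stand-in for PhyloPythia's SVM)."""
    frag = kmer_profile(fragment, k)
    centroids = {
        clade: [sum(col) / len(seqs) for col in
                zip(*(kmer_profile(s, k) for s in seqs))]
        for clade, seqs in training.items()
    }
    def dist(profile):
        return sum((a - b) ** 2 for a, b in zip(frag, profile))
    return min(centroids, key=lambda c: dist(centroids[c]))
```

The point carried over from the abstract is that composition vectors of even ~1 kb fragments retain enough clade-specific signal to classify them without marker genes.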
Accurate determination of membrane dynamics with line-scan FCS.
Ries, Jonas; Chiantia, Salvatore; Schwille, Petra
2009-03-04
Here we present an efficient implementation of line-scan fluorescence correlation spectroscopy (i.e., one-dimensional spatio-temporal image correlation spectroscopy) using a commercial laser scanning microscope, which allows the accurate measurement of diffusion coefficients and concentrations in biological lipid membranes within seconds. Line-scan fluorescence correlation spectroscopy is a calibration-free technique. Therefore, it is insensitive to optical artifacts, saturation, or incorrect positioning of the laser focus. In addition, it is virtually unaffected by photobleaching. Correction schemes for residual inhomogeneities and depletion of fluorophores due to photobleaching extend the applicability of line-scan fluorescence correlation spectroscopy to more demanding systems. This technique enabled us to measure accurate diffusion coefficients and partition coefficients of fluorescent lipids in phase-separating supported bilayers of three commonly used raft-mimicking compositions. Furthermore, we probed the temperature dependence of the diffusion coefficient in several model membranes, and in human embryonic kidney cell membranes not affected by temperature-induced optical aberrations.
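The core computation, a spatio-temporal correlation of intensity fluctuations along the scanned line, can be sketched as follows. This is a generic one-dimensional STICS estimator assumed for illustration, not the authors' implementation.

```python
import numpy as np

def stics_1d(kymo, tau):
    """One-dimensional spatio-temporal image correlation of a line-scan
    record kymo[t, x]: correlate intensity fluctuations at time lag tau
    over all spatial shifts, using FFTs along the scanned line."""
    a = kymo[:-tau] if tau else kymo
    b = kymo[tau:] if tau else kymo
    da, db = a - a.mean(), b - b.mean()
    corr = np.fft.ifft(np.conj(np.fft.fft(da, axis=1)) *
                       np.fft.fft(db, axis=1), axis=1).real
    return corr.mean(axis=0) / (kymo.shape[1] * kymo.mean() ** 2)
```

For diffusing fluorophores the correlation peak broadens with lag, and fitting that broadening yields the diffusion coefficient without knowing the focal volume, which is why the method is calibration-free; for a drifting feature the peak simply shifts with lag, as the test below exploits.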
A fast and accurate decoder for underwater acoustic telemetry.
Ingraham, J M; Deng, Z D; Li, X; Fu, T; McMichael, G A; Trumbo, B A
2014-07-01
The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system.
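The tracking step, locating a transmitter from the signal's time of arrival on several hydrophones, can be sketched with a brute-force two-dimensional search. The hydrophone layout, sound speed, and grid are assumptions for illustration; the actual system tracks in 3-D with more sophisticated solvers.

```python
import itertools
import math

# Hypothetical hydrophone layout (metres) and nominal sound speed in water.
HYDROPHONES = [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0), (30.0, 30.0)]
C = 1480.0  # m/s

def arrival_times(src, t0=0.0):
    """Times at which a ping emitted at t0 from src reaches each hydrophone."""
    return [t0 + math.dist(src, h) / C for h in HYDROPHONES]

def locate(times, step=0.25):
    """Grid search minimizing the spread of implied emission times: at the
    true position, every hydrophone's (arrival time - travel time) agrees."""
    best, best_pos = float('inf'), None
    xs = [i * step for i in range(int(30 / step) + 1)]
    for x, y in itertools.product(xs, xs):
        t0s = [t - math.dist((x, y), h) / C
               for t, h in zip(times, HYDROPHONES)]
        spread = max(t0s) - min(t0s)
        if spread < best:
            best, best_pos = spread, (x, y)
    return best_pos
```

This also makes concrete why accurate time-of-arrival estimates from the decoder matter: at 1480 m/s, a 100 microsecond timing error already corresponds to roughly 15 cm of range error.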
An Accurate and Dynamic Computer Graphics Muscle Model
NASA Technical Reports Server (NTRS)
Levine, David Asher
1997-01-01
A computer-based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. The muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opens these files at run-time and recreates the muscle surface. The modeling function applies constant-volume limitations to the muscle and constant-geometry limitations to the tendons.
NASA Astrophysics Data System (ADS)
Mooney, P. A.; Mulligan, F. J.; Broderick, C.
2016-11-01
The diurnal cycle of precipitation is an important and fundamental cycle in Earth's climate system, yet many aspects of this cycle remain poorly understood. As a result, climate models have struggled to accurately simulate the timing of the peak and the amplitude of the cycle. This has led to a large number of modelling studies on the diurnal cycle of precipitation, focussed mainly on the influence of grid spacing and/or convective parameterizations. Results from these investigations have shown that, while grid spacing and convective parameterizations are important factors, they cannot fully explain the diurnal cycle, which must also depend on other factors. In this study, we use the Weather Research and Forecasting (WRF) model to investigate four of these other factors, namely the land surface model (LSM), microphysics, longwave radiation, and the planetary boundary layer, in the case of the diurnal cycle of precipitation over the British Isles. We also compare their impact with the effect of two different convective schemes. We find that all simulations have two main problems: (1) there is a large wet bias in both summer and winter (+19 and +38 % respectively for the ensemble averages), and (2) WRF summer precipitation is dominated by a diurnal (24-h) component (~28 % of the mean precipitation) whereas the observations show a predominantly semidiurnal (12-h) component with a much smaller amplitude (~10 % of mean precipitation). The choice of LSM has a large influence on the simulated diurnal cycle in summer, with the remaining physics schemes showing very little effect. The magnitude of the LSM effect in summer is as large as 35 % on average and up to 50 % at the peak of the cycle. While neither of the two LSMs examined here captures the harmonic content of the diurnal cycle of precipitation very well, we find that use of the RUC LSM results in better agreement with the observations compared with Noah.
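The diurnal (24-h) and semidiurnal (12-h) components quoted above are the amplitudes of the first two harmonics of the mean daily cycle, expressed as fractions of the mean. A minimal Fourier-fit sketch (the bin-centre convention is an assumption):

```python
import math

def harmonic_amplitude(hourly, period_hours):
    """Amplitude of the given harmonic of the daily cycle, expressed as a
    fraction of the mean, from a Fourier fit to 24 hourly-mean values
    (bin centres taken at i + 0.5 hours)."""
    n = len(hourly)
    mean = sum(hourly) / n
    a = sum(v * math.cos(2 * math.pi * (i + 0.5) / period_hours)
            for i, v in enumerate(hourly)) * 2 / n
    b = sum(v * math.sin(2 * math.pi * (i + 0.5) / period_hours)
            for i, v in enumerate(hourly)) * 2 / n
    return math.hypot(a, b) / mean
```

Applied to a model's and an observed mean daily precipitation cycle, calling this with period_hours=24 and period_hours=12 reproduces the kind of diurnal-versus-semidiurnal comparison made in the abstract.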
The KFM, A Homemade Yet Accurate and Dependable Fallout Meter
Kearny, C.H.
2001-11-20
The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient 'dry-bucket' in which it can be charged when the air is very humid, this instrument can always be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: ''The KFM, A Homemade Yet Accurate and Dependable Fallout Meter'' was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these instructions, the builder can verify the
Phase rainbow refractometry for accurate droplet variation characterization.
Wu, Yingchun; Promvongsa, Jantarat; Saengkaew, Sawitree; Wu, Xuecheng; Chen, Jia; Gréhan, Gérard
2016-10-15
We developed a one-dimensional phase rainbow refractometer for accurate trans-dimensional measurement of droplet size at the micrometer scale as well as of tiny droplet diameter variations at the nanoscale. The dependence of the phase shift of the rainbow ripple structures on the droplet variations is revealed. The phase-shifting rainbow image is recorded by a telecentric one-dimensional rainbow imaging system. Experiments on an evaporating monodispersed droplet stream show that the phase rainbow refractometer can measure tiny droplet diameter changes down to tens of nanometers. This one-dimensional phase rainbow refractometer is capable of measuring the droplet refractive index and diameter, as well as their variations.
Efficient and Accurate Indoor Localization Using Landmark Graphs
NASA Astrophysics Data System (ADS)
Gu, F.; Kealy, A.; Khoshelham, K.; Shang, J.
2016-06-01
Indoor localization is important for a variety of applications such as location-based services, mobile social networks, and emergency response. Fusing spatial information is an effective way to achieve accurate indoor localization with little or no need for extra hardware. However, existing indoor localization methods that make use of spatial information are either too computationally expensive or too sensitive to the completeness of landmark detection. In this paper, we solve this problem by using the proposed landmark graph. The landmark graph is a directed graph where nodes are landmarks (e.g., doors, staircases, and turns) and edges are accessible paths with heading information. We compared the proposed method with two common Dead Reckoning (DR)-based methods (namely, Compass + Accelerometer + Landmarks and Gyroscope + Accelerometer + Landmarks) in a series of experiments. Experimental results show that the proposed method can achieve 73% accuracy with a positioning error of less than 2.5 meters, which outperforms the other two DR-based methods.
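The landmark graph described above lends itself to a compact sketch. The following toy implementation is illustrative only (the class name, tolerances, and landmark labels are invented here, not taken from the paper): nodes are landmarks, directed edges carry path length and heading, and dead-reckoned distance plus heading prune the candidate next landmarks.

```python
# Hedged sketch of a landmark graph for indoor localization.
# All names, tolerances, and values below are illustrative assumptions.

class LandmarkGraph:
    def __init__(self):
        self.edges = {}  # node -> list of (neighbor, length_m, heading_deg)

    def add_edge(self, a, b, length_m, heading_deg):
        self.edges.setdefault(a, []).append((b, length_m, heading_deg))

    def candidates(self, node, walked_m, heading_deg, tol_m=2.0, tol_deg=30.0):
        """Neighbors consistent with dead-reckoned distance and heading."""
        out = []
        for b, length, heading in self.edges.get(node, []):
            if abs(length - walked_m) <= tol_m and abs(heading - heading_deg) <= tol_deg:
                out.append(b)
        return out

g = LandmarkGraph()
g.add_edge("door_1", "stairs_A", 10.0, 90.0)
g.add_edge("door_1", "turn_3", 4.0, 180.0)
print(g.candidates("door_1", walked_m=9.2, heading_deg=85.0))  # ['stairs_A']
```

Matching a detected landmark against graph edges in this way corrects the accumulated dead-reckoning drift each time a landmark is passed.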
The importance and attainment of accurate absolute radiometric calibration
NASA Technical Reports Server (NTRS)
Slater, P. N.
1984-01-01
The importance of accurate absolute radiometric calibration is discussed by reference to the needs of those wishing to validate or use models describing the interaction of electromagnetic radiation with the atmosphere and earth surface features. The in-flight calibration methods used for the Landsat Thematic Mapper (TM) and the Systeme Probatoire d'Observation de la Terre, Haute Resolution visible (SPOT/HRV) systems are described and their limitations discussed. The questionable stability of in-flight absolute calibration methods suggests the use of a radiative transfer program to predict the apparent radiance, at the entrance pupil of the sensor, of a ground site of measured reflectance imaged through a well characterized atmosphere. The uncertainties of such a method are discussed.
Accurate radio and optical positions for southern radio sources
NASA Technical Reports Server (NTRS)
Harvey, Bruce R.; Jauncey, David L.; White, Graeme L.; Nothnagel, Axel; Nicolson, George D.; Reynolds, John E.; Morabito, David D.; Bartel, Norbert
1992-01-01
Accurate radio positions with a precision of about 0.01 arcsec are reported for eight compact extragalactic radio sources south of -45-deg declination. The radio positions were determined using VLBI at 8.4 GHz on the 9589 km Tidbinbilla (Australia) to Hartebeesthoek (South Africa) baseline. The sources were selected from the Parkes Catalogue to be strong, flat-spectrum radio sources with bright optical QSO counterparts. Optical positions of the QSOs were also measured from the ESO B Sky Survey plates with respect to stars from the Perth 70 Catalogue, to an accuracy of about 0.19 arcsec rms. These radio and optical positions are as precise as any presently available in the far southern sky. A comparison of the radio and optical positions confirms the estimated optical position errors and shows that there is overall agreement at the 0.1-arcsec level between the radio and Perth 70 optical reference frames in the far south.
Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.
Fuchs, Franz G; Hjelmervik, Jon M
2016-02-01
A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results.
Simple and accurate sum rules for highly relativistic systems
NASA Astrophysics Data System (ADS)
Cohen, Scott M.
2005-03-01
In this paper, I consider the Bethe and Thomas-Reiche-Kuhn sum rules, which together form the foundation of Bethe's theory of energy loss from fast charged particles to matter. For nonrelativistic target systems, the use of closure leads directly to simple expressions for these quantities. In the case of relativistic systems, on the other hand, the calculation of sum rules is fraught with difficulties. Various perturbative approaches have been used over the years to obtain relativistic corrections, but these methods fail badly when the system in question is very strongly bound. Here, I present an approach that leads to relatively simple expressions yielding accurate sums, even for highly relativistic many-electron systems. I also offer an explanation for the difference between relativistic and nonrelativistic sum rules in terms of the Zitterbewegung of the electrons.
Acquisition of accurate data from intramolecular quenched fluorescence protease assays.
Arachea, Buenafe T; Wiener, Michael C
2017-04-01
The Intramolecular Quenched Fluorescence (IQF) protease assay utilizes peptide substrates containing donor-quencher pairs that flank the scissile bond. Following protease cleavage, the dequenched donor emission of the product is subsequently measured. Inspection of the IQF literature indicates that rigorous treatment of systematic errors in observed fluorescence arising from inner-filter absorbance (IF) and non-specific intermolecular quenching (NSQ) is incompletely performed. As substrate and product concentrations vary during the time-course of enzyme activity, iterative solution of the kinetic rate equations is, generally, required to obtain the proper time-dependent correction to the initial velocity fluorescence data. Here, we demonstrate that, if the IQF assay is performed under conditions where IF and NSQ are approximately constant during the measurement of initial velocity for a given initial substrate concentration, then a simple correction as a function of initial substrate concentration can be derived and utilized to obtain accurate initial velocity data for analysis.
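For context, a minimal sketch of the textbook inner-filter correction that such treatments typically start from: observed fluorescence is rescaled by the absorbances at the excitation and emission wavelengths. The paper's own correction, a function of initial substrate concentration, is not reproduced here.

```python
# Hedged sketch: the standard (textbook) inner-filter correction for
# cuvette fluorescence, not the substrate-concentration-dependent
# correction derived in the paper.
def inner_filter_correct(F_obs, A_ex, A_em):
    """Scale observed fluorescence by excitation/emission absorbance."""
    return F_obs * 10 ** ((A_ex + A_em) / 2.0)

# Illustrative numbers: 1000 counts observed with A_ex=0.10, A_em=0.05.
print(round(inner_filter_correct(1000.0, 0.10, 0.05), 1))  # 1188.5
```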
Accurate oscillator strengths for interstellar ultraviolet lines of Cl I
NASA Technical Reports Server (NTRS)
Schectman, R. M.; Federman, S. R.; Beideck, D. J.; Ellis, D. J.
1993-01-01
Analyses on the abundance of interstellar chlorine rely on accurate oscillator strengths for ultraviolet transitions. Beam-foil spectroscopy was used to obtain f-values for the astrophysically important lines of Cl I at 1088, 1097, and 1347 A. In addition, the line at 1363 A was studied. Our f-values for 1088, 1097 A represent the first laboratory measurements for these lines; the values are f(1088)=0.081 +/- 0.007 (1 sigma) and f(1097) = 0.0088 +/- 0.0013 (1 sigma). These results resolve the issue regarding the relative strengths for 1088, 1097 A in favor of those suggested by astronomical measurements. For the other lines, our results of f(1347) = 0.153 +/- 0.011 (1 sigma) and f(1363) = 0.055 +/- 0.004 (1 sigma) are the most precisely measured values available. The f-values are somewhat greater than previous experimental and theoretical determinations.
A fast and accurate FPGA based QRS detection system.
Shukla, Ashish; Macchiarulo, Luca
2008-01-01
An accurate Field Programmable Gate Array (FPGA) based ECG analysis system is described in this paper. The design, based on a popular software QRS detection algorithm, calculates the threshold value for the next peak detection cycle from the median of eight previously detected peaks. The hardware design has accuracy in excess of 96% in detecting beats correctly when tested with a subset of five 30-minute data records obtained from the MIT-BIH Arrhythmia database. The design, implemented using a proprietary design tool (System Generator), is an extension of our previous work; it uses 76% of the resources available in a small-sized FPGA device (Xilinx Spartan xc3s500), has higher detection accuracy than our previous design, and takes almost half the analysis time of the software-based approach.
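The adaptive thresholding rule summarized above (threshold for the next cycle derived from the median of the eight most recently detected peaks) can be sketched as follows; the scale factor is an illustrative assumption, not a value from the paper.

```python
# Hedged sketch of median-of-eight adaptive thresholding for QRS
# detection. The 0.5 scale factor is an assumption for illustration.
from statistics import median

def next_threshold(recent_peaks, scale=0.5):
    """Threshold for the next QRS detection cycle from recent peak amplitudes."""
    last_eight = recent_peaks[-8:]
    return scale * median(last_eight)

peaks = [1.0, 1.1, 0.9, 1.2, 1.0, 1.05, 0.95, 1.15, 1.1]
print(next_threshold(peaks))  # 0.5 * median of the last 8 peaks
```

Using a median rather than a mean makes the threshold robust to a single spuriously large or small detected peak, which suits a fixed-point hardware pipeline.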
Calculation of Accurate Hexagonal Discontinuity Factors for PARCS
Pounders, J.; Bandini, B. R.; Xu, Y.; Downar, T. J.
2007-11-01
In this study we derive a methodology for calculating discontinuity factors consistent with the Triangle-based Polynomial Expansion Nodal (TPEN) method implemented in PARCS for hexagonal reactor geometries. The accuracy of coarse-mesh nodal methods is greatly enhanced by permitting flux discontinuities at node boundaries, but the practice of calculating discontinuity factors from infinite-medium (zero-current) single bundle calculations may not be sufficiently accurate for more challenging problems in which there is a large amount of internodal neutron streaming. The authors therefore derive a TPEN-based method for calculating discontinuity factors that are exact with respect to generalized equivalence theory. The method is validated by reproducing the reference solution for a small hexagonal core.
Accurate measure by weight of liquids in industry
Muller, M.R.
1992-12-12
This research's focus was to build a prototype of a computerized liquid dispensing system. This liquid metering system is based on the concept of altering the representative volume to account for temperature changes in the liquid to be dispensed. This is actualized by using a measuring tank and a temperature compensating displacement plunger. By constantly monitoring the temperature of the liquid, the plunger can be used to increase or decrease the specified volume to more accurately dispense liquid with a specified mass. In order to put the device being developed into proper engineering perspective, an extensive literature review was undertaken on all areas of industrial metering of liquids with an emphasis on gravimetric methods.
Accurate measure by weight of liquids in industry. Final report
Muller, M.R.
1992-12-12
This research's focus was to build a prototype of a computerized liquid dispensing system. This liquid metering system is based on the concept of altering the representative volume to account for temperature changes in the liquid to be dispensed. This is actualized by using a measuring tank and a temperature compensating displacement plunger. By constantly monitoring the temperature of the liquid, the plunger can be used to increase or decrease the specified volume to more accurately dispense liquid with a specified mass. In order to put the device being developed into proper engineering perspective, an extensive literature review was undertaken on all areas of industrial metering of liquids with an emphasis on gravimetric methods.
Inverter Modeling For Accurate Energy Predictions Of Tracking HCPV Installations
NASA Astrophysics Data System (ADS)
Bowman, J.; Jensen, S.; McDonald, Mark
2010-10-01
High efficiency high concentration photovoltaic (HCPV) solar plants of megawatt scale are now operational, and opportunities for expanded adoption are plentiful. However, effective bidding for sites requires reliable prediction of energy production. HCPV module nameplate power is rated for specific test conditions; however, instantaneous HCPV power varies due to site specific irradiance and operating temperature, and is degraded by soiling, protective stowing, shading, and electrical connectivity. These factors interact with the selection of equipment typically supplied by third parties, e.g., wire gauge and inverters. We describe a time sequence model accurately accounting for these effects that predicts annual energy production, with specific reference to the impact of the inverter on energy output and interactions between system-level design decisions and the inverter. We will also show two examples, based on an actual field design, of inverter efficiency calculations and the interaction between string arrangements and inverter selection.
Direct computation of parameters for accurate polarizable force fields
Verstraelen, Toon; Vandenbrande, Steven; Ayers, Paul W.
2014-11-21
We present an improved electronic linear response model to incorporate polarization and charge-transfer effects in polarizable force fields. This model is a generalization of the Atom-Condensed Kohn-Sham Density Functional Theory (DFT), approximated to second order (ACKS2): it can now be defined with any underlying variational theory (next to KS-DFT) and it can include atomic multipoles and off-center basis functions. Parameters in this model are computed efficiently as expectation values of an electronic wavefunction, obviating the need for their calibration, regularization, and manual tuning. In the limit of a complete density and potential basis set in the ACKS2 model, the linear response properties of the underlying theory for a given molecular geometry are reproduced exactly. A numerical validation with a test set of 110 molecules shows that very accurate models can already be obtained with fluctuating charges and dipoles. These features greatly facilitate the development of polarizable force fields.
Accurate reactions open up the way for more cooperative societies
NASA Astrophysics Data System (ADS)
Vukov, Jeromos
2014-09-01
We consider a prisoner's dilemma model where the interaction neighborhood is defined by a square lattice. Players are equipped with basic cognitive abilities such as being able to distinguish their partners, remember their actions, and react to their strategy. By means of their short-term memory, they can remember not only the last action of their partner but the way they reacted to it themselves. This additional accuracy in the memory enables the handling of different interaction patterns in a more appropriate way and this results in a cooperative community with a strikingly high cooperation level for any temptation value. However, the more developed cognitive abilities can only be effective if the copying process of the strategies is accurate enough. The excessive extent of faulty decisions can deal a fatal blow to the possibility of stable cooperative relations.
Accurate finite difference methods for time-harmonic wave propagation
NASA Technical Reports Server (NTRS)
Harari, Isaac; Turkel, Eli
1994-01-01
Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
Second-Order Accurate Projective Integrators for Multiscale Problems
Lee, S L; Gear, C W
2005-05-27
We introduce new projective versions of second-order accurate Runge-Kutta and Adams-Bashforth methods, and demonstrate their use as outer integrators in solving stiff differential systems. An important outcome is that the new outer integrators, when combined with an inner telescopic projective integrator, can result in fully explicit methods with adaptive outer step size selection and solution accuracy comparable to those obtained by implicit integrators. If the stiff differential equations are not directly available, our formulations and stability analysis are general enough to allow the combined outer-inner projective integrators to be applied to black-box legacy codes or perform a coarse-grained time integration of microscopic systems to evolve macroscopic behavior, for example.
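A minimal sketch of the projective-integration idea the abstract builds on: a few small damped inner steps, then one large outer step along the chord slope of the last two inner iterates. This uses first-order projective forward Euler on an invented stiff test problem, not the paper's second-order Runge-Kutta/Adams-Bashforth outer integrators; step sizes are illustrative assumptions.

```python
# Hedged sketch of projective integration on the stiff test problem
# y' = -100*(y - cos(t)): inner Euler steps damp the fast transient,
# then a large projective step follows the slow manifold (~cos t).
import math

def projective_euler(f, y, t, h, inner=5, outer_factor=20):
    for _ in range(inner):            # inner damping steps
        y_prev, y = y, y + h * f(t, y)
        t += h
    slope = (y - y_prev) / h          # chord slope after transients decay
    H = outer_factor * h              # big projective outer step
    return t + H, y + H * slope

f = lambda t, y: -100.0 * (y - math.cos(t))
t, y = 0.0, 2.0                       # start off the slow manifold
while t < 1.0:
    t, y = projective_euler(f, y, t, h=1e-3)
print(abs(y - math.cos(t)) < 0.05)   # tracks the slow manifold: True
```

Each outer cycle advances time by inner*h + H while remaining fully explicit, which is the point of the approach for stiff or legacy-code settings.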
Accurate determination of heteroclinic orbits in chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Li, Jizhou; Tomsovic, Steven
2017-03-01
Accurate calculation of heteroclinic and homoclinic orbits can be of significant importance in some classes of dynamical system problems. Yet for very strongly chaotic systems initial deviations from a true orbit will be magnified by a large exponential rate making direct computational methods fail quickly. In this paper, a method is developed that avoids direct calculation of the orbit by making use of the well-known stability property of the invariant unstable and stable manifolds. Under an area-preserving map, this property assures that any initial deviation from the stable (unstable) manifold collapses onto them under inverse (forward) iterations of the map. Using a set of judiciously chosen auxiliary points on the manifolds, long orbit segments can be calculated using the stable and unstable manifold intersections of the heteroclinic (homoclinic) tangle. Detailed calculations using the example of the kicked rotor are provided along with verification of the relation between action differences and certain areas bounded by the manifolds.
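To illustrate the failure mode the method addresses, here is the kicked-rotor (Chirikov standard) map, the example system used in the paper, iterated directly; the kick strength and initial conditions are illustrative choices.

```python
# Hedged sketch: direct forward iteration of the kicked-rotor (Chirikov
# standard) map. Tiny initial deviations are magnified at an exponential
# rate, which is why direct orbit computation fails for long chaotic
# orbits and the paper works with stable/unstable manifolds instead.
import math

def standard_map(theta, p, K=6.0):
    p_new = p + K * math.sin(theta)
    theta_new = (theta + p_new) % (2.0 * math.pi)
    return theta_new, p_new

# Two initial conditions differing by 1e-12 separate exponentially:
a = (1.0, 0.5)
b = (1.0 + 1e-12, 0.5)
for _ in range(30):
    a = standard_map(*a)
    b = standard_map(*b)
print(abs(a[0] - b[0]) + abs(a[1] - b[1]) > 1e-6)  # True
```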
Accurate superimposition of perimetry data onto fundus photographs.
Bek, T; Lund-Andersen, H
1990-02-01
A technique for accurate superimposition of computerized perimetry data onto the corresponding retinal locations seen on fundus photographs was developed. The technique was designed to take into account: 1) that the photographic field of view of the fundus camera varies with ametropia-dependent camera focusing; 2) possible distortion by the fundus camera; and 3) that corrective lenses employed during perimetry magnify or minify the visual field. The technique allowed an overlay of perimetry data of the central 60 degrees of the visual field onto fundus photographs with an accuracy of 0.5 degree. The correlation of localized retinal morphology to localized retinal function was therefore limited by the spatial resolution of the computerized perimetry, which was 2.5 degrees in the Dicon AP-2500 perimeter employed for this study. The theoretical assumptions of the technique were confirmed by comparing visual field records to fundus photographs from patients with morphologically well-defined non-functioning lesions in the retina.
Accurate measurement of the pulse wave delay with imaging photoplethysmography
Kamshilin, Alexei A.; Sidorov, Igor S.; Babayan, Laura; Volynsky, Maxim A.; Giniatullin, Rashid; Mamontov, Oleg V.
2016-01-01
Assessment of the cardiovascular parameters using noncontact video-based or imaging photoplethysmography (IPPG) is usually considered inaccurate because of the strong influence of motion artefacts. To optimize this technique, we performed simultaneous recording of the electrocardiogram and video frames of the face for 36 healthy volunteers. We found that signal disturbances originate mainly from the stochastically enhanced dicrotic notch caused by endogenous cardiovascular mechanisms, with a smaller contribution from motion artefacts. Our properly designed algorithm allowed us to increase the accuracy of the pulse-transit-time measurement and to visualize propagation of the pulse wave in the facial region. Thus, the accurate measurement of pulse wave parameters with this technique suggests a sensitive approach to assess local regulation of microcirculation in various physiological and pathological states. PMID:28018731
Accurate Computation of Divided Differences of the Exponential Function,
1983-06-01
The divided differences considered here are not those of arbitrary smooth functions f, but of well-known analytic functions such as exp, sin, and cos, so their special properties can be exploited. Divided differences have a bad name in practice; however, in a number of applications the functional form of f is known (e.g., exp) and can be exploited to obtain accurate values.
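A minimal sketch of the underlying numerical issue for the first divided difference of exp: the naive difference quotient cancels catastrophically for close nodes, while an equivalent midpoint/sinh form does not. The identity used is standard; the report's actual algorithm for higher-order divided differences is not reproduced here.

```python
# Hedged sketch: two mathematically equivalent formulas for the first
# divided difference exp[x0, x1] = (e^x1 - e^x0)/(x1 - x0). The second
# uses exp[x0,x1] = e^((x0+x1)/2) * sinh(h)/h with h = (x1-x0)/2, which
# avoids subtracting nearly equal exponentials.
import math

def dd_exp_naive(x0, x1):
    return (math.exp(x1) - math.exp(x0)) / (x1 - x0)

def dd_exp_accurate(x0, x1):
    m, h = 0.5 * (x0 + x1), 0.5 * (x1 - x0)
    return math.exp(m) * math.sinh(h) / h

x0, x1 = 1.0, 1.0 + 1e-12
exact = math.exp(1.0)                        # limit as x1 -> x0
print(abs(dd_exp_naive(x0, x1) - exact))     # cancellation: typically ~1e-4
print(abs(dd_exp_accurate(x0, x1) - exact))  # ~1e-12, no cancellation
```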
Accurate Anisotropic Fast Marching for Diffusion-Based Geodesic Tractography
Jbabdi, S.; Bellec, P.; Toro, R.; Daunizeau, J.; Pélégrini-Issac, M.; Benali, H.
2008-01-01
Using geodesics for inferring white matter fibre tracts from diffusion-weighted MR data is an attractive method for at least two reasons: (i) the method optimises a global criterion, and hence is less sensitive to local perturbations such as noise or partial volume effects, and (ii) the method is fast, allowing inference on a large number of connections in a reasonable computational time. Here, we propose an improved fast marching algorithm for inferring geodesic paths. Specifically, this procedure is designed to achieve accurate front propagation in an anisotropic elliptic medium, such as DTI data. We evaluate the numerical performance of this approach on simulated datasets, as well as its robustness to local perturbation induced by fibre crossing. On real data, we demonstrate the feasibility of extracting geodesics to connect an extended set of brain regions. PMID:18299703
A new accurate pill recognition system using imprint information
NASA Astrophysics Data System (ADS)
Chen, Zhiyuan; Kamata, Sei-ichiro
2013-12-01
Great achievements in modern medicine benefit human beings, but they have also brought about an explosive growth in the pharmaceuticals currently on the market. In daily life, pharmaceuticals can confuse people when they are found unlabeled. In this paper, we propose an automatic pill recognition technique to solve this problem. It functions mainly on the basis of the imprint feature of the pills, which is extracted by the proposed MSWT (modified stroke width transform) and described by WSC (weighted shape context). Experiments show that our proposed pill recognition method can reach an accuracy of up to 92.03% within the top 5 ranks when classifying more than 10 thousand query pill images into around 2000 categories.
Method for Accurate Surface Temperature Measurements During Fast Induction Heating
NASA Astrophysics Data System (ADS)
Larregain, Benjamin; Vanderesse, Nicolas; Bridier, Florent; Bocher, Philippe; Arkinson, Patrick
2013-07-01
A robust method is proposed for the measurement of surface temperature fields during induction heating. It is based on the original coupling of temperature-indicating lacquers and a high-speed camera system. Image analysis tools have been implemented to automatically extract the temporal evolution of isotherms. This method was applied to the fast induction treatment of a 4340 steel spur gear, allowing the full history of surface isotherms to be accurately documented for a sequential heating, i.e., a medium frequency preheating followed by a high frequency final heating. Three isotherms, i.e., 704, 816, and 927°C, were acquired every 0.3 ms with a spatial resolution of 0.04 mm per pixel. The information provided by the method is described and discussed. Finally, the transformation temperature Ac1 is linked to the temperature on specific locations of the gear tooth.
Efficient determination of accurate atomic polarizabilities for polarizable embedding calculations
Schröder, Heiner
2016-01-01
We evaluate embedding potentials, obtained via various methods, used for polarizable embedding computations of excitation energies of para‐nitroaniline in water and organic solvents as well as of the green fluorescent protein. We found that isotropic polarizabilities derived from DFTD3 dispersion coefficients correlate well with those obtained via the LoProp method. We show that these polarizabilities in conjunction with appropriately derived point charges are in good agreement with calculations employing static multipole moments up to quadrupoles and anisotropic polarizabilities for both computed systems. The (partial) use of these easily‐accessible parameters drastically reduces the computational effort to obtain accurate embedding potentials especially for proteins. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:27317509
Robust and Accurate Seismic (Acoustic) Ray Tracer
NASA Astrophysics Data System (ADS)
Debski, W.; Ando, M.
Recent developments in high-resolution seismic tomography, as well as the need for high-precision seismic (acoustic) source locations, call for robust and very precise numerical methods for estimating seismic (acoustic) travel times and ray paths. Here we present a method based on a parametrization of the ray path by a series of Chebyshev polynomials. This pseudo-spectral method, combined with an accurate Gauss-Lobatto integration procedure, allows a very high relative travel-time accuracy of about δt/t ≈ 10^-7 to be reached. At the same time, the use of a Genetic Algorithm based optimizer (Evolutionary Algorithm) assures extreme robustness, which allows the method to be used in complicated 3D geological structures such as multi-fault areas and mines, as well as in real engineering applications, constructions, etc.
Fast and accurate automated cell boundary determination for fluorescence microscopy
NASA Astrophysics Data System (ADS)
Arce, Stephen Hugo; Wu, Pei-Hsun; Tseng, Yiider
2013-07-01
Detailed measurement of cell phenotype information from digital fluorescence images has the potential to greatly advance biomedicine in various disciplines such as patient diagnostics or drug screening. Yet, the complexity of cell conformations presents a major barrier preventing effective determination of cell boundaries, and introduces measurement error that propagates throughout subsequent assessment of cellular parameters and statistical analysis. State-of-the-art image segmentation techniques that require user-interaction, prolonged computation time and specialized training cannot adequately provide the support for high content platforms, which often sacrifice resolution to foster the speedy collection of massive amounts of cellular data. This work introduces a strategy that allows us to rapidly obtain accurate cell boundaries from digital fluorescent images in an automated format. Hence, this new method has broad applicability to promote biotechnology.
High order accurate finite difference schemes based on symmetry preservation
NASA Astrophysics Data System (ADS)
Ozbenli, Ersin; Vedula, Prakash
2016-11-01
A new algorithm for the development of high order accurate finite difference schemes for the numerical solution of partial differential equations using Lie symmetries is presented. Considering applicable symmetry groups (such as those relevant to space/time translations, Galilean transformation, scaling, rotation, and projection) of a partial differential equation, invariant numerical schemes are constructed based on the notions of moving frames and modified equations. Several strategies for the construction of invariant numerical schemes with a desired order of accuracy are analyzed. Performance of the proposed algorithm is demonstrated using analysis of one-dimensional partial differential equations, such as linear advection-diffusion equations, the inviscid Burgers equation, and the viscous Burgers equation, as our test cases. Through numerical simulations based on these examples, the expected improvement in accuracy of invariant numerical schemes (up to fourth order) is demonstrated. Advantages in implementation and the enhanced computational efficiency inherent in our proposed algorithm are presented. Extension of the basic framework to multidimensional partial differential equations is also discussed.
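An order-of-accuracy claim like "up to fourth order" is usually verified by a grid-refinement test. The sketch below performs such a test with a plain fourth-order central stencil for the first derivative, not with the paper's symmetry-preserving schemes, purely to illustrate the verification procedure.

```python
# Hedged sketch: verifying a scheme's order of accuracy by grid
# refinement, using the standard fourth-order central stencil for u'(x).
import math

def d1_fourth_order(u, x, h):
    return (-u(x + 2*h) + 8*u(x + h) - 8*u(x - h) + u(x - 2*h)) / (12*h)

u, du = math.sin, math.cos           # test function with known derivative
x = 1.0
e1 = abs(d1_fourth_order(u, x, 1e-2) - du(x))
e2 = abs(d1_fourth_order(u, x, 5e-3) - du(x))
order = math.log(e1 / e2, 2)         # halving h should cut error ~16x
print(round(order))  # 4
```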
An Inexpensive and Accurate Tensiometer Using an Electronic Balance
NASA Astrophysics Data System (ADS)
Dolz, Manuel; Delegido, Jesús; Hernández, María-Jesús; Pellicer, Julio
2001-09-01
A method for measuring surface tension of liquid-air interfaces that consists of a modification of the du Noüy tensiometer is proposed. An electronic balance is used to determine the detachment force with high resolution and the relative displacement ring/plate-liquid surface is carried out by the descent of the liquid-free surface. The procedure familiarizes undergraduate students in applied science and technology with the experimental study of surface tension by means of a simple and accurate method that offers the advantages of sophisticated devices at considerably less cost. The operational aspects that must be taken into account are analyzed: the measuring system and determination of its effective length, measurement of the detachment force, and the relative system-liquid interface displacement rate. To check the accuracy of the proposed tensiometer, measurements of the surface tension of different known liquids have been performed, and good agreement with results reported in the literature was obtained.
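The working relation behind a ring tensiometer read out with a balance can be sketched as follows. The numbers are illustrative assumptions, and the Harkins-Jordan-type correction, which the paper's "effective length" calibration effectively absorbs, is omitted.

```python
# Hedged sketch of the uncorrected du Nouy ring relation: at detachment
# the liquid pulls on both the inner and outer circumference of the ring,
# so gamma = F / (4*pi*R). Illustrative numbers, not from the paper.
import math

def surface_tension_ring(mass_kg, ring_radius_m, g=9.81):
    F = mass_kg * g                              # detachment force from balance
    return F / (4.0 * math.pi * ring_radius_m)   # surface tension in N/m

# A ~0.88 g balance reading on a 9.55 mm radius ring corresponds to
# roughly water's surface tension (~0.072 N/m).
gamma = surface_tension_ring(0.88e-3, 9.55e-3)
print(round(gamma, 3))  # 0.072
```

Reading the detachment force from an electronic balance, as the article proposes, turns this one-line formula into an inexpensive student measurement.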
Accurate derivative evaluation for any Grad–Shafranov solver
Ricketson, L.F.; Cerfon, A.J.; Rachh, M.; Freidberg, J.P.
2016-01-15
We present a numerical scheme that can be combined with any fixed boundary finite element based Poisson or Grad–Shafranov solver to compute the first and second partial derivatives of the solution to these equations with the same order of convergence as the solution itself. At the heart of our scheme is an efficient and accurate computation of the Dirichlet to Neumann map through the evaluation of a singular volume integral and the solution to a Fredholm integral equation of the second kind. Our numerical method is particularly useful for magnetic confinement fusion simulations, since it allows the evaluation of quantities such as the magnetic field, the parallel current density and the magnetic curvature with much higher accuracy than has been previously feasible on the affordable coarse grids that are usually implemented.
Accurate bond dissociation energies (D0) for FHF- isotopologues
NASA Astrophysics Data System (ADS)
Stein, Christopher; Oswald, Rainer; Sebald, Peter; Botschwina, Peter; Stoll, Hermann; Peterson, Kirk A.
2013-09-01
Accurate bond dissociation energies (D0) are determined for three isotopologues of the bifluoride ion (FHF-). While the zero-point vibrational contributions are taken from our previous work (P. Sebald, A. Bargholz, R. Oswald, C. Stein, P. Botschwina, J. Phys. Chem. A, DOI: 10.1021/jp3123677), the equilibrium dissociation energy (De) of the reaction FHF- → HF + F- was obtained by a composite method including frozen-core (fc) CCSD(T) calculations with basis sets up to cardinal number n = 7, followed by extrapolation to the complete basis set limit. Smaller terms beyond fc-CCSD(T) cancel each other almost completely. The D0 values of FHF-, FDF-, and FTF- are predicted to be 15,176, 15,191, and 15,198 cm-1, respectively, with an uncertainty of ca. 15 cm-1.
Accurate and efficient maximal ball algorithm for pore network extraction
NASA Astrophysics Data System (ADS)
Arand, Frederick; Hesser, Jürgen
2017-04-01
The maximal ball (MB) algorithm is a well established method for the morphological analysis of porous media. It extracts a network of pores and throats from volumetric data. This paper describes structural modifications to the algorithm, while the basic concepts are preserved. Substantial improvements to accuracy and efficiency are achieved as follows: First, all calculations are performed on a subvoxel accurate distance field, and no approximations to discretize balls are made. Second, data structures are simplified to keep memory usage low and improve algorithmic speed. Third, small and reasonable adjustments increase speed significantly. In volumes with high porosity, memory usage is improved compared to classic MB algorithms. Furthermore, processing is accelerated more than three times. Finally, the modified MB algorithm is verified by extracting several network properties from reference as well as real data sets. Runtimes are measured and compared to literature.
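The core idea of the maximal ball method can be sketched in a toy 2D form: the distance field gives, at each void pixel, the radius of the largest inscribed ball centred there, and local maxima of that field are candidate pore centres. This brute-force sketch is not the authors' algorithm, which operates on subvoxel-accurate 3D distance fields with optimized data structures:

```python
import numpy as np

def distance_field(void):
    """Euclidean distance from each void pixel to the nearest solid pixel,
    computed by brute force: a slow stand-in for the fast, subvoxel-accurate
    distance transforms used in practice."""
    solid = np.argwhere(~void)
    dist = np.zeros(void.shape)
    for p in np.argwhere(void):
        dist[tuple(p)] = np.sqrt(((solid - p) ** 2).sum(axis=1).min())
    return dist

def pore_candidates(dist, min_radius):
    """Local maxima of the distance field: centres of 'maximal balls' not
    contained in any larger inscribed ball nearby."""
    out = []
    for i, j in np.argwhere(dist >= min_radius):
        nb = dist[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
        if dist[i, j] >= nb.max():
            out.append((i, j))
    return out

# Two circular pores of radius 6 joined by a narrow throat, in 2D.
void = np.zeros((20, 40), dtype=bool)
yy, xx = np.ogrid[:20, :40]
void |= (yy - 10) ** 2 + (xx - 10) ** 2 <= 36
void |= (yy - 10) ** 2 + (xx - 30) ** 2 <= 36
void[9:12, 10:30] = True  # throat connecting the two pores
dist = distance_field(void)
centres = pore_candidates(dist, min_radius=3.0)  # throat is filtered out
```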
How accurately can 21cm tomography constrain cosmology?
NASA Astrophysics Data System (ADS)
Mao, Yi; Tegmark, Max; McQuinn, Matthew; Zaldarriaga, Matias; Zahn, Oliver
2008-07-01
There is growing interest in using 3-dimensional neutral hydrogen mapping with the redshifted 21 cm line as a cosmological probe. However, its utility depends on many assumptions. To aid experimental planning and design, we quantify how the precision with which cosmological parameters can be measured depends on a broad range of assumptions, focusing on the 21 cm signal from 6
Motor equivalence during multi-finger accurate force production
Mattos, Daniela; Schöner, Gregor; Zatsiorsky, Vladimir M.; Latash, Mark L.
2014-01-01
We explored the stability of a multi-finger cyclical accurate force production action by analysis of responses to small perturbations applied to one of the fingers and by inter-cycle analysis of variance. Healthy subjects performed two versions of the cyclical task, with and without an explicit target. The “inverse piano” apparatus was used to lift/lower a finger by 1 cm over 0.5 s; the subjects were always instructed to perform the task as accurately as they could at all times. Deviations in the spaces of finger forces and modes (hypothetical commands to individual fingers) were quantified in directions that did not change the total force (motor equivalent) and in directions that changed the total force (non-motor equivalent). Motor equivalent deviations started immediately with the perturbation and increased progressively with time. After a sequence of lifting-lowering perturbations returning to the initial conditions, motor equivalent deviations dominated. These phenomena were less pronounced for the analysis performed with respect to the total moment of force about an axis parallel to the forearm/hand. Analysis of inter-cycle variance showed consistently higher variance in the subspace that did not change the total force than in the subspace that affected it. We interpret the results as reflections of task-specific stability of the redundant multi-finger system. The large motor equivalent deviations suggest that reactions of the neuromotor system to a perturbation involve large changes in neural commands that do not affect salient performance variables, even during actions whose purpose is to correct those variables. The consistency between the motor equivalence and variance analyses provides additional support for the idea of task-specific stability ensured at a neural level. PMID:25344311
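The motor-equivalent/non-motor-equivalent split described above is a projection onto, and out of, the null space of the total-force constraint F_total = Σ f_i. A minimal sketch with illustrative numbers, not the authors' code or data:

```python
import numpy as np

def me_decompose(deviation):
    """Split a finger-force deviation vector into a motor-equivalent (ME)
    component, lying in the null space of the summation constraint (total
    force unchanged), and a non-motor-equivalent (nME) component along the
    force-changing direction."""
    d = np.asarray(deviation, dtype=float)
    u = np.ones_like(d) / np.sqrt(d.size)  # unit direction that changes total force
    nme = (d @ u) * u                      # projection onto force-changing axis
    me = d - nme                           # remainder: sums to zero
    return me, nme

# Hypothetical four-finger force deviation (N) after a perturbation:
me, nme = me_decompose([0.5, -0.3, 0.2, 0.0])
print(round(me.sum(), 12), round(nme.sum(), 12))  # -> 0.0 0.4
```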
Influence of pansharpening techniques in obtaining accurate vegetation thematic maps
NASA Astrophysics Data System (ADS)
Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier
2016-10-01
In recent decades, there has been a decline in natural resources, making it important to develop reliable methodologies for their management. The appearance of very high resolution sensors has offered a practical and cost-effective means for good environmental management. In this context, improvements are needed to obtain higher-quality information in order to produce reliable classified images. Pansharpening enhances the spatial resolution of the multispectral bands by incorporating information from the panchromatic image. The main goal of the study is to apply pixel-based and object-based classification techniques to imagery fused with different pansharpening algorithms, and to evaluate the thematic maps generated, so as to obtain accurate information for the conservation of natural resources. A vulnerable, heterogeneous ecosystem in the Canary Islands (Spain), Teide National Park, was chosen, and WorldView-2 high resolution imagery was employed. The classes considered of interest were set by the National Park conservation managers. Seven pansharpening techniques (GS, FIHS, HCS, MTF-based, Wavelet `à trous' and Weighted Wavelet `à trous' through Fractal Dimension Maps) were chosen to improve the data quality with the goal of analyzing the vegetation classes. Next, different classification algorithms were applied using pixel-based and object-based approaches; moreover, an accuracy assessment of the different thematic maps obtained was performed. The highest classification accuracy was obtained by applying the Support Vector Machine classifier in an object-based approach to the Weighted Wavelet `à trous' through Fractal Dimension Maps fused image. Finally, we highlight the difficulty of classification in the Teide ecosystem due to its heterogeneity and the small size of the species. It is thus important to obtain accurate thematic maps for further studies on the management and conservation of natural resources.
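As a simple illustration of what pansharpening does (the study evaluates GS, FIHS, HCS, MTF-based and wavelet fusions, not the Brovey transform shown here), a ratio-based Brovey fusion rescales each multispectral band so that the fused intensity matches the panchromatic band:

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-9):
    """Brovey-transform pansharpening, used only as an illustration.
    ms: (bands, H, W) multispectral image already resampled to the pan
    grid; pan: (H, W) panchromatic band. Each band is rescaled by the
    ratio pan / intensity, injecting the pan spatial detail."""
    intensity = ms.mean(axis=0)
    return ms * (pan / (intensity + eps))

# Synthetic stand-in data (real inputs would be co-registered WorldView-2
# multispectral and panchromatic imagery):
rng = np.random.default_rng(0)
ms = rng.random((4, 8, 8))
pan = rng.random((8, 8))
fused = brovey_pansharpen(ms, pan)
# By construction, the fused intensity (band mean) matches the pan band.
```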
A new approach to compute accurate velocity of meteors
NASA Astrophysics Data System (ADS)
Egal, Auriane; Gural, Peter; Vaubaillon, Jeremie; Colas, Francois; Thuillot, William
2016-10-01
The CABERNET project was designed to push the limits of meteoroid orbit measurements by improving the determination of meteor velocities. Indeed, despite the development of camera networks dedicated to the observation of meteors, there is still an important discrepancy between the measured orbits of meteoroids and theoretical results. The gap between the observed and theoretical semi-major axes of the orbits is especially significant; an accurate determination of meteoroid orbits therefore largely depends on the computation of the pre-atmospheric velocities. It is then imperative to determine how to increase the precision of the velocity measurements. In this work, we perform an analysis of different methods currently used to compute the velocities and trajectories of meteors. They are based on the intersecting planes method developed by Ceplecha (1987), the least squares method of Borovicka (1990), and the multi-parameter fitting (MPF) method published by Gural (2012). In order to objectively compare the performances of these techniques, we have simulated realistic meteors ('fakeors') reproducing the measurement errors of many camera networks. Some fakeors were built following the propagation models studied by Gural (2012), and others were created by numerical integration using the Borovicka et al. (2007) model. Different optimization techniques have also been investigated in order to pick the most suitable one for solving the MPF, and the influence of the geometry of the trajectory on the result is also presented. We present here the results of an improved implementation of the multi-parameter fitting that allows accurate orbit computation of meteors with CABERNET. The comparison of the different velocity computations suggests that, while the MPF is by far the best method for solving for the trajectory and velocity of a meteor, the ill-conditioning of the cost functions used can lead to large estimation errors for noisy data.
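As a baseline for the velocity problem, position along the trail can be fitted against time by linear least squares, broadly in the spirit of the least-squares approach of Borovicka (1990); the MPF of Gural (2012) instead fits a full propagation model to all observations simultaneously. A sketch with synthetic data (the speed, frame rate, and noise level are illustrative):

```python
import numpy as np

def fit_velocity(t, s):
    """Estimate meteor speed as the slope of position-along-trail vs. time,
    by linear least squares. A baseline only: it ignores deceleration, one
    of the effects the multi-parameter fitting is designed to capture."""
    slope, _intercept = np.polyfit(t, s, 1)
    return slope

# Synthetic single-camera trail: 40 km/s, 30 frames over 0.5 s,
# with 50 m of measurement noise along the trail.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.5, 30)                   # time (s)
s = 40.0e3 * t + rng.normal(0.0, 50.0, t.size)  # position (m)
v = fit_velocity(t, s)                          # close to 40 km/s
```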
Accurate spectral numerical schemes for kinetic equations with energy diffusion
NASA Astrophysics Data System (ADS)
Wilkening, Jon; Cerfon, Antoine J.; Landreman, Matt
2015-08-01
We examine the merits of using a family of polynomials that are orthogonal with respect to a non-classical weight function to discretize the speed variable in continuum kinetic calculations. We consider a model one-dimensional partial differential equation describing energy diffusion in velocity space due to Fokker-Planck collisions. This relatively simple case allows us to compare the results of the projected dynamics with an expensive but highly accurate spectral transform approach. It also allows us to integrate in time exactly, and to focus entirely on the effectiveness of the discretization of the speed variable. We show that for a fixed number of modes or grid points, the non-classical polynomials can be many orders of magnitude more accurate than classical Hermite polynomials or finite-difference solvers for kinetic equations in plasma physics. We provide a detailed analysis of the difference in behavior and accuracy of the two families of polynomials. For the non-classical polynomials, if the initial condition is not smooth at the origin when interpreted as a three-dimensional radial function, the exact solution leaves the polynomial subspace for a time, but returns (up to roundoff accuracy) to the same point evolved to by the projected dynamics in that time. By contrast, using classical polynomials, the exact solution differs significantly from the projected dynamics solution when it returns to the subspace. We also explore the connection between eigenfunctions of the projected evolution operator and (non-normalizable) eigenfunctions of the full evolution operator, as well as the effect of truncating the computational domain.
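The construction of polynomials orthogonal with respect to a non-classical weight can be sketched by Gram-Schmidt under a discrete inner product; the speed-space weight x² e^{-x²} used below is an assumed form, and the paper's weight function may differ:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def orthonormal_polys(nmax, wvals, xs, h):
    """Gram-Schmidt orthonormalization of the monomials 1, x, ..., x**nmax
    under the discrete inner product <f, g> = h * sum(wvals * f(xs) * g(xs)).
    Returns one low-to-high coefficient row per polynomial."""
    basis = []
    for n in range(nmax + 1):
        c = np.zeros(nmax + 1)
        c[n] = 1.0                                  # monomial x**n
        for b in basis:
            proj = h * np.sum(wvals * P.polyval(xs, c) * P.polyval(xs, b))
            c = c - proj * b
        c /= np.sqrt(h * np.sum(wvals * P.polyval(xs, c) ** 2))
        basis.append(c)
    return np.array(basis)

# Assumed speed-space weight x**2 * exp(-x**2) on [0, inf); truncate the
# domain and use simple equispaced quadrature for the inner products.
xs = np.linspace(0.0, 10.0, 4001)
h = xs[1] - xs[0]
wvals = xs**2 * np.exp(-xs**2)
polys = orthonormal_polys(4, wvals, xs, h)
# Gram matrix of the basis: numerically the 5x5 identity.
gram = np.array([[h * np.sum(wvals * P.polyval(xs, a) * P.polyval(xs, b))
                  for b in polys] for a in polys])
```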
Braking of fast and accurate elbow flexions in the monkey.
Flament, D; Hore, J; Vilis, T
1984-01-01
The processes responsible for braking fast and accurate elbow movements were studied in the monkey. The movements studied were made over different amplitudes and against different inertias. All were made to the same end position. Only fast movements that showed the typical biphasic or triphasic pattern of activity in agonists and antagonists were analysed in detail. For movements made over different amplitudes and at different velocities there was symmetry between the acceleration and deceleration phases of the movements. For movements of the same amplitude performed at different velocities there was a direct linear relation between peak velocity and both the peak acceleration (and integrated agonist burst) and peak deceleration (and integrated antagonist burst). The slopes of these relations and their intercept with the peak velocity axis were a function of movement amplitude. This was such that for large and small movements of the same peak velocity and the same end position (i) peak acceleration and phasic agonist activity were larger for the small movements and (ii) peak deceleration and phasic antagonist activity were larger for the small movements. The slope of these relations and the symmetry between acceleration and deceleration were not affected by the addition of an inertial load to the handle held by the monkey. The results indicate that fast and accurate elbow movements in the monkey are braked by antagonist activity that is centrally programmed. As all movements were made to the same end position, the larger antagonist burst in small movements, made at the same peak velocity as large movements, cannot be due to differences in the viscoelastic contribution to braking (cf. Marsden, Obeso & Rothwell, 1983). (ABSTRACT TRUNCATED AT 250 WORDS) PMID:6737291
Electron Microprobe Analysis Techniques for Accurate Measurements of Apatite
NASA Astrophysics Data System (ADS)
Goldoff, B. A.; Webster, J. D.; Harlov, D. E.
2010-12-01
Apatite [Ca5(PO4)3(F, Cl, OH)] is a ubiquitous accessory mineral in igneous, metamorphic, and sedimentary rocks. The mineral contains halogens and hydroxyl ions, which can provide important constraints on fugacities of volatile components in fluids and other phases in igneous and metamorphic environments in which apatite has equilibrated. Accurate measurements of these components in apatite are therefore necessary. Analyzing apatite by electron microprobe analysis (EMPA), a commonly used geochemical analytical technique, has often been found to be problematic, and previous studies have identified sources of error. For example, Stormer et al. (1993) demonstrated that the orientation of an apatite grain relative to the incident electron beam could significantly affect the concentration results. In this study, a variety of alternative EMPA operating conditions for apatite analysis were investigated: a range of electron beam settings, count times, crystal grain orientations, and calibration standards were tested. Twenty synthetic anhydrous apatite samples that span the fluorapatite-chlorapatite solid solution series, and whose halogen concentrations were determined by wet chemistry, were analyzed. Accurate measurements of these samples were obtained with many EMPA techniques. One effective method includes setting a static electron beam to 10-15 nA, 15 kV, and 10 microns in diameter. Additionally, the apatite sample is oriented with the crystal’s c-axis parallel to the slide surface and the count times are moderate. Importantly, the F and Cl EMPA concentrations are in extremely good agreement with the wet-chemical data. We also present EMPA operating conditions and techniques that are problematic and should be avoided. J.C. Stormer, Jr. et al., Am. Mineral. 78 (1993) 641-648.