Verification of EPA's "Preliminary remediation goals for radionuclides" (PRG) electronic calculator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stagich, B. H.
The U.S. Environmental Protection Agency (EPA) requested an external, independent verification study of their “Preliminary Remediation Goals for Radionuclides” (PRG) electronic calculator. The calculator provides information on establishing PRGs for radionuclides at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) sites with radioactive contamination (Verification Study Charge, Background). These risk-based PRGs set concentration limits using carcinogenic toxicity values under specific exposure conditions (PRG User’s Guide, Section 1). The purpose of this verification study is to ascertain that the computer code has no inherent numerical problems with obtaining solutions and to ensure that the equations are programmed correctly.
Verification of Gyrokinetic codes: Theoretical background and applications
NASA Astrophysics Data System (ADS)
Tronko, Natalia; Bottino, Alberto; Görler, Tobias; Sonnendrücker, Eric; Told, Daniel; Villard, Laurent
2017-05-01
In fusion plasmas, the strong magnetic field allows the fast gyro-motion to be systematically removed from the description of the dynamics, resulting in a considerable model simplification and gain of computational time. Nowadays, the gyrokinetic (GK) codes play a major role in the understanding of the development and the saturation of turbulence and in the prediction of the subsequent transport. Naturally, these codes require thorough verification and validation. Here, we present a new and generic theoretical framework and specific numerical applications to test the faithfulness of the implemented models to theory and to verify the domain of applicability of existing GK codes. For a sound verification process, the underlying theoretical GK model and the numerical scheme must be considered at the same time, which has rarely been done and therefore makes this approach pioneering. At the analytical level, the main novelty consists in using advanced mathematical tools, such as the variational formulation of dynamics, to systematize the basic equations of GK codes and access the limits of their applicability. The verification of the numerical scheme is proposed via a benchmarking effort. In this work, specific examples of code verification are presented for two GK codes: the multi-species electromagnetic ORB5 (PIC) and the radially global version of GENE (Eulerian). The proposed methodology can be applied to any existing GK code. We establish a hierarchy of reduced GK Vlasov-Maxwell equations implemented in the ORB5 and GENE codes using the Lagrangian variational formulation. At the computational level, detailed verifications of global electromagnetic test cases developed from the CYCLONE Base Case are considered, including a parametric β-scan covering the transition from ITG to KBM and the spectral properties at the nominal β value.
Verification of Gyrokinetic codes: theoretical background and applications
NASA Astrophysics Data System (ADS)
Tronko, Natalia
2016-10-01
In fusion plasmas, the strong magnetic field allows the fast gyro-motion to be systematically removed from the description of the dynamics, resulting in a considerable model simplification and gain of computational time. Nowadays, the gyrokinetic (GK) codes play a major role in the understanding of the development and the saturation of turbulence and in the prediction of the consequent transport. We present a new and generic theoretical framework and specific numerical applications to test the validity and the domain of applicability of existing GK codes. For a sound verification process, the underlying theoretical GK model and the numerical scheme must be considered at the same time, which makes this approach pioneering. At the analytical level, the main novelty consists in using advanced mathematical tools, such as the variational formulation of dynamics, to systematize the basic equations of GK codes and access the limits of their applicability. Indirect verification of the numerical scheme is proposed via a benchmarking process. In this work, specific examples of code verification are presented for two GK codes: the multi-species electromagnetic ORB5 (PIC) and the radially global version of GENE (Eulerian). The proposed methodology can be applied to any existing GK code. We establish a hierarchy of reduced GK Vlasov-Maxwell equations using the generic variational formulation. Then, we derive the models implemented in ORB5 and GENE and place them within this hierarchy. At the computational level, detailed verifications of global electromagnetic test cases based on the CYCLONE Base Case are considered, including a parametric β-scan covering the transition from ITG to KBM and the spectral properties at the nominal β value.
Verifying the error bound of numerical computation implemented in computer systems
Sawada, Jun
2013-03-12
A verification tool receives a finite precision definition for an approximation of an infinite precision numerical function implemented in a processor in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite precision numerical function. The verification tool splits the domain into at least two segments, wherein each segment is non-overlapping with any other segment and converts, for each segment, a polynomial of bounded functions for the segment to a simplified formula comprising a polynomial, an inequality, and a constant for a selected segment. The verification tool calculates upper bounds of the polynomial for the at least two segments, beginning with the selected segment and reports the segments that violate a bounding condition.
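The split/bound/report loop the patent describes can be sketched in a few lines. This is only a schematic, not the patented implementation: the crude per-monomial bound and the `budget` threshold below are illustrative stand-ins for the patent's simplified formula and bounding condition.

```python
import numpy as np

def poly_upper_bound(coeffs, a, b):
    """Crude upper bound of p(x) = sum_k c_k x^k on [a, b]: bound each
    monomial by |c_k| * max(|a|, |b|)**k (interval-arithmetic style)."""
    m = max(abs(a), abs(b))
    return sum(abs(c) * m**k for k, c in enumerate(coeffs))

def report_violations(err_poly, domain, n_segments, budget):
    """Split the domain into non-overlapping segments, bound the error
    polynomial on each, and report segments violating the error budget."""
    edges = np.linspace(domain[0], domain[1], n_segments + 1)
    return [(a, b) for a, b in zip(edges[:-1], edges[1:])
            if poly_upper_bound(err_poly, a, b) > budget]

# Example: |error| modeled by 1e-18 + 1e-16 x^2 on [0, 4], budget 1e-15;
# only the outer segments, where x is large, should be reported.
print(report_violations([1e-18, 0.0, 1e-16], (0.0, 4.0), 8, 1e-15))
```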
A methodology for the rigorous verification of plasma simulation codes
NASA Astrophysics Data System (ADS)
Riva, Fabio
2016-10-01
The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V comprises two separate tasks: verification, a mathematical exercise aimed at assessing that the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on verification, which in turn comprises code verification, aimed at assessing that the physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
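As a minimal illustration of the manufactured-solutions idea referenced above (a generic 1D example, not GBS itself), one can manufacture a solution, derive the corresponding source term analytically, and confirm that the observed order of accuracy of a second-order discretization approaches two under grid refinement:

```python
import numpy as np

def solve_poisson(n):
    """Solve -u'' = f on (0, 1) with u(0) = u(1) = 0 using second-order
    central differences; f is manufactured from u_exact(x) = sin(pi x)."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    f = np.pi**2 * np.sin(np.pi * x[1:-1])   # source derived from u_exact
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f)
    return x, u

errors = []
for n in (16, 32, 64):
    x, u = solve_poisson(n)
    errors.append(np.max(np.abs(u - np.sin(np.pi * x))))

# Observed order of accuracy between successive grids: should approach 2.0
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
print(orders)
```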
Separating stages of arithmetic verification: An ERP study with a novel paradigm.
Avancini, Chiara; Soltész, Fruzsina; Szűcs, Dénes
2015-08-01
In studies of arithmetic verification, participants typically encounter two operands and carry out an operation on these (e.g. adding them). The operands are followed by a proposed answer, and participants decide whether this answer is correct or incorrect. However, interpretation of results is difficult because multiple parallel, temporally overlapping numerical and non-numerical processes of the human brain may contribute to task execution. To overcome this problem, here we used a novel paradigm specifically designed to tease apart the overlapping cognitive processes active during arithmetic verification. Specifically, we aimed to separate effects related to detection of arithmetic correctness, detection of the violation of strategic expectations, detection of physical stimulus property mismatch, and numerical magnitude comparison (numerical distance effects). Arithmetic correctness, physical stimulus properties, and magnitude information were not task-relevant properties of the stimuli. We distinguished between a series of temporally highly overlapping cognitive processes which in turn elicited overlapping ERP effects with distinct scalp topographies. We suggest that arithmetic verification relies on two major temporal phases which include parallel running processes. Our paradigm offers a new method for investigating specific arithmetic verification processes in detail.
Investigation, Development, and Evaluation of Performance Proving for Fault-tolerant Computers
NASA Technical Reports Server (NTRS)
Levitt, K. N.; Schwartz, R.; Hare, D.; Moore, J. S.; Melliar-Smith, P. M.; Shostak, R. E.; Boyer, R. S.; Green, M. W.; Elliott, W. D.
1983-01-01
A number of methodologies for verifying systems, along with computer-based tools that assist users in verifying their systems, were developed. These tools were applied to verify in part the SIFT ultrareliable aircraft computer. Topics covered included: the STP theorem prover; design verification of SIFT; high-level language code verification; assembly language level verification; numerical algorithm verification; verification of flight control programs; and verification of hardware logic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashcraft, C. Chace; Niederhaus, John Henry; Robinson, Allen C.
We present a verification and validation analysis of a coordinate-transformation-based numerical solution method for the two-dimensional axisymmetric magnetic diffusion equation, implemented in the finite-element simulation code ALEGRA. The transformation, suggested by Melissen and Simkin, yields an equation set perfectly suited for linear finite elements and for problems with large jumps in material conductivity near the axis. The verification analysis examines transient magnetic diffusion in a rod or wire in a very low conductivity background by first deriving an approximate analytic solution using perturbation theory. This approach for generating a reference solution is shown to be not fully satisfactory. A specialized approach for manufacturing an exact solution is then used to demonstrate second-order convergence under spatial and temporal refinement. For this new implementation, a significant improvement relative to previously available formulations is observed. Benefits in accuracy for computed current density and Joule heating are also demonstrated. The validation analysis examines the circuit-driven explosion of a copper wire using resistive magnetohydrodynamics modeling, in comparison to experimental tests. The new implementation matches the accuracy of the existing formulation, with both formulations capturing the experimental burst time and action to within approximately 2%.
VAVUQ, Python and Matlab freeware for Verification and Validation, Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Courtney, J. E.; Zamani, K.; Bombardelli, F. A.; Fleenor, W. E.
2015-12-01
A package of scripts is presented for automated Verification and Validation (V&V) and Uncertainty Quantification (UQ) for engineering codes that approximate Partial Differential Equations (PDEs). The code post-processes model results to produce V&V and UQ information. This information can be used to assess model performance. Automated information on code performance can allow for a systematic methodology to assess the quality of model approximations. The software implements common and accepted code verification schemes. The software uses the Method of Manufactured Solutions (MMS), the Method of Exact Solution (MES), Cross-Code Verification, and Richardson Extrapolation (RE) for solution (calculation) verification. It also includes common statistical measures that can be used for model skill assessment. Complete RE can be conducted for complex geometries by implementing high-order non-oscillating numerical interpolation schemes within the software. Model approximation uncertainty is quantified by calculating lower and upper bounds of numerical error from the RE results. The software is also able to calculate the Grid Convergence Index (GCI), and to handle adaptive meshes and models that implement mixed order schemes. Four examples are provided to demonstrate the use of the software for code and solution verification, model validation and uncertainty quantification. The software is used for code verification of a mixed-order compact difference heat transport solver; the solution verification of a 2D shallow-water-wave solver for tidal flow modeling in estuaries; the model validation of a two-phase flow computation in a hydraulic jump compared to experimental data; and numerical uncertainty quantification for 3D CFD modeling of the flow patterns in a Gust erosion chamber.
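For reference, the textbook three-grid Grid Convergence Index computation that such a package automates might look as follows; VAVUQ itself also handles adaptive meshes and mixed-order schemes, which this sketch does not. The sample values are hypothetical.

```python
import math

def gci(f1, f2, f3, r, Fs=1.25):
    """Observed order p and fine-grid GCI from solutions on three
    systematically refined grids (f1 finest), constant refinement ratio r.
    Assumes monotone convergence: (f3 - f2) and (f2 - f1) share a sign."""
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)
    eps21 = abs((f2 - f1) / f1)          # relative change on the fine pair
    return p, Fs * eps21 / (r**p - 1.0)

p, gci_fine = gci(0.9713, 0.9702, 0.9658, r=2.0)
print(p, gci_fine)   # ~2.0 observed order; GCI as a relative error band
```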
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Yong Joon; Yoo, Jun Soo; Smith, Curtis Lee
2015-09-01
This INL plan comprehensively describes the Requirements Traceability Matrix (RTM) for the main physics and numerical methods of RELAP-7. The plan also describes the testing-based software verification and validation (SV&V) process: a set of specially designed software models used to test RELAP-7.
1982-07-01
Design Military Specification ... This document is published ... criteria to insert into the basic requirements, verification procedures, and lessons learned from past experience. The Standard will thus be the framework ... equations with sketch: each of the three paired quantities in the denominators of KS and KR should have a bar over them, as is done in the numerators.
Alternative Nonvolatile Residue Analysis with Contaminant Identification Project
NASA Technical Reports Server (NTRS)
Loftin, Kathleen (Compiler); Summerfield, Burton (Compiler); Thompson, Karen (Compiler); Mullenix, Pamela (Compiler); Zeitlin, Nancy (Compiler)
2015-01-01
Cleanliness verification is required in numerous industries, including spaceflight ground support, electronics, medical, and aerospace. Currently at KSC, cleanliness verification requirements are met using solvents that are environmentally unfriendly. The goal of this project is to produce an alternative cleanliness verification technique that is both environmentally friendly and more cost-effective.
Earth Science Enterprise Scientific Data Purchase Project: Verification and Validation
NASA Technical Reports Server (NTRS)
Jenner, Jeff; Policelli, Fritz; Fletcher, Rosea; Holecamp, Kara; Owen, Carolyn; Nicholson, Lamar; Dartez, Deanna
2000-01-01
This paper presents viewgraphs on the Earth Science Enterprise Scientific Data Purchase Project's verification and validation process. The topics include: 1) What is Verification and Validation? 2) Why Verification and Validation? 3) Background; 4) ESE Data Purchase Validation Process; 5) Data Validation System and Ingest Queue; 6) Shipment Verification; 7) Tracking and Metrics; 8) Validation of Contract Specifications; 9) Earth Watch Data Validation; 10) Validation of Vertical Accuracy; and 11) Results of Vertical Accuracy Assessment.
A New Integrated Weighted Model in SNOW-V10: Verification of Categorical Variables
NASA Astrophysics Data System (ADS)
Huang, Laura X.; Isaac, George A.; Sheng, Grant
2014-01-01
This paper presents the verification results for nowcasts of seven categorical variables from an integrated weighted model (INTW) and the underlying numerical weather prediction (NWP) models. Nowcasting, or short-range forecasting (0-6 h), over complex terrain with sufficient accuracy is highly desirable but a very challenging task. A weighting, evaluation, bias correction and integration system (WEBIS) for generating nowcasts by integrating NWP forecasts and high-frequency observations was used during the Vancouver 2010 Olympic and Paralympic Winter Games as part of the Science of Nowcasting Olympic Weather for Vancouver 2010 (SNOW-V10) project. Forecast data from the Canadian high-resolution deterministic NWP system with three nested grids (at 15-, 2.5- and 1-km horizontal grid-spacing) were selected as background gridded data for generating the integrated nowcasts. Seven forecast variables (temperature, relative humidity, wind speed, wind gust, visibility, ceiling and precipitation rate) are treated as categorical variables for verifying the integrated weighted forecasts. By analyzing the verification of forecasts from INTW and the NWP models across 15 sites, the integrated weighted model was found to produce more accurate forecasts for the seven selected forecast variables, regardless of location. This is based on the multi-categorical Heidke skill scores for the test period 12 February to 21 March 2010.
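The multi-categorical Heidke skill score used for this comparison is a standard measure; a generic implementation over a K×K contingency table is sketched below. The layout convention (rows: forecast category, columns: observed category) and the sample counts are assumptions for illustration, not taken from SNOW-V10.

```python
import numpy as np

def heidke_skill_score(table):
    """Multi-category Heidke skill score: agreement relative to chance.
    table[i, j] counts cases forecast in category i and observed in j."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    pc = np.trace(t) / n                                # proportion correct
    pe = (t.sum(axis=0) * t.sum(axis=1)).sum() / n**2   # chance agreement
    return (pc - pe) / (1.0 - pe)

# Hypothetical 3-category example (e.g., low/medium/high visibility)
print(heidke_skill_score([[50, 10, 5], [8, 40, 12], [2, 9, 30]]))
```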
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weaver, Phyllis C.
A team from ORAU's Independent Environmental Assessment and Verification Program performed verification survey activities on the South Test Tower and four Hazardous Waste Storage Lockers. Scan data collected by ORAU determined that both the alpha and alpha-plus-beta activity was representative of radiological background conditions. The count rate distribution showed no outliers that would be indicative of alpha or alpha-plus-beta count rates in excess of background. It is the opinion of ORAU that independent verification data collected support the site's conclusions that the South Tower and Lockers sufficiently meet the site criteria for release to recycle and reuse.
Cost-Effective CNC Part Program Verification Development for Laboratory Instruction.
ERIC Educational Resources Information Center
Chen, Joseph C.; Chang, Ted C.
2000-01-01
Describes a computer numerical control program verification system that checks a part program before its execution. The system includes character recognition, word recognition, a fuzzy-nets system, and a tool path viewer. (SK)
NASA Astrophysics Data System (ADS)
Zamani, K.; Bombardelli, F.
2011-12-01
Almost all natural phenomena on Earth are highly nonlinear; even simplifications to the equations describing nature usually end up being nonlinear partial differential equations. The transport (advection-diffusion-reaction, ADR) equation is pivotal in atmospheric sciences and water quality. This nonlinear equation must be solved numerically for practical purposes, so academics and engineers rely heavily on numerical codes. Such codes therefore require verification before they are utilized for multiple applications in science and engineering. Model verification is a mathematical procedure whereby a numerical code is checked to assure that the governing equation is properly solved, as described in the design document. CFD verification is not a straightforward, well-defined course: only a complete test suite can uncover all the limitations and bugs, and results need to be assessed to distinguish between bug-induced defects and innate limitations of a numerical scheme. As Roache (2009) said, numerical verification is a state-of-the-art procedure; sometimes novel tricks work out. This study conveys a synopsis of the experiences we gained during a comprehensive verification process for a transport solver. A test suite was designed, including unit tests and algorithmic tests, layered in complexity in several dimensions from simple to complex. Acceptance criteria were defined for the desirable capabilities of the transport code, such as order of accuracy, mass conservation, handling of stiff source terms, spurious oscillation, and initial shape preservation. At the beginning, a mesh convergence study, which is the main craft of verification, was performed. To that end, analytical solutions of the ADR equation were gathered, and a new solution was derived. In more general cases, the lack of an analytical solution can be overcome through Richardson extrapolation and manufactured solutions. Then, two bugs that were concealed during the mesh convergence study were uncovered with the method of false injection and visualization of the results. Symmetry had dual functionality: one bug was hidden due to the symmetric nature of a test (it was detected afterward using artificial false injection); on the other hand, self-symmetry was used to design a new test in a case where the analytical solution of the ADR equation was unknown. Assisting subroutines were designed to check and post-process conservation of mass and oscillatory behavior. Finally, the capability of the solver was also checked for stiff reaction source terms. The above test suite was not only a decent tool for error detection but also provided thorough feedback on the ADR solver's limitations. Such information is the crux of any rigorous numerical modeling for a modeler who deals with surface/subsurface pollution transport.
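A post-processing check of the kind mentioned (conservation of mass) can be as simple as comparing the discrete mass between two snapshots against the time-integrated boundary fluxes. The sketch below assumes a conservation-law interval with no reaction term and fluxes already integrated in time by the caller; the authors' actual subroutines are not reproduced here.

```python
import numpy as np

def mass_balance_error(c0, c1, x, mass_in, mass_out):
    """Relative mass-balance error of a 1-D transport solution between two
    snapshots c0 and c1 on grid x. mass_in / mass_out are the time-integrated
    masses crossing the inflow/outflow boundaries over the same interval."""
    trap = lambda c: np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(x))  # trapezoid rule
    expected = trap(c0) + mass_in - mass_out
    return abs(trap(c1) - expected) / max(abs(expected), 1e-300)
```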
ENVIRONMENTAL TECHNOLOGY VERIFICATION (ETV) PROGRAM: WET-WEATHER FLOW/SOURCE WATER PROTECTION
This paper presents an overview of the Environmental Protection Agency's (EPA) Environmental Technology Verification (ETV) program which was established to overcome the numerous impediments to commercialization experienced by developers of innovative environmental technologies. ...
Enhanced verification test suite for physics simulation codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamm, James R.; Brock, Jerry S.; Brandon, Scott T.
2008-09-01
This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations.
NASA Technical Reports Server (NTRS)
Stamnes, K.; Lie-Svendsen, O.; Rees, M. H.
1991-01-01
The linear Boltzmann equation can be cast in a form mathematically identical to the radiation-transport equation. A multigroup procedure is used to reduce the energy (or velocity) dependence of the transport equation to a series of one-speed problems. Each of these one-speed problems is equivalent to the monochromatic radiative-transfer problem, and existing software is used to solve this problem in slab geometry. The numerical code conserves particles in elastic collisions. Generic examples are provided to illustrate the applicability of this approach. Although this formalism can, in principle, be applied to a variety of test particle or linearized gas dynamics problems, it is particularly well-suited to study the thermalization of suprathermal particles interacting with a background medium when the thermal motion of the background cannot be ignored. Extensions of the formalism to include external forces and spherical geometry are also feasible.
Microcode Verification Project.
1980-05-01
... numerical constant. The internal syntax for these minimum and maximum values is REALMIN and REALMAX. ISPSSIMP is the file simplifying bitstring ... To be fair, it is quite clear that much of the labor in the verification task can be reduced if verification and code development are carried out ... the language we have chosen for both encoding our descriptions of machines and reasoning about the course of computations. Internally, our ...
2016-10-01
... comes when considering numerous scores and statistics during a preliminary evaluation of the applicability of the fuzzy-verification minimum coverage ... The selection of thresholds with which to generate categorical-verification scores and statistics from the application of both traditional and ... of statistically significant numbers of cases; the latter presents a challenge of limited application for assessment of the forecast models' ability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Yidong; Andrs, David; Martineau, Richard Charles
This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for compressible fluid flow, with an outline of the requisite constitutive relations. A second-order finite volume method used for solving compressible fluid flow problems is presented next, as is a Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration. The multi-fluid formulation is still under development; although it is not yet complete, BIGHORN has been designed to handle multi-fluid problems. Due to the flexibility in the underlying MOOSE framework, BIGHORN is quite extensible and can accommodate both multi-species and multi-phase formulations. This document also presents a suite of verification & validation benchmark test problems for BIGHORN. The intent of this suite is to provide baseline comparison data that demonstrate the performance of the BIGHORN solution methods on problems that vary in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and to suggest best practices when using BIGHORN.
PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Frederick, J. M.
2016-12-01
In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to experimental data or known analytical solutions. The model is typically defined by a set of partial differential equations with initial and boundary conditions, and verification determines whether the mathematical model is solved correctly by the software. Code verification is especially important if the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively-parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations given by Oberkampf and Trucano (2007), which describes four essential elements in high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information. Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, implementation of benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
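In the spirit of such a QA suite, each benchmark boils down to an automated pass/fail comparison against a closed-form solution. The sketch below uses the classic semi-infinite conduction benchmark with a hypothetical tolerance; PFLOTRAN's actual benchmarks, physics, and acceptance criteria differ.

```python
import numpy as np
from math import erf, sqrt

def analytical_T(x, t, kappa, T0, Ts):
    """Semi-infinite medium, initial temperature T0, surface stepped to Ts:
    T(x, t) = Ts + (T0 - Ts) * erf(x / (2 sqrt(kappa t)))."""
    return Ts + (T0 - Ts) * erf(x / (2.0 * sqrt(kappa * t)))

def qa_check(x, T_num, t, kappa, T0, Ts, tol=1e-3):
    """Pass/fail acceptance test: normalized max-norm error below tol."""
    T_ref = np.array([analytical_T(xi, t, kappa, T0, Ts) for xi in x])
    err = np.max(np.abs(T_num - T_ref)) / abs(T0 - Ts)
    return err < tol, err
```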
This paper presents a brief overview of the EPA's ETV program which was established in 1995 to overcome the numerous impediments to commercialization experienced by developers of innovative environmental technologies. Among those most frequently mentioned is the lack of credible ...
This paper presents a brief overview of EPA's ETV program established in 1995 to overcome the numerous impediments to commercialization experienced by developers of innovative environmental technologies. Among those most frequently mentioned is the lack of credible performance da...
Code Verification of the HIGRAD Computational Fluid Dynamics Solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Buren, Kendra L.; Canfield, Jesse M.; Hemez, Francois M.
2012-05-04
The purpose of this report is to outline code and solution verification activities applied to HIGRAD, a Computational Fluid Dynamics (CFD) solver of the compressible Navier-Stokes equations developed at the Los Alamos National Laboratory, and used to simulate various phenomena such as the propagation of wildfires and atmospheric hydrodynamics. Code verification efforts, as described in this report, are an important first step to establish the credibility of numerical simulations. They provide evidence that the mathematical formulation is properly implemented without significant mistakes that would adversely impact the application of interest. Highly accurate analytical solutions are derived for four code verification test problems that exercise different aspects of the code. These test problems are referred to as: (i) the quiet start, (ii) the passive advection, (iii) the passive diffusion, and (iv) the piston-like problem. These problems are simulated using HIGRAD with different levels of mesh discretization and the numerical solutions are compared to their analytical counterparts. In addition, the rates of convergence are estimated to verify the numerical performance of the solver. The first three test problems produce numerical approximations as expected. The fourth test problem (piston-like) indicates the extent to which the code is able to simulate a 'mild' discontinuity, which is a condition that would typically be better handled by a Lagrangian formulation. The current investigation concludes that the numerical implementation of the solver performs as expected. The quality of solutions is sufficient to provide credible simulations of fluid flows around wind turbines. The main caveat associated with these findings is the low coverage provided by these four problems, and somewhat limited verification activities. A more comprehensive evaluation of HIGRAD may be beneficial for future studies.
International Cooperative for Aerosol Prediction Workshop on Aerosol Forecast Verification
NASA Technical Reports Server (NTRS)
Benedetti, Angela; Reid, Jeffrey S.; Colarco, Peter R.
2011-01-01
The purpose of this workshop was to reinforce the working partnership between centers that are actively involved in global aerosol forecasting, and to discuss issues related to forecast verification. Participants included representatives from operational centers with global aerosol forecasting requirements, a panel of experts on Numerical Weather Prediction and Air Quality forecast verification, data providers, and several observers from the research community. The presentations centered on a review of current NWP and AQ practices, with subsequent discussion focused on the challenges in defining appropriate verification measures for the next generation of aerosol forecast systems.
Verification and Validation Studies for the LAVA CFD Solver
NASA Technical Reports Server (NTRS)
Moini-Yekta, Shayan; Barad, Michael F; Sozer, Emre; Brehm, Christoph; Housman, Jeffrey A.; Kiris, Cetin C.
2013-01-01
The verification and validation of the Launch Ascent and Vehicle Aerodynamics (LAVA) computational fluid dynamics (CFD) solver is presented. A modern strategy for verification and validation is described incorporating verification tests, validation benchmarks, continuous integration and version control methods for automated testing in a collaborative development environment. The purpose of the approach is to integrate the verification and validation process into the development of the solver and improve productivity. This paper uses the Method of Manufactured Solutions (MMS) for the verification of 2D Euler equations, 3D Navier-Stokes equations as well as turbulence models. A method for systematic refinement of unstructured grids is also presented. Verification using inviscid vortex propagation and flow over a flat plate is highlighted. Simulation results using laminar and turbulent flow past a NACA 0012 airfoil and ONERA M6 wing are validated against experimental and numerical data.
1982-01-29
COMPUTER PROGRAM USER'S MANUAL FOR FIREFINDER DIGITAL TOPOGRAPHIC DATA VERIFICATION LIBRARY DUBBING SYSTEM, VOLUME II: DUBBING, 29 JANUARY ... This manual describes the computer ...
Code Verification Capabilities and Assessments in Support of ASC V&V Level 2 Milestone #6035
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott William; Budzien, Joanne Louise; Ferguson, Jim Michael
This document provides a summary of the code verification activities supporting the FY17 Level 2 V&V milestone entitled “Deliver a Capability for V&V Assessments of Code Implementations of Physics Models and Numerical Algorithms in Support of Future Predictive Capability Framework Pegposts.” The physics validation activities supporting this milestone are documented separately. The objectives of this portion of the milestone are: 1) Develop software tools to support code verification analysis; 2) Document standard definitions of code verification test problems; and 3) Perform code verification assessments (focusing on error behavior of algorithms). This report and a set of additional standalone documents serve as the compilation of results demonstrating accomplishment of these objectives.
Enhanced Verification Test Suite for Physics Simulation Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamm, J R; Brock, J S; Brandon, S T
2008-10-10
This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) hydrodynamics; (b) transport processes; and (c) dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code is evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary, but not sufficient, step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of greater sophistication or other physics regimes (e.g., energetic material response, magneto-hydrodynamics), would represent a scientifically desirable complement to the fundamental test cases discussed in this report. The authors believe that this document can be used to enhance the verification analyses undertaken at the DOE WP Laboratories and, thus, to improve the quality, credibility, and usefulness of the simulation codes that are analyzed with these problems.
Experimental and Numerical Study of Drift Alfvén Waves in LAPD
NASA Astrophysics Data System (ADS)
Friedman, Brett; Popovich, P.; Carter, T. A.; Auerbach, D.; Schaffner, D.
2009-11-01
We present a study of drift Alfvén waves in linear geometry using experiments in the Large Plasma Device (LAPD) at UCLA and simulations from the Boundary Turbulence code (BOUT). BOUT solves the 3D time evolution of plasma parameters and turbulence using Braginskii fluid equations. First, we present a verification study of linear drift Alfvén wave physics in BOUT, which has been modified to simulate the cylindrical geometry of LAPD. Second, we present measurements of density and magnetic field fluctuations in the LAPD plasma and the correlation of these fluctuations as a function of plasma parameters, including strength of the background field and discharge current. We also compare the measurements to nonlinear BOUT calculations using experimental LAPD profiles.
Catarinucci, L; Tarricone, L
2009-12-01
With the next transposition of the 2004/40/EC Directive, employers will become responsible for the electromagnetic field level at the workplace. To make this task easier, the scientific community is compiling practical guidelines to be followed. This work aims at enriching such guidelines, especially on dosimetric issues. More specifically, some critical aspects related to the application of numerical dosimetric techniques for the verification of safety limit compliance have been highlighted. In particular, three different aspects have been considered: the dependence of the dosimetric parameters on the shape and the inner characterisation of the exposed subject, their dependence on the numerical algorithm used, and the correlation between reference limits and basic restrictions. Results and discussions demonstrate how, even when using sophisticated numerical techniques, in some cases a complex interpretation of the result is mandatory.
Numerical Modeling of Ablation Heat Transfer
NASA Technical Reports Server (NTRS)
Ewing, Mark E.; Laker, Travis S.; Walker, David T.
2013-01-01
A unique numerical method has been developed for solving one-dimensional ablation heat transfer problems. This paper provides a comprehensive description of the method, along with detailed derivations of the governing equations. This methodology supports solutions for traditional ablation modeling including such effects as heat transfer, material decomposition, pyrolysis gas permeation and heat exchange, and thermochemical surface erosion. The numerical scheme utilizes a control-volume approach with a variable grid to account for surface movement. This method directly supports implementation of nontraditional models such as material swelling and mechanical erosion, extending capabilities for modeling complex ablation phenomena. Verifications of the numerical implementation are provided using analytical solutions, code comparisons, and the method of manufactured solutions. These verifications are used to demonstrate solution accuracy and proper error convergence rates. A simple demonstration of a mechanical erosion (spallation) model is also provided to illustrate the unique capabilities of the method.
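A heavily reduced sketch of the control-volume, moving-grid idea is shown below: the grid follows a surface receding at a prescribed rate, the field is remapped onto the new grid, and an explicit conduction update is applied. Decomposition, pyrolysis gas flow, and thermochemical erosion from the paper are all omitted, and the names, recession rate `s_dot`, and boundary conditions are illustrative assumptions.

```python
import numpy as np

def ablation_step(T, s, L, dt, alpha, s_dot, T_surf, T_back):
    """One explicit step of 1-D conduction on a grid whose left edge
    (the ablating surface, at x = s) recedes at prescribed rate s_dot."""
    n = T.size
    s_new = s + s_dot * dt                      # surface recession
    x_old = np.linspace(s, L, n)
    x_new = np.linspace(s_new, L, n)            # variable grid tracks surface
    T = np.interp(x_new, x_old, T)              # remap onto the new grid
    dx = x_new[1] - x_new[0]                    # stability: dt <= dx**2/(2*alpha)
    T_next = T.copy()
    T_next[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    T_next[0], T_next[-1] = T_surf, T_back      # fixed surface / back-wall BCs
    return T_next, s_new
```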
High stakes in INF verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krepon, M.
1987-06-01
The stakes involved in negotiating INF verification arrangements are high. While these proposals deal only with intermediate-range ground-launched cruise and mobile missiles, if properly devised they could help pave the way for comprehensive limits on other cruise missiles and strategic mobile missiles. In contrast, poorly drafted monitoring provisions could compromise national industrial security and generate numerous compliance controversies. Any verification regime will require new openness on both sides, but that means significant risks as well as opportunities. US and Soviet negotiators could spend weeks, months, and even years working out in painstaking detail verification provisions for medium-range missiles. Alternatively, if the two sides wished to conclude an INF agreement quickly, they could defer most of the difficult verification issues to the strategic arms negotiations.
Spacecraft charging analysis with the implicit particle-in-cell code iPic3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deca, J.; Lapenta, G.; Marchand, R.
2013-10-15
We present the first results on the analysis of spacecraft charging with the implicit particle-in-cell code iPic3D, designed for running on massively parallel supercomputers. The numerical algorithm is presented, highlighting the implementation of the electrostatic solver and the immersed boundary algorithm, the latter of which creates the possibility to handle complex spacecraft geometries. As a first step in the verification process, a comparison is made between the floating potential obtained with iPic3D and with Orbital Motion Limited theory for a spherical particle in a uniform stationary plasma. Second, the numerical model is verified for a CubeSat benchmark by comparing simulation results with those of PTetra for space environment conditions with increasing levels of complexity. In particular, we consider spacecraft charging from plasma particle collection, photoelectron and secondary electron emission. The influence of a background magnetic field on the floating potential profile near the spacecraft is also considered. Although the numerical approaches in iPic3D and PTetra are rather different, good agreement is found between the two models, raising the level of confidence in both codes to predict and evaluate the complex plasma environment around spacecraft.
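The Orbital Motion Limited reference used in that first comparison can be sketched as a current balance on a sphere in a stationary, unmagnetized Maxwellian plasma: Boltzmann-repelled electrons against OML-attracted ions. The constants, root-finding bracket, and parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

def oml_floating_potential(n, Te, Ti, mi, R):
    """Floating potential (V) of a sphere of radius R, defined as the
    potential where the net OML current vanishes. Temperatures in kelvin,
    density in m^-3; phi < 0 repels electrons and attracts ions."""
    e, me, kB = 1.602e-19, 9.109e-31, 1.381e-23
    A = 4.0 * np.pi * R**2
    Ie0 = n * e * A * np.sqrt(kB * Te / (2.0 * np.pi * me))  # thermal e- current
    Ii0 = n * e * A * np.sqrt(kB * Ti / (2.0 * np.pi * mi))  # thermal ion current
    net = lambda phi: Ie0 * np.exp(e * phi / (kB * Te)) \
                      - Ii0 * (1.0 - e * phi / (kB * Ti))
    return brentq(net, -30.0 * kB * Te / e, 0.0)

# Hydrogen plasma, 1 eV, n = 1e12 m^-3, 5 cm sphere (hypothetical values)
print(oml_floating_potential(1e12, 11600.0, 11600.0, 1.67e-27, 0.05))
```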
Implementation of Precision Verification Solvents on the External Tank
NASA Technical Reports Server (NTRS)
Campbell, M.
1998-01-01
This paper presents the Implementation of Precision Verification Solvents on the External Tank. The topics include: 1) Background; 2) Solvent Usages; 3) TCE (Trichloroethylene) Reduction; 4) Solvent Replacement Studies; 5) Implementation; 6) Problems Occurring During Implementation; and 7) Future Work. This paper is presented in viewgraph form.
Verification of Java Programs using Symbolic Execution and Invariant Generation
NASA Technical Reports Server (NTRS)
Pasareanu, Corina; Visser, Willem
2004-01-01
Software verification is recognized as an important and difficult problem. We present a novel framework, based on symbolic execution, for the automated verification of software. The framework uses annotations in the form of method specifications and loop invariants. We present a novel iterative technique that uses invariant strengthening and approximation for discovering these loop invariants automatically. The technique handles different types of data (e.g. boolean and numeric constraints, dynamically allocated structures and arrays) and it allows for checking universally quantified formulas. Our framework is built on top of the Java PathFinder model checking toolset and it was used for the verification of several non-trivial Java programs.
Verification of chemistry reference ranges using a simple method in sub-Saharan Africa
Taylor, Douglas; Mandala, Justin; Nanda, Kavita; Van Campenhout, Christel; Agingu, Walter; Madurai, Lorna; Barsch, Eva-Maria; Deese, Jennifer; Van Damme, Lut; Crucitti, Tania
2016-01-01
Background: Chemistry safety assessments are interpreted by using chemistry reference ranges (CRRs). Verification of CRRs is time consuming and often requires a statistical background. Objectives: We report on an easy and cost-saving method to verify CRRs. Methods: Using a former method introduced by Sigma Diagnostics, three study sites in sub-Saharan Africa (Bondo, Kenya, and Pretoria and Bloemfontein, South Africa) verified the CRRs for hepatic and renal biochemistry assays performed during a clinical trial of HIV antiretroviral pre-exposure prophylaxis. The aspartate aminotransferase/alanine aminotransferase, creatinine and phosphorus results from 10 clinically-healthy participants at the screening visit were used. In the event the CRRs did not pass the verification, new CRRs had to be calculated based on 40 clinically-healthy participants. Results: Within a few weeks, the study sites accomplished verification of the CRRs without additional costs. The aspartate aminotransferase reference ranges for the Bondo, Kenya site and the alanine aminotransferase reference ranges for the Pretoria, South Africa site required adjustment. The phosphorus CRR passed verification and the creatinine CRR required adjustment at every site. The newly-established CRR intervals were narrower than the CRRs used previously at these study sites due to decreases in the upper limits of the reference ranges. As a result, more toxicities were detected. Conclusion: To ensure the safety of clinical trial participants, verification of CRRs should be standard practice in clinical trials conducted in settings where the CRR has not been validated for the local population. This verification method is simple, inexpensive, and can be performed by any medical laboratory. PMID:28879112
Verification on spray simulation of a pintle injector for liquid rocket engine
NASA Astrophysics Data System (ADS)
Son, Min; Yu, Kijeong; Radhakrishnan, Kanmaniraja; Shin, Bongchul; Koo, Jaye
2016-02-01
The pintle injector used for liquid rocket engines is an injection system that has recently re-attracted attention, famous for its wide throttling ability with high efficiency. The pintle injector has many variations with complex inner structures due to its moving parts. In order to study the rotating flow near the injector tip, which was observed in cold-flow experiments using water and air, a numerical simulation was adopted and a verification of the numerical model was then conducted. For the verification process, three types of experimental data (velocity distributions of gas flows, spray angles, and liquid distribution) were compared against simulated results. The numerical simulation was performed using a commercial simulation program with the Eulerian multiphase model and axisymmetric two-dimensional grids. The maximum and minimum velocities of the gas were within the acceptable range of agreement; however, the spray angles showed up to 25% error as the momentum ratios increased. The spray density distributions were quantitatively measured and showed good agreement. As a result of this study, it was concluded that the simulation method was properly constructed to study specific flow characteristics of the pintle injector despite the limitations of two-dimensional and coarse grids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricci, P., E-mail: paolo.ricci@epfl.ch; Riva, F.; Theiler, C.
In the present work, a Verification and Validation procedure is presented and applied showing, through a practical example, how it can contribute to advancing our physics understanding of plasma turbulence. Bridging the gap between plasma physics and other scientific domains, in particular the computational fluid dynamics community, a rigorous methodology for the verification of a plasma simulation code is presented, based on the method of manufactured solutions. This methodology assesses that the model equations are correctly solved, within the order of accuracy of the numerical scheme. The technique to carry out a solution verification is described to provide a rigorous estimate of the uncertainty affecting the numerical results. A methodology for plasma turbulence code validation is also discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The Verification and Validation methodology is then applied to the study of plasma turbulence in the basic plasma physics experiment TORPEX [Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulations carried out with the GBS code [Ricci et al., Plasma Phys. Controlled Fusion 54, 124047 (2012)]. The validation procedure allows progress in the understanding of the turbulent dynamics in TORPEX, by pinpointing the presence of a turbulent regime transition, due to the competition between the resistive and ideal interchange instabilities.
Shuttle payload interface verification equipment study. Volume 2: Technical document, part 1
NASA Technical Reports Server (NTRS)
1976-01-01
The technical analysis performed during the shuttle payload interface verification equipment study is reported. It describes: (1) the background and intent of the study; (2) the study approach and philosophy, covering all facets of shuttle payload/cargo integration; (3) shuttle payload integration requirements; (4) the preliminary design of the horizontal IVE; (5) the vertical IVE concept; and (6) IVE program development plans, schedule, and cost. Also included is a payload integration analysis task to identify potential uses in addition to payload interface verification.
22 CFR 97.3 - Requirements subject to verification in an outgoing Convention case.
Code of Federal Regulations, 2014 CFR
2014-04-01
... background study. An accredited agency, temporarily accredited agency, or public domestic authority must complete or approve a child background study that includes information about the child's identity, adoptability, background, social environment, family history, medical history (including that of the child's...
22 CFR 97.3 - Requirements subject to verification in an outgoing Convention case.
Code of Federal Regulations, 2012 CFR
2012-04-01
... background study. An accredited agency, temporarily accredited agency, or public domestic authority must complete or approve a child background study that includes information about the child's identity, adoptability, background, social environment, family history, medical history (including that of the child's...
22 CFR 97.3 - Requirements subject to verification in an outgoing Convention case.
Code of Federal Regulations, 2011 CFR
2011-04-01
... background study. An accredited agency, temporarily accredited agency, or public domestic authority must complete or approve a child background study that includes information about the child's identity, adoptability, background, social environment, family history, medical history (including that of the child's...
22 CFR 97.3 - Requirements subject to verification in an outgoing Convention case.
Code of Federal Regulations, 2013 CFR
2013-04-01
... background study. An accredited agency, temporarily accredited agency, or public domestic authority must complete or approve a child background study that includes information about the child's identity, adoptability, background, social environment, family history, medical history (including that of the child's...
22 CFR 97.3 - Requirements subject to verification in an outgoing Convention case.
Code of Federal Regulations, 2010 CFR
2010-04-01
... background study. An accredited agency, temporarily accredited agency, or public domestic authority must complete or approve a child background study that includes information about the child's identity, adoptability, background, social environment, family history, medical history (including that of the child's...
Verification and quality control of routine hematology analyzers.
Vis, J Y; Huisman, A
2016-05-01
Verification of hematology analyzers (automated blood cell counters) is mandatory before new hematology analyzers may be used in routine clinical care. The verification process consists of several items, comprising among others precision, accuracy, comparability, carryover, background, and linearity throughout the expected range of results. Yet which standard should be met, or which verification limit should be used, is at the discretion of the laboratory specialist. This paper offers practical guidance on verification and quality control of automated hematology analyzers and provides an expert opinion on the performance standard that should be met by the contemporary generation of hematology analyzers. Therefore (i) the state-of-the-art performance of hematology analyzers for complete blood count parameters is summarized, (ii) considerations, challenges, and pitfalls concerning the development of a verification plan are discussed, (iii) guidance is given regarding the establishment of reference intervals, and (iv) different methods of quality control of hematology analyzers are reviewed.
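As one concrete example of a verification item, carryover is often estimated with a high/low replicate protocol: run a high patient sample three times, then a low sample three times. The formula and the sample values below reflect one commonly used convention, not a universal standard; the laboratory's own verification plan governs the protocol and acceptance limit.

```python
def carryover_percent(h, l):
    """Carryover estimate from three consecutive high readings h[0..2]
    followed by three low readings l[0..2]:
    carryover (%) = 100 * (l1 - l3) / (h3 - l3)."""
    return 100.0 * (l[0] - l[2]) / (h[2] - l[2])

# Hypothetical WBC counts (x10^9/L): high run, then low run
print(carryover_percent([85.1, 84.9, 85.3], [3.9, 3.7, 3.7]))  # ~0.25 %
```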
Verification of Numerical Programs: From Real Numbers to Floating Point Numbers
NASA Technical Reports Server (NTRS)
Goodloe, Alwyn E.; Munoz, Cesar; Kirchner, Florent; Correnson, Loiec
2013-01-01
Numerical algorithms lie at the heart of many safety-critical aerospace systems. The complexity and hybrid nature of these systems often requires the use of interactive theorem provers to verify that these algorithms are logically correct. Usually, proofs involving numerical computations are conducted in the infinitely precise realm of the field of real numbers. However, numerical computations in these algorithms are often implemented using floating point numbers. The use of a finite representation of real numbers introduces uncertainties as to whether the properties verified in the theoretical setting hold in practice. This short paper describes work in progress aimed at addressing these concerns. Given a formally proven algorithm, written in the Program Verification System (PVS), the Frama-C suite of tools is used to identify sufficient conditions and verify that under such conditions the rounding errors arising in a C implementation of the algorithm do not affect its correctness. The technique is illustrated using an algorithm for detecting loss of separation among aircraft.
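A minimal illustration of the gap between the real-number setting and floating point: summing 0.1 ten times is exactly 1 over the reals, but not in binary floating point, which is precisely the kind of discrepancy such verification conditions must account for.

```python
import math

xs = [0.1] * 10
print(sum(xs) == 1.0)        # False: 0.1 has no exact binary representation,
                             # and naive summation accumulates rounding error
print(math.fsum(xs) == 1.0)  # True: fsum tracks the rounding error terms
```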
URANS simulations of the tip-leakage cavitating flow with verification and validation procedures
NASA Astrophysics Data System (ADS)
Cheng, Huai-yu; Long, Xin-ping; Liang, Yun-zhi; Long, Yun; Ji, Bin
2018-04-01
In the present paper, the Vortex Identified Zwart-Gerber-Belamri (VIZGB) cavitation model coupled with the SST-CC turbulence model is used to investigate the unsteady tip-leakage cavitating flow induced by a NACA0009 hydrofoil. A qualitative comparison between the numerical and experimental results is made. In order to quantitatively evaluate the reliability of the numerical data, verification and validation (V&V) procedures are applied. Errors of the numerical results are estimated with seven error estimators based on the Richardson extrapolation method. It is shown that though a strict validation cannot be achieved, a reasonable prediction of the gross characteristics of the tip-leakage cavitating flow can be obtained. Based on the numerical results, the influence of the cavitation on the tip-leakage vortex (TLV) is discussed, which indicates that the cavitation accelerates the fusion of the TLV and the tip-separation vortex (TSV). Moreover, the trajectory of the TLV, when cavitation occurs, is close to the side wall.
Glove-based approach to online signature verification.
Kamel, Nidal S; Sayeed, Shohel; Ellis, Grant A
2008-06-01
Utilizing the multiple degrees of freedom offered by the data glove for each finger and the hand, a novel on-line signature verification system using the Singular Value Decomposition (SVD) numerical tool for signature classification and verification is presented. The proposed technique uses the SVD to find the r singular vectors sensing the maximal energy of the glove data matrix A, called the principal subspace, so that the effective dimensionality of A can be reduced. Having modeled the data glove signature through its r-principal subspace, signature authentication is performed by finding the angles between the different subspaces. A demonstration of the data glove as an effective high-bandwidth data entry device for signature verification is presented. This SVD-based signature verification technique is tested, and its performance is shown to be able to recognize forged signatures with a false acceptance rate of less than 1.2%.
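The subspace machinery sketched above can be illustrated in a few lines; the sketch below uses synthetic data, an arbitrary choice of r = 3, and omits the glove feature extraction entirely:

```python
# Hedged sketch of the principal-subspace and subspace-angle idea; the data
# and the choice r = 3 are synthetic, not the authors' glove pipeline.
import numpy as np

def principal_subspace(A, r):
    # Left singular vectors capturing the maximal energy of data matrix A.
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :r]

def subspace_angles(U1, U2):
    # Principal angles via the SVD of U1^T U2 (columns assumed orthonormal).
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

rng = np.random.default_rng(0)
reference = rng.standard_normal((22, 60))          # e.g. 22 glove channels x 60 samples
candidate = reference + 0.05 * rng.standard_normal((22, 60))
angles = subspace_angles(principal_subspace(reference, 3),
                         principal_subspace(candidate, 3))
print("max principal angle (rad):", angles.max())  # small angle -> likely genuine
```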
NASA Astrophysics Data System (ADS)
Honnell, Kevin; Burnett, Sarah; Yorke, Chloe'; Howard, April; Ramsey, Scott
2017-06-01
The Noh problem is a classic verification problem in the field of compressible flows. Simple to conceptualize, it is nonetheless difficult for numerical codes to predict correctly, making it an ideal code-verification test bed. In its original incarnation, the fluid is a simple ideal gas; once validated, however, these codes are often used to study highly non-ideal fluids and solids. In this work the classic Noh problem is extended beyond the commonly studied polytropic ideal gas to more realistic equations of state (EOS), including the stiff gas, the Noble-Abel gas, and the Carnahan-Starling hard-sphere fluid, thus enabling verification studies to be performed on more physically realistic fluids. Exact solutions are compared with numerical results obtained from the Lagrangian hydrocode FLAG, developed at Los Alamos. For these more realistic EOSs, the simulation errors decreased in magnitude both at the origin and at the shock, but also spread more broadly about these points compared to the ideal EOS. The overall spatial convergence rate remained first order.
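For context, the ideal-gas case that these EOS extensions generalize has a well-known closed-form solution; a minimal sketch follows (a standard textbook result with unit inflow speed assumed, not code from the paper):

```python
# Exact ideal-gas Noh solution: uniform inflow at speed 1 onto the origin,
# gamma-law gas; nu = 1, 2, 3 for planar, cylindrical, spherical symmetry.
def noh_density(r, t, gamma=5.0/3.0, nu=3, rho0=1.0):
    r_shock = 0.5 * (gamma - 1.0) * t          # shock position
    if r < r_shock:                            # shocked region: constant state
        return rho0 * ((gamma + 1.0) / (gamma - 1.0)) ** nu
    return rho0 * (1.0 + t / r) ** (nu - 1)    # unshocked, geometrically compressed inflow

# gamma = 5/3, spherical: 64-fold compression behind a shock at r = t/3.
print(noh_density(0.1, 0.6), noh_density(0.5, 0.6))
```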
Numerical verification of composite rods theory on multi-story buildings analysis
NASA Astrophysics Data System (ADS)
El-Din Mansour, Alaa; Filatov, Vladimir; Gandzhuntsev, Michael; Ryasny, Nikita
2018-03-01
The article proposes a verification of the composite rods theory as applied to the structural analysis of skeletons of high-rise buildings. A test design model was formed in which the horizontal elements are represented by a multilayer cantilever beam working in transverse bending, with slabs connected by moment-non-transferring connections, and the vertical elements are represented by multilayer columns. These connections are sufficient to develop a shearing action that can be approximated by a certain shear-force function, which significantly reduces the overall degree of static indeterminacy of the structural model. The operation of the multilayer rods is described by a system of differential equations, solved numerically by the method of successive approximations. The proposed methodology is intended for preliminary calculations whenever the rigidity characteristics of the structure need to be determined, as well as for a qualitative assessment of results obtained by other methods when performing calculations for verification purposes.
TOUGH Simulations of the Updegraff's Set of Fluid and Heat Flow Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moridis, G.J.; Pruess
1992-11-01
The TOUGH code [Pruess, 1987] for two-phase flow of water, air, and heat in permeable media has been exercised on a suite of test problems originally selected and simulated by C. D. Updegraff [1989]. These include five 'verification' problems for which analytical or numerical solutions are available, and three 'validation' problems that model laboratory fluid and heat flow experiments. All problems could be run without any code modifications. Good and efficient numerical performance, as well as accurate results, were obtained throughout. Additional code verification and validation problems from the literature are briefly summarized, and suggestions are given for proper applications of TOUGH and related codes.
NASA Astrophysics Data System (ADS)
Zamani, K.; Bombardelli, F. A.
2013-12-01
The ADR (advection-diffusion-reaction) equation describes many physical phenomena of interest in the field of water quality in natural streams and groundwater. In many cases, such as density-driven flow, multiphase reactive transport, and sediment transport, one or more terms in the ADR equation may become nonlinear. For that reason, numerical tools are the only practical choice to solve these PDEs. All numerical solvers developed for the transport equation need to undergo a code verification procedure before they are put into practice. Code verification is a mathematical activity to uncover failures and check for rigorous discretization of PDEs and implementation of initial/boundary conditions. In the context of computational PDEs, verification is not a well-defined procedure with a clear path. Thus, verification tests should be designed and implemented with in-depth knowledge of the numerical algorithms and the physics of the phenomena, as well as the mathematical behavior of the solution. Even test results need to be mathematically analyzed to distinguish between an inherent limitation of an algorithm and a coding error. It is therefore well known that code verification is an art, in which innovative methods and case-based tricks are very common. This study presents full verification of a general transport code. To that end, a complete test suite was designed to probe the ADR solver comprehensively and discover all possible imperfections, and we convey our experiences in finding several errors which were not detectable with routine verification techniques. We developed a test suite including hundreds of unit tests and system tests. The test package increases gradually in complexity, starting from simple tests and building up to the most sophisticated level. Appropriate verification metrics are defined for the required capabilities of the solver, as follows: mass conservation, convergence order, capability in handling stiff problems, nonnegative concentration, shape preservation, and absence of spurious wiggles. Thereby, we provide objective, quantitative values as opposed to subjective qualitative descriptions such as 'weak' or 'satisfactory' agreement with those metrics. We start testing from a simple case of unidirectional advection, then bidirectional advection and tidal flow, and build up to nonlinear cases. We design tests to check nonlinearity in velocity, dispersivity, and reactions. For all of the mentioned cases we conduct mesh convergence tests, which compare the results' order of accuracy against the formal order of accuracy of the discretization. The concealing effect of scales (Peclet and Damkohler numbers) on the mesh convergence study, and appropriate remedies, are also discussed. For the cases in which appropriate benchmarks for a mesh convergence study are not available, we utilize symmetry, complete Richardson extrapolation, and the method of false injection to uncover bugs. Detailed discussions of the capabilities of the mentioned code verification techniques are given. Auxiliary subroutines for automation of the test suite and report generation were designed. All in all, the test package is not only a robust tool for code verification but also provides comprehensive insight into the ADR solver's capabilities. Such information is essential for any rigorous computational modeling of the ADR equation for surface/subsurface pollution transport.
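As an illustration of the mesh-convergence metric described above, the sketch below estimates the observed order of accuracy from error norms on successively refined grids and compares it with the formal order; the "solver" is a toy stand-in with a known O(h^2) error, not the authors' ADR code:

```python
# Mesh-convergence check: slope of log(error) vs log(h) is the observed order.
import numpy as np

def observed_order(solver, exact, grid_sizes):
    errors = []
    for n in grid_sizes:
        x = np.linspace(0.0, 1.0, n)
        errors.append(np.max(np.abs(solver(x) - exact(x))))   # L-inf error norm
    h = 1.0 / (np.asarray(grid_sizes) - 1)                    # grid spacings
    return np.polyfit(np.log(h), np.log(errors), 1)[0]

# Toy stand-in "solver" with a known O(h^2) discretization error.
exact = lambda x: np.sin(np.pi * x)
solver = lambda x: np.sin(np.pi * x) + (x[1] - x[0]) ** 2 * np.cos(np.pi * x)
p = observed_order(solver, exact, [17, 33, 65, 129])
print(f"observed order ~ {p:.2f} (formal order: 2)")
```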
NASA Astrophysics Data System (ADS)
Raikovskiy, N. A.; Tretyakov, A. V.; Abramov, S. A.; Nazmeev, F. G.; Pavlichev, S. V.
2017-08-01
The paper presents a numerical method for studying the coolant flow in the water jacket of a self-lubricating sliding bearing, based on ANSYS CFX. The results of the numerical calculations show satisfactory agreement with the empirical data obtained on the test bed. The verification data confirm the applicability of this numerical technique for the analysis of coolant flows in self-lubricating bearings containing a water jacket.
NASA Astrophysics Data System (ADS)
Zhong, Shenlu; Li, Mengjiao; Tang, Xiajie; He, Weiqing; Wang, Xiaogang
2017-01-01
A novel optical information verification and encryption method is proposed based on the interference principle and phase retrieval with sparsity constraints. In this method, a target image is encrypted into two phase-only masks (POMs), which comprise sparse phase data used for verification. Both POMs need to be authenticated before being used for decryption. The target image can be optically reconstructed when the two authenticated POMs are Fourier transformed and convolved with the correct decryption key, which is also generated in the encryption process. No holographic scheme is involved in the proposed optical verification and encryption system, and there is no problem of information disclosure in the two authenticable POMs. Numerical simulation results demonstrate the validity and good performance of the proposed method.
Combining Task Execution and Background Knowledge for the Verification of Medical Guidelines
NASA Astrophysics Data System (ADS)
Hommersom, Arjen; Groot, Perry; Lucas, Peter; Balser, Michael; Schmitt, Jonathan
The use of a medical guideline can be seen as the execution of computational tasks, sequentially or in parallel, in the face of patient data. It has been shown that many such guidelines can be represented as a 'network of tasks', i.e., as a number of steps that have a specific function or goal. To investigate the quality of such guidelines, we propose a formalization of criteria for good-practice medicine with which a guideline should comply. We use this theory in conjunction with medical background knowledge to verify the quality of a guideline dealing with diabetes mellitus type 2, using the interactive theorem prover KIV. Verification using task execution and background knowledge is a novel approach to quality checking of medical guidelines.
Mesh and Time-Step Independent Computational Fluid Dynamics (CFD) Solutions
ERIC Educational Resources Information Center
Nijdam, Justin J.
2013-01-01
A homework assignment is outlined in which students learn Computational Fluid Dynamics (CFD) concepts of discretization, numerical stability and accuracy, and verification in a hands-on manner by solving physically realistic problems of practical interest to engineers. The students solve a transient-diffusion problem numerically using the common…
Benchmarking on Tsunami Currents with ComMIT
NASA Astrophysics Data System (ADS)
Sharghi vand, N.; Kanoglu, U.
2015-12-01
Before the 2004 Indian Ocean tsunami there were no standards for the validation and verification of tsunami numerical models. Even so, a number of numerical models had been used for inundation mapping efforts, evaluation of critical structures, etc., without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which are used in the evaluation of critical structures, such as nuclear power plants, against tsunami attack. NCTR presented analytical, experimental and field benchmark problems aimed at estimating maximum runup, which are widely accepted by the community. Recently, benchmark problems concentrating on the validation and verification of tsunami numerical models for tsunami currents were suggested at the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents, held on February 9-10, 2015 at Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). Three of the benchmark problems were: current measurements of the Japan 2011 tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), a user-friendly interface developed by NCTR to the validated and verified Method of Splitting Tsunami (MOST) model (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316). The modeling results are compared with the required benchmark data, providing good agreement, and the results are discussed. Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe)
Bor, E.; Turduev, M.; Kurt, H.
2016-08-01
Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction.
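The optimization loop described above can be caricatured with SciPy's differential evolution driver; the cost function below is a toy surrogate (a real design would score each candidate layout with a full FDTD simulation and the multiobjective focusing criterion), and all parameters are hypothetical:

```python
# Conceptual sketch: differential evolution searching scatterer positions.
import numpy as np
from scipy.optimize import differential_evolution

N_SCATTERERS = 8
FOCUS = np.array([5.0, 0.0])    # desired focal point (arbitrary units)

def cost(flat_xy):
    pts = flat_xy.reshape(N_SCATTERERS, 2)
    # Toy surrogate: reward layouts whose optical path lengths to the focus
    # agree (a crude proxy for constructive interference at the focal spot).
    path = np.linalg.norm(pts - FOCUS, axis=1) + pts[:, 0]
    return np.var(path % 1.0)   # spread of the phase proxy -> weaker focus

bounds = [(0.0, 4.0), (-2.0, 2.0)] * N_SCATTERERS   # (x, y) per scatterer
result = differential_evolution(cost, bounds, seed=1, maxiter=200, tol=1e-8)
print("best layout:\n", result.x.reshape(N_SCATTERERS, 2))
```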
NASA Astrophysics Data System (ADS)
Chaljub, E. O.; Bard, P.; Tsuno, S.; Kristek, J.; Moczo, P.; Franek, P.; Hollender, F.; Manakou, M.; Raptakis, D.; Pitilakis, K.
2009-12-01
During the last decades, an important effort has been dedicated to developing accurate and computationally efficient numerical methods to predict earthquake ground motion in heterogeneous 3D media. The progress in methods and the increasing capability of computers have made it technically feasible to calculate realistic seismograms for frequencies of interest in seismic design applications. In order to foster the use of numerical simulation in practical prediction, it is important to (1) evaluate the accuracy of current numerical methods when applied to realistic 3D applications where no reference solution exists (verification) and (2) quantify the agreement between recorded and numerically simulated earthquake ground motion (validation). Here we report the results of the Euroseistest verification and validation project, an ongoing international collaborative work organized jointly by the Aristotle University of Thessaloniki, Greece, the Cashima research project (supported by the French nuclear agency, CEA, and the Laue-Langevin institute, ILL, Grenoble), and the Joseph Fourier University, Grenoble, France. The project involves more than 10 international teams from Europe, Japan and the USA. The teams employ the Finite Difference Method (FDM), the Finite Element Method (FEM), the Global Pseudospectral Method (GPSM), the Spectral Element Method (SEM) and the Discrete Element Method (DEM). The project makes use of a new detailed 3D model of the Mygdonian basin (about 5 km wide, 15 km long, sediments reaching about 400 m depth, surface S-wave velocity of 200 m/s). The prime target is to simulate 8 local earthquakes with magnitudes from 3 to 5. In the verification, numerical predictions for frequencies up to 4 Hz for a series of models with increasing structural and rheological complexity are analyzed and compared using quantitative time-frequency goodness-of-fit criteria. Predictions obtained by one FDM team and the SEM team are close to each other and different from the other predictions (consistent with the ESG2006 exercise, which targeted the Grenoble Valley). Diffractions off the basin edges and induced surface-wave propagation mainly contribute to the differences between predictions. The differences are particularly large in the elastic models but remain important in models with attenuation. In the validation, predictions are compared with the recordings of a local array of 19 surface and borehole accelerometers. The level of agreement is found to be event-dependent. For the largest-magnitude event the agreement is surprisingly good, even at high frequencies.
Adjusting for partial verification or workup bias in meta-analyses of diagnostic accuracy studies.
de Groot, Joris A H; Dendukuri, Nandini; Janssen, Kristel J M; Reitsma, Johannes B; Brophy, James; Joseph, Lawrence; Bossuyt, Patrick M M; Moons, Karel G M
2012-04-15
A key requirement in the design of diagnostic accuracy studies is that all study participants receive both the test under evaluation and the reference standard test. For a variety of practical and ethical reasons, sometimes only a proportion of patients receive the reference standard, which can bias the accuracy estimates. Numerous methods have been described for correcting this partial verification bias or workup bias in individual studies. In this article, the authors describe a Bayesian method for obtaining adjusted results from a diagnostic meta-analysis when partial verification or workup bias is present in a subset of the primary studies. The method corrects for verification bias without having to exclude primary studies with verification bias, thus preserving the main advantages of a meta-analysis: increased precision and better generalizability. The results of this method are compared with the existing methods for dealing with verification bias in diagnostic meta-analyses. For illustration, the authors use empirical data from a systematic review of studies of the accuracy of the immunohistochemistry test for diagnosis of human epidermal growth factor receptor 2 status in breast cancer patients.
Standardized Radiation Shield Design Methods: 2005 HZETRN
NASA Technical Reports Server (NTRS)
Wilson, John W.; Tripathi, Ram K.; Badavi, Francis F.; Cucinotta, Francis A.
2006-01-01
Research conducted by the Langley Research Center through 1995, resulting in the HZETRN code, provides the current basis for shield design methods according to NASA STD-3000 (2005). With this new prominence, the database, basic numerical procedures, and algorithms are being re-examined, with new methods of verification and validation being implemented to capture a well-defined algorithm for the engineering design processes to be used in this early development phase of the Bush initiative. This process provides the methodology to transform the 1995 HZETRN research code into the 2005 HZETRN engineering code to be available for these early design processes. In this paper, we review the basic derivations, including new corrections to the codes to ensure improved numerical stability, and provide benchmarks for code verification.
Verification of ANSYS Fluent and OpenFOAM CFD platforms for prediction of impact flow
NASA Astrophysics Data System (ADS)
Tisovská, Petra; Peukert, Pavel; Kolář, Jan
The main goal of the article is a verification of the heat transfer coefficient numerically predicted by two CFD platforms, ANSYS Fluent and OpenFOAM, on the problem of impact flow issuing from a 2D nozzle. Various mesh parameters and solver settings were tested under several boundary conditions and compared to known experimental results. The best solver setting, suitable for further optimization of more complex geometry, is evaluated.
Dynamic analysis for shuttle design verification
NASA Technical Reports Server (NTRS)
Fralich, R. W.; Green, C. E.; Rheinfurth, M. H.
1972-01-01
Two approaches that are used for determining the modes and frequencies of space shuttle structures are discussed. The first method, direct numerical analysis, involves finite element mathematical modeling of the space shuttle structure in order to use computer programs for dynamic structural analysis. The second method utilizes modal-coupling techniques, with experimental verification performed by vibrating only spacecraft components and deducing the modes and frequencies of the complete vehicle from the results obtained in the component tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dartevelle, Sebastian
2007-10-01
Large-scale volcanic eruptions are hazardous events that cannot be described by detailed and accurate in situ measurement: hence, little to no real-time data exists to rigorously validate current computer models of these events. In addition, such phenomenology involves highly complex, nonlinear, and unsteady physical behaviors upon many spatial and time scales. As a result, volcanic explosive phenomenology is poorly understood in terms of its physics, and inadequately constrained in terms of initial, boundary, and inflow conditions. Nevertheless, code verification and validation become even more critical because more and more volcanologists use numerical data for assessment and mitigation of volcanic hazards. In this report, we evaluate the process of model and code development in the context of geophysical multiphase flows. We describe: (1) the conception of a theoretical, multiphase, Navier-Stokes model, (2) its implementation into a numerical code, (3) the verification of the code, and (4) the validation of such a model within the context of turbulent and underexpanded jet physics. Within the validation framework, we suggest focusing on the key physics that control the volcanic clouds, namely, the momentum-driven supersonic jet and the buoyancy-driven turbulent plume. For instance, we propose to compare numerical results against a set of simple and well-constrained analog experiments, which uniquely and unambiguously represent each of the key phenomenologies.
NASA Astrophysics Data System (ADS)
Chen, Peng; Liu, Yuwei; Gao, Bingkun; Jiang, Chunlei
2018-03-01
A semiconductor laser with a two-external-cavity feedback structure is investigated and analyzed for the laser self-mixing interference (SMI) phenomenon. An SMI model with two directions, based on the F-P cavity, is deduced, and numerical simulation and experimental verification were conducted. Experimental results show that the SMI with the two-external-cavity feedback structure under weak light feedback is similar to the sum of two SMIs.
Modeling interfacial fracture in Sierra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Arthur A.; Ohashi, Yuki; Lu, Wei-Yang
2013-09-01
This report summarizes computational efforts to model interfacial fracture using cohesive zone models in the SIERRA/SolidMechanics (SIERRA/SM) finite element code. Cohesive surface elements were used to model crack initiation and propagation along predefined paths. Mesh convergence was observed with SIERRA/SM for numerous geometries. As the funding for this project came from the Advanced Simulation and Computing Verification and Validation (ASC V&V) focus area, considerable effort was spent performing verification and validation. Code verification was performed to compare code predictions to analytical solutions for simple three-element simulations as well as a higher-fidelity simulation of a double-cantilever beam. Parameter identification was conducted with Dakota using experimental results on asymmetric double-cantilever beam (ADCB) and end-notched-flexure (ENF) experiments conducted under Campaign-6 funding. Discretization convergence studies were also performed with respect to mesh size and time step, and an optimization study was completed for mode II delamination using the ENF geometry. Throughout this verification process, numerous SIERRA/SM bugs were found and reported, all of which have been fixed, leading to over a 10-fold increase in convergence rates. Finally, mixed-mode flexure experiments were performed for validation. One of the unexplained issues encountered was material property variability for ostensibly the same composite material. Since the variability is not fully understood, it is difficult to accurately assess uncertainty when performing predictions.
Code and Solution Verification of 3D Numerical Modeling of Flow in the Gust Erosion Chamber
NASA Astrophysics Data System (ADS)
Yuen, A.; Bombardelli, F. A.
2014-12-01
Erosion microcosms are devices commonly used to investigate the erosion and transport characteristics of sediments at the bed of rivers, lakes, or estuaries. In order to understand the results these devices provide, the bed shear stress and flow field need to be accurately described. In this research, the UMCES Gust Erosion Microcosm System (U-GEMS) is numerically modeled using the Finite Volume Method. The primary aims are to simulate the bed shear stress distribution at the surface of the sediment core/bottom of the microcosm, and to verify whether the U-GEMS produces uniform bed shear stress at the bottom of the microcosm. The mathematical model equations are solved on a Cartesian non-uniform grid. Multiple numerical runs were developed with different input conditions and configurations. Prior to developing the U-GEMS model, the General Moving Objects (GMO) model and different momentum algorithms in the code were verified. Code verification of these solvers was done by simulating the flow inside the top-wall-driven square cavity on different mesh sizes to obtain the order of convergence. The GMO model was used to simulate the top wall in the top-wall-driven square cavity as well as the rotating disk in the U-GEMS. Components simulated with the GMO model were rigid bodies that could have any type of motion. In addition, cross-verification was conducted, with results compared against the numerical results of Ghia et al. (1982), and good agreement was found. Next, CFD results were validated by simulating the flow within the conventional microcosm system without suction and injection, and good agreement was found with the experimental results of Khalili et al. (2008). After the ability of the CFD solver was proved through the above code verification steps, the model was utilized to simulate the U-GEMS. The solution was verified via a classic mesh convergence study on four consecutive mesh sizes; in addition, the Grid Convergence Index (GCI) was calculated, and based on it the computational uncertainty was quantified. The numerical results reveal that the bed shear stress distribution for the U-GEMS model was not uniform. The mean and standard deviation of the bed shear stress for the U-GEMS model were 0.04 Pa and 0.019 Pa, respectively.
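A minimal sketch of the GCI computation mentioned above, using Roache's common formulation with a safety factor of 1.25 for three-grid studies; the input values are illustrative and are not the study's results:

```python
# Grid Convergence Index on the fine grid from a three-grid study with
# constant refinement ratio r.
import math

def gci_fine(f1, f2, f3, r, Fs=1.25):
    p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)  # observed order
    e21 = abs((f1 - f2) / f1)                               # relative fine-medium change
    return Fs * e21 / (r**p - 1.0), p

gci, p = gci_fine(f1=0.0401, f2=0.0398, f3=0.0386, r=2.0)
print(f"GCI ~ {100*gci:.3f}% at observed order ~ {p:.2f}")
```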
1989-07-01
Technical Report HL-89-14: Verification of the Hydrodynamic and Sediment Transport Hybrid Modeling System for Cumberland Sound and Kings Bay Navigation Channel, Georgia (author: Granat). Hydrodynamic results from RMA-2V were used in the numerical sediment transport code STUDH in modeling the interaction of the flow transport and…
NASA Astrophysics Data System (ADS)
Clempner, Julio B.
2017-01-01
This paper presents a novel analytical method for soundness verification of workflow nets and reset workflow nets, using the well-known Lyapunov stability results for Petri nets. We also prove that the soundness property is decidable for workflow nets and reset workflow nets. In addition, we provide evidence of several outcomes related to properties such as boundedness, liveness, reversibility and blocking using stability. Our approach is validated theoretically and by a numerical example related to traffic signal-control synchronisation.
NASA Astrophysics Data System (ADS)
Inochkin, F. M.; Kruglov, S. K.; Bronshtein, I. G.; Kompan, T. A.; Kondratjev, S. V.; Korenev, A. S.; Pukhov, N. F.
2017-06-01
A new method for precise subpixel edge estimation is presented. The principle of the method is iterative image approximation in 2D with subpixel accuracy until the appropriate simulated image is found, matching the simulated and acquired images. A numerical image model is presented consisting of three parts: an edge model, an object and background brightness distribution model, and a lens aberration model including diffraction. The optimal values of the model parameters are determined by means of conjugate-gradient numerical optimization of a merit function corresponding to the L2 distance between the acquired and simulated images. A computationally efficient procedure for the merit function calculation, along with a sufficient gradient approximation, is described. Subpixel-accuracy image simulation is performed in the Fourier domain with theoretically unlimited precision of edge point locations. The method is capable of compensating lens aberrations and obtaining the edge information with increased resolution. Experimental verification of the method, using a digital micromirror device to physically simulate an object with known edge geometry, is shown. Experimental results for various high-temperature materials within the temperature range of 1000°C to 2400°C are presented.
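A hedged 1D analogue of the fit-until-it-matches idea is sketched below: a blurred-step edge model is fitted to a simulated acquisition by conjugate-gradient minimization of the L2 merit function (the paper works in 2D and includes an aberration model; all parameters here are toy values):

```python
# 1D illustration: recover a subpixel edge location by fitting a blurred
# step model to noisy data with conjugate-gradient optimization.
import numpy as np
from scipy.special import erf
from scipy.optimize import minimize

x = np.arange(64, dtype=float)

def model(theta):
    edge, fg, bg, sigma = theta          # edge position, levels, blur width
    return bg + 0.5 * (fg - bg) * (1.0 + erf((x - edge) / (sigma * np.sqrt(2))))

rng = np.random.default_rng(3)
truth = np.array([31.37, 200.0, 20.0, 2.5])          # subpixel edge at 31.37
data = model(truth) + rng.normal(0.0, 1.0, x.size)   # simulated acquisition

merit = lambda theta: np.sum((model(theta) - data) ** 2)   # L2 distance
fit = minimize(merit, x0=[30.0, 180.0, 30.0, 2.0], method="CG")
print("estimated edge position:", fit.x[0])
```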
Verification and Trust: Background Investigations Preceding Faculty Appointment
ERIC Educational Resources Information Center
Finkin, Matthew W.; Post, Robert C.; Thomson, Judith J.
2004-01-01
Many employers in the United States have responded to the terrorist attacks of September 11, 2001, by initiating or expanding policies requiring background checks of prospective employees. Their ability to perform such checks has been abetted by the growth of computerized databases and of commercial enterprises that facilitate access to personal…
Verification and Trust: Background Investigations Preceding Faculty Appointment
ERIC Educational Resources Information Center
Academe, 2004
2004-01-01
Many employers in the United States have been initiating or expanding policies requiring background checks of prospective employees. The ability to perform such checks has been abetted by the growth of computerized databases and of commercial enterprises that facilitate access to personal information. Employers now have ready access to public…
Scott, Sarah Nicole; Templeton, Jeremy Alan; Hough, Patricia Diane; ...
2014-01-01
This study details a methodology for quantification of errors and uncertainties of a finite element heat transfer model applied to a Ruggedized Instrumentation Package (RIP). The proposed verification and validation (V&V) process includes solution verification to examine errors associated with the code's solution techniques, and model validation to assess the model's predictive capability for quantities of interest. The model was subjected to mesh resolution and numerical parameter sensitivity studies to determine reasonable parameter values and to understand how they change the overall model response and performance criteria. To facilitate quantification of the uncertainty associated with the mesh, automatic meshing and mesh refining/coarsening algorithms were created and implemented on the complex geometry of the RIP. Automated software to vary model inputs was also developed to determine the solution's sensitivity to numerical and physical parameters. The model was compared with an experiment to demonstrate its accuracy and determine the importance of both modelled and unmodelled physics in quantifying the results' uncertainty. An emphasis is placed on automating the V&V process to enable uncertainty quantification within tight development schedules.
Formal Verification at System Level
NASA Astrophysics Data System (ADS)
Mazzini, S.; Puri, S.; Mari, F.; Melatti, I.; Tronci, E.
2009-05-01
System Level Analysis calls for a language comprehensible to experts with different backgrounds and yet precise enough to support meaningful analyses. SysML is emerging as an effective balance between such conflicting goals. In this paper we outline some of the results obtained on SysML-based system-level functional formal verification by an ESA/ESTEC study, carried out in collaboration between INTECS and La Sapienza University of Rome. The study focuses on SysML-based system-level functional requirements techniques.
NASA Astrophysics Data System (ADS)
Yang, Wenming; Wang, Pengkai; Hao, Ruican; Ma, Buchuan
2017-03-01
Analytical and numerical calculation methods for the radial magnetic levitation force on cylindrical magnets in cylindrical vessels filled with ferrofluid are reviewed. An experimental apparatus to measure this force was designed and built, capable of measuring forces in a range of 0-2.0 N with an accuracy of 0.001 N. After calibration, this apparatus was used to study the radial magnetic levitation force experimentally. The results showed that the numerical method overestimates this force, while the analytical ones underestimate it. The maximum deviation between the numerical results and the experimental ones was 18.5%, while that between the experimental results and the analytical ones reached 68.5%. The latter deviation narrowed with the lengthening of the magnets. With the aid of the experimental verification of the radial magnetic levitation force, the effect of the eccentric distance of the magnets on the viscous energy dissipation in ferrofluid dampers could be assessed. It was shown that neglecting the eccentricity of the magnets during the estimation can overestimate the viscous dissipation in ferrofluid dampers.
NASA Astrophysics Data System (ADS)
Remy, Samuel; Benedetti, Angela; Jones, Luke; Razinger, Miha; Haiden, Thomas
2014-05-01
The WMO-sponsored Working Group on Numerical Experimentation (WGNE) set up a project aimed at understanding the importance of aerosols for numerical weather prediction (NWP). Three cases are being investigated by several NWP centres with aerosol capabilities: a severe dust case that affected Southern Europe in April 2012, a biomass burning case in South America in September 2012, and an extreme pollution event in Beijing (China) which took place in January 2013. At ECMWF these cases are being studied using the MACC-II system with radiatively interactive aerosols. Some preliminary results related to the dust and fire events will be presented here. A preliminary verification of the impact of the aerosol-radiation direct interaction on surface meteorological parameters, such as 2-m temperature and surface winds over the region of interest, will be presented. Aerosol optical depth (AOD) verification using AERONET data will also be discussed. For the biomass burning case, the impact of using injection heights estimated by a Plume Rise Model (PRM) for the biomass burning emissions will be presented.
Modeling tidal hydrodynamics of San Diego Bay, California
Wang, P.-F.; Cheng, R.T.; Richter, K.; Gross, E.S.; Sutton, D.; Gartner, J.W.
1998-01-01
In 1983, current data were collected by the National Oceanic and Atmospheric Administration using mechanical current meters. During 1992 through 1996, acoustic Doppler current profilers as well as mechanical current meters and tide gauges were used. These measurements not only document tides and tidal currents in San Diego Bay, but also provide independent data sets for model calibration and verification. A high-resolution (100-m grid), depth-averaged, numerical hydrodynamic model has been implemented for San Diego Bay to describe essential tidal hydrodynamic processes in the bay. The model is calibrated using the 1983 data set and verified using the more recent 1992-1996 data. Discrepancies between model predictions and field data in both model calibration and verification are on the order of the magnitude of uncertainties in the field data. The calibrated and verified numerical model has been used to quantify residence time and dilution and flushing of contaminant effluent into San Diego Bay. Furthermore, the numerical model has become an important research tool in ongoing hydrodynamic and water quality studies and in guiding future field data collection programs.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
NASA Astrophysics Data System (ADS)
Miedzinska, Danuta; Boczkowska, Anna; Zubko, Konrad
2010-07-01
In the article a method of numerical verification of experimental results for magnetorheological elastomer (MRE) samples is presented. The samples were shaped into cylinders with a diameter of 8 mm and a height of 20 mm, with various carbonyl iron volume shares (1.5%, 11.5% and 33%). The diameter of the soft ferromagnetic particles ranged from 6 to 9 μm. During the experiment, initially bent samples were exposed to magnetic fields with intensity levels of 0.1 T, 0.3 T, 0.5 T, 0.7 T and 1 T. The reaction of the sample to the field was measured as the displacement of the specimen. The numerical calculation was carried out with the MSC Patran/Marc computer code. For the purpose of the numerical analysis, an orthotropic material model was applied, with the material properties of the magnetorheological elastomer along the iron chains and of the pure elastomer along the other directions. The material properties were obtained from the experimental tests. During the numerical analysis, the initial mechanical load resulting from cylinder deflection was set. Then the equivalent external force, set on the basis of analytical calculations of the intermolecular reaction within the iron chains in the specific magnetic field, was applied to the bent sample. The correspondence of this numerical model with the results of the experiment was verified. The similar results of the experiments and of both the theoretical and FEM analyses indicate that macroscopic modeling of the magnetorheological elastomer's mechanical properties as an orthotropic material delivers an accurate enough description of the material's behavior.
Expert system verification and validation study: ES V/V Workshop
NASA Technical Reports Server (NTRS)
French, Scott; Hamilton, David
1992-01-01
The primary purpose of this document is to build a foundation for applying principles of verification and validation (V&V) to expert systems. To achieve this, some background on V&V as applied to conventionally implemented software is required. Part one will discuss the background of V&V from the perspective of (1) what V&V of software is and (2) V&V's role in developing software. Part one will also overview some common analysis techniques that are applied when performing V&V of software. All of these materials will be presented on the assumption that the reader has little or no background in V&V or in developing procedural software. The primary purpose of part two is to explain the major techniques that have been developed for V&V of expert systems.
EPA'S ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM
The U.S. Environmental Protection Agency (EPA) has evaluated technologies to determine their effectiveness in monitoring, preventing, controlling, and cleaning up pollution. Since the early 1990s, however, numerous government and private groups have determined that the lack of a...
WEC3: Wave Energy Converter Code Comparison Project: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Combourieu, Adrien; Lawson, Michael; Babarit, Aurelien
This paper describes the recently launched Wave Energy Converter Code Comparison (WEC3) project and presents preliminary results from this effort. The objectives of WEC3 are to verify and validate numerical modelling tools that have been developed specifically to simulate wave energy conversion devices and to inform the upcoming IEA OES Annex VI Ocean Energy Modelling Verification and Validation project. WEC3 is divided into two phases. Phase I consists of code-to-code verification and Phase II entails code-to-experiment validation. WEC3 focuses on mid-fidelity codes that simulate WECs using time-domain multibody dynamics methods to model device motions and hydrodynamic coefficients to model hydrodynamic forces. Consequently, high-fidelity numerical modelling tools, such as Navier-Stokes computational fluid dynamics simulation, and simple frequency-domain modelling tools were not included in the WEC3 project.
Kim, Kimin; Park, Jong-Kyu; Boozer, Allen H
2013-05-03
This Letter presents the first numerical verification of the bounce-harmonic (BH) resonance phenomena of neoclassical transport in a tokamak perturbed by nonaxisymmetric magnetic fields. The BH resonances were predicted by analytic theories of neoclassical toroidal viscosity (NTV), as the parallel and perpendicular drift motions can be resonant and result in a great enhancement of the radial momentum transport. A new drift-kinetic δf guiding-center particle code, POCA, clearly verified that the perpendicular drift motions can reduce the transport by phase-mixing, but in the BH resonances the motions can form closed orbits and particles drift out radially fast. The POCA calculations of the resulting NTV torque are largely consistent with analytic calculations, and show that the BH resonances can easily dominate the NTV torque when a plasma rotates in the perturbed tokamak; this is therefore critical physics for predicting the rotation and stability in the International Thermonuclear Experimental Reactor.
Exploration of Uncertainty in Glacier Modelling
NASA Technical Reports Server (NTRS)
Thompson, David E.
1999-01-01
There are procedures and methods for verification of coding algebra and for validations of models and calculations that are in use in the aerospace computational fluid dynamics (CFD) community. These methods would be efficacious if used by the glacier dynamics modelling community. This paper is a presentation of some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modelling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modelling community, and establishes a context for these within overall solution quality assessment. Finally, an information architecture and interactive interface is introduced and advocated. This Integrated Cryospheric Exploration (ICE) Environment is proposed for exploring and managing sources of uncertainty in glacier modelling codes and methods, and for supporting scientific numerical exploration and verification. The details and functionality of this Environment are described based on modifications of a system already developed for CFD modelling and analysis.
Numerical Simulation of the Fluid-Structure Interaction of a Surface Effect Ship Bow Seal
NASA Astrophysics Data System (ADS)
Bloxom, Andrew L.
Numerical simulations of fluid-structure interaction (FSI) problems were performed in an effort to verify and validate a commercially available FSI tool. This tool uses an iterative partitioned coupling scheme between CD-adapco's STAR-CCM+ finite volume fluid solver and Simulia's Abaqus finite element structural solver to simulate the FSI response of a system. Preliminary verification and validation (V&V) work was carried out to understand the numerical behavior of the codes individually and together as an FSI tool. The V&V work completed included code order verification of the respective fluid and structural solvers with Couette-Poiseuille flow and Euler-Bernoulli beam theory. These results confirmed the 2nd-order accuracy of the spatial discretizations used. Following that, a mixture of solution verifications and model calibrations was performed with the inclusion of the physics models implemented in the solution of the FSI problems. Solution verifications were completed for stand-alone fluid and structural models as well as for the coupled FSI solutions. These results re-confirmed the spatial order of accuracy, but for more complex flows and physics models, as well as the order of accuracy of the temporal discretizations. In lieu of a good material definition, model calibration is performed to reproduce the experimental results. This work used model calibration for both instances of hyperelastic materials which were presented in the literature as validation cases, because those materials had been defined only as linear elastic. Calibrated, three-dimensional models of the bow seal on the University of Michigan bow seal test platform showed the ability to reproduce the experimental results qualitatively through averaging of the forces and seal displacements. These simulations represent the only current 3D results for this case. One significant result of this study is the ability to visualize the flow around the seal and to directly measure the seal resistances at varying cushion pressures, seal immersions, forward speeds, and different seal materials. SES design analysis could greatly benefit from the inclusion of flexible seals in simulations, and this work is a positive step in that direction. In future work, the inclusion of more complex seal geometries and contact will further enhance the capability of this tool.
Automatic extraction of numeric strings in unconstrained handwritten document images
NASA Astrophysics Data System (ADS)
Haji, M. Mehdi; Bui, Tien D.; Suen, Ching Y.
2011-01-01
Numeric strings such as identification numbers carry vital pieces of information in documents. In this paper, we present a novel algorithm for automatic extraction of numeric strings in unconstrained handwritten document images. The algorithm has two main phases: pruning and verification. In the pruning phase, the algorithm first performs a new segment-merge procedure on each text line, and then using a new regularity measure, it prunes all sequences of characters that are unlikely to be numeric strings. The segment-merge procedure is composed of two modules: a new explicit character segmentation algorithm which is based on analysis of skeletal graphs and a merging algorithm which is based on graph partitioning. All the candidate sequences that pass the pruning phase are sent to a recognition-based verification phase for the final decision. The recognition is based on a coarse-to-fine approach using probabilistic RBF networks. We developed our algorithm for the processing of real-world documents where letters and digits may be connected or broken in a document. The effectiveness of the proposed approach is shown by extensive experiments done on a real-world database of 607 documents which contains handwritten, machine-printed and mixed documents with different types of layouts and levels of noise.
Numerical Weather Predictions Evaluation Using Spatial Verification Methods
NASA Astrophysics Data System (ADS)
Tegoulias, I.; Pytharoulis, I.; Kotsopoulos, S.; Kartsios, S.; Bampzelis, D.; Karacostas, T.
2014-12-01
During recent years, high-resolution numerical weather prediction simulations have been used to examine meteorological events with increased convective activity. Traditional verification methods do not provide the desired level of information to evaluate those high-resolution simulations. To address those limitations, new spatial verification methods have been proposed. In the present study an attempt is made to estimate the ability of the WRF model (WRF-ARW ver3.5.1) to reproduce selected days with high convective activity during the year 2010 using those feature-based verification methods. Three model domains, covering Europe, the Mediterranean Sea and northern Africa (d01), the wider area of Greece (d02) and central Greece - Thessaly region (d03), are used at horizontal grid spacings of 15 km, 5 km and 1 km, respectively. By alternating microphysics (Ferrier, WSM6, Goddard), boundary layer (YSU, MYJ) and cumulus convection (Kain-Fritsch, BMJ) schemes, a set of twelve model setups is obtained. The results of those simulations are evaluated against data obtained using a C-band (5 cm) radar located at the centre of the innermost domain. Spatial characteristics are well captured, but with a variable time lag between simulation results and radar data. Acknowledgements: This research is co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).
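As one concrete example of the family of spatial verification measures referred to above, the sketch below computes the neighborhood-based Fractions Skill Score on synthetic fields; whether this particular score was among those used in the study is not stated here:

```python
# Fractions Skill Score (FSS): compare fractional threshold exceedance within
# neighborhoods instead of demanding pixel-exact matches.
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, observed, threshold, window):
    fo = (forecast >= threshold).astype(float)
    ob = (observed >= threshold).astype(float)
    # Fraction of exceeding pixels within each neighborhood window.
    pf = uniform_filter(fo, size=window, mode="constant")
    po = uniform_filter(ob, size=window, mode="constant")
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

rng = np.random.default_rng(7)
obs = rng.gamma(2.0, 2.0, (100, 100))   # stand-in radar rain field
fcst = np.roll(obs, 5, axis=1)          # same field, displaced by 5 pixels
print("FSS:", fss(fcst, obs, threshold=8.0, window=11))
```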
NASA Astrophysics Data System (ADS)
Hueso-González, Fernando; Enghardt, Wolfgang; Fiedler, Fine; Golnik, Christian; Janssens, Guillaume; Petzoldt, Johannes; Prieels, Damien; Priegnitz, Marlen; Römer, Katja E.; Smeets, Julien; Vander Stappen, François; Wagner, Andreas; Pausch, Guntram
2015-08-01
Ion beam therapy promises enhanced tumour coverage compared to conventional radiotherapy, but particle range uncertainties significantly blunt the achievable precision. Experimental tools for range verification in real-time are not yet available in clinical routine. The prompt gamma ray timing method has been recently proposed as an alternative to collimated imaging systems. The detection times of prompt gamma rays encode essential information about the depth-dose profile thanks to the measurable transit time of ions through matter. In a collaboration between OncoRay, Helmholtz-Zentrum Dresden-Rossendorf and IBA, the first test at a clinical proton accelerator (Westdeutsches Protonentherapiezentrum Essen, Germany) with several detectors and phantoms is performed. The robustness of the method against background and stability of the beam bunch time profile is explored, and the bunch time spread is characterized for different proton energies. For a beam spot with a hundred million protons and a single detector, range differences of 5 mm in defined heterogeneous targets are identified by numerical comparison of the spectrum shape. For higher statistics, range shifts down to 2 mm are detectable. A proton bunch monitor, higher detector throughput and quantitative range retrieval are the upcoming steps towards a clinically applicable prototype. In conclusion, the experimental results highlight the prospects of this straightforward verification method at a clinical pencil beam and settle this novel approach as a promising alternative in the field of in vivo dosimetry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luke, S J
2011-12-20
This report describes a path forward for implementing information barriers in a future generic biological arms-control verification regime. Information barriers have become a staple of discussion in the area of arms control verification approaches for nuclear weapons and components. Information barriers, when used with a measurement system, allow for the determination that an item has sensitive characteristics without releasing any of the sensitive information. Over the last 15 years the United States (with the Russian Federation) has led on the development of information barriers in the area of the verification of nuclear weapons and nuclear components. The work of the US and the Russian Federation has prompted other states (e.g., UK and Norway) to consider the merits of information barriers for possible verification regimes. In the context of a biological weapons control verification regime, the dual-use nature of the biotechnology will require protection of sensitive information while allowing for the verification of treaty commitments. A major question that has arisen is whether, in a biological weapons verification regime, the presence or absence of a weapon pathogen can be determined without revealing any information about possible sensitive or proprietary information contained in the genetic materials being declared under a verification regime. This study indicates that a verification regime could be constructed using a small number of pathogens that spans the range of known biological weapons agents. Since the number of possible pathogens is small, it is possible and prudent to treat these pathogens as analogies to attributes in a nuclear verification regime. This study has determined that there may be some information that needs to be protected in a biological weapons control verification regime. To protect this information, the study concludes that the Lawrence Livermore Microbial Detection Array may be a suitable technology for the detection of the genetic information associated with the various pathogens. In addition, it has been determined that a suitable information barrier could be applied to this technology when the verification regime has been defined. Finally, the report posits a path forward for additional development of information barriers in a biological weapons verification regime. This path forward has shown that a new analysis approach, coined Information Loss Analysis, might need to be pursued so that a numerical understanding of how information can be lost in specific measurement systems can be achieved.
Verification of floating-point software
NASA Technical Reports Server (NTRS)
Hoover, Doug N.
1990-01-01
Floating point computation presents a number of problems for formal verification. Should one treat the actual details of floating point operations, or accept them as imprecisely defined, or should one ignore round-off error altogether and behave as if floating point operations are perfectly accurate? There is the further problem that a numerical algorithm usually only approximately computes some mathematical function, and we often do not know just how good the approximation is, even in the absence of round-off error. ORA has developed a theory of asymptotic correctness which allows one to verify floating point software with a minimum of entanglement in these problems. This theory and its implementation in the Ariel C verification system are described. The theory is illustrated using a simple program which finds a zero of a given function by bisection. This paper is presented in viewgraph form.
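A toy version of the bisection example, highlighting the weakened claim floating point forces on us: the loop cannot promise an exact zero, only a machine-resolvable bracket of a sign change (Python stands in for the verified code here; this sketch is not ORA's Ariel system):

```python
# Bisection with a floating-point-aware stopping rule: stop when the floats
# between a and b are exhausted, and return a bracket rather than "the" zero.
def bisect(f, a, b, eps=1e-12):
    fa = f(a)
    assert fa * f(b) <= 0.0, "need a sign change on [a, b]"
    while b - a > eps:
        m = a + (b - a) / 2.0      # avoids overflow of (a + b) / 2
        if m in (a, b):            # no representable float strictly between
            break
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return a, b                    # a bracket of a sign change, not an exact zero

lo, hi = bisect(lambda t: t * t - 2.0, 0.0, 2.0)
print(f"sqrt(2) is in [{lo!r}, {hi!r}]")
```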
NASA Technical Reports Server (NTRS)
Salazar, Giovanni; Droba, Justin C.; Oliver, Brandon; Amar, Adam J.
2016-01-01
With the recent development of multi-dimensional thermal protection system (TPS) material response codes, the capability to account for radiative heating is a requirement. This paper presents the recent efforts to implement such capabilities in the CHarring Ablator Response (CHAR) code developed at NASA's Johnson Space Center. This work also describes the different numerical methods implemented in the code to compute view factors for radiation problems involving multiple surfaces. Furthermore, verification and validation of the code's radiation capabilities are demonstrated by comparing solutions to analytical results, to other codes, and to radiant test data.
Photon-photon scattering at the high-intensity frontier
NASA Astrophysics Data System (ADS)
Gies, Holger; Karbstein, Felix; Kohlfürst, Christian; Seegert, Nico
2018-04-01
The tremendous progress in high-intensity laser technology and the establishment of dedicated high-field laboratories in recent years have paved the way towards a first observation of quantum vacuum nonlinearities at the high-intensity frontier. We advocate a particularly prospective scenario, where three synchronized high-intensity laser pulses are brought into collision, giving rise to signal photons whose frequency and propagation direction differ from the driving laser pulses, thus providing various means to achieve an excellent signal-to-background separation. Based on the theoretical concept of vacuum emission, we employ an efficient numerical algorithm which allows us to model the collision of focused high-intensity laser pulses in unprecedented detail. We provide accurate predictions for the numbers of signal photons accessible in experiment. Our study is the first to predict the precise angular spread of the signal photons, and paves the way for a first verification of quantum vacuum nonlinearity in a well-controlled laboratory experiment at one of the many high-intensity laser facilities currently coming online.
Numerical study on 3D composite morphing actuators
NASA Astrophysics Data System (ADS)
Oishi, Kazuma; Saito, Makoto; Anandan, Nishita; Kadooka, Kevin; Taya, Minoru
2015-04-01
A number of actuators exploit the deformation of electroactive polymers (EAPs), yet few papers have focused on the performance of 3D morphing actuators from an analytical standpoint, due mainly to their complexity. The present paper introduces a numerical analysis of the large-scale deformation and motion of a 3D half-dome-shaped actuator composed of a thin soft membrane (passive material) and EAP strip actuators (EAP active coupons with electrodes on both surfaces), where the locations of the active EAP strips are a key parameter. The Simulia/Abaqus Static and Implicit analysis code, whose main feature is high-precision contact analysis among structures, is used to follow the whole process of the membrane touching and wrapping around the object. The unidirectional properties of the EAP coupon actuator are used as the material-property input data set for the simulation and for the verification of our numerical model, where the verification is made by comparison with the existing 2D solution. The numerical results demonstrate the whole deformation process of the membrane wrapping around not only smoothly shaped objects such as a sphere or an egg, but also irregularly shaped objects. A parametric study reveals the proper placement of the EAP coupon actuators, with modification of the dome shape to induce the relevant large-scale deformation. The numerical simulation for the 3D soft actuators shown in this paper could be applied to a wider range of soft 3D morphing actuators.
Verification and benchmark testing of the NUFT computer code
NASA Astrophysics Data System (ADS)
Lee, K. H.; Nitao, J. J.; Kulshrestha, A.
1993-10-01
This interim report presents results of work completed in the ongoing verification and benchmark testing of the NUFT (Nonisothermal Unsaturated-saturated Flow and Transport) computer code. NUFT is a suite of multiphase, multicomponent models for numerical solution of nonisothermal and isothermal flow and transport in porous media, with application to subsurface contaminant transport problems. The code simulates the coupled transport of heat, fluids, and chemical components, including volatile organic compounds. Grid systems may be Cartesian or cylindrical, with one-, two-, or fully three-dimensional configurations possible. In this initial phase of testing, the NUFT code was used to solve seven one-dimensional unsaturated flow and heat transfer problems. Three verification and four benchmarking problems were solved. In the verification testing, excellent agreement was observed between NUFT results and the analytical or quasi-analytical solutions. In the benchmark testing, results of the code intercomparison were very satisfactory. From these testing results, it is concluded that the NUFT code is ready for application to field and laboratory problems similar to those addressed here. Multidimensional problems, including those dealing with chemical transport, will be addressed in a subsequent report.
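As a concrete illustration of this style of verification testing (a generic sketch, not NUFT itself), one can march a one-dimensional conduction problem forward in time and compare against a closed-form solution, here the semi-infinite-slab result T = erfc(x / (2*sqrt(alpha*t))):

    import numpy as np
    from scipy.special import erfc

    # Solve T_t = alpha * T_xx with T(0,t) = 1, T(x,0) = 0 by explicit
    # finite differences, then compare to the analytical solution.
    alpha, L, nx = 1.0e-6, 0.1, 201        # diffusivity (m^2/s), length (m), nodes
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / alpha               # stable explicit time step
    T = np.zeros(nx)
    T[0] = 1.0
    t = 0.0
    while t < 300.0:                       # march to t = 300 s
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        t += dt
    x = np.linspace(0.0, L, nx)
    exact = erfc(x / (2.0 * np.sqrt(alpha * t)))
    print("max |numerical - analytical| =", np.abs(T - exact).max())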
NASA Astrophysics Data System (ADS)
Nardi, Albert; Idiart, Andrés; Trinchero, Paolo; de Vries, Luis Manuel; Molinero, Jorge
2014-08-01
This paper presents the development, verification and application of an efficient interface, denoted iCP, which couples two standalone simulation programs: the general-purpose finite element framework COMSOL Multiphysics® and the geochemical simulator PHREEQC. The main goal of the interface is to maximize the synergies between the aforementioned codes, providing a numerical platform that can efficiently simulate a wide range of multiphysics problems coupled with geochemistry. iCP is written in Java and uses the IPhreeqc C++ dynamic library and the COMSOL Java-API. Given the large computational requirements of such coupled models, special emphasis has been placed on numerical robustness and efficiency. To this end, the geochemical reactions are solved in parallel by balancing the computational load over multiple threads. First, a benchmark exercise is used to test the reliability of iCP regarding flow and reactive transport. Then, a large-scale thermo-hydro-chemical (THC) problem is solved to show the code capabilities. The results of the verification exercise compare successfully with those obtained using PHREEQC, and the application case demonstrates the scalability of a large-scale model, at least up to 32 threads.
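The load-balancing idea is straightforward to sketch. Below is a hypothetical illustration in Python (the real interface is Java calling the IPhreeqc C++ library; the function names here are invented): at each transport step the per-cell chemistry problems are independent, so they can be distributed over a pool of worker threads.

    from concurrent.futures import ThreadPoolExecutor

    def solve_cell_chemistry(cell):
        # placeholder standing in for a call into a geochemical solver
        # such as IPhreeqc; equilibrates one cell's chemical state
        return {"cell": cell["id"], "ph": 7.0}

    def react_all_cells(cells, n_threads=8):
        # distribute the independent per-cell chemistry problems over threads
        with ThreadPoolExecutor(max_workers=n_threads) as pool:
            return list(pool.map(solve_cell_chemistry, cells))

    cells = [{"id": i} for i in range(1000)]
    print(len(react_all_cells(cells)), "cells reacted")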
Numerical Simulations For the F-16XL Aircraft Configuration
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa A.; Abdol-Hamid, Khaled; Cavallo, Peter A.; Parlette, Edward B.
2014-01-01
Numerical simulations of flow around the F-16XL are presented as a contribution to the Cranked Arrow Wing Aerodynamic Project International II (CAWAPI-II). The NASA Tetrahedral Unstructured Software System (TetrUSS) is used to perform the numerical simulations. This CFD suite, developed and maintained by NASA Langley Research Center, includes an unstructured grid generation program called VGRID, a postprocessor named POSTGRID, and the flow solver USM3D. The CRISP CFD package is utilized to provide error estimates and grid adaption for verification of USM3D results. A subsonic, high angle-of-attack case, flight condition (FC) 25, is computed and analyzed. Three turbulence models are used in the calculations: the one-equation Spalart-Allmaras (SA), the two-equation shear stress transport (SST), and the k-epsilon turbulence models. Computational results and surface static pressure profiles are presented and compared with flight data. Solution verification is performed using formal grid refinement studies, the solution of error transport equations, and adaptive mesh refinement. The current study shows that the USM3D solver coupled with CRISP CFD can be used in an engineering environment to predict vortex-flow physics on a complex configuration at flight Reynolds numbers.
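Formal grid refinement studies of the kind mentioned here typically report an observed order of accuracy and a Richardson-extrapolated value. A minimal sketch, with illustrative values only (not data from the F-16XL study):

    import math

    # Given a quantity of interest on three systematically refined grids with
    # constant refinement ratio r (f1 = finest), estimate the observed order
    # of accuracy and an extrapolated "grid-free" value.
    def observed_order(f1, f2, f3, r):
        return math.log((f3 - f2) / (f2 - f1)) / math.log(r)

    def richardson(f1, f2, p, r):
        return f1 + (f1 - f2) / (r**p - 1.0)

    f1, f2, f3, r = 0.9712, 0.9682, 0.9563, 2.0   # illustrative values only
    p = observed_order(f1, f2, f3, r)
    print("observed order p =", p)
    print("extrapolated value =", richardson(f1, f2, p, r))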
Evaluating shallow-flow rock structures as scour countermeasures at bridges.
DOT National Transportation Integrated Search
2009-12-01
A study to determine whether shallow-flow rock structures could reliably be used at bridge abutments in place of riprap. Research was conducted in a two-phase effort beginning with numerical modeling and ending with field verification of model...
Synesthesia affects verification of simple arithmetic equations.
Ghirardelli, Thomas G; Mills, Carol Bergfeld; Zilioli, Monica K C; Bailey, Leah P; Kretschmar, Paige K
2010-01-01
To investigate the effects of color-digit synesthesia on numerical representation, we presented a synesthete (referred to here as SE) and controls with mathematical equations for verification. In Experiment 1, SE verified addition equations made up of digits that either matched or mismatched her color-digit photisms or were in black. In Experiment 2A, the addends were presented in the different color conditions and the solution was presented in black, whereas in Experiment 2B the addends were presented in black and the solutions were presented in the different color conditions. In Experiment 3, multiplication and division equations were presented in the same color conditions as in Experiment 1. SE responded significantly faster to equations that matched her photisms than to those that did not; controls did not show this effect. These results suggest that photisms influence the processing of digits in arithmetic verification, replicating and extending previous findings.
Study of the penetration of a plate made of titanium alloy VT6 with a steel ball
NASA Astrophysics Data System (ADS)
Buzyurkin, A. E.
2018-03-01
The purpose of this work is the development and verification of mathematical relationships, adapted to the LS-DYNA finite element analysis package, that describe the deformation and destruction of a titanium plate in a high-speed collision. Using data from experiments on the interaction of a steel ball with a plate made of titanium alloy VT6, the available constants needed to describe the behavior of the material with the Johnson-Cook relationships were verified, as were the parameters of the fracture model used in the numerical modeling of the collision process. An analysis of experimental data on the interaction of a spherical impactor with a plate showed that the first-approximation deformation-hardening data accepted for the VT6 alloy in the Johnson-Cook model overpredict the residual velocities of the impactor when it pierces the plate.
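For reference, the standard Johnson-Cook flow-stress and failure-strain forms being calibrated here are (quoted in their usual textbook notation; the VT6 constants themselves are the subject of the study and are not reproduced):

    \sigma_y = \bigl(A + B\,\varepsilon_p^{\,n}\bigr)
               \Bigl(1 + C\,\ln\tfrac{\dot\varepsilon}{\dot\varepsilon_0}\Bigr)
               \bigl(1 - {T^*}^{\,m}\bigr),
    \qquad
    T^* = \frac{T - T_{room}}{T_{melt} - T_{room}},

    \varepsilon_f = \bigl[D_1 + D_2\,e^{D_3\sigma^*}\bigr]
                    \Bigl[1 + D_4\,\ln\tfrac{\dot\varepsilon}{\dot\varepsilon_0}\Bigr]
                    \bigl[1 + D_5\,T^*\bigr],

where \varepsilon_p is the equivalent plastic strain, \dot\varepsilon/\dot\varepsilon_0 the normalized strain rate, and \sigma^* the stress triaxiality. Overpredicted residual velocities then point to the hardening terms (A, B, n) being too weak or the failure constants too permissive for VT6.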
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.; Nwadike, E. V.
1982-01-01
The six-volume report describes the theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth, e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth (e.g., natural or man-made inland lakes) because surface elevation has been removed as a parameter. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions.
Five-equation and robust three-equation methods for solution verification of large eddy simulation
NASA Astrophysics Data System (ADS)
Dutta, Rabijit; Xing, Tao
2018-02-01
This study evaluates the recently developed general framework of solution verification methods for large eddy simulation (LES), using implicitly filtered LES of periodic channel flow at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and can therefore be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows monotonic convergence of the predicted numerical benchmark (S_C) and provides realistic error estimates without the need to fix the orders of accuracy of either the numerical or the modeling errors. Based on the results from the seven-equation and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. The new three-equation method proved robust, as it can be applied to any convergence type and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests that error cancellation plays an essential role in LES. When a Reynolds-averaged Navier-Stokes (RANS) based error estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes, but it predicts S_C reasonably when the grids resolve at least 80% of the total turbulent kinetic energy.
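Schematically (a sketch consistent with the framework described, not the paper's exact notation), the five-equation method treats each grid solution S_i as

    S_i = S_C + c_{num}\, h_i^{\,p_{num}} + c_{mod}\, \Delta_i^{\,p_{mod}},
    \qquad \Delta_i \propto h_i \quad \text{(implicit filtering)},

so each of five systematically refined grids supplies one equation in the five unknowns (S_C, c_num, p_num, c_mod, p_mod); fixing the two orders of accuracy a priori reduces the system to a three-equation method.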
An Investigation into Solution Verification for CFD-DEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fullmer, William D.; Musser, Jordan
This report presents a study of the convergence behavior of the computational fluid dynamics-discrete element method (CFD-DEM), specifically the National Energy Technology Laboratory's (NETL) open-source MFiX code (MFiX-DEM) with a diffusion-based particle-to-continuum filtering scheme. In particular, this study focused on determining whether the numerical method has a solution in the high-resolution limit where the grid size is smaller than the particle size. To address this uncertainty, fixed particle beds of two primary configurations were studied: i) fictitious beds where the particles are seeded with a random particle generator, and ii) instantaneous snapshots from a transient simulation of an experimentally relevant problem. Both problems considered a uniform inlet boundary and a pressure outflow. The CFD grid was refined from a few particle diameters down to 1/6th of a particle diameter. The pressure drop between two vertical elevations, averaged across the bed cross-section, was considered as the system response quantity of interest. A least-squares regression method was used to extrapolate the grid-dependent results to an approximate "grid-free" solution in the limit of infinite resolution. The results show that the diffusion-based scheme does yield a converging solution. However, the convergence is more complicated than encountered in simpler, single-phase flow problems, showing strong oscillations and, at times, oscillations superimposed on globally non-monotonic behavior. The challenging convergence behavior highlights the importance of using at least four grid resolutions in solution verification problems so that (over-determined) regression-based extrapolation methods may be applied to approximate the grid-free solution. The grid-free solution is very important in solution verification and VVUQ exercises in general, as the difference between it and the reference solution largely determines the numerical uncertainty. By testing different randomized particle configurations of the same general problem (for the fictitious case) or different instances of freezing a transient simulation, the numerical uncertainties appeared to be of the same order of magnitude as ensemble or time-averaging uncertainties. By testing different drag laws, almost all cases studied showed that the model form uncertainty in this one very important closure relation was larger than the numerical uncertainty, at least with a reasonable CFD grid of roughly five particle diameters. In this study, the diffusion width (filtering length scale) was mostly set at a constant of six particle diameters. A few exploratory tests showed that similar convergence behavior was observed for diffusion widths greater than approximately two particle diameters. However, this subject was not investigated in great detail because determining an appropriate filter size is really a validation question, which must be settled by comparison to experimental or highly accurate numerical data. Future studies are being considered targeting solution verification of transient simulations as well as validation of the filter size with direct numerical simulation data.
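The over-determined, regression-based extrapolation described above can be sketched in a few lines (illustrative numbers, not MFiX-DEM data): fit S(h) = S0 + a*h^p to results from four or more resolutions and read off S0 as the grid-free estimate.

    import numpy as np
    from scipy.optimize import curve_fit

    def model(h, s0, a, p):
        # grid-dependent response: grid-free value plus a power-law error term
        return s0 + a * h**p

    h = np.array([5.0, 2.0, 1.0, 0.5, 0.25])        # grid size / particle diameter
    S = np.array([102.3, 98.9, 97.8, 97.3, 97.1])   # illustrative pressure drops
    (s0, a, p), _ = curve_fit(model, h, S, p0=[97.0, 1.0, 1.0], maxfev=10000)
    print(f"grid-free estimate S0 = {s0:.2f}, observed order p = {p:.2f}")

With more grids than unknowns, the regression also exposes the oscillatory, non-monotonic convergence the report describes, which a three-point Richardson extrapolation would silently misread.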
Computer Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pronskikh, V. S.
2014-05-09
Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction needs to be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them). Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.
ETV - HOMELAND SECURITY EVALUATION OF CYANIDE DETECTORS
EPA's Environmental Technology Verification (ETV) Program was established in 1995 to objectively verify the performance of technologies that measure and monitor the quality of our environment, both for background conditions and at suspected contamination sites. The ETV program has established...
2009-01-01
Background The study of biological networks has led to the development of increasingly large and detailed models. Computer tools are essential for the simulation of the dynamical behavior of the networks from the model. However, as the size of the models grows, it becomes infeasible to manually verify the predictions against experimental data or identify interesting features in a large number of simulation traces. Formal verification based on temporal logic and model checking provides promising methods to automate and scale the analysis of the models. However, a framework that tightly integrates modeling and simulation tools with model checkers is currently missing, on both the conceptual and the implementational level. Results We have developed a generic and modular web service, based on a service-oriented architecture, for integrating the modeling and formal verification of genetic regulatory networks. The architecture has been implemented in the context of the qualitative modeling and simulation tool GNA and the model checkers NUSMV and CADP. GNA has been extended with a verification module for the specification and checking of biological properties. The verification module also allows the display and visual inspection of the verification results. Conclusions The practical use of the proposed web service is illustrated by means of a scenario involving the analysis of a qualitative model of the carbon starvation response in E. coli. The service-oriented architecture allows modelers to define the model and proceed with the specification and formal verification of the biological properties by means of a unified graphical user interface. This guarantees a transparent access to formal verification technology for modelers of genetic regulatory networks. PMID:20042075
NASA Astrophysics Data System (ADS)
Dartevelle, S.
2006-12-01
Large-scale volcanic eruptions are inherently hazardous events and hence cannot be described by detailed and accurate in situ measurements; as a result, volcanic explosive phenomenology is inadequately constrained in terms of initial and inflow conditions, and little to no real-time data exist to verify and validate the computer codes developed to model these geophysical events as a whole. Code verification and validation nevertheless remains a necessary step, particularly as volcanologists increasingly use numerical data for the mitigation of volcanic hazards. The verification and validation (V&V) process formally assesses the level of 'credibility' of numerical results produced within a range of specific applications. The first step, verification, is 'the process of determining that a model implementation accurately represents the conceptual description of the model', which requires either exact analytical solutions or highly accurate simplified experimental data. The second step, validation, is 'the process of determining the degree to which a model is an accurate representation of the real world', which requires complex experimental data of the 'real world' physics. The verification step is rather simple to achieve formally, while, in the 'real world' explosive volcanism context, validation is all but impossible. Hence, instead of validating a computer code against the whole large-scale, unconstrained volcanic phenomenology, we suggest focusing on the key physics which control these volcanic clouds, viz., momentum-driven supersonic jets and multiphase turbulence. We propose to compare numerical results against a set of simple but well-constrained analog experiments, which uniquely and unambiguously represent these two key phenomenologies separately. Herewith, we use GMFIX (Geophysical Multiphase Flow with Interphase eXchange, v1.62), a set of multiphase-CFD FORTRAN codes, which have recently been redeveloped to meet the strict quality assurance, verification, and validation requirements of the Office of Civilian Radioactive Waste Management of the US Dept of Energy. GMFIX solves the Navier-Stokes and energy partial differential equations for each phase with appropriate turbulence and interfacial coupling between phases. For momentum-driven single- to multi-phase underexpanded jets, the position of the first Mach disk is known empirically as a function of both the pressure ratio, K, and the particle mass fraction, Phi, at the nozzle: the higher K, the further downstream the Mach disk, and the higher Phi, the further upstream the first Mach disk. We show that GMFIX captures these two essential features. In addition, GMFIX displays all the properties found in these jets, such as expansion fans, incident and reflected shocks, and subsequent downstream Mach disks, which make this code ideal for further investigations of equivalent volcanological phenomena. One of the other most challenging aspects of volcanic phenomenology is the multiphase nature of turbulence. We also validated GMFIX by comparing velocity profiles and turbulence quantities against well-constrained analog experiments: the computed velocity profiles agree with the analog ones, as do those of the production of turbulent quantities. Overall, the verification and validation experiments, although inherently challenging, suggest GMFIX captures the most essential dynamical properties of multiphase and supersonic flows and jets.
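For the single-phase limit, the empirical Mach-disk location referred to above is commonly written in the Ashkenas-Sherman form (quoted here as general background, not from the paper itself):

    \frac{x_M}{D} \;\approx\; 0.67\,\sqrt{K}, \qquad K = \frac{p_0}{p_\infty},

where D is the nozzle diameter, p_0 the nozzle stagnation pressure, and p_\infty the ambient pressure; a finite particle mass fraction Phi then shifts the disk upstream of this single-phase prediction, consistent with the trend described above.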
Toward Scientific Numerical Modeling
NASA Technical Reports Server (NTRS)
Kleb, Bil
2007-01-01
Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.
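The first proposal lends itself to a small sketch. Below is a hypothetical illustration (names and numbers invented, not from the paper) of documenting input parameter uncertainties in situ, alongside the values themselves, so sensitivity and uncertainty studies can be generated from the source directly:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Param:
        value: float
        plus_minus: float   # documented input uncertainty
        source: str         # provenance of the number

    # hypothetical model inputs carrying their uncertainties in situ
    wall_emissivity = Param(0.89, 0.03, "vendor datasheet")
    catalytic_efficiency = Param(0.70, 0.20, "arc-jet calibration fits")
    print(wall_emissivity, catalytic_efficiency, sep="\n")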
DOT National Transportation Integrated Search
2016-12-01
The objective of this project is to find effective configurations for using buckling restrained braces (BRBs) in both skewed and curved bridges for reducing the effects of strong earthquakes. Verification is performed by numerical simulation using an...
Comparison of OpenFOAM and EllipSys3D actuator line methods with (NEW) MEXICO results
NASA Astrophysics Data System (ADS)
Nathan, J.; Meyer Forsting, A. R.; Troldborg, N.; Masson, C.
2017-05-01
The Actuator Line Method has existed for more than a decade and has become a well-established choice for simulating wind rotors in computational fluid dynamics. Numerous implementations exist and are used in the wind energy research community. These codes have been validated against experimental data such as the MEXICO experiment, but comparison against other codes has often been made only on a very broad scale. Therefore this study first attempts a verification by comparing two different implementations, namely an adapted version of SOWFA/OpenFOAM and EllipSys3D, and then a validation against experimental results from the MEXICO and NEW MEXICO experiments.
Verification of the Icarus Material Response Tool
NASA Technical Reports Server (NTRS)
Schroeder, Olivia; Palmer, Grant; Stern, Eric; Schulz, Joseph; Muppidi, Suman; Martin, Alexandre
2017-01-01
Due to the complex physics encountered during reentry, material response solvers are used for two main purposes: to improve the understanding of the physical phenomena, and to design and size thermal protection systems (TPS). Icarus is a three-dimensional, unstructured material response tool that is intended to be used for design while maintaining the flexibility to easily implement physical models as needed. Because TPS selection and sizing is critical, it is of the utmost importance that design tools be extensively verified and validated before their use. Verification tests aim at ensuring that the numerical schemes and equations are implemented correctly, by comparison to analytical solutions and grid convergence tests.
Satake, S; Park, J-K; Sugama, H; Kanno, R
2011-07-29
Neoclassical toroidal viscosities (NTVs) in tokamaks are investigated using a δf Monte Carlo simulation, and are successfully verified against a combined analytic theory over a wide range of collisionality. A Monte Carlo simulation has been required in the study of NTV because the complexities in the guiding-center orbits of particles and their collisions cannot be fully investigated by analytic theories alone. The results yielded the details of the complex NTV dependence on particle precessions and collisions, which had been predicted only roughly by the combined analytic theory. Both the numerical and analytic methods can be utilized and extended based on these successful verifications.
Verification of chemistry reference ranges using a simple method in sub-Saharan Africa.
De Baetselier, Irith; Taylor, Douglas; Mandala, Justin; Nanda, Kavita; Van Campenhout, Christel; Agingu, Walter; Madurai, Lorna; Barsch, Eva-Maria; Deese, Jennifer; Van Damme, Lut; Crucitti, Tania
2016-01-01
Chemistry safety assessments are interpreted by using chemistry reference ranges (CRRs). Verification of CRRs is time consuming and often requires a statistical background. We report on an easy and cost-saving method to verify CRRs. Using a former method introduced by Sigma Diagnostics, three study sites in sub-Saharan Africa, Bondo, Kenya, and Pretoria and Bloemfontein, South Africa, verified the CRRs for hepatic and renal biochemistry assays performed during a clinical trial of HIV antiretroviral pre-exposure prophylaxis. The aspartate aminotransferase/alanine aminotransferase, creatinine and phosphorus results from 10 clinically-healthy participants at the screening visit were used. In the event the CRRs did not pass the verification, new CRRs had to be calculated based on 40 clinically-healthy participants. Within a few weeks, the study sites accomplished verification of the CRRs without additional costs. The aspartate aminotransferase reference ranges for the Bondo, Kenya site and the alanine aminotransferase reference ranges for the Pretoria, South Africa site required adjustment. The phosphorus CRR passed verification and the creatinine CRR required adjustment at every site. The newly-established CRR intervals were narrower than the CRRs used previously at these study sites due to decreases in the upper limits of the reference ranges. As a result, more toxicities were detected. To ensure the safety of clinical trial participants, verification of CRRs should be standard practice in clinical trials conducted in settings where the CRR has not been validated for the local population. This verification method is simple, inexpensive, and can be performed by any medical laboratory.
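The abstract does not spell out the acceptance rule, but verification methods of this family (e.g., the CLSI EP28 transference check) accept a reference interval when no more than a small number of results from healthy subjects fall outside it. A hedged sketch of such a rule:

    def verify_crr(results, low, high, max_outside=2):
        """Pass if at most max_outside of the healthy-subject results
        fall outside the candidate reference range [low, high]."""
        outside = [x for x in results if not (low <= x <= high)]
        return len(outside) <= max_outside, outside

    # ten illustrative AST results (U/L) from clinically healthy participants
    ast = [18, 22, 25, 31, 19, 27, 40, 21, 24, 29]
    ok, flagged = verify_crr(ast, low=10, high=35)
    print("verified" if ok else "recalculate CRR", "| outside range:", flagged)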
Code of Federal Regulations, 2012 CFR
2012-01-01
... check includes, at a minimum, a Federal Bureau of Investigation (FBI) criminal history records check (including verification of identity based on fingerprinting), employment history, education, and personal... fingerprinting and criminal history records checks before granting access to Safeguards Information. A background...
78 FR 39338 - Importer of Controlled Substances; Notice of Registration; Catalent CTS., Inc.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-01
... plans to import an ointment for the treatment of wounds, which contains trace amounts of the controlled..., verification of the company's compliance with state and local laws, and a review of the company's background...
The Multiple Doppler Radar Workshop, November 1979.
NASA Astrophysics Data System (ADS)
Carbone, R. E.; Harris, F. I.; Hildebrand, P. H.; Kropfli, R. A.; Miller, L. J.; Moninger, W.; Strauch, R. G.; Doviak, R. J.; Johnson, K. W.; Nelson, S. P.; Ray, P. S.; Gilet, M.
1980-10-01
The findings of the Multiple Doppler Radar Workshop are summarized by a series of six papers. Part I of this series briefly reviews the history of multiple Doppler experimentation, fundamental concepts of Doppler signal theory, and organization and objectives of the Workshop. Invited presentations by dynamicists and cloud physicists are also summarized.Experimental design and procedures (Part II) are shown to be of critical importance. Well-defined and limited experimental objectives are necessary in view of technological limitations. Specified radar scanning procedures that balance temporal and spatial resolution considerations are discussed in detail. Improved siting for suppression of ground clutter as well as scanning procedures to minimize errors at echo boundaries are discussed. The need for accelerated research using numerically simulated proxy data sets is emphasized.New technology to eliminate various sampling limitations is cited as an eventual solution to many current problems in Part III. Ground clutter contamination may be curtailed by means of full spectral processing, digital filters in real time, and/or variable pulse repetition frequency. Range and velocity ambiguities also may be minimized by various pulsing options as well as random phase transmission. Sidelobe contamination can be reduced through improvements in radomes, illumination patterns, and antenna feed types. Radar volume-scan time can be sharply reduced by means of wideband transmission, phased array antennas, multiple beam antennas, and frequency agility.Part IV deals with synthesis of data from several radars in the context of scientific requirements in cumulus clouds, widespread precipitation, and severe convective storms. The important temporal and spatial scales are examined together with the accuracy required for vertical air motion in each phenomenon. Factors that introduce errors in the vertical velocity field are identified and synthesis techniques are discussed separately for the dual Doppler and multiple Doppler cases. Various filters and techniques, including statistical and variational approaches, are mentioned. Emphasis is placed on the importance of experiment design and procedures, technological improvements, incorporation of all information from supporting sensors, and analysis priority for physically simple cases. Integrated reliability is proposed as an objective tool for radar siting.Verification of multiple Doppler-derived vertical velocity is discussed in Part V. Three categories of verification are defined as direct, deductive, and theoretical/numerical. Direct verification consists of zenith-pointing radar measurements (from either airborne or ground-based systems), air motion sensing aircraft, instrumented towers, and tracking of radar chaff. Deductive sources include mesonetworks, aircraft (thermodynamic and microphysical) measurements, satellite observations, radar reflectivity, multiple Doppler consistency, and atmospheric soundings. Theoretical/numerical sources of verification include proxy data simulation, momentum checking, and numerical cloud models. New technology, principally in the form of wide bandwidth radars, is seen as a development that may reduce the need for extensive verification of multiple Doppler-derived vertical air motions. Airborne Doppler radar is perceived as the single most important source of verification within the bounds of existing technology.Nine stages of data processing and display are identified in Part VI. 
The stages are identified as field checks, archival, selection, editing, coordinate transformation, synthesis of Cartesian fields, filtering, display, and physical analysis. Display of data is considered to be a problem critical to assimilation of data at all stages. Interactive computing systems and software are concluded to be very important, particularly for the editing stage. Three- and four-dimensional displays are considered essential for data assimilation, particularly at the physical analysis stage. The concept of common data tape formats is approved both for data in radar spherical space as well as for synthesized Cartesian output.
Horseshoes in a Chaotic System with Only One Stable Equilibrium
NASA Astrophysics Data System (ADS)
Huan, Songmei; Li, Qingdu; Yang, Xiao-Song
To confirm the numerically demonstrated chaotic behavior in a chaotic system with only one stable equilibrium reported by Wang and Chen, we resort to the Poincaré map technique and present a rigorous computer-assisted verification of horseshoe chaos by virtue of topological horseshoe theory.
Numerical simulation of an elastic structure behavior under transient fluid flow excitation
NASA Astrophysics Data System (ADS)
Afanasyeva, Irina N.; Lantsova, Irina Yu.
2017-01-01
This paper deals with the verification of a numerical technique for modeling fluid-structure interaction (FSI) problems. The configuration consists of an incompressible viscous fluid around an elastic structure in a channel; the external flow is laminar. Calculations for multiple parameter variants are performed using the ANSYS CFX and ANSYS Mechanical software. Different mesh-deformation parameters and solver controls (time step, under-relaxation factor, number of iterations per coupling step) were tested. The results are presented in tables and plots in comparison with reference data.
Modelling crystal growth: Convection in an asymmetrically heated ampoule
NASA Technical Reports Server (NTRS)
Alexander, J. Iwan D.; Rosenberger, Franz; Pulicani, J. P.; Krukowski, S.; Ouazzani, Jalil
1990-01-01
The objective was to develop and implement a numerical method capable of solving the nonlinear partial differential equations governing heat, mass, and momentum transfer in a 3-D cylindrical geometry in order to examine the character of convection in an asymmetrically heated cylindrical ampoule. The details of the numerical method, including verification tests involving comparison with results obtained from other methods, are presented. The results of the study of 3-D convection in an asymmetrically heated cylinder are described.
Prakash, Varuna; Koczmara, Christine; Savage, Pamela; Trip, Katherine; Stewart, Janice; McCurdie, Tara; Cafazzo, Joseph A; Trbovich, Patricia
2014-01-01
Background Nurses are frequently interrupted during medication verification and administration; however, few interventions exist to mitigate resulting errors, and the impact of these interventions on medication safety is poorly understood. Objective The study objectives were to (A) assess the effects of interruptions on medication verification and administration errors, and (B) design and test the effectiveness of targeted interventions at reducing these errors. Methods The study focused on medication verification and administration in an ambulatory chemotherapy setting. A simulation laboratory experiment was conducted to determine interruption-related error rates during specific medication verification and administration tasks. Interventions to reduce these errors were developed through a participatory design process, and their error reduction effectiveness was assessed through a postintervention experiment. Results Significantly more nurses committed medication errors when interrupted than when uninterrupted. With use of interventions when interrupted, significantly fewer nurses made errors in verifying medication volumes contained in syringes (16/18; 89% preintervention error rate vs 11/19; 58% postintervention error rate; p=0.038; Fisher's exact test) and programmed in ambulatory pumps (17/18; 94% preintervention vs 11/19; 58% postintervention; p=0.012). The rate of error commission significantly decreased with use of interventions when interrupted during intravenous push (16/18; 89% preintervention vs 6/19; 32% postintervention; p=0.017) and pump programming (7/18; 39% preintervention vs 1/19; 5% postintervention; p=0.017). No statistically significant differences were observed for other medication verification tasks. Conclusions Interruptions can lead to medication verification and administration errors. Interventions were highly effective at reducing unanticipated errors of commission in medication administration tasks, but showed mixed effectiveness at reducing predictable errors of detection in medication verification tasks. These findings can be generalised and adapted to mitigate interruption-related errors in other settings where medication verification and administration are required. PMID:24906806
Kugler, Günter; 't Hart, Bernard M.; Kohlbecher, Stefan; Bartl, Klaus; Schumann, Frank; Einhäuser, Wolfgang; Schneider, Erich
2015-01-01
Background: People with color vision deficiencies report numerous limitations in daily life, restricting, for example, their access to some professions. However, they use basic color terms systematically and in a similar manner as people with normal color vision. We hypothesize that a possible explanation for this discrepancy between color perception and behavioral consequences might be found in the gaze behavior of people with color vision deficiency. Methods: A group of participants with color vision deficiencies and a control group performed several search tasks in a naturalistic setting on a lawn. All participants wore a mobile eye-tracking-driven camera with a high foveal image resolution (EyeSeeCam). Search performance as well as fixations of objects of different colors were examined. Results: Search performance was similar in both groups in a color-unrelated search task as well as in a search for yellow targets. While searching for red targets, participants with color vision deficiencies exhibited a strongly degraded performance. This was closely matched by the number of fixations on red objects shown by the two groups. Importantly, once they fixated a target, participants with color vision deficiencies exhibited only few identification errors. Conclusions: In contrast to controls, participants with color vision deficiencies are not able to enhance their search for red targets on a (green) lawn by an efficient guiding mechanism. The data indicate that the impaired guiding is the main influence on search performance, while foveal identification (verification) is largely unaffected by the color vision deficiency. PMID:26733851
NASA Astrophysics Data System (ADS)
Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.; Price, Stephen; Hoffman, Matthew; Lipscomb, William H.; Fyke, Jeremy; Vargo, Lauren; Boghozian, Adrianna; Norman, Matthew; Worley, Patrick H.
2017-06-01
To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Ultimately, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.
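A bit-for-bit regression check of the kind LIVVkit automates can be sketched as follows (a hypothetical illustration, not the LIVVkit API): compare a test run against reference output and report where, and by how much, the fields differ.

    import numpy as np

    def bit_for_bit(test, ref, name="field"):
        """Compare a test field to reference output and report differences."""
        if test.shape != ref.shape:
            return f"{name}: FAIL (shape {test.shape} vs {ref.shape})"
        diff = test != ref
        if not diff.any():
            return f"{name}: bit-for-bit PASS"
        first = tuple(np.argwhere(diff)[0])
        return (f"{name}: FAIL at {int(diff.sum())} of {test.size} points, "
                f"first at {first}, max |diff| = {np.abs(test - ref).max():.3e}")

    ref = np.linspace(0.0, 1.0, 100).reshape(10, 10)
    test = ref.copy()
    test[3, 7] += 1e-15          # a single one-bit-scale perturbation
    print(bit_for_bit(test, ref, "ice thickness"))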
Determinants of Standard Errors of MLEs in Confirmatory Factor Analysis
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Cheng, Ying; Zhang, Wei
2010-01-01
This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found.
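For context, the SEs in question are the usual normal-theory ones, obtained from the inverse of the information matrix evaluated at the MLE:

    \mathrm{SE}(\hat\theta_j) = \sqrt{\bigl[\mathcal{I}(\hat\theta)^{-1}\bigr]_{jj}},
    \qquad
    \mathcal{I}(\theta) = -\,\mathrm{E}\!\left[\frac{\partial^2 \ell(\theta)}{\partial\theta\,\partial\theta^{\top}}\right],

so the monotonic relationships reported describe how these diagonal entries move as loadings and unique variances vary.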
Verification of an Analytical Method for Measuring Crystal Nucleation Rates in Glasses from DTA Data
NASA Technical Reports Server (NTRS)
Ranasinghe, K. S.; Wei, P. F.; Kelton, K. F.; Ray, C. S.; Day, D. E.
2004-01-01
A recently proposed analytical differential thermal analysis (DTA) method for estimating the nucleation rates in glasses has been evaluated by comparing experimental data with numerically computed nucleation rates for a model lithium disilicate glass. The time- and temperature-dependent nucleation rates were predicted using the model and compared with values from an analysis of numerically calculated DTA curves. The validity of the numerical approach was demonstrated earlier by a comparison with experimental data. The excellent agreement between the nucleation rates from the model calculations and from the computer-generated DTA data demonstrates the validity of the proposed analytical DTA method.
Theory verification and numerical benchmarking on neoclassical toroidal viscosity
NASA Astrophysics Data System (ADS)
Wang, Z. R.; Park, J.-K.; Liu, Y. Q.; Logan, N. C.; Menard, J. E.
2013-10-01
Systematic verification and numerical benchmarking have been successfully carried out among three different approaches to neoclassical toroidal viscosity (NTV) theory and the corresponding codes: IPEC-PENT is developed based on the combined NTV theory but without geometric simplifications; MARS-K, originally calculating the kinetic energy, is upgraded to calculate the NTV torque based on the equivalence between kinetic energy and NTV torque; MARS-Q includes the smoothly connected NTV formula. The derivation and the numerical results both indicate that the imaginary part of the kinetic energy calculated by MARS-K is equivalent to the NTV torque in IPEC-PENT. In the benchmark of precession resonance between MARS-Q and MARS-K/IPEC-PENT, the agreement and correlation between the connected NTV formula and the combined NTV theory are shown for the first time across different collisionality regimes. Additionally, both IPEC-PENT and MARS-K indicate the importance of the bounce-harmonic resonance, which can greatly enhance the NTV torque when the E×B drift frequency reaches the bounce resonance condition. Since MARS-K also has the capability to calculate the plasma response including the kinetic effect self-consistently, self-consistent NTV torque calculations have also been tested. This work is supported by DOE Contract No. DE-AC02-09CH11466.
Simulating flow around scaled model of a hypersonic vehicle in wind tunnel
NASA Astrophysics Data System (ADS)
Markova, T. V.; Aksenov, A. A.; Zhluktov, S. V.; Savitsky, D. V.; Gavrilov, A. D.; Son, E. E.; Prokhorov, A. N.
2016-11-01
A prospective hypersonic HEXAFLY aircraft is considered in this paper. In order to obtain the aerodynamic characteristics of a new construction design of the aircraft, experiments with a scaled model have been carried out in a wind tunnel under different conditions. The runs were performed at different angles of attack, with and without hydrogen combustion in the scaled propulsion engine. However, the measured physical quantities do not provide all the information about the flowfield. Numerical simulation can complete the experimental data as well as reduce the number of wind tunnel experiments. Moreover, reliable CFD software can be used to calculate the aerodynamic characteristics of any possible design of the full-scale aircraft under different operating conditions. The reliability of the numerical predictions must be confirmed in a verification study of the software. The present work is aimed at numerical investigation of the flowfield around and inside the scaled model of the HEXAFLY-CIAM module under wind tunnel conditions. A cold run (without combustion) was selected for this study. The calculations are performed in the FlowVision CFD software, and the flow characteristics are compared against the available experimental data. The verification study carried out confirms the capability of the FlowVision CFD software to calculate the flows discussed.
A numerical study of axisymmetric compressible non-isothermal and reactive swirling flow
NASA Astrophysics Data System (ADS)
Tavernetti, William E.; Hafez, Mohamed M.
2017-09-01
Non-linear dynamical phenomena in combustion processes are an active area of experimental and theoretical research, in large part due to increasingly strict environmental pressures to make gas turbine engines and industrial burners more efficient. Using numerical methods for steady and unsteady, confined and unconfined compressible flow, this study examines the influence of compressibility in modeling axisymmetric swirling flow. The compressible reactive Navier-Stokes equations in terms of stream function, vorticity, and circulation are used. Results and details of the numerical algorithms, as well as numerical verification techniques and validation against sources from the literature, will be presented. Understanding how vortex breakdown phenomena are affected by modeling reactant consumption together with compressibility effects is the main goal of this study.
The 2014 Sandia Verification and Validation Challenge: Problem statement
Hu, Kenneth; Orient, George
2016-01-18
This paper presents a case study in utilizing information from experiments, models, and verification and validation (V&V) to support a decision. It consists of a simple system with data and models provided, plus a safety requirement to assess. The goal is to pose a problem that is flexible enough to allow challengers to demonstrate a variety of approaches, but constrained enough to focus attention on a theme. This was accomplished by providing a good deal of background information in addition to the data, models, and code, but directing the participants' activities with specific deliverables. In this challenge, the theme is how to gather and present evidence about the quality of model predictions, in order to support a decision. This case study formed the basis of the 2014 Sandia V&V Challenge Workshop and this resulting special edition of the ASME Journal of Verification, Validation, and Uncertainty Quantification.
Energy- and time-resolved detection of prompt gamma-rays for proton range verification.
Verburg, Joost M; Riley, Kent; Bortfeld, Thomas; Seco, Joao
2013-10-21
In this work, we present experimental results of a novel prompt gamma-ray detector for proton beam range verification. The detection system features an actively shielded cerium-doped lanthanum(III) bromide scintillator, coupled to a digital data acquisition system. The acquisition was synchronized to the cyclotron radio frequency to separate the prompt gamma-ray signals from the later-arriving neutron-induced background. We designed the detector to provide a high energy resolution and an effective reduction of background events, enabling discrete proton-induced prompt gamma lines to be resolved. Measuring discrete prompt gamma lines has several benefits for range verification. As the discrete energies correspond to specific nuclear transitions, the magnitudes of the different gamma lines have unique correlations with the proton energy and can be directly related to nuclear reaction cross sections. The quantification of discrete gamma lines also enables elemental analysis of tissue in the beam path, providing a better prediction of prompt gamma-ray yields. We present the results of experiments in which a water phantom was irradiated with proton pencil beams in a clinical proton therapy gantry. A slit collimator was used to collimate the prompt gamma-rays, and measurements were performed at 27 positions along the path of proton beams with ranges of 9, 16, and 23 g cm⁻² in water. The magnitudes of discrete gamma lines at 4.44, 5.2 and 6.13 MeV were quantified. The prompt gamma lines were found to be clearly resolved in dimensions of energy and time, and had a reproducible correlation with the proton depth-dose curve. We conclude that the measurement of discrete prompt gamma-rays for in vivo range verification of clinical proton beams is feasible, and plan to further study methods and detector designs for clinical use.
Dix, Annika; van der Meer, Elke
2015-04-01
This study investigates cognitive resource allocation dependent on fluid and numerical intelligence in arithmetic/algebraic tasks varying in difficulty. Sixty-six 11th grade students participated in a mathematical verification paradigm, while pupil dilation as a measure of resource allocation was collected. Students with high fluid intelligence solved the tasks faster and more accurately than those with average fluid intelligence, as did students with high compared to average numerical intelligence. However, fluid intelligence sped up response times only in students with average but not high numerical intelligence. Further, high fluid but not numerical intelligence led to greater task-related pupil dilation. We assume that fluid intelligence serves as a domain-general resource that helps to tackle problems for which domain-specific knowledge (numerical intelligence) is missing. The allocation of this resource can be measured by pupil dilation. Copyright © 2014 Society for Psychophysiological Research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merkin, V. G.; Lionello, R.; Linker, J.
2016-11-01
Two well-established magnetohydrodynamic (MHD) codes are coupled to model the solar corona and the inner heliosphere. The corona is simulated using the MHD algorithm outside a sphere (MAS) model. The Lyon–Fedder–Mobarry (LFM) model is used in the heliosphere. The interface between the models is placed in a spherical shell above the critical point and allows both models to work in either a rotating or an inertial frame. Numerical tests are presented examining the coupled model solutions from 20 to 50 solar radii. The heliospheric simulations are run with both LFM and the MAS extension into the heliosphere, and use the same polytropic coronal MAS solutions as the inner boundary condition. The coronal simulations are performed for idealized magnetic configurations, with an out-of-equilibrium flux rope inserted into an axisymmetric background, with and without including the solar rotation. The temporal evolution at the inner boundary of the LFM and MAS solutions is shown to be nearly identical, as are the steady-state background solutions prior to the insertion of the flux rope. However, after the coronal mass ejection has propagated through a significant portion of the simulation domain, the heliospheric solutions diverge. Additional simulations with different resolution were then performed and show that the MAS heliospheric solutions approach those of LFM when run with progressively higher resolution. Following these detailed tests, a more realistic simulation driven by the thermodynamic coronal MAS is presented, which includes solar rotation and an azimuthally asymmetric background and extends to the Earth's orbit.
NASA Technical Reports Server (NTRS)
Salazar, Giovanni; Droba, Justin C.; Oliver, Brandon; Amar, Adam J.
2016-01-01
With the recent development of multi-dimensional thermal protection system (TPS) material response codes, the capability to account for surface-to-surface radiation exchange in complex geometries is critical. This paper presents recent efforts to implement such capabilities in the CHarring Ablator Response (CHAR) code developed at NASA's Johnson Space Center. This work also describes the different numerical methods implemented in the code to compute geometric view factors for radiation problems involving multiple surfaces. Verification of the code's radiation capabilities and results of a code-to-code comparison are presented. Finally, a demonstration case of a two-dimensional ablating cavity with enclosure radiation accounting for a changing geometry is shown.
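One standard way to compute such geometric view factors, sketched here for illustration (a generic Monte Carlo estimate, not the CHAR implementation), is to integrate the differential-patch kernel cos(t1)*cos(t2)/(pi*r^2) between two surfaces; for two parallel, directly opposed unit squares one unit apart, tabulated values put F12 near 0.20.

    import numpy as np

    # Monte Carlo estimate of
    #   F12 = (1/A1) * INT INT cos(t1)*cos(t2) / (pi * r^2) dA1 dA2
    # for two parallel, directly opposed unit squares separated by d = 1.
    rng = np.random.default_rng(0)
    n = 2_000_000
    p1 = np.column_stack([rng.random(n), rng.random(n), np.zeros(n)])
    p2 = np.column_stack([rng.random(n), rng.random(n), np.ones(n)])
    v = p2 - p1
    r2 = (v * v).sum(axis=1)
    cos1 = v[:, 2] / np.sqrt(r2)    # surface 1 normal is +z
    cos2 = cos1                     # surface 2 normal is -z, so angles match
    F12 = (cos1 * cos2 / (np.pi * r2)).mean()   # A2 = 1, uniform sampling
    print("F12 ~", F12)             # tabulated value is about 0.1998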
NASA Technical Reports Server (NTRS)
Zenie, Alexandre; Luguern, Jean-Pierre
1987-01-01
The specification, verification, validation, and evaluation steps of the CS-PN software are outlined. The colored stochastic Petri net software is applied to a Wound/Wait protocol decomposable into two principal modules: a request, or couple (transaction, granule), treatment module and a wound treatment module. Each module is specified, verified, validated, and then evaluated separately, to deduce a verification, validation, and evaluation of the complete protocol. The colored stochastic Petri net tool is shown to be a natural extension of the stochastic tool, adapted to distributed systems and protocols, because the color conveniently takes into account the numerous sites, transactions, granules, and messages.
NASA Astrophysics Data System (ADS)
Wang, Gaili; Yang, Ji; Wang, Dan; Liu, Liping
2016-11-01
Extrapolation techniques and storm-scale Numerical Weather Prediction (NWP) models are two primary approaches for short-term precipitation forecasting. The primary objective of this study is to verify precipitation forecasts and compare the performance of two nowcasting schemes: the Beijing Auto-Nowcast system (BJ-ANC), based on extrapolation techniques, and a storm-scale NWP model called the Advanced Regional Prediction System (ARPS). The verification and comparison take into account six heavy precipitation events that occurred in the summers of 2014 and 2015 in Jiangsu, China. The forecast performance of the two schemes was evaluated for the next 6 h at 1-h intervals using the gridpoint-based measures of critical success index, bias, index of agreement, and root mean square error, and using an object-based verification method called the Structure-Amplitude-Location (SAL) score. Regarding the gridpoint-based measures, BJ-ANC outperforms ARPS at first, but its forecast accuracy decreases rapidly with lead time, falling below that of ARPS after 4-5 h. Regarding the object-based verification method, most forecasts produced by BJ-ANC fall near the center of the SAL diagram at the 1-h lead time, indicating high-quality forecasts. As the lead time increases, BJ-ANC overestimates the precipitation amount and produces widespread precipitation, especially at the 6-h lead time. The ARPS model overestimates precipitation at all lead times, particularly at first.
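Two of the gridpoint-based measures used here are quick to state precisely. With H hits, M misses, and F false alarms at a rain/no-rain threshold, CSI = H/(H+M+F) and bias = (H+F)/(H+M); a short sketch with synthetic fields:

    import numpy as np

    def csi_and_bias(fcst, obs):
        """fcst, obs: boolean rain/no-rain grids at a chosen threshold."""
        hits = np.sum(fcst & obs)
        misses = np.sum(~fcst & obs)
        false_alarms = np.sum(fcst & ~obs)
        csi = hits / (hits + misses + false_alarms)
        bias = (hits + false_alarms) / (hits + misses)
        return csi, bias

    rng = np.random.default_rng(1)
    obs = rng.random((100, 100)) > 0.8             # synthetic observed rain mask
    fcst = obs ^ (rng.random((100, 100)) > 0.95)   # observation with some noise
    print("CSI = %.3f, bias = %.3f" % csi_and_bias(fcst, obs))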
Delamination Assessment Tool for Spacecraft Composite Structures
NASA Astrophysics Data System (ADS)
Portela, Pedro; Preller, Fabian; Wittke, Henrik; Sinnema, Gerben; Camanho, Pedro; Turon, Albert
2012-07-01
Fortunately, only a few cases are known where failure of spacecraft structures due to undetected damage has resulted in the loss of a spacecraft and launcher mission. However, several problems related to damage tolerance, and in particular delamination of composite materials, have been encountered during structure development and qualification testing in various ESA projects. To avoid such costly failures during development, launch, or service of spacecraft, launcher, and reusable launch vehicle (RLV) structures, a comprehensive damage tolerance verification approach is needed. In 2009, the European Space Agency (ESA) initiated an activity called "Delamination Assessment Tool", which is led by the Portuguese company HPS Lda and includes academic and industrial partners. The goal of this study is the development of a comprehensive damage tolerance verification approach for launcher and RLV structures, addressing analytical and numerical methodologies; material, subcomponent, and component testing; as well as non-destructive inspection. The study includes a comprehensive review of current industrial damage tolerance practice resulting from ECSS and NASA standards, the development of new Best Practice Guidelines for analysis, test, and inspection methods, and the validation of these with a real industrial case study. The paper describes the main findings of this activity so far and presents a first iteration of a Damage Tolerance Verification Approach, which includes the introduction of novel analytical and numerical tools at an industrial level. This new approach is being put to the test using real industrial case studies provided by the industrial partners, MT Aerospace, RUAG Space, and INVENT GmbH.
NASA Astrophysics Data System (ADS)
Wernham, Denny; Ciapponi, Alessandra; Riede, Wolfgang; Allenspacher, Paul; Era, Fabio; D'Ottavi, Alessandro; Thibault, Dominique
2016-12-01
The Aladin instrument will fly on the European Space Agency's ADM-Aeolus satellite. The instrument is a Doppler wind LIDAR, primarily designed to measure global wind profiles to improve the accuracy of numerical weather prediction models. At the heart of the instrument is a frequency-stabilized 355 nm laser which will emit approximately 100 mJ of energy in the form of 20 ns pulses with a fluence of around 1 J/cm2. The pulse repetition frequency is 50 Hz, meaning that Aladin will eventually accumulate 5 Gshots (5 × 10^9 laser shots) over its planned three-year lifetime in orbit. Due to anomalies that have occurred on previous spaceborne lasers, as well as a number of failures observed in previous tests, an extensive development and verification campaign was undertaken to ensure that the Aladin instrument is robust enough to survive the mission. In this paper, we report the logic and the results of this verification campaign.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jannik, Tim; Stagich, Brooke
The U.S. Environmental Protection Agency (EPA) requested an external, independent verification study of their updated "Preliminary Remediation Goals for Radionuclides" (PRG) electronic calculator. The calculator provides PRGs for radionuclides that are used as a screening tool at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) and Resource Conservation and Recovery Act (RCRA) sites. These risk-based PRGs establish concentration limits under specific exposure scenarios. The purpose of this verification study is to determine that the calculator has no inherent numerical problems with obtaining solutions as well as to ensure that the equations are programmed correctly. There are 167 equations used in the calculator. To verify the calculator, all equations for each of seven receptor types (resident, construction worker, outdoor and indoor worker, recreator, farmer, and composite worker) were hand calculated using the default parameters. The same four radionuclides (Am-241, Co-60, H-3, and Pu-238) were used for each calculation for consistency throughout.
EPA'S ENVIRONMENTAL TECHNOLOGY VERIFICATION PROGRAM: RAISING CONFIDENCE IN INNOVATION
This is a general article on the ETV Program which is being submitted to EM, the Air & Waste Management Association's (A&WMA's) monthly magazine. In addition to background on the program, some of its accomplishments, and organization, the article briefly addresses different veri...
Thomson's Theorem of Electrostatics: Its Applications and Mathematical Verification
ERIC Educational Resources Information Center
Bakhoum, Ezzat G.
2008-01-01
A 100-year-old formula that was given by J. J. Thomson recently found numerous applications in computational electrostatics and electromagnetics. Thomson himself never gave a proof of the formula, but a proof based on differential geometry was suggested by Jackson and later published by Pappas. Unfortunately, differential geometry, being a…
Conservation of Mechanical and Electric Energy: Simple Experimental Verification
ERIC Educational Resources Information Center
Ponikvar, D.; Planinsic, G.
2009-01-01
Two similar experiments on the conservation of energy and the transformation of mechanical into electrical energy are presented. Both can be used in classes, as they offer numerous possibilities for discussion with students and are simple to perform. Results are presented and are precise to within 20% for the version of the experiment where measured values…
Fuzzy logic-based analogue forecasting and hybrid modelling of horizontal visibility
NASA Astrophysics Data System (ADS)
Tuba, Zoltán; Bottyán, Zsolt
2018-04-01
Forecasting visibility is one of the greatest challenges in aviation meteorology. At the same time, high-accuracy visibility forecasts can significantly reduce, or even avoid, weather-related risk in aviation. To improve visibility forecasting, this research links fuzzy logic-based analogue forecasting and post-processed numerical weather prediction model outputs in a hybrid forecast. The performance of the analogue forecasting model was improved by applying the Analytic Hierarchy Process. A linear combination of the two outputs was then used to create an ultra-short-term hybrid visibility prediction that gradually shifts the focus from statistical to numerical products, taking advantage of each during the forecast period. This makes it possible to bring the numerical visibility forecast closer to the observations even if it is initially wrong. A complete verification of the categorical forecasts was carried out; results for persistence and terminal aerodrome forecasts (TAF) are included for comparison. The average Heidke Skill Score (HSS) over the examined airports is very similar for the analogue and hybrid forecasts, even at the end of the forecast period, where the weight of the analogue prediction in the final hybrid output is only 0.1-0.2. In the case of poor visibility (1000-2500 m), the hybrid (0.65) and analogue (0.64) forecasts have a similar average HSS in the first 6 h of the forecast period and perform better than persistence (0.60) or TAF (0.56). An important achievement is that the hybrid model takes into account the physics and dynamics of the atmosphere through the increasing weight of the numerical weather prediction; despite this, its performance matches the most effective visibility forecasting methods and does not inherit the poor verification results of purely numerical outputs.
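The Heidke Skill Score and the lead-time-dependent blend described above can be illustrated with a short sketch; the weighting function is a hypothetical stand-in for the paper's actual combination scheme.

```python
def heidke_skill_score(hits, false_alarms, misses, correct_negatives):
    """Heidke Skill Score for a 2x2 categorical (e.g., visibility class) table."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    return 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))

def hybrid_visibility(analogue_prob, nwp_prob, lead_h, t_max=6.0):
    """Blend that shifts weight from the analogue forecast to the NWP output
    as lead time grows; the weight function here is only an illustration."""
    w = max(0.1, 1.0 - lead_h / t_max)  # analogue weight decays with lead time
    return w * analogue_prob + (1.0 - w) * nwp_prob

print(heidke_skill_score(55, 20, 15, 110))      # example contingency counts
print(hybrid_visibility(0.8, 0.5, lead_h=5.0))  # analogue weight is ~0.17 here
```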
NASA Astrophysics Data System (ADS)
Jorris, Timothy R.
2007-12-01
To support the Air Force's Global Reach concept, a Common Aero Vehicle is being designed to support the Global Strike mission. "Waypoints" are specified for reconnaissance or multiple payload deployments, and "no-fly zones" are specified for geopolitical restrictions or threat avoidance. Because of time-critical targets and multiple-scenario analysis, an autonomous solution is preferred over a time-intensive, manually iterative one. Thus, a real-time or near real-time autonomous trajectory optimization technique is presented to minimize the flight time, satisfy terminal and intermediate constraints, and remain within the specified vehicle heating and control limitations. This research uses the Hypersonic Cruise Vehicle (HCV) as a simplified two-dimensional platform to compare multiple solution techniques. The solution techniques include a unique geometric approach developed herein, a derived analytical dynamic optimization technique, and a rapidly emerging collocation numerical approach. This up-and-coming numerical technique is a direct solution method involving discretization then dualization, with pseudospectral methods and nonlinear programming used to converge to the optimal solution. This numerical approach is applied to the Common Aero Vehicle (CAV) as the test platform for the full three-dimensional reentry trajectory optimization problem. The culmination of this research is the verification of the optimality of the proposed numerical technique, as shown for both the two-dimensional and three-dimensional models. Additionally, user implementation strategies are presented to improve accuracy and enhance solution convergence. Thus, the contributions of this research are the geometric approach, the user implementation strategies, and the determination and verification of a numerical solution technique for the optimal reentry trajectory problem that minimizes time to target while satisfying vehicle dynamics, control limitations, and heating, waypoint, and no-fly zone constraints.
CaveMan Enterprise version 1.0 Software Validation and Verification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, David
The U.S. Department of Energy Strategic Petroleum Reserve stores crude oil in caverns solution-mined in salt domes along the Gulf Coast of Louisiana and Texas. The CaveMan software program has been used since the late 1990s as one tool to analyze pressure measurements monitored at each cavern. The purpose of this monitoring is to catch potential cavern integrity issues as soon as possible. The CaveMan software was written in Microsoft Visual Basic and embedded in a Microsoft Excel workbook; this method of running the CaveMan software is no longer sustainable. As such, a new version called CaveMan Enterprise has been developed. CaveMan Enterprise version 1.0 does not have any changes to the CaveMan numerical models. CaveMan Enterprise represents, instead, a change from desktop-managed workbooks to an enterprise framework, moving data management into coordinated databases and porting the numerical modeling codes into the Python programming language. This document provides a report of the code validation and verification testing.
Extension of a System Level Tool for Component Level Analysis
NASA Technical Reports Server (NTRS)
Majumdar, Alok; Schallhorn, Paul
2002-01-01
This paper presents an extension of a numerical algorithm for a network flow analysis code to perform multi-dimensional flow calculations. The one-dimensional momentum equation in the network flow analysis code has been extended to include momentum transport due to shear stress and the transverse component of velocity. Both laminar and turbulent flows are considered. Turbulence is represented by Prandtl's mixing length hypothesis. Three classical examples (Poiseuille flow, Couette flow, and shear-driven flow in a rectangular cavity) are presented as benchmarks for the verification of the numerical scheme.
Extension of a System Level Tool for Component Level Analysis
NASA Technical Reports Server (NTRS)
Majumdar, Alok; Schallhorn, Paul; McConnaughey, Paul K. (Technical Monitor)
2001-01-01
This paper presents an extension of a numerical algorithm for a network flow analysis code to perform multi-dimensional flow calculations. The one-dimensional momentum equation in the network flow analysis code has been extended to include momentum transport due to shear stress and the transverse component of velocity. Both laminar and turbulent flows are considered. Turbulence is represented by Prandtl's mixing length hypothesis. Three classical examples (Poiseuille flow, Couette flow, and shear-driven flow in a rectangular cavity) are presented as benchmarks for the verification of the numerical scheme.
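The Poiseuille flow benchmark mentioned in both records above can be reproduced in a few lines: a finite-difference solution of plane Poiseuille flow is checked against the analytic parabolic profile. This is a generic sketch with arbitrary fluid properties, not the network flow analysis code itself.

```python
import numpy as np

# Plane Poiseuille flow: mu * u''(y) = dp/dx with u(0) = u(H) = 0.
mu, dpdx, H, n = 1.0e-3, -1.0, 1.0e-2, 101
y = np.linspace(0.0, H, n)
dy = y[1] - y[0]

# Second-order central differences on the interior nodes (tridiagonal system).
A = np.zeros((n - 2, n - 2))
np.fill_diagonal(A, -2.0)
np.fill_diagonal(A[1:], 1.0)      # sub-diagonal
np.fill_diagonal(A[:, 1:], 1.0)   # super-diagonal
u_inner = np.linalg.solve(A, np.full(n - 2, dpdx * dy * dy / mu))
u = np.concatenate(([0.0], u_inner, [0.0]))

u_exact = dpdx / (2.0 * mu) * y * (y - H)  # analytic parabolic profile
print("max error vs analytic profile:", np.abs(u - u_exact).max())
```

Because the exact solution is quadratic, central differences reproduce it to round-off, which makes this a convenient sanity check for any such scheme.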
Verification of a three-dimensional viscous flow analysis for a single stage compressor
NASA Astrophysics Data System (ADS)
Matsuoka, Akinori; Hashimoto, Keisuke; Nozaki, Osamu; Kikuchi, Kazuo; Fukuda, Masahiro; Tamura, Atsuhiro
1992-12-01
A transonic flowfield around the rotor blades of a highly loaded single-stage axial compressor was numerically analyzed by a three-dimensional compressible Navier-Stokes code using a Chakravarthy-Osher-type total variation diminishing (TVD) scheme. A stage analysis, which calculates the flowfields around the inlet guide vane (IGV) and the rotor blades simultaneously, was carried out. Compared with design values and experimental data, the computed results show slight quantitative differences, but the numerical calculation simulates well the pressure-rise characteristics of the compressor and its flow pattern, including a strong shock surface.
Numerical computation of orbits and rigorous verification of existence of snapback repellers.
Peng, Chen-Chang
2007-03-01
In this paper we show how analysis based on the numerical computation of orbits can be applied to prove the existence of snapback repellers in discrete dynamical systems. That is, we present a computer-assisted method to prove the existence of a snapback repeller of a specific map. The existence of a snapback repeller in a dynamical system implies that it has chaotic behavior [F. R. Marotto, J. Math. Anal. Appl. 63, 199 (1978)]. The method is applied to the logistic map and the discrete predator-prey system.
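A minimal numeric illustration in the spirit of the abstract, for the logistic map at μ = 4: the fixed point z = 3/4 is repelling, and an orbit returning exactly to z is built by backward iteration. This is an illustrative sketch, not the paper's rigorous computer-assisted verification.

```python
import numpy as np

mu = 4.0
f = lambda x: mu * x * (1.0 - x)
fprime = lambda x: mu * (1.0 - 2.0 * x)

z = 0.75  # fixed point of f; |f'(z)| = 2 > 1, so z is repelling
# 0.25 maps onto z in one step; backward iteration with the preimage branch
# x = (1 + sqrt(1 - y)) / 2 converges to z, giving a homoclinic orbit.
orbit = [0.25]
for _ in range(40):
    orbit.append((1.0 + np.sqrt(1.0 - orbit[-1])) / 2.0)
orbit.reverse()  # orbit starts arbitrarily close to z and ends at 0.25

print("repelling fixed point:", abs(fprime(z)) > 1.0)
print("distance of orbit start from z:", abs(orbit[0] - z))
print("orbit returns exactly to z:", f(orbit[-1]) == z)
```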
Optimal placement of excitations and sensors for verification of large dynamical systems
NASA Technical Reports Server (NTRS)
Salama, M.; Rose, T.; Garba, J.
1987-01-01
The computationally difficult problem of optimally placing excitations and sensors to maximize the observed measurements is studied within the framework of combinatorial optimization and is solved numerically using a variation of the simulated annealing heuristic algorithm. Results of numerical experiments, including a square plate and a 960-degree-of-freedom Control of Flexible Structures (COFS) truss structure, are presented. Though the algorithm produces suboptimal solutions, its generality and simplicity allow the treatment of complex dynamical systems which would otherwise be difficult to handle.
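A sketch of the simulated annealing placement idea follows, using the log-determinant of the Fisher information matrix of the mode shapes as an observability surrogate; the objective, cooling schedule, and random mode-shape matrix are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def place_sensors(Phi, k, n_iter=20_000, T0=1.0, seed=0):
    """Simulated annealing: choose k sensor DOFs maximizing log det of the
    Fisher information matrix Phi_s.T @ Phi_s of the retained mode shapes."""
    rng = np.random.default_rng(seed)
    n = Phi.shape[0]
    logdet = lambda idx: np.linalg.slogdet(Phi[idx].T @ Phi[idx])[1]
    current = rng.choice(n, size=k, replace=False)
    cur_val = logdet(current)
    best, best_val = current.copy(), cur_val
    for i in range(n_iter):
        T = T0 * (1.0 - i / n_iter) + 1e-9  # linear cooling schedule
        cand = current.copy()
        cand[rng.integers(k)] = rng.choice(np.setdiff1d(np.arange(n), current))
        val = logdet(cand)                  # candidate: swap one sensor
        if val > cur_val or rng.random() < np.exp((val - cur_val) / T):
            current, cur_val = cand, val    # accept improving or lucky moves
            if val > best_val:
                best, best_val = cand.copy(), val
    return np.sort(best), best_val

Phi = np.random.default_rng(1).normal(size=(960, 10))  # e.g., a 960-DOF structure
print(place_sensors(Phi, k=12))
```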
A Millimetre-Wave Cuboid Solid Immersion Lens with Intensity-Enhanced Amplitude Mask Apodization
NASA Astrophysics Data System (ADS)
Yue, Liyang; Yan, Bing; Monks, James N.; Dhama, Rakesh; Wang, Zengbo; Minin, Oleg V.; Minin, Igor V.
2018-06-01
A photonic jet is a narrow, highly intense, weakly diverging beam propagating into a background medium; it can be produced by a cuboid solid immersion lens (SIL) in both transmission and reflection modes. Amplitude mask apodization is an optical method to further improve the spatial resolution of a SIL imaging system by reducing the waist of the photonic jet, but it always leads to intensity loss because the incoming plane wave is centrally masked. In this letter, we report, for the first time, a particularly sized millimetre-wave cuboid SIL with intensity-enhanced amplitude mask apodization. It simultaneously delivers extra intensity enhancement and waist narrowing of the produced photonic jet. Both numerical simulation and experimental verification of the intensity-enhanced apodization effect are demonstrated using a copper-masked Teflon cuboid SIL with a 22-mm side length under illumination by a plane wave with 8-mm wavelength. The peak intensity enhancement and the lateral resolution of the optical system increase by about 36.0% and 36.4%, respectively.
NASA Technical Reports Server (NTRS)
Bune, Andris V.; Gillies, Donald C.; Lehozky, Sandor L.
1997-01-01
A numerical model of HgCdTe solidification was implemented using the finite element code FIDAP. Model verification was done using both experimental data and numerical test problems. The model was used to evaluate the possible effects of double-diffusive convection in the molten material, and of the microgravity level, on the concentration distribution in the solidified HgCdTe. Particular attention was paid to the incorporation of the HgCdTe phase diagram. It was found that, below a critical microgravity amplitude, the maximum convective velocity in the melt appears virtually independent of the microgravity vector orientation. Good agreement between the predicted interface shape and an interface obtained experimentally by quenching was achieved. The results of the numerical modeling are presented in the form of a video film.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.
To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Furthermore, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.
Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.; ...
2017-03-23
To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Furthermore, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.
NASA Astrophysics Data System (ADS)
Vielhauer, Claus; Croce Ferri, Lucilla
2003-06-01
Our paper addresses two issues of a previously presented biometric authentication algorithm for ID cardholders: the security of the embedded reference data and the aging of the biometric data. We describe a protocol that allows two levels of verification, combining a biometric hash technique based on handwritten signatures and hologram watermarks with cryptographic signatures in a verification infrastructure. This infrastructure consists of a Trusted Central Public Authority (TCPA), which serves numerous Enrollment Stations (ES) in a secure environment. Each individual performs an enrollment at an ES, which provides the TCPA with the full biometric reference data and a document hash. The TCPA then calculates the authentication record (AR) from the biometric hash, a validity timestamp, and the document hash provided by the ES. The AR is then signed with a cryptographic signature function, initialized with the TCPA's private key, and embedded in the ID card as a watermark. Authentication is performed at Verification Stations (VS), where the ID card is scanned and the signed AR is retrieved from the watermark. Due to the timestamp mechanism and a two-level biometric verification technique based on offline and online features, the AR can deal with the aging of the biometric feature by forcing a re-enrollment of the user after expiry, making use of the ES infrastructure. We describe some attack scenarios and illustrate the watermark embedding, retrieval, and dispute protocols, analyzing their requirements, advantages, and disadvantages in relation to security requirements.
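A sketch of the signing step of such a protocol, assuming an Ed25519 signature (the paper does not specify the signature algorithm); the record layout and field sizes below are hypothetical.

```python
# Requires the 'cryptography' package (pip install cryptography).
import hashlib
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

tcpa_key = Ed25519PrivateKey.generate()  # the TCPA's private signing key

def make_authentication_record(biometric_hash: bytes, document: bytes,
                               validity_days: int = 365):
    """Build and sign an authentication record (AR): biometric hash,
    validity timestamp, and document hash, signed by the TCPA."""
    expiry = int(time.time()) + validity_days * 86_400
    ar = biometric_hash + expiry.to_bytes(8, "big") + hashlib.sha256(document).digest()
    return ar, tcpa_key.sign(ar)

ar, sig = make_authentication_record(b"\x01" * 32, b"scanned ID document bytes")
# A Verification Station retrieves the AR from the watermark and checks it;
# verify() raises InvalidSignature if the AR was tampered with.
tcpa_key.public_key().verify(sig, ar)
print("authentication record verified")
```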
RELAP5-3D Resolution of Known Restart/Backup Issues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mesina, George L.; Anderson, Nolan A.
2014-12-01
The state-of-the-art nuclear reactor system safety analysis computer program developed at the Idaho National Laboratory (INL), RELAP5-3D, continues to adapt to changes in computer hardware and software and to develop to meet the ever-expanding needs of the nuclear industry. To remain at the forefront, code testing must evolve with both code and industry developments, and it must work correctly. To best ensure this, the processes of Software Verification and Validation (V&V) are applied. Verification compares coding against its documented algorithms and equations, and compares its calculations against analytical solutions and the method of manufactured solutions. A form of this, sequential verification, checks code specifications against coding only when originally written, then applies regression testing, which compares code calculations between consecutive updates or versions on a set of test cases to check that the performance does not change. A sequential verification testing system was specially constructed for RELAP5-3D to both detect errors with extreme accuracy and cover all nuclear-plant-relevant code features. Detection is provided through a "verification file" that records double-precision sums of key variables. Coverage is provided by a test suite of input decks that exercise the code features and capabilities necessary to model a nuclear power plant. A matrix of test features and short-running cases that exercise them is presented. This testing system is used to test base cases (called null testing) as well as restart and backup cases. It can test RELAP5-3D performance in both standalone and coupled (through PVM to other codes) runs. Application of verification testing revealed numerous restart and backup issues in both standalone and coupled modes. This document reports the resolution of these issues.
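A minimal sketch of the "verification file" idea described above: double-precision sums of key variables are recorded and later compared between code versions. The file format and the exact-equality tolerance policy are illustrative assumptions, not RELAP5-3D's implementation.

```python
import json
import numpy as np

def write_verification_file(path, fields):
    """Record double-precision sums of key solution variables."""
    sums = {k: float(np.sum(v, dtype=np.float64)) for k, v in fields.items()}
    with open(path, "w") as f:
        json.dump(sums, f, indent=2)

def compare_verification_files(old_path, new_path):
    """Return variables whose sums differ; an empty dict means no regression.
    Exact equality is the strictest check (bit-identical behavior)."""
    old, new = (json.load(open(p)) for p in (old_path, new_path))
    return {k: (old[k], new[k]) for k in old if old[k] != new[k]}

# Example: sums over a pressure field from two consecutive code versions.
write_verification_file("v1.json", {"pressure": np.full(10, 101325.0)})
write_verification_file("v2.json", {"pressure": np.full(10, 101325.0)})
print(compare_verification_files("v1.json", "v2.json"))  # {} -> match
```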
SIVEH: numerical computing simulation of wireless energy-harvesting sensor nodes.
Sanchez, Antonio; Blanc, Sara; Climent, Salvador; Yuste, Pedro; Ors, Rafael
2013-09-04
The paper presents a numerical energy harvesting model for sensor nodes, SIVEH (Simulator I-V for EH), based on I-V hardware tracking. I-V tracking is demonstrated to be more accurate than traditional energy modeling techniques when some of the components present different power dissipation at either different operating voltages or drawn currents. SIVEH numerical computing allows fast simulation of long periods of time (days, weeks, months, or years) using real solar radiation curves. Moreover, SIVEH modeling has been enhanced with dynamic adjustment of the sleep time rate, while seeking energy-neutral operation. This paper presents the model description, a functional verification, and a critical comparison with the classic energy approach.
SIVEH: Numerical Computing Simulation of Wireless Energy-Harvesting Sensor Nodes
Sanchez, Antonio; Blanc, Sara; Climent, Salvador; Yuste, Pedro; Ors, Rafael
2013-01-01
The paper presents a numerical energy harvesting model for sensor nodes, SIVEH (Simulator I–V for EH), based on I–V hardware tracking. I–V tracking is demonstrated to be more accurate than traditional energy modeling techniques when some of the components present different power dissipation at either different operating voltages or drawn currents. SIVEH numerical computing allows fast simulation of long periods of time (days, weeks, months, or years) using real solar radiation curves. Moreover, SIVEH modeling has been enhanced with dynamic adjustment of the sleep time rate, while seeking energy-neutral operation. This paper presents the model description, a functional verification, and a critical comparison with the classic energy approach. PMID:24008287
Background feature descriptor for offline handwritten numeral recognition
NASA Astrophysics Data System (ADS)
Ming, Delie; Wang, Hao; Tian, Tian; Jie, Feiran; Lei, Bo
2011-11-01
This paper puts forward an offline handwritten numeral recognition method based on a background structural descriptor (a sixteen-value numerical background expression). By encoding the background pixels of the image according to a certain rule, 16 different eigenvalues are generated that reflect the background condition of every digit and hence the structural features of the digits. Through pattern-language description of images by these features, automatic segmentation of overlapping digits and numeral recognition can be realized. The method is characterized by strong resistance to deformation, high recognition speed, and easy implementation. Finally, the experimental results and conclusions are presented. The results of recognizing datasets from various practical application fields show that this method achieves a good recognition effect.
Verification of NWP Cloud Properties using A-Train Satellite Observations
NASA Astrophysics Data System (ADS)
Kucera, P. A.; Weeks, C.; Wolff, C.; Bullock, R.; Brown, B.
2011-12-01
Recently, the NCAR Model Evaluation Tools (MET) has been enhanced to incorporate satellite observations for the verification of Numerical Weather Prediction (NWP) cloud products. We have developed tools that match fields spatially (both in the vertical and horizontal dimensions) to compare NWP products with satellite observations. These matched fields provide diagnostic evaluation of cloud macro attributes such as vertical distribution of clouds, cloud top height, and the spatial and seasonal distribution of cloud fields. For this research study, we have focused on using CloudSat, CALIPSO, and MODIS observations to evaluate cloud fields for a variety of NWP fields and derived products. We have selected cases ranging from large, mid-latitude synoptic systems to well-organized tropical cyclones. For each case, we matched the observed cloud field with gridded model and/or derived product fields. CloudSat and CALIPSO observations and model fields were matched and compared in the vertical along the orbit track. MODIS data and model fields were matched and compared in the horizontal. We then use MET to compute the verification statistics to quantify the performance of the models in representing the cloud fields. In this presentation we will give a summary of our comparison and show verification results for both synoptic and tropical cyclone cases.
Development of analysis technique to predict the material behavior of blowing agent
NASA Astrophysics Data System (ADS)
Hwang, Ji Hoon; Lee, Seonggi; Hwang, So Young; Kim, Naksoo
2014-11-01
In order to numerically simulate the foaming behavior of a mastic sealer containing a blowing agent, foaming and driving-force models are needed that incorporate the foaming characteristics. An elastic stress model is also required to represent the material behavior of the co-existing liquid and cured-polymer phases. It is important to determine thermal properties such as thermal conductivity and specific heat, because the foaming behavior is heavily influenced by temperature change. In this study, three models are proposed to explain the foaming process and the material behavior during and after the process. To obtain the material parameters in each model, the following experiments and numerical simulations were performed: a thermal test, a simple shear test, and a foaming test. Error functions are defined as the differences between the experimental measurements and the numerical simulation results, and the parameters are then determined by minimizing the error functions. To ensure the validity of the obtained parameters, a confirmation simulation for each model was conducted by applying the determined parameters. Cross-verification was performed by measuring the foaming/shrinkage force, and its results tended to follow the experimental results. Interestingly, it was possible to estimate the micro-deformation occurring in an automobile roof surface by applying the proposed model to an oven-process analysis. The developed analysis technique will contribute to designs with minimized micro-deformation.
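The parameter identification step (minimizing an error function between measurement and simulation) can be sketched as a least-squares fit; the foaming-force model form and parameter names below are hypothetical, chosen only to illustrate the procedure.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical foaming-force model: F(t; k, tau) = k * (1 - exp(-t / tau)).
def model(params, t):
    k, tau = params
    return k * (1.0 - np.exp(-t / tau))

t_meas = np.linspace(0.0, 60.0, 13)  # measurement times, s
F_true = model((2.5, 12.0), t_meas)  # pretend these came from the foaming test
F_meas = F_true + 0.05 * np.random.default_rng(0).normal(size=t_meas.size)

# The error function is the difference between measurement and simulation;
# the material parameters are identified by minimizing it.
fit = least_squares(lambda p: model(p, t_meas) - F_meas, x0=(1.0, 5.0))
print("identified parameters (k, tau):", fit.x)
```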
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-08-24
This study presents a numerical investigation of using the Jacobian-free Newton–Krylov (JFNK) method to solve the two-phase flow four-equation drift flux model with realistic constitutive correlations ('closure models'). The drift flux model is based on the work of Ishii and his collaborators. Additional constitutive correlations for vertical channel flow, such as two-phase flow pressure drop, flow regime map, wall boiling, and interfacial heat transfer models, were taken from the RELAP5-3D Code Manual and included to complete the model. The staggered-grid finite volume method and the fully implicit backward Euler method were used for the spatial discretization and time integration schemes, respectively. The Jacobian-free Newton–Krylov method shows no difficulty in solving the two-phase flow drift flux model with a discrete flow regime map. In addition to the Jacobian-free approach, the preconditioning matrix is obtained by using the default finite-differencing method provided in the PETSc package, and consequently the labor-intensive implementation of a complex analytical Jacobian matrix is avoided. Extensive and successful numerical verification and validation have been performed to prove the correct implementation of the models and methods. Code-to-code comparison with RELAP5-3D has further demonstrated the successful implementation of the drift flux model.
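A minimal illustration of the Jacobian-free idea on a small nonlinear boundary-value problem: the Krylov solver needs only residual evaluations, so no Jacobian matrix is ever assembled. The model problem stands in for the drift flux residual, which is far more elaborate.

```python
import numpy as np
from scipy.optimize import newton_krylov

# JFNK on a small nonlinear boundary-value problem, u'' = exp(u) with
# u = 0 at both ends, discretized on n interior nodes.
n = 64
h = 1.0 / (n + 1)

def residual(u):
    r = np.empty_like(u)
    r[0] = (u[1] - 2.0 * u[0]) / h**2 - np.exp(u[0])      # left boundary u = 0
    r[-1] = (u[-2] - 2.0 * u[-1]) / h**2 - np.exp(u[-1])  # right boundary u = 0
    r[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2 - np.exp(u[1:-1])
    return r

# The Krylov solver needs only residual evaluations: Jacobian-vector products
# are approximated internally by finite differences, so no Jacobian is built.
u = newton_krylov(residual, np.zeros(n), f_tol=1e-10)
print("max |residual| at solution:", np.abs(residual(u)).max())
```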
NASA Astrophysics Data System (ADS)
Kawamori, E.; Igami, H.
2017-11-01
A diagnostic technique for detecting the wave numbers of electron density fluctuations at electron gyro-scales in the electron cyclotron frequency range is proposed, and the validity of the idea is checked by means of a particle-in-cell (PIC) numerical simulation. The technique is a modified version of the scattering technique invented by Novik et al. [Plasma Phys. Controlled Fusion 36, 357-381 (1994)] and Gusakov et al. [Plasma Phys. Controlled Fusion 41, 899-912 (1999)]. The novel method adopts forward scattering of injected extraordinary probe waves at the upper hybrid resonance layer instead of the backward scattering adopted by the original method, enabling the measurement of the wave numbers of fine-scale density fluctuations in the electron cyclotron frequency band by means of phase measurement of the scattered waves. The verification simulation with the PIC method shows that the technique has the potential to be applied to the detection of electron gyro-scale fluctuations in laboratory plasmas if the upper hybrid resonance layer is accessible to the probe wave. The technique is a suitable means to detect electron Bernstein waves excited via linear mode conversion from electromagnetic waves in torus plasma experiments. The numerical simulations also reveal some problems that remain to be resolved, including the influence of nonlinear processes such as parametric decay instability of the probe wave in the scattering process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talley, Darren G.
2017-04-01
This report describes the work and results of the verification and validation (V&V) of the version 1.0 release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, the equation of motion for fuel element thermal expansion, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This V&V effort was intended to confirm that the code shows good agreement between simulation and actual ACRR operations.
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.; Nwadike, E. V.
1982-01-01
The six-volume report describes the theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to the mean water depth, e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth (e.g., natural or man-made inland lakes) because surface elevation has been removed as a parameter.
NASA Astrophysics Data System (ADS)
Ringbom, A.
2010-12-01
A detailed knowledge of both the spatial and isotopic distribution of anthropogenic radioxenon is essential in investigations of the performance of the radioxenon part of the IMS, as well as in the development of techniques to discriminate radioxenon signatures of a nuclear explosion from other sources. Furthermore, the production processes in the facilities causing the radioxenon background have to be understood and be compatible with simulations. In this work, several aspects of the observed atmospheric radioxenon background are investigated, including the global distribution as well as the current understanding of the observed isotopic ratios. Analyzed radioxenon data from the IMS, as well as from other measurement stations, are used to create an up-to-date description of the global radioxenon background, including all four CTBT-relevant xenon isotopes (133Xe, 131mXe, 133mXe, and 135Xe). In addition, measured isotopic ratios will be compared to simulations of neutron-induced fission of 235U, and the uncertainties will be discussed. Finally, the impact of the radioxenon background on the detection capability of the IMS will be investigated. This work is a continuation of studies [1,2] that were presented at the International Scientific Studies conference held in Vienna in 2009. [1] A. Ringbom, et al., "Characterization of the global distribution of atmospheric radioxenons", International Scientific Studies Conference on CTBT Verification, 10-12 June 2009. [2] R. D'Amours and A. Ringbom, "A study on the global detection capability of IMS for all CTBT relevant xenon isotopes", International Scientific Studies Conference on CTBT Verification, 10-12 June 2009.
32 CFR Attachment B to Subpart B... - Standard B-Single Scope Background Investigation (SSBI)
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Employment: Verification of all employments for the past seven years; personal interviews of sources... most recent or most significant claimed attendance, degree, or diploma. Interviews of appropriate... of the subject and collectively span at least the last seven years. (9) Former Spouse: An interview...
Faster and more accurate transport procedures for HZETRN
NASA Astrophysics Data System (ADS)
Slaba, T. C.; Blattnig, S. R.; Badavi, F. F.
2010-12-01
The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.
Faster and more accurate transport procedures for HZETRN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slaba, T.C., E-mail: Tony.C.Slaba@nasa.go; Blattnig, S.R., E-mail: Steve.R.Blattnig@nasa.go; Badavi, F.F., E-mail: Francis.F.Badavi@nasa.go
The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.
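The convergence study described above refines discretization parameters and observes the error trend. The following sketch demonstrates the procedure on a scalar attenuation equation with a first-order marching scheme; the cross-section value and depth are illustrative, not HZETRN's.

```python
import numpy as np

# Observed order of accuracy from step-size refinement: march a scalar
# attenuation equation dphi/dx = -sigma * phi with a first-order scheme.
sigma, x_end, phi0 = 0.35, 10.0, 1.0  # illustrative values

def march(h):
    phi = phi0
    for _ in range(int(round(x_end / h))):
        phi -= h * sigma * phi        # forward Euler marching step
    return phi

exact = phi0 * np.exp(-sigma * x_end)
errors = [abs(march(h) - exact) for h in (0.5, 0.25, 0.125)]
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(2)]
print("errors:", errors)
print("observed order (should approach 1):", orders)
```

Halving the step size should halve the error for a first-order scheme; a violated convergence criterion shows up as an observed order well below the nominal one.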
Kim, Byeong Hak; Kim, Min Young; Chae, You Seong
2017-01-01
Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC. PMID:29280970
Kim, Byeong Hak; Kim, Min Young; Chae, You Seong
2017-12-27
Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC.
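A minimal sketch of the RPCA step used in such filtering, via principal component pursuit with augmented Lagrange multipliers: a stack of frames is split into a low-rank background plus a sparse component. The demo data and parameters are illustrative, not BRANF's.

```python
import numpy as np

def rpca_pcp(M, n_iter=500, tol=1e-7):
    """Principal component pursuit via augmented Lagrange multipliers:
    M = L (low-rank background) + S (sparse outliers/targets)."""
    lam = 1.0 / np.sqrt(max(M.shape))
    norm_two = np.linalg.norm(M, 2)
    Y = M / max(norm_two, np.abs(M).max() / lam)  # dual variable
    mu = 1.25 / norm_two
    L, S = np.zeros_like(M), np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt            # SV shrinkage
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)  # soft threshold
        Z = M - L - S
        Y = Y + mu * Z
        if np.linalg.norm(Z) <= tol * np.linalg.norm(M):
            break
    return L, S

# Demo: columns are vectorized frames of a static scene plus sparse noise.
rng = np.random.default_rng(0)
frames = np.outer(rng.normal(size=400), np.ones(30))  # rank-1 "background"
frames[rng.random(frames.shape) < 0.02] += 8.0        # sparse outliers
L, S = rpca_pcp(frames)
print("recovered background rank:", np.linalg.matrix_rank(L, tol=1e-6))
```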
NASA Astrophysics Data System (ADS)
Roed-Larsen, Trygve; Flach, Todd
The purpose of this chapter is to provide a review of existing national and international requirements for the verification of greenhouse gas reductions and the associated accreditation of independent verifiers. The credibility of results claimed to reduce or remove anthropogenic emissions of greenhouse gases (GHG) is of utmost importance for the success of emerging schemes to reduce such emissions. Requirements include transparency, accuracy, consistency, and completeness of the GHG data. The many independent verification processes that have developed recently now make up a quite elaborate tool kit of best practices. The UN Framework Convention on Climate Change and the Kyoto Protocol specifications for project mechanisms initiated this work, but other national and international actors also work intensely on these issues. One initiative gaining wide application is that taken by the World Business Council for Sustainable Development with the World Resources Institute to develop a "GHG Protocol" to assist companies in arranging for auditable monitoring and reporting processes of their GHG activities. A set of new international standards developed by the International Organization for Standardization (ISO) provides specifications for the quantification, monitoring, and reporting of company entity and project-based activities. The ISO is also developing specifications for recognizing independent GHG verifiers. This chapter covers this background with the intent of providing a common understanding of the efforts undertaken in different parts of the world to secure the reliability of GHG emission reduction and removal activities. These verification schemes may provide valuable input to current efforts to secure a comprehensive, trustworthy, and robust framework for verification of CO2 capture, transport, and storage activities.
The Heliospheric Cataloguing, Analysis and Techniques Service (HELCATS) project
NASA Astrophysics Data System (ADS)
Barnes, D.; Harrison, R. A.; Davies, J. A.; Perry, C. H.; Moestl, C.; Rouillard, A.; Bothmer, V.; Rodriguez, L.; Eastwood, J. P.; Kilpua, E.; Gallagher, P.; Odstrcil, D.
2017-12-01
Understanding solar wind evolution is fundamental to advancing our knowledge of energy and mass transport in the solar system, whilst also being crucial to space weather and its prediction. The advent of truly wide-angle heliospheric imaging has revolutionised the study of solar wind evolution, by enabling direct and continuous observation of both transient and background components of the solar wind as they propagate from the Sun to 1 AU and beyond. The recently completed, EU-funded FP7 Heliospheric Cataloguing, Analysis and Techniques Service (HELCATS) project (1st May 2014 - 30th April 2017) combined European expertise in heliospheric imaging, built up over the last decade in particular through leadership of the Heliospheric Imager (HI) instruments aboard NASA's STEREO mission, with expertise in solar and coronal imaging as well as the interpretation of in-situ and radio diagnostic measurements of solar wind phenomena. HELCATS involved: (1) the cataloguing of transient (coronal mass ejections) and background (stream/corotating interaction regions) solar wind structures observed by the STEREO/HI instruments, including estimates of their kinematic properties based on a variety of modelling techniques; (2) the verification of these kinematic properties through comparison with solar source observations and in-situ measurements at multiple points throughout the heliosphere; (3) the assessment of the potential for initialising numerical models based on the derived kinematic properties of transient and background solar wind components; and (4) the assessment of the complementarity of radio observations (Type II radio bursts and interplanetary scintillation) in the detection and analysis of heliospheric structure in combination with heliospheric imaging observations. In this presentation, we provide an overview of the HELCATS project emphasising, in particular, the principal achievements and legacy of this unprecedented project.
Wave Number Selection for Incompressible Parallel Jet Flows Periodic in Space
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
1997-01-01
The temporal instability of a spatially periodic parallel flow of an incompressible inviscid fluid is studied numerically for various jet velocity profiles using Floquet analysis. The transition matrix at the end of a period is evaluated by direct numerical integration. For verification, a method based on approximating a continuous function by a series of step functions was used. Unstable solutions were found only over a limited range of wave numbers and have a band-type structure. The results obtained are analogous to the behavior observed in systems exhibiting complexity at the edge of order and chaos.
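The Floquet procedure described above (building the transition matrix over one period by direct integration and examining its eigenvalues) can be sketched on the Mathieu equation; the equation and coefficients are a stand-in for the jet stability problem.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Floquet analysis of the Mathieu equation x'' + (a + 2 q cos 2t) x = 0:
# integrate the unit basis vectors over one period T to get the transition
# (monodromy) matrix, whose eigenvalues are the Floquet multipliers.
a, q, T = 1.0, 0.3, np.pi

def rhs(t, y):
    return [y[1], -(a + 2.0 * q * np.cos(2.0 * t)) * y[0]]

cols = [solve_ivp(rhs, (0.0, T), e, rtol=1e-10, atol=1e-12).y[:, -1]
        for e in ([1.0, 0.0], [0.0, 1.0])]
M = np.array(cols).T                  # monodromy (transition) matrix
mults = np.linalg.eigvals(M)          # Floquet multipliers
print("multipliers:", mults)
print("unstable:", bool(np.any(np.abs(mults) > 1.0 + 1e-8)))
```

A multiplier outside the unit circle signals temporal instability; sweeping a parameter (here a or q, in the paper the wave number) maps out the banded unstable regions.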
Accurate green water loads calculation using naval hydro pack
NASA Astrophysics Data System (ADS)
Jasak, H.; Gatin, I.; Vukčević, V.
2017-12-01
An extensive verification and validation of the finite-volume-based CFD software Naval Hydro, built on foam-extend, is presented in this paper for green water loads. A two-phase numerical model with advanced methods for treating the free surface is employed. Pressure loads on the horizontal deck of a Floating Production Storage and Offloading (FPSO) vessel model are compared to the experimental results from [1] for three incident regular waves. Pressure peaks and time integrals of pressure are measured at ten different locations on the deck for each case. Pressure peaks and integrals are evaluated as average values over the measured incident wave periods, where the periodic uncertainty is assessed for both numerical and experimental results. A spatial and temporal discretization refinement study is performed, providing numerical discretization uncertainties.
A calibration method for patient specific IMRT QA using a single therapy verification film
Shukla, Arvind Kumar; Oinam, Arun S.; Kumar, Sanjeev; Sandhu, I.S.; Sharma, S.C.
2013-01-01
Aim: The aim of the present study is to develop and verify the single-film calibration procedure used in intensity-modulated radiation therapy (IMRT) quality assurance. Background: Radiographic films have been regularly used in the routine commissioning of treatment modalities and the verification of treatment planning systems (TPS). Radiation dosimetry based on radiographic films is able to give absolute two-dimensional dose distributions and is preferred for IMRT quality assurance; a single therapy verification film gives a quick and reliable method for IMRT verification. Materials and methods: A single extended dose rate (EDR 2) film was used to generate the sensitometric curve of film optical density versus radiation dose. The EDR 2 film was exposed with nine 6 cm × 6 cm fields of a 6 MV photon beam obtained from a medical linear accelerator at 5-cm depth in a solid water phantom. The nine regions of the single film were exposed with radiation doses ranging from 10 to 362 cGy. The actual dose measurements inside the field regions were performed using a 0.6 cm3 ionization chamber. The exposed film was processed after irradiation, scanned using a VIDAR film scanner, and the optical density was noted for each region. Ten IMRT plans of head and neck carcinoma were used for verification with a dynamic IMRT technique and evaluated using the gamma index method against the TPS-calculated dose distribution. Results: A sensitometric curve was generated using a single film exposed in nine field regions to check quantitative dose verification of IMRT treatments. The radiation scatter factor was observed to decrease exponentially with increasing distance from the centre of each field region. The IMRT plans verified against the calibration curve using the gamma index method were found to be within the acceptance criteria. Conclusion: The single-film method proved superior to the traditional calibration method and provides a fast daily film calibration for highly accurate IMRT verification. PMID:24416558
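The single-film sensitometric calibration can be sketched as a fit of dose versus optical density over the nine exposed regions; the OD values and the polynomial form below are illustrative assumptions, not the study's measured data.

```python
import numpy as np

# Illustrative nine-point calibration: dose (cGy) vs. net optical density
# from the nine regions of one film (the OD numbers are made up).
dose = np.array([10, 25, 50, 90, 140, 190, 250, 300, 362], dtype=float)
od = np.array([0.08, 0.19, 0.36, 0.61, 0.89, 1.12, 1.38, 1.57, 1.78])

# Fit a third-order polynomial dose = f(OD), a common sensitometric form,
# then use it to convert scanned OD values back to dose.
od_to_dose = np.poly1d(np.polyfit(od, dose, deg=3))
print(f"dose at OD = 1.0: {od_to_dose(1.0):.1f} cGy")
```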
A comparative verification of high resolution precipitation forecasts using model output statistics
NASA Astrophysics Data System (ADS)
van der Plas, Emiel; Schmeits, Maurice; Hooijman, Nicolien; Kok, Kees
2017-04-01
Verification of localized events such as precipitation has become even more challenging with the advent of high-resolution mesoscale numerical weather prediction (NWP). The realism of a forecast suggests that it should compare well against precipitation radar imagery of similar resolution, both spatially and temporally. Spatial verification methods solve some of the representativity issues to which point verification gives rise. In this study, a verification strategy based on model output statistics is applied that aims to address both the double-penalty and resolution effects that are inherent to comparisons of NWP models with different resolutions. Using predictors based on spatial precipitation patterns around a set of stations, an extended logistic regression (ELR) equation is deduced, leading to a probability forecast distribution of precipitation for each NWP model, analysis, and lead time. The ELR equations are derived for predictands based on areal calibrated radar precipitation and SYNOP observations. The aim is to extract maximum information from a series of precipitation forecasts, as a trained forecaster would. The method is applied to the non-hydrostatic model Harmonie (2.5 km resolution), Hirlam (11 km resolution), and the ECMWF model (16 km resolution), overall yielding similar Brier skill scores for the three post-processed models, but larger differences for individual lead times. In addition, the Fractions Skill Score is computed using the three deterministic forecasts, showing somewhat better skill for the Harmonie model. In other words, despite the realism of Harmonie precipitation forecasts, they perform only similarly to, or somewhat better than, precipitation forecasts from the two lower-resolution models, at least in the Netherlands.
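A sketch of extended logistic regression in the sense used here: a single equation yields threshold probabilities for all thresholds by including a transformed threshold as an extra predictor. The synthetic data, link, and square-root transform are illustrative choices, not the study's configuration.

```python
import numpy as np
from scipy.optimize import minimize

# Extended logistic regression: one equation gives the probability of
# non-exceeding any threshold q, P(y <= q | x) = sigmoid(a + b*x + c*sqrt(q)).
rng = np.random.default_rng(0)
x = rng.gamma(2.0, 2.0, size=500)      # predictor, e.g. area-mean model precip
y = rng.gamma(2.0, x / 2.0 + 0.1)      # predictand, e.g. observed precip
thresholds = np.array([0.1, 1.0, 5.0, 10.0])

# One row per (case, threshold) with the binary outcome y <= q.
X = np.repeat(x, thresholds.size)
Q = np.tile(thresholds, x.size)
Z = (np.repeat(y, thresholds.size) <= Q).astype(float)

def nll(p):  # negative log-likelihood of the ELR model
    eta = p[0] + p[1] * X + p[2] * np.sqrt(Q)
    prob = np.clip(1.0 / (1.0 + np.exp(-eta)), 1e-12, 1.0 - 1e-12)
    return -np.sum(Z * np.log(prob) + (1.0 - Z) * np.log(1.0 - prob))

fit = minimize(nll, x0=np.zeros(3), method="BFGS")
print("ELR coefficients (a, b, c):", fit.x)
```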
NASA Astrophysics Data System (ADS)
Lin, Y. Q.; Ren, W. X.; Fang, S. E.
2011-11-01
Although most vibration-based damage detection methods can achieve satisfactory verification on analytical or numerical structures, most of them encounter problems when applied to real-world structures under varying environments. Damage detection methods that directly extract damage features from periodically sampled dynamic time-history response measurements are desirable, but relevant research and field verification are still lacking. In this second part of a two-part paper, the robustness and performance of the statistics-based damage index proposed in the first part, which uses the forward innovation model from stochastic subspace identification of a vibrating structure, are investigated on two prestressed reinforced concrete (RC) beams tested in the laboratory and a full-scale RC arch bridge tested in the field under varying environments. The experimental verification focuses on temperature effects. It is demonstrated that the proposed statistics-based damage index is insensitive to temperature variations but sensitive to structural deterioration or state alteration. This makes it possible to detect structural damage in real-scale structures experiencing ambient excitations and varying environmental conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendt, Fabian F; Yu, Yi-Hsiang; Nielsen, Kim
This is the first joint reference paper for the Ocean Energy Systems (OES) Task 10 Wave Energy Converter modeling verification and validation group. The group is established under the OES Energy Technology Network program under the International Energy Agency. OES was founded in 2001, and Task 10 was proposed by Bob Thresher (National Renewable Energy Laboratory) in 2015 and approved by the OES Executive Committee (EXCO) in 2016. The kickoff workshop took place in September 2016, wherein the initial baseline task was defined. Experience from similar offshore wind validation/verification projects (OC3-OC5, conducted within the International Energy Agency Wind Task 30) [1], [2] showed that a simple test case would help the initial cooperation to present results in a comparable way. A heaving sphere was chosen as the first test case. The team of project participants simulated different numerical experiments, such as heave decay tests and regular and irregular wave cases. The simulation results are presented and discussed in this paper.
Hiraishi, Kunihiko
2014-01-01
One of the significant topics in systems biology is to develop control theory of gene regulatory networks (GRNs). In typical control of GRNs, the expression of some genes is inhibited (activated) by manipulating external stimuli and the expression of other genes. Control theory of GRNs is expected to find application in gene therapy technologies in the future. In this paper, a control method using a Boolean network (BN) is studied. A BN is widely used as a model of GRNs, and gene expression is expressed by a binary value (ON or OFF). In particular, a context-sensitive probabilistic Boolean network (CS-PBN), one of the extended models of BNs, is used. For CS-PBNs, the verification problem and the optimal control problem are considered. For the verification problem, a solution method using the probabilistic model checker PRISM is proposed. For the optimal control problem, a solution method using polynomial optimization is proposed. Finally, a numerical example on the WNT5A network, which is related to melanoma, is presented. The proposed methods provide useful tools for control theory of GRNs. PMID:24587766
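To make the CS-PBN concept concrete, here is a minimal simulation sketch: the network carries one of several Boolean-rule contexts and switches context at random at each step. The three-gene rules and the switching probability are hypothetical illustrations, not the WNT5A network or the paper's model.

```python
# Illustrative sketch of a context-sensitive probabilistic Boolean network (CS-PBN).
# Two hypothetical contexts over three genes; the network switches context with
# probability q per step, then updates all genes synchronously.
import random

random.seed(1)
contexts = [
    {0: lambda s: s[1] and not s[2], 1: lambda s: s[0] or s[2], 2: lambda s: not s[0]},
    {0: lambda s: s[1] or s[2],      1: lambda s: not s[2],     2: lambda s: s[0] and s[1]},
]
q = 0.1              # context-switching probability
state = (1, 0, 1)    # initial gene expression (ON/OFF)
ctx = 0
for step in range(10):
    if random.random() < q:                      # switch to a randomly chosen context
        ctx = random.randrange(len(contexts))
    state = tuple(int(contexts[ctx][g](state)) for g in range(3))
    print(step, ctx, state)
```

Verification in the paper's sense asks questions such as "does the network reach an undesirable state with probability above a bound", which a model checker like PRISM answers over exactly this kind of stochastic transition system.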
NASA Technical Reports Server (NTRS)
Pierzga, M. J.
1981-01-01
The experimental verification of an inviscid, incompressible through-flow analysis method is presented. The primary component of this method is an axisymmetric streamline curvature technique, which is used to compute the hub-to-tip flow field of a given turbomachine. To analyze the flow field in the blade-to-blade plane of the machine, the potential flow solution of an infinite cascade of airfoils is also computed using a source model technique. To verify the accuracy of this analysis method, an extensive experimental investigation was conducted using an axial-flow research fan. Detailed surveys of the blade-free regions of the machine, along with intra-blade surveys using rotating pressure-sensing probes and blade-surface static pressure taps, provide a one-to-one relationship between measured and predicted data. The results of this investigation indicate the ability of this inviscid analysis method to predict the design flow field of the axial-flow fan test rotor to within a few percent of the measured values.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y; Souri, S; Gill, G
Purpose: To statistically determine the optimal tolerance level in the verification of delivered dose against planned dose in an in vivo dosimetry system in radiotherapy. Methods: The LANDAUER MicroSTARii dosimetry system with screened nanoDots (optically stimulated luminescence dosimeters) was used for in vivo dose measurements. Ideally, the measured dose should match the planned dose and fall within a normal distribution. Any deviation from the normal distribution may be deemed a mismatch, and therefore a potential sign of dose misadministration. Randomly mis-positioned nanoDots can yield a continuum background distribution. The percentage difference of the measured dose from its corresponding planned dose (ΔD) can be used to analyze combined data sets for different patients. A model of a Gaussian plus a flat function was used to fit the ΔD distribution. Results: A total of 434 nanoDot measurements for breast cancer patients were collected over a period of three months. The fit yields a Gaussian mean of 2.9% and a standard deviation (SD) of 5.3%. The observed shift of the mean from zero is attributed to machine output bias and calibration of the dosimetry system. A pass interval of −2SD to +2SD was applied and a mismatch background was estimated to be 4.8%. With such a tolerance level, one can expect that 99.99% of patients should pass the verification and at most 0.011% might have a potential dose misadministration that may not be detected after 3 repeated measurements. After implementation, a number of newly started breast cancer patients were monitored and the measured pass rate was consistent with the model prediction. Conclusion: It is feasible to implement an optimal tolerance level that maintains a low limit on potential dose misadministration while keeping a relatively high pass rate in radiotherapy delivery verification.
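A minimal sketch of the described Gaussian-plus-flat fit follows, using synthetic ΔD data in place of the clinical measurements; the sample sizes and parameter guesses are illustrative.

```python
# Sketch of the Gaussian-plus-flat fit to the dose-difference (ΔD) distribution.
# Synthetic data: Gaussian "matches" plus a uniform "mismatch" background.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
dd = np.concatenate([rng.normal(2.9, 5.3, 400), rng.uniform(-30, 30, 34)])
counts, edges = np.histogram(dd, bins=40, range=(-30, 30))
centers = 0.5 * (edges[:-1] + edges[1:])

def model(x, amp, mu, sigma, bkg):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + bkg

popt, _ = curve_fit(model, centers, counts, p0=[30, 0, 5, 1])
amp, mu, sigma, bkg = popt
print(f"mean = {mu:.1f}%, SD = {sigma:.1f}%, flat background = {bkg:.1f} counts/bin")
# A pass band of mu - 2*sigma to mu + 2*sigma then covers about 95.4% of true matches,
# while the fitted flat component estimates the mismatch fraction inside the band.
```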
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-14
... subsequent soil samples showed levels of metals at or below generic residential criteria or background values... 1994-1996 and additional sampling between 1998 and 2007. Area A--Site Entrance: Soil boring samples... verification samples. Additional soil samples were collected from the same location as the previous collection...
Holographic particle size extraction by using Wigner-Ville distribution
NASA Astrophysics Data System (ADS)
Chuamchaitrakool, Porntip; Widjaja, Joewono; Yoshimura, Hiroyuki
2014-06-01
A new method for measuring object size from in-line holograms by using the Wigner-Ville distribution (WVD) is proposed. The proposed method has advantages over conventional numerical reconstruction in that it requires no iterative processing and can extract the object size and position with only a single computation of the WVD. Experimental verification of the proposed method is presented.
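For reference, a discrete Wigner-Ville distribution can be computed as the FFT over lag of the instantaneous autocorrelation. The sketch below, with a chirp test signal, illustrates the transform the method builds on; it is not the authors' implementation, and the signal is an assumption.

```python
# Minimal discrete Wigner-Ville distribution of a 1-D complex signal:
# W[t, k] = FFT over lag tau of x[t + tau] * conj(x[t - tau]).
import numpy as np

def wvd(x):
    n = len(x)
    w = np.zeros((n, n), dtype=complex)
    for t in range(n):
        taumax = min(t, n - 1 - t)
        tau = np.arange(-taumax, taumax + 1)
        kernel = x[t + tau] * np.conj(x[t - tau])
        row = np.zeros(n, dtype=complex)
        row[tau % n] = kernel            # place lags circularly for the FFT
        w[t] = np.fft.fft(row)
    return np.real(w)

# Chirp test signal: the WVD concentrates energy along its instantaneous frequency,
# which is what makes single-pass size/position extraction possible.
t = np.arange(256)
sig = np.exp(1j * 2 * np.pi * (0.05 + 0.0005 * t) * t)
W = wvd(sig)
print(W.shape, W.max())
```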
Markov Chains For Testing Redundant Software
NASA Technical Reports Server (NTRS)
White, Allan L.; Sjogren, Jon A.
1990-01-01
Preliminary design developed for validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. Approach takes into account inertia of controlled system in sense it takes more than one failure of control program to cause controlled system to fail. Verification procedure consists of two steps: experimentation (numerical simulation) and computation, with Markov model for each step.
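A toy Monte Carlo rendering of the system-inertia idea follows: the controlled system fails only after several consecutive control-program failures. The failure probability and inertia threshold are illustrative assumptions, not values from the validation experiment.

```python
# Toy Monte Carlo: a plant with inertia crashes only after `inertia` consecutive
# bad control cycles. Probabilities and thresholds are illustrative assumptions.
import random

random.seed(3)
p_fail = 1e-3       # probability a control cycle produces a bad command
inertia = 3         # consecutive bad cycles needed to crash the plant
trials, cycles = 100_000, 1_000
crashes = 0
for _ in range(trials):
    consecutive = 0
    for _ in range(cycles):
        consecutive = consecutive + 1 if random.random() < p_fail else 0
        if consecutive >= inertia:
            crashes += 1
            break
print(f"estimated P(system failure) ~ {crashes / trials:.2e}")
```

The Markov-model step in the described procedure replaces this brute-force simulation with an analytic computation over the same state space (number of consecutive control failures).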
NASA Astrophysics Data System (ADS)
Vu, Minh Q.; Nguyen, Nga T. T.; Pham, Hien T. T.; Dang, Ngoc T.
2018-03-01
High-altitude platforms (HAPs) are flexible, non-polluting, and cost-effective infrastructures compared to satellite or conventional terrestrial systems. They are being researched and developed widely in Europe, the USA, Japan, Korea, and elsewhere. However, the currently limited data rates and the congestion of the radio frequency (RF) spectrum are problems confronting HAP developers, because most HAPs use RF links to communicate with ground stations (GSs) or with each other. In this paper, we propose an all-optical two-way half-duplex relaying free-space optical (FSO) communication system for HAP-based backhaul networks, which connect the base transceiver station (BTS) to the core network (CN) via a single HAP. Our proposed backhaul solution can be deployed quickly and flexibly for disaster relief and for serving users in both urban environments and remote areas. The key subsystem of the HAP is an optical regenerate-and-forward (ORF) module equipped with an optical hard-limiter (OHL) and an optical XOR gate to perform all-optical processing and help mitigate background noise. In addition, two-way half-duplex relaying is made possible by a network coding scheme. A closed-form expression for the bit error rate (BER) of the proposed system under the effects of path loss, atmospheric turbulence, and noise induced by background light is derived. Numerical results demonstrate the feasibility of the proposed system and are verified by Monte-Carlo (M-C) simulations.
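As an illustration of the Monte-Carlo verification step, the sketch below estimates the BER of an on-off-keyed FSO link over log-normal turbulence. The link parameters and the simple fixed-threshold receiver are assumptions for illustration, not the paper's ORF/OHL system model.

```python
# Hedged sketch: Monte-Carlo BER of an on-off-keyed FSO link over log-normal
# turbulence, the kind of check used to verify a closed-form BER expression.
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
sigma_x = 0.2                        # log-amplitude std dev (weak turbulence)
# Intensity fading h = exp(2X) with E[h] = 1 requires mean(X) = -sigma_x^2.
h = rng.lognormal(-2 * sigma_x**2, 2 * sigma_x, n)
snr0 = 20.0                          # mean electrical SNR (linear)
bits = rng.integers(0, 2, n)
noise = rng.normal(0.0, 1.0, n)
rx = bits * h * np.sqrt(snr0) + noise
decisions = rx > 0.5 * np.sqrt(snr0)     # fixed mid-level decision threshold
ber = np.mean(decisions != bits)
print(f"simulated BER ~ {ber:.2e}")
```

Agreement between such simulated points and the analytical curve over a range of turbulence strengths is what validates the closed-form expression.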
Seismic Safety Of Simple Masonry Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guadagnuolo, Mariateresa; Faella, Giuseppe
2008-07-08
Several masonry buildings comply with the rules for simple buildings provided by seismic codes. For these buildings, explicit safety verifications are not compulsory if specific code rules are fulfilled. It is assumed that fulfilling them ensures suitable seismic behaviour and thus adequate safety under earthquakes. Italian and European seismic codes differ in their requirements for simple masonry buildings, mostly concerning the building typology, the building geometry, and the acceleration at the site. Obviously, a large percentage of the buildings deemed simple by the codes should satisfy the numerical safety verification, so that no confusion or uncertainty arises for the designers who must use the codes. This paper aims at evaluating the seismic response of some simple unreinforced masonry buildings that comply with the provisions of the new Italian seismic code. Two-story buildings with different geometries are analysed, and results from nonlinear static analyses performed by varying the acceleration at the site are presented and discussed. Indications on the congruence between the code rules and the results of numerical analyses performed according to the code itself are supplied; in this context, the results obtained can contribute to improving the seismic code requirements.
Initial verification and validation of RAZORBACK - A research reactor transient analysis code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talley, Darren G.
2015-09-01
This report describes the work and results of the initial verification and validation (V&V) of the beta release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This initial V&V effort was intended to confirm that the code work to date shows good agreement between simulation and actual ACRR operations, indicating that the subsequent V&V effort for the official release of the code will be successful.
Skill of Predicting Heavy Rainfall Over India: Improvement in Recent Years Using UKMO Global Model
NASA Astrophysics Data System (ADS)
Sharma, Kuldeep; Ashrit, Raghavendra; Bhatla, R.; Mitra, A. K.; Iyengar, G. R.; Rajagopal, E. N.
2017-11-01
The quantitative precipitation forecast (QPF) performance for heavy rains is still a challenge, even for the most advanced state-of-the-art high-resolution Numerical Weather Prediction (NWP) modeling systems. This study aims to evaluate the performance of the UK Met Office Unified Model (UKMO) over India for the prediction of high rainfall amounts (>2 and >5 cm/day) during the monsoon period (JJAS) from 2007 to 2015 in short-range forecasts up to Day 3. Among the various modeling upgrades and improvements in the parameterizations during this period, the model horizontal resolution improved from 40 km in 2007 to 17 km in 2015. The skill of short-range rainfall forecasts has improved in the UKMO model in recent years, mainly due to increased horizontal and vertical resolution along with improved physics schemes. Categorical verification carried out using four verification metrics, namely probability of detection (POD), false alarm ratio (FAR), frequency bias (Bias), and Critical Success Index (CSI), indicates that the QPF has improved by >29% and >24% in terms of POD and FAR, respectively. Additionally, verification scores such as EDS (Extreme Dependency Score), EDI (Extremal Dependence Index), and SEDI (Symmetric EDI) are used, with special emphasis on the verification of extreme and rare rainfall events. These scores also show an improvement of 60% (EDS) and >34% (EDI and SEDI) during the period of study, suggesting an improved skill in predicting heavy rains.
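For reference, all seven scores reduce to simple functions of the 2 × 2 contingency table. The sketch below computes them from made-up counts using the standard definitions (the EDS/EDI/SEDI forms follow the usual extreme-event verification literature); the counts are not from this study.

```python
# Contingency-table scores from hits (a), false alarms (b), misses (c),
# and correct negatives (d). Counts below are made-up illustrations.
import math

a, b, c, d = 120, 80, 60, 740
n = a + b + c + d
pod = a / (a + c)                      # probability of detection
far = b / (a + b)                      # false alarm ratio
bias = (a + b) / (a + c)               # frequency bias
csi = a / (a + b + c)                  # critical success index
p = (a + c) / n                        # base rate of the event
f = b / (b + d)                        # false alarm rate (POFD)
eds = 2 * math.log(p) / math.log(a / n) - 1
edi = (math.log(f) - math.log(pod)) / (math.log(f) + math.log(pod))
sedi = ((math.log(f) - math.log(pod) - math.log(1 - f) + math.log(1 - pod))
        / (math.log(f) + math.log(pod) + math.log(1 - f) + math.log(1 - pod)))
print(f"POD={pod:.2f} FAR={far:.2f} Bias={bias:.2f} CSI={csi:.2f}")
print(f"EDS={eds:.2f} EDI={edi:.2f} SEDI={sedi:.2f}")
```

Unlike POD and CSI, the EDS/EDI/SEDI family is designed to remain informative as the event base rate shrinks, which is why it is preferred for rare heavy-rain events.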
NASA Astrophysics Data System (ADS)
Butov, R. A.; Drobyshevsky, N. I.; Moiseenko, E. V.; Tokarev, U. N.
2017-11-01
The verification of the FENIA finite element code on some problems and an example of its application are presented in the paper. The code is being developed for 3D modelling of thermal, mechanical, and hydrodynamical (THM) problems related to the functioning of deep geological repositories. Verification of the code for two analytical problems has been performed. The first is a point heat source with exponentially decreasing heat output; the second is a linear heat source with similar behavior. The analytical solutions were obtained by the authors. These problems were chosen because they reflect the processes influencing the thermal state of a deep geological repository of radioactive waste. Verification was performed on several meshes with different resolutions, and good convergence between the analytical and numerical solutions was achieved. The application of the FENIA code is illustrated by 3D modelling of the thermal state of a prototypic deep geological repository of radioactive waste. The repository is designed for the disposal of radioactive waste in rock at a depth of several hundred meters with no intention of later retrieval. Vitrified radioactive waste is placed in containers, which are emplaced in vertical boreholes. The residual decay heat of the radioactive waste heats the containers, the engineered safety barriers, and the host rock. Maximum temperatures and the corresponding times at which they are established have been determined.
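A sketch of the kind of analytical benchmark described: the temperature rise from a point source with exponentially decaying power in an infinite homogeneous medium, evaluated by quadrature of the conduction Green's function. The material properties, source strength, and half-life below are illustrative assumptions, not FENIA's actual verification parameters.

```python
# Temperature rise from a point heat source Q(t) = Q0 * exp(-lam * t) in an
# infinite medium, via quadrature of the instantaneous point-source solution
# dT = E / (rho_c * (4*pi*alpha*dt)^1.5) * exp(-r^2 / (4*alpha*dt)).
import numpy as np
from scipy.integrate import quad

rho_c = 2.0e6        # volumetric heat capacity, J/(m^3*K)   (assumed)
k = 2.5              # thermal conductivity, W/(m*K)          (assumed)
alpha = k / rho_c    # thermal diffusivity, m^2/s
Q0 = 1000.0          # initial source power, W                (assumed)
lam = np.log(2) / (30 * 365 * 86400)   # 30-year half-life    (assumed)

def temperature(r, t):
    def integrand(tau):
        dt = t - tau
        return (Q0 * np.exp(-lam * tau)
                * np.exp(-r**2 / (4 * alpha * dt))
                / (rho_c * (4 * np.pi * alpha * dt) ** 1.5))
    val, _ = quad(integrand, 0.0, t, limit=200)
    return val

t = 10 * 365 * 86400           # evaluate after 10 years
for r in (1.0, 5.0, 10.0):     # metres from the source
    print(f"r = {r:4.1f} m : dT = {temperature(r, t):8.2f} K")
```

A verification run compares the code's temperatures at sample points against this quadrature on successively refined meshes.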
New generalized Noh solutions for HEDP hydrocode verification
NASA Astrophysics Data System (ADS)
Velikovich, A. L.; Giuliani, J. L.; Zalesak, S. T.; Tangri, V.
2017-10-01
The classic Noh solution describing stagnation of a cold ideal gas in a strong accretion shock wave has been the workhorse of compressible hydrocode verification for over three decades. We describe a number of its generalizations available for HEDP code verification. First, for an ideal gas, we have obtained self-similar solutions that describe adiabatic convergence either of a finite-pressure gas into an empty cavity or of a finite-amplitude sound wave into a uniform resting gas surrounding the center or axis of symmetry. At the moment of collapse such a flow produces a uniform gas whose velocity at each point is constant and directed towards the axis or the center, i. e. the initial condition similar to the classic solution but with a finite pressure of the converging gas. After that, a constant-velocity accretion shock propagates into the incident gas whose pressure and velocity profiles are not flat, in contrast with the classic solution. Second, for an arbitrary equation of state, we demonstrate the existence of self-similar solutions of the Noh problem in cylindrical and spherical geometry. Examples of such solutions with a three-term equation of state that includes cold, thermal ion/lattice, and thermal electron contributions are presented for aluminum and copper. These analytic solutions are compared to our numerical simulation results as an example of their use for code verification. Work supported by the US DOE/NNSA.
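For orientation, the classic Noh benchmark values that a hydrocode must reproduce follow from a few closed-form expressions, shown below for an ideal gas with gamma = 5/3 and unit inflow speed and density. These are the standard textbook results, not the paper's new generalized solutions.

```python
# Classic Noh verification values: cold gas with inflow speed u0 stagnates
# behind an accretion shock of speed D = u0*(gamma-1)/2; the post-shock state
# is uniform. nu = 1, 2, 3 selects planar, cylindrical, or spherical geometry.
gamma, u0, rho0, nu = 5.0 / 3.0, 1.0, 1.0, 3
shock_speed = 0.5 * (gamma - 1.0) * u0
rho_post = rho0 * ((gamma + 1.0) / (gamma - 1.0)) ** nu    # 64 for nu = 3
p_post = 0.5 * rho_post * (gamma - 1.0) * u0 ** 2          # 64/3 for nu = 3

def rho_pre(r, t):
    """Pre-shock density profile, swept up by convergence (exponent nu - 1)."""
    return rho0 * (1.0 + u0 * t / r) ** (nu - 1)

print(f"D = {shock_speed:.3f}, post-shock rho = {rho_post:.1f}, p = {p_post:.2f}")
```

Codes are typically judged on how sharply they capture the density plateau (64 in spherical geometry) and on the spurious "wall heating" dip at the origin.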
Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, Ronald M.
2015-01-01
The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low-wavenumber errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gastelum, Zoe N.
This thesis is the culminating project for my participation in the OECD NEA International School of Nuclear Law. This paper will begin by providing a historical background to current disarmament and denuclearization treaties. It will discuss the current legal framework based on current and historical activities related to denuclearization and nuclear disarmament. It will then propose paths forward for future efforts and describe the necessary legal considerations. Each treaty or agreement will be examined with respect to its requirements for: 1) limitations and implementation; and 2) verification and monitoring. Lessons learned in each of the two areas (limitations and verification) will then be used to construct a proposed path forward at the end of this paper.
Analytical solutions for coagulation and condensation kinetics of composite particles
NASA Astrophysics Data System (ADS)
Piskunov, Vladimir N.
2013-04-01
The processes of composite particle formation from a mixture of different materials are essential for many practical problems: analysis of the consequences of accidental releases into the atmosphere, simulation of precipitation formation in clouds, and description of multi-phase processes in chemical reactors and industrial facilities. Computer codes developed for the numerical simulation of these processes require optimization of the computational methods and verification of the numerical programs. Kinetic equations of composite particle formation are given in this work in a concise (impurity-integrated) form. Coagulation, condensation, and external sources associated with nucleation are taken into account. Analytical solutions were obtained for a number of model cases, and the general laws for the fractional redistribution of impurities were defined. The results can be applied to develop numerical algorithms that considerably reduce the simulation effort, as well as to verify numerical programs for calculating the formation kinetics of composite particles in problems of practical importance.
NASA Astrophysics Data System (ADS)
Kagami, Hiroyuki
2007-05-01
We have proposed and refined a model of the drying process of a polymer solution coated on a flat substrate for flat polymer film fabrication, and have presented the results at Photomask Japan 2002, 2003, and 2004, and at Smart Materials, Nano-, and Micro-Smart Systems 2006, among others. Numerical simulation of the model qualitatively reproduces a typical thickness profile of the polymer film formed after drying, namely a profile in which the edge of the film is thicker and the region just inside the edge bump is thinner. We have also clarified the dependence of the distribution of polymer molecules on a flat substrate on various parameters, based on the analysis of many numerical simulations. We then carried out several kinds of experiments to verify the refined model and reported the results at Photomask Japan 2005 and 2006. We observed some results supporting the refined model, but we could not observe the characteristic valley region next to the edge bump of a polymer film after drying. After several rounds of improved experiments, we concluded that the characteristic region did not appear because water, which evaporates more slowly than organic solvents, was used as the solvent. In this study, we therefore adopted an organic solvent instead of water for the experiments. As a result, the characteristic region mentioned above could be seen, and we could verify the model more accurately. In this paper, we present verification of the model through the improved experiments using organic solvent.
NASA Technical Reports Server (NTRS)
Bandyopadhyay, Alak; Majumdar, Alok
2007-01-01
The present paper describes the verification and validation of a quasi one-dimensional, pressure-based finite volume algorithm, implemented in the Generalized Fluid System Simulation Program (GFSSP), for predicting compressible flow with friction, heat transfer, and area change. The numerical predictions were compared with two classical solutions of compressible flow, i.e., Fanno and Rayleigh flow. Fanno flow provides an analytical solution of compressible flow in a long slender pipe where the incoming subsonic flow can be choked due to friction. On the other hand, Rayleigh flow provides an analytical solution of frictionless compressible flow with heat transfer, where the incoming subsonic flow can be choked at the outlet boundary by heat addition to the control volume. Nonuniform grid distribution improves the accuracy of the numerical predictions. A benchmark numerical solution of compressible flow in a converging-diverging nozzle with friction and heat transfer has been developed to verify GFSSP's numerical predictions. The numerical predictions compare favorably in all cases.
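For reference, the Fanno-flow benchmark reduces to a closed-form relation between the inlet Mach number and the non-dimensional friction length needed to choke the flow. A quick evaluation is sketched below; this is the standard compressible-flow result, not GFSSP code.

```python
# Fanno flow: non-dimensional friction length 4fL*/D required to choke an
# adiabatic, constant-area duct flow of an ideal gas at Mach number M.
import numpy as np

def fanno_4fLstar_D(M, gamma=1.4):
    term1 = (1.0 - M**2) / (gamma * M**2)
    term2 = (gamma + 1.0) / (2.0 * gamma) * np.log(
        (gamma + 1.0) * M**2 / (2.0 + (gamma - 1.0) * M**2))
    return term1 + term2

for M in (0.2, 0.5, 0.8):
    print(f"M = {M:.1f} -> 4fL*/D = {fanno_4fLstar_D(M):.4f}")
# e.g. M = 0.5 gives ~1.0691, matching standard compressible-flow tables.
```

A verification run places the numerical Mach-number profile along the pipe against this relation and checks the choking location.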
NASA Astrophysics Data System (ADS)
Zhou, Tong; Zhao, Jian; He, Yong; Jiang, Bo; Su, Yan
2018-05-01
A novel self-adaptive background current compensation circuit for infrared focal plane arrays is proposed in this paper; it can compensate the background current generated under different conditions. A double-threshold detection strategy is designed to estimate and eliminate the background currents, which significantly reduces the hardware overhead and improves the uniformity among pixels. In addition, the circuit is compatible with various categories of infrared thermo-sensitive materials. Testing results from a 4 × 4 experimental chip show that the proposed circuit achieves high precision, wide applicability, and intelligent (self-adaptive) operation. Tape-out of the 320 × 240 readout circuit, as well as the bonding, encapsulation, and imaging verification of the uncooled infrared focal plane array, have also been completed.
Calculation of far-field scattering from nonspherical particles using a geometrical optics approach
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.
1991-01-01
A numerical method was developed using geometrical optics to predict far-field optical scattering from particles that are symmetric about the optic axis. The diffractive component of scattering is calculated and combined with the reflective and refractive components to give the total scattering pattern. The phase terms of the scattered light are calculated as well. Verification of the method was achieved by assuming a spherical particle and comparing the results to Mie scattering theory. Agreement with the Mie theory was excellent in the forward-scattering direction. However, small-amplitude oscillations near the rainbow regions were not observed using the numerical method. Numerical data from spheroidal particles and hemispherical particles are also presented. The use of hemispherical particles as a calibration standard for intensity-type optical particle-sizing instruments is discussed.
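As an illustration of the diffractive component mentioned above, the sketch below evaluates the Fraunhofer (Airy) term for a sphere, which dominates forward scattering and is where the method agrees closely with Mie theory. The wavelength and particle radius are arbitrary examples, not the study's test cases.

```python
# Fraunhofer diffraction intensity for a sphere of radius a: the Airy pattern
# I(theta) ~ (x^2 * J1(x*sin(theta)) / (x*sin(theta)))^2, with size parameter
# x = 2*pi*a/lambda. This is the diffraction term added to reflection/refraction.
import numpy as np
from scipy.special import j1

wavelength = 0.6328e-6            # He-Ne laser, m (assumed)
a = 25e-6                         # particle radius, m (assumed)
x = 2 * np.pi * a / wavelength    # size parameter
theta = np.radians(np.linspace(0.01, 5.0, 6))
u = x * np.sin(theta)
i_diff = (x**2 * j1(u) / u) ** 2
for th, i in zip(np.degrees(theta), i_diff):
    print(f"theta = {th:4.2f} deg : I_diff ~ {i:.3e}")
```

Because diffraction is insensitive to particle shape at small angles, this term carries most of the agreement with Mie theory in the forward direction, while the geometrical reflection/refraction terms handle larger angles.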
Radiative flow of Carreau liquid in presence of Newtonian heating and chemical reaction
NASA Astrophysics Data System (ADS)
Hayat, T.; Ullah, Ikram; Ahmad, B.; Alsaedi, A.
The objective of this article is to investigate the magnetohydrodynamic (MHD) boundary layer stretched flow of a Carreau fluid in the presence of Newtonian heating. The sheet is presumed permeable. The analysis is carried out in the presence of chemical reaction and thermal radiation. The mathematical formulation is established using the boundary layer approximations. The resultant nonlinear flow problem is computed for convergent solutions. Intervals of convergence are obtained and verified via numerical data and plots. The impact of numerous pertinent variables on the velocity, temperature, and concentration is outlined. Numerical data for the surface drag coefficient, surface heat transfer (local Nusselt number), and mass transfer (local Sherwood number) are computed and inspected. A comparison of the skin friction coefficient in the limiting case is made for verification of the derived solutions.
The 3-D numerical simulation research of vacuum injector for linear induction accelerator
NASA Astrophysics Data System (ADS)
Liu, Dagang; Xie, Mengjun; Tang, Xinbing; Liao, Shuqing
2017-01-01
A simulation method for the voltage in-feed and electron injection of a vacuum injector is given, and verification of the simulated voltage and current is carried out. A numerical simulation of the magnetic field of the solenoid is implemented, and a comparative analysis is conducted between the simulation results and experimental results. A semi-implicit difference algorithm is adopted to suppress the numerical noise, and a parallel acceleration algorithm is used to increase the computation speed. The RMS emittance calculation method based on the beam envelope equations is analyzed. In addition, the simulated RMS emittance results are compared with the experimental data. Finally, the influences of the ferromagnetic rings on the radial and axial magnetic fields of the solenoid, as well as on the beam emittance, are studied.
ERIC Educational Resources Information Center
Sandberg, Chaleece; Sebastian, Rajani; Kiran, Swathi
2012-01-01
Background: The typicality effect is present in neurologically intact populations for natural, ad-hoc, and well-defined categories. Although sparse, there is evidence of typicality effects in persons with chronic stroke aphasia for natural and ad-hoc categories. However, it is unknown exactly what influences the typicality effect in this…
2015-12-01
[Front-matter fragment: table-of-contents and acronym-list entries, including "Verification Tool for Laser Environmental Effects Definition and Reference (LEEDR) Development", "Gap Filling with NWP", a figure caption on AIRS effective cloud cover, and acronym definitions (IR: Infrared; JPL: Jet Propulsion Lab; LEEDR: Laser Environmental Effects Definition and Reference; LIDAR: Light Detection and Ranging; MODIS: Moderate...).]
Alonso-González, P; Albella, P; Neubrech, F; Huck, C; Chen, J; Golmar, F; Casanova, F; Hueso, L E; Pucci, A; Aizpurua, J; Hillenbrand, R
2013-05-17
Theory predicts a distinct spectral shift between the near- and far-field optical response of plasmonic antennas. Here we combine near-field optical microscopy and far-field spectroscopy of individual infrared-resonant nanoantennas to verify experimentally this spectral shift. Numerical calculations corroborate our experimental results. We furthermore discuss the implications of this effect in surface-enhanced infrared spectroscopy.
Telemicrobiology for Mission Support in the Field of Infectious Diseases
2010-04-01
bacterial meningitis, so that important additional verification was lacking. Microscopic diagnoses in the expert laboratory also rarely yield a... With bacterial infections, depending on the country of deployment, unusual resistance behavior of the pathogens will also occur because numerous... missions of the US Forces in recent years, well-documented with respect to epidemiology, the weekly incidence of infectious diseases was always
NASA Astrophysics Data System (ADS)
Maiti, Santanu K.
2014-07-01
The cosine-squared dependence of electronic conductance in a biphenyl molecule, obtained experimentally by Venkataraman et al. [1], is verified theoretically within a tight-binding framework. Using the Green's function formalism, we numerically calculate the two-terminal conductance as a function of the relative twist angle between the molecular rings and find that the results are in good agreement with the experimental observation.
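A toy version of the underlying physics: in a two-site tight-binding model with inter-ring hopping t·cos θ, the off-resonant Landauer transmission computed from the Green's function scales as cos² θ. The parameters below are illustrative, not fitted to the experiment or to the paper's full molecular model.

```python
# Two-site NEGF toy model of a twisted biphenyl junction: inter-ring hopping
# tc = t*cos(theta), wide-band leads with self-energy -i*gamma/2 on each end,
# and Landauer transmission T = Gamma_L * Gamma_R * |G_12|^2 ~ cos^2(theta).
import numpy as np

def transmission(theta, E=0.0, eps=1.0, t=0.5, gamma=0.2):
    tc = t * np.cos(theta)
    sigma = -0.5j * gamma                        # wide-band lead self-energy
    H = np.array([[eps, tc], [tc, eps]], dtype=complex)
    Ginv = E * np.eye(2) - H
    Ginv[0, 0] -= sigma                          # left lead on site 0
    Ginv[1, 1] -= sigma                          # right lead on site 1
    G = np.linalg.inv(Ginv)
    return gamma**2 * abs(G[0, 1])**2

for deg in (0, 30, 60, 90):
    th = np.radians(deg)
    print(f"theta = {deg:2d} deg : T = {transmission(th):.4f}, "
          f"cos^2 = {np.cos(th)**2:.4f}")
```

At θ = 90° the rings decouple (tc = 0) and transmission vanishes, reproducing the qualitative experimental trend.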
ERIC Educational Resources Information Center
Mukala, Patrick; Cerone, Antonio; Turini, Franco
2017-01-01
Free\\Libre Open Source Software (FLOSS) environments are increasingly dubbed as learning environments where practical software engineering skills can be acquired. Numerous studies have extensively investigated how knowledge is acquired in these environments through a collaborative learning model that define a learning process. Such a learning…
Proceedings of the Second NASA Formal Methods Symposium
NASA Technical Reports Server (NTRS)
Munoz, Cesar (Editor)
2010-01-01
This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.
NASA Technical Reports Server (NTRS)
Fishman, Jack; Creilson, John K.; Parker, Peter A.; Ainsworth, Elizabeth A.; Vining, G. Geoffrey; Szarka, John; Booker, Fitzgerald L.; Xu, Xiaojing
2010-01-01
Elevated concentrations of ground-level ozone (O3) are frequently measured over farmland regions in many parts of the world. While numerous experimental studies show that O3 can significantly decrease crop productivity, independent verifications of yield losses at current ambient O3 concentrations in rural locations are sparse. In this study, soybean crop yield data during a 5-year period over the Midwest of the United States were combined with ground and satellite O3 measurements to provide evidence that yield losses on the order of 10% could be estimated through the use of a multiple linear regression model. Yield loss trends based on both conventional ground-based instrumentation and satellite-derived tropospheric O3 measurements were statistically significant and were consistent with results obtained from open-top chamber experiments and an open-air experimental facility (SoyFACE, Soybean Free Air Concentration Enrichment) in central Illinois. Our analysis suggests that such losses are a relatively new phenomenon due to the increase in background tropospheric O3 levels over recent decades. Extrapolation of these findings supports previous studies that estimate the global economic loss to the farming community of more than $10 billion annually.
Identification of the numerical model of FEM in reference to measurements in situ
NASA Astrophysics Data System (ADS)
Jukowski, Michał; Bec, Jarosław; Błazik-Borowa, Ewa
2018-01-01
The paper deals with the verification of various numerical models against pilot-phase measurements of a rail bridge subjected to dynamic loading. Three types of FEM models were elaborated for this purpose, and static, modal, and dynamic analyses were performed. The study consisted of measuring the acceleration of the structural components of the bridge during train passage. Based on this, FFT analysis was performed, the main natural frequencies of the bridge were determined, and the structural damping ratio and the dynamic amplification factor (DAF) were calculated and compared with the standard values. Calculations were made using Autodesk Simulation Multiphysics (Algor).
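A minimal sketch of the measurement-side processing, estimating the dominant natural frequencies by FFT peak-picking on an acceleration record; the two-mode synthetic signal below stands in for the field data, and the mode frequencies are arbitrary.

```python
# Natural-frequency estimation by FFT peak-picking on an acceleration record.
# The synthetic signal (two modes plus noise) is a stand-in for field data.
import numpy as np
from scipy.signal import find_peaks

fs = 200.0                                    # sampling rate, Hz (assumed)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(5)
acc = (np.sin(2 * np.pi * 3.2 * t) + 0.5 * np.sin(2 * np.pi * 8.7 * t)
       + 0.3 * rng.normal(size=t.size))
spec = np.abs(np.fft.rfft(acc * np.hanning(t.size)))   # windowed spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
# Keep peaks at least 1 Hz apart and above 10% of the strongest peak.
idx, _ = find_peaks(spec, height=0.1 * spec.max(), distance=int(t.size / fs))
print(f"estimated natural frequencies ~ {freqs[idx]} Hz")
```

Comparing these measured frequencies with the FEM modal results is the core of the model verification described above.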
Ongoing Fixed Wing Research within the NASA Langley Aeroelasticity Branch
NASA Technical Reports Server (NTRS)
Bartels, Robert; Chwalowski, Pawel; Funk, Christie; Heeg, Jennifer; Hur, Jiyoung; Sanetrik, Mark; Scott, Robert; Silva, Walter; Stanford, Bret; Wiseman, Carol
2015-01-01
The NASA Langley Aeroelasticity Branch is involved in a number of research programs related to fixed wing aeroelasticity and aeroservoelasticity. These ongoing efforts are summarized here, and include aeroelastic tailoring of subsonic transport wing structures, experimental and numerical assessment of truss-braced wing flutter and limit cycle oscillations, and numerical modeling of high speed civil transport configurations. Efforts devoted to verification, validation, and uncertainty quantification of aeroelastic physics in a workshop setting are also discussed. The feasibility of certain future civil transport configurations will depend on the ability to understand and control complex aeroelastic phenomena, a goal that the Aeroelasticity Branch is well-positioned to contribute through these programs.
Knowledge-based verification of clinical guidelines by detection of anomalies.
Duftschmid, G; Miksch, S
2001-04-01
As shown in numerous studies, a significant part of published clinical guidelines is tainted with different types of semantic errors that interfere with their practical application. The adaptation of generic guidelines, necessitated by circumstances such as resource limitations within the applying organization or unexpected events arising in the course of patient care, further promotes the introduction of defects. Still, most current approaches for the automation of clinical guidelines lack mechanisms that check the overall correctness of their output. In the domain of software engineering in general, and in the domain of knowledge-based systems (KBS) in particular, a common strategy to examine a system for potential defects is verification. The focus of this work is to present an approach that helps ensure the semantic correctness of clinical guidelines in a three-step process. We use a particular guideline specification language called Asbru to demonstrate our verification mechanism. A scenario-based evaluation of our method is provided, based on a guideline for the artificial ventilation of newborn infants. The described approach is kept sufficiently general to allow its application to several other guideline representation formats.
Evaluation and economic value of winter weather forecasts
NASA Astrophysics Data System (ADS)
Snyder, Derrick W.
State and local highway agencies spend millions of dollars each year to deploy winter operation teams to plow snow and de-ice roadways. Accurate and timely weather forecast information is critical for effective decision making. Students from Purdue University partnered with the Indiana Department of Transportation to create an experimental winter weather forecast service for the 2012-2013 winter season in Indiana to assist in achieving these goals. One forecast product, an hourly timeline of winter weather hazards produced daily, was evaluated for quality and economic value. Verification of the forecasts was performed with data from the Rapid Refresh numerical weather model. Two objective verification criteria were developed to evaluate the performance of the timeline forecasts. Using both criteria, the timeline forecasts had issues with reliability and discrimination, systematically over-forecasting the amount of winter weather that was observed while also missing significant winter weather events. Despite these quality issues, the forecasts still showed significant, but varied, economic value compared to climatology. The economic value of the forecasts was estimated at $29.5 million or $4.1 million, depending on the verification criteria used. Limitations of this valuation system are discussed and a framework is developed for more thorough studies in the future.
Verification of Software: The Textbook and Real Problems
NASA Technical Reports Server (NTRS)
Carlson, Jan-Renee
2006-01-01
The process of verification, or determining the order of accuracy of computational codes, can be problematic when working with large, legacy computational methods that have been used extensively in industry or government. Verification does not ensure that the computer program is producing a physically correct solution; it ensures merely that the observed order of accuracy of the solutions is the same as the theoretical order of accuracy. The Method of Manufactured Solutions (MMS) is one of several ways of determining the order of accuracy. Here, MMS is used to verify a series of computer codes progressing in sophistication from "textbook" to "real life" applications. The degree of numerical precision in the computations considerably influenced the range of mesh density needed to achieve the theoretical order of accuracy, even for 1-D problems. The choice of manufactured solutions and mesh form shifted the observed order in specific areas but not in general. Solution residual (iterative) convergence was not always achieved for 2-D Euler manufactured solutions. L2-norm convergence differed from variable to variable; therefore, an observed order of accuracy could not be determined conclusively in all cases, the cause of which is currently under investigation.
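The core MMS computation is the observed order of accuracy from a grid-refinement pair. A one-line version is sketched below, with placeholder error norms standing in for measured discretization errors against a manufactured solution.

```python
# Observed order of accuracy from a grid-refinement pair:
# p_obs = log(E_coarse / E_fine) / log(r), with refinement ratio r.
# The error values are placeholders, not results from the study.
import math

e_coarse, e_fine = 4.0e-3, 1.1e-3    # e.g. L2 norms of (numerical - manufactured)
r = 2.0                              # mesh refinement ratio h_coarse / h_fine
p_obs = math.log(e_coarse / e_fine) / math.log(r)
print(f"observed order of accuracy ~ {p_obs:.2f}")  # compare with theoretical order
```

Verification passes when p_obs approaches the scheme's theoretical order as the meshes are refined, which is exactly where finite numerical precision can intrude at very fine resolutions.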
Verification of Space Station Secondary Power System Stability Using Design of Experiment
NASA Technical Reports Server (NTRS)
Karimi, Kamiar J.; Booker, Andrew J.; Mong, Alvin C.; Manners, Bruce
1998-01-01
This paper describes analytical methods used in the verification of large DC power systems, with applications to the International Space Station (ISS). Large DC power systems contain many switching power converters with negative-resistance characteristics. The ISS power system presents numerous challenges with respect to system stability, such as complex sources and undefined loads. The Space Station program has developed impedance specifications for sources and loads. The overall approach to system stability consists of specific hardware requirements coupled with extensive system analysis and testing. Testing of large, complex, distributed power systems is not practical due to the size and complexity of the system. Computer modeling has been used extensively to develop hardware specifications as well as to identify system configurations for lab testing. The statistical method of Design of Experiments (DoE) is used as an analysis tool for the verification of these large systems. DoE reduces the number of computer runs necessary to analyze the performance of a complex power system consisting of hundreds of DC/DC converters. DoE also provides valuable information about the effect of changes in system parameters on the performance of the system, about various operating scenarios, and about the identification of those scenarios with potential for instability. In this paper we describe how we have used computer modeling to analyze a large DC power system. A brief description of DoE is given. Examples applying DoE to the analysis and verification of the ISS power system are provided.
COBE ground segment gyro calibration
NASA Technical Reports Server (NTRS)
Freedman, I.; Kumar, V. K.; Rae, A.; Venkataraman, R.; Patt, F. S.; Wright, E. L.
1991-01-01
Discussed here is the calibration of the scale factors and rate biases for the Cosmic Background Explorer (COBE) spacecraft gyroscopes, with emphasis on the adaptation for COBE of an algorithm previously developed for the Solar Maximum Mission. The detailed choice of parameters, convergence, verification, and use of the algorithm in an environment where the reference attitudes are determined from Sun, Earth, and star observations (via the Diffuse Infrared Background Experiment (DIRBE)) are considered. Results of some recent experiments are given. These include tests where the gyro rate data are corrected for the effect of the gyro baseplate temperature on the spacecraft electronics.
Robertson, Scott
2014-11-01
Analog gravity experiments make feasible the realization of black hole space-times in a laboratory setting and the observational verification of Hawking radiation. Since such analog systems are typically dominated by dispersion, efficient techniques for calculating the predicted Hawking spectrum in the presence of strong dispersion are required. In the preceding paper, an integral method in Fourier space is proposed for stationary 1+1-dimensional backgrounds which are asymptotically symmetric. Here, this method is generalized to backgrounds which are different in the asymptotic regions to the left and right of the scattering region.
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; Mungai, John; Sakwa, Vincent; Zavodsky, Bradley T.; Srikishen, Jayanthi; Limaye, Ashutosh; Blankenship, Clay B.
2016-01-01
Flooding, severe weather, and drought are key forecasting challenges for the Kenya Meteorological Department (KMD), based in Nairobi, Kenya. Atmospheric processes leading to convection, excessive precipitation and/or prolonged drought can be strongly influenced by land cover, vegetation, and soil moisture content, especially during anomalous conditions and dry/wet seasonal transitions. It is thus important to represent accurately land surface state variables (green vegetation fraction, soil moisture, and soil temperature) in Numerical Weather Prediction (NWP) models. The NASA SERVIR and the Short-term Prediction Research and Transition (SPoRT) programs in Huntsville, AL have established a working partnership with KMD to enhance its regional modeling capabilities. SPoRT and SERVIR are providing experimental land surface initialization datasets and model verification capabilities for capacity building at KMD. To support its forecasting operations, KMD is running experimental configurations of the Weather Research and Forecasting (WRF; Skamarock et al. 2008) model on a 12-km/4-km nested regional domain over eastern Africa, incorporating the land surface datasets provided by NASA SPoRT and SERVIR. SPoRT, SERVIR, and KMD participated in two training sessions in March 2014 and June 2015 to foster the collaboration and use of unique land surface datasets and model verification capabilities. Enhanced regional modeling capabilities have the potential to improve guidance in support of daily operations and high-impact weather and climate outlooks over Eastern Africa. For enhanced land-surface initialization, the NASA Land Information System (LIS) is run over Eastern Africa at 3-km resolution, providing real-time land surface initialization data in place of interpolated global model soil moisture and temperature data available at coarser resolutions. Additionally, real-time green vegetation fraction (GVF) composites from the Suomi-NPP VIIRS instrument are being incorporated into the KMD-WRF runs, using the product generated by NOAA/NESDIS. Model verification capabilities are also being transitioned to KMD using NCAR's Model Evaluation Tools (MET; Brown et al. 2009) software in conjunction with a SPoRT-developed scripting package, in order to quantify and compare errors in simulated temperature, moisture, and precipitation in the experimental WRF model simulations. This extended abstract and accompanying presentation summarize the efforts and training done to date to support this unique regional modeling initiative at KMD. To honor the memory of Dr. Peter J. Lamb and his extensive efforts in bolstering weather and climate science and capacity-building in Africa, we offer this contribution to the special Peter J. Lamb symposium. The remainder of this extended abstract is organized as follows. The collaborating international organizations involved in the project are presented in Section 2. Background information on the unique land surface input datasets is presented in Section 3. The hands-on training sessions from March 2014 and June 2015 are described in Section 4. Sample experimental WRF output and verification from the June 2015 training are given in Section 5. A summary is given in Section 6, followed by Acknowledgements and References.
LMFBR system-wide transient analysis: the state of the art and US validation needs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khatib-Rahbar, M.; Guppy, J.G.; Cerbone, R.J.
1982-01-01
This paper summarizes the computational capabilities in the area of liquid metal fast breeder reactor (LMFBR) system-wide transient analysis in the United States, identifies various numerical and physical approximations, the degree of empiricism, range of applicability, model verification and experimental needs for a wide class of protected transients, in particular, natural circulation shutdown heat removal for both loop- and pool-type plants.
NASA Astrophysics Data System (ADS)
Varseev, E.
2017-11-01
The present work is dedicated to the verification of the numerical model in a standard solver of the open-source CFD code OpenFOAM for two-phase flow simulation, and to the determination of so-called “baseline” model parameters. An investigation of the heterogeneous coolant flow parameters that lead to an abnormal friction increase in channels carrying two-phase adiabatic water-gas flows with low void fractions is presented.
Combustion Fundamentals Research
NASA Technical Reports Server (NTRS)
1983-01-01
Increased emphasis is placed on fundamental and generic research at Lewis Research Center, with less effort devoted to systems development. This is especially true in combustion research, where the study of combustion fundamentals has grown significantly in order to better address the perceived long-term technical needs of the aerospace industry. The main thrusts of this combustion fundamentals program are: analytical models of combustion processes, model verification experiments, fundamental combustion experiments, and advanced numerical techniques.
The study of thermal processes in control systems of heat consumption of buildings
NASA Astrophysics Data System (ADS)
Tsynaeva, E.; A, Tsynaeva
2017-11-01
The article discusses the main thermal processes in automated control systems for heat consumption (ACSHC) in buildings, schematic diagrams of these systems, and the mathematical models used to describe the thermal processes in an ACSHC. Verification of the presented mathematical models was conducted. It was found that the operating efficiency of an ACSHC depends on both external and internal factors. A numerical study of the dynamic operating modes of an ACSHC is presented.
Li, Y; Nielsen, P V
2011-12-01
There has been a rapid growth of scientific literature on the application of computational fluid dynamics (CFD) in the research of ventilation and indoor air science. With a 1000-10,000 times increase in computer hardware capability in the past 20 years, CFD has become an integral part of scientific research and engineering development of complex air distribution and ventilation systems in buildings. This review discusses the major and specific challenges of CFD in terms of turbulence modelling, numerical approximation, and boundary conditions relevant to building ventilation. We emphasize the growing need for CFD verification and validation, suggest ongoing needs for analytical and experimental methods to support the numerical solutions, and discuss the growing capacity of CFD in opening up new research areas. We suggest that CFD has not become a replacement for experiment and theoretical analysis in ventilation research, rather it has become an increasingly important partner. We believe that an effective scientific approach for ventilation studies is still to combine experiments, theory, and CFD. We argue that CFD verification and validation are becoming more crucial than ever as more complex ventilation problems are solved. It is anticipated that ventilation problems at the city scale will be tackled by CFD in the next 10 years. © 2011 John Wiley & Sons A/S.
Osuch, Tomasz; Markowski, Konrad; Jędrzejewski, Kazimierz
2015-06-10
A versatile numerical model for spectral transmission/reflection, group delay characteristic analysis, and design of tapered fiber Bragg gratings (TFBGs) is presented. This approach ensures flexibility with defining both distribution of refractive index change of the gratings (including apodization) and shape of the taper profile. Additionally, sensing and tunable dispersion properties of the TFBGs were fully examined, considering strain-induced effects. The presented numerical approach, together with Pareto optimization, were also used to design the best tanh apodization profiles of the TFBG in terms of maximizing its spectral width with simultaneous minimization of the group delay oscillations. Experimental verification of the model confirms its correctness. The combination of model versatility and possibility to define the other objective functions of Pareto optimization creates a universal tool for TFBG analysis and design.
WEC-SIM Phase 1 Validation Testing -- Numerical Modeling of Experiments: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruehl, Kelley; Michelen, Carlos; Bosma, Bret
2016-08-01
The Wave Energy Converter Simulator (WEC-Sim) is an open-source code jointly developed by Sandia National Laboratories and the National Renewable Energy Laboratory. It is used to model wave energy converters subjected to operational and extreme waves. In order for the WEC-Sim code to be beneficial to the wave energy community, code verification and physical model validation is necessary. This paper describes numerical modeling of the wave tank testing for the 1:33-scale experimental testing of the floating oscillating surge wave energy converter. The comparison between WEC-Sim and the Phase 1 experimental data set serves as code validation. This paper is a follow-up to the WEC-Sim paper on experimental testing, and describes the WEC-Sim numerical simulations for the floating oscillating surge wave energy converter.
Center for Extended Magnetohydrodynamics Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramos, Jesus
This researcher participated in the DOE-funded Center for Extended Magnetohydrodynamics Modeling (CEMM), a multi-institutional collaboration led by the Princeton Plasma Physics Laboratory with Dr. Stephen Jardin as the overall Principal Investigator. This project developed advanced simulation tools to study the non-linear macroscopic dynamics of magnetically confined plasmas. The collaborative effort focused on the development of two large numerical simulation codes, M3D-C1 and NIMROD, and their application to a wide variety of problems. Dr. Ramos was responsible for theoretical aspects of the project, deriving consistent sets of model equations applicable to weakly collisional plasmas and devising test problems for verification of the numerical codes. This activity was funded for twelve years.
The Good, the Bad, and the Ugly: Numerical Prediction for Hurricane Juan (2003)
NASA Astrophysics Data System (ADS)
Gyakum, J.; McTaggart-Cowan, R.
2004-05-01
The range of accuracy of the numerical weather prediction (NWP) guidance for the landfall of Hurricane Juan (2003), from nearly perfect to nearly useless, motivates a study of the NWP forecast errors on 28-29 September 2003 in the eastern North Atlantic. Although the forecasts issued over the period were of very high quality, this is primarily because of the diligence of the forecasters, and not related to the reliability of the numerical predictions provided to them by the North American operational centers and the research community. A bifurcation in the forecast fields from various centers and institutes occurred beginning with the 0000 UTC run of 28 September, and continuing until landfall just after 0000 UTC on 29 September. The GFS (NCEP), Eta (NCEP), GEM (Canadian Meteorological Centre; CMC), and MC2 (McGill) forecast models all showed an extremely weak (minimum SLP above 1000 hPa) remnant vortex moving north-northwestward into the Gulf of Maine and merging with a diabatically-developed surface low offshore. The GFS uses a vortex-relocation scheme, the Eta a vortex bogus, and the GEM and MC2 are run on CMC analyses that contain no enhanced vortex. The UK Met Office operational, the GFDL, and the NOGAPS (US Navy) forecast models all ran a small-scale hurricane-like vortex directly into Nova Scotia and verified very well for this case. The UKMO model uses synthetic observations to enhance structures in poorly-forecasted areas during the analysis cycle and both the GFDL and NOGAPS model use advanced idealized vortex bogusing in their initial conditions. The quality of the McGill MC2 forecast is found to be significantly enhanced using a bogusing technique similar to that used in the initialization of the successful forecast models. A verification of the improved forecast is presented along with a discussion of the need for operational quality control of the background fields in the analysis cycle and for proper representation of strong, small-scale tropical vortices.
International Space Station Passive Thermal Control System Analysis, Top Ten Lessons-Learned
NASA Technical Reports Server (NTRS)
Iovine, John
2011-01-01
The International Space Station (ISS) has been on-orbit for over 10 years, and there have been numerous technical challenges along the way from design to assembly to on-orbit anomalies and repairs. The Passive Thermal Control System (PTCS) management team has been a key player in successfully dealing with these challenges. The PTCS team performs thermal analysis in support of design and verification, launch and assembly constraints, integration, sustaining engineering, failure response, and model validation. This analysis is a significant body of work and provides a unique opportunity to compile a wealth of real world engineering and analysis knowledge and the corresponding lessons-learned. The analysis lessons encompass the full life cycle of flight hardware from design to on-orbit performance and sustaining engineering. These lessons can provide significant insight for new projects and programs. Key areas to be presented include thermal model fidelity, verification methods, analysis uncertainty, and operations support.
Development of a Three-Dimensional, Unstructured Material Response Design Tool
NASA Technical Reports Server (NTRS)
Schulz, Joseph C.; Stern, Eric C.; Muppidi, Suman; Palmer, Grant E.; Schroeder, Olivia
2017-01-01
A preliminary verification and validation of a new material response model is presented. This model, Icarus, is intended to serve as a design tool for the thermal protection systems of re-entry vehicles. Currently, the capability of the model is limited to simulating the pyrolysis of a material as a result of the radiative and convective surface heating imposed on the material by the surrounding high-enthalpy gas. Since the major focus behind the development of Icarus has been model extensibility, the hope is that additional physics can be added quickly. This extensibility is critical since thermal protection systems are becoming increasingly complex, e.g., woven carbon polymers. Additionally, as a three-dimensional, unstructured, finite-volume model, Icarus is capable of modeling complex geometries. In this paper, the mathematical and numerical formulation is presented, followed by a discussion of the software architecture and some preliminary verification and validation studies.
RELAP-7 Software Verification and Validation Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Curtis L.; Choi, Yong-Joon; Zou, Ling
This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process: a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework, MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's capability and extends the analysis capability for all reactor system simulation scenarios.
Wu, Xiaoping; Akgün, Can; Vaughan, J Thomas; Andersen, Peter; Strupp, John; Uğurbil, Kâmil; Van de Moortele, Pierre-François
2010-07-01
Parallel excitation holds strong promise to mitigate the impact of large transmit B1 (B+1) distortion at very high magnetic field. Accelerated RF pulses, however, inherently tend to require higher RF peak power, which may result in a substantial increase in the Specific Absorption Rate (SAR) in tissues, a constant concern for patient safety at very high field. In this study, we demonstrate adapted-rate RF pulse design allowing for SAR reduction while preserving excitation target accuracy. Compared with other proposed implementations of adapted-rate RF pulses, our approach is compatible with any k-space trajectory, does not require an analytical expression of the gradient waveform, and can be used for large flip angle excitation. We demonstrate our method with numerical simulations based on electromagnetic modeling, and we include an experimental verification of transmit pattern accuracy on an 8-transmit-channel 9.4 T system.
NASA Technical Reports Server (NTRS)
Kalnay, Eugenia; Dalcher, Amnon
1987-01-01
It is shown that it is possible to predict the skill of numerical weather forecasts - a quantity which is variable from day to day and region to region. This has been accomplished using as predictor the dispersion (measured by the average correlation) between members of an ensemble of forecasts started from five different analyses. The analyses had been previously derived for satellite-data-impact studies and included, in the Northern Hemisphere, moderate perturbations associated with the use of different observing systems. When the Northern Hemisphere was used as a verification region, the prediction of skill was rather poor. This is due to the fact that such a large area usually contains regions with excellent forecasts as well as regions with poor forecasts, and does not allow for discrimination between them. However, when regional verifications were used, the ensemble forecast dispersion provided a very good prediction of the quality of the individual forecasts.
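The dispersion predictor described above is simple to state: the more the ensemble members agree with one another, the more skillful the individual forecasts tend to be. The sketch below computes dispersion as the average pairwise correlation among members; the array shapes and synthetic "forecasts" are invented for illustration, not data from the study.

```python
import numpy as np
from itertools import combinations

def ensemble_dispersion(members):
    """Average pairwise correlation among ensemble forecast fields."""
    corrs = [np.corrcoef(a, b)[0, 1] for a, b in combinations(members, 2)]
    return np.mean(corrs)

rng = np.random.default_rng(0)
truth = rng.standard_normal(500)                 # "true" anomaly field
# five forecasts = truth + error; smaller error -> higher mutual agreement
members = [truth + 0.5 * rng.standard_normal(500) for _ in range(5)]
print("dispersion (avg correlation):", ensemble_dispersion(members))
# a high average correlation between members suggests a predictable regime
```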
Numerical and experimental studies of hydrodynamics of flapping foils
NASA Astrophysics Data System (ADS)
Zhou, Kai; Liu, Jun-kao; Chen, Wei-shan
2018-04-01
The flapping foil, based on bionics, is a simplified model that imitates the motion of the wings or fins of birds and fish. In this paper, a universal kinematic model with three degrees of freedom is adopted, and the motion parallel to the flow direction is considered. The force coefficients, the torque coefficient, and the flow field characteristics are extracted and analyzed. Then the propulsive efficiency is calculated. The influence of the motion parameters on the hydrodynamic performance of the bionic foil is studied. The results show that the motion parameters play important roles in the hydrodynamic performance of the flapping foil. To validate the reliability of the numerical method used in this paper, an experimental platform was designed and verification experiments were carried out. The numerical results agree well with the experimental results, showing that the adopted numerical method is reliable. The results of this paper provide a theoretical reference for the design of underwater vehicles based on flapping propulsion.
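For readers unfamiliar with how propulsive efficiency is extracted from force and torque time series, a common definition is eta = (mean thrust x free-stream speed) / (mean power input to the foil). The sketch below assumes that definition and uses invented harmonic test signals; it is not the paper's solver output.

```python
import numpy as np

def propulsive_efficiency(thrust, lift, torque, h_dot, theta_dot, U):
    """eta = mean thrust power / mean input power over one or more cycles."""
    p_in = lift * h_dot + torque * theta_dot      # power delivered to the fluid
    return np.mean(thrust) * U / np.mean(p_in)

t = np.linspace(0.0, 1.0, 1000)
w = 2 * np.pi                                     # one flapping cycle
h_dot = 0.1 * w * np.cos(w * t)                   # heave velocity
theta_dot = 0.2 * w * np.cos(w * t + np.pi / 2)   # pitch rate
lift = 5.0 * np.cos(w * t)                        # in phase with heave velocity
torque = 0.05 * np.cos(w * t + np.pi / 2)
thrust = 0.8 + 0.3 * np.cos(2 * w * t)            # thrust oscillates at 2f
print("eta =", propulsive_efficiency(thrust, lift, torque, h_dot, theta_dot, 1.0))
```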
Vortex generator design for aircraft inlet distortion as a numerical optimization problem
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Levy, Ralph
1991-01-01
Aerodynamic compatibility of aircraft/inlet/engine systems is a difficult design problem for aircraft that must operate in many different flight regimes. Takeoff, subsonic cruise, supersonic cruise, transonic maneuvering, and high-altitude loiter each place different constraints on inlet design. Vortex generators, small wing-like sections mounted on the inside surfaces of the inlet duct, are used to control flow separation and engine face distortion. The design of vortex generator installations in an inlet is defined as a problem addressable by numerical optimization techniques. A performance parameter is suggested to account for both inlet distortion and total pressure loss at a series of design flight conditions. The resulting optimization problem is difficult since some of the design parameters take on integer values. If numerical procedures could be used to reduce multimillion-dollar development test programs to a small set of verification tests, numerical optimization could have a significant impact on both the cost and the elapsed time to design new aircraft.
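A schematic of the kind of objective the abstract describes: a weighted performance parameter combining engine-face distortion and total pressure loss over several flight conditions, with an integer-valued design variable (e.g. the number of vortex generators). Everything below, including the evaluate() stand-in for a flow solver and the weights, is an assumption for illustration.

```python
FLIGHT_CONDITIONS = ["takeoff", "subsonic_cruise", "supersonic_cruise"]
WEIGHTS = {"takeoff": 0.2, "subsonic_cruise": 0.5, "supersonic_cruise": 0.3}

def evaluate(n_vg, height_mm, condition):
    """Placeholder for a CFD evaluation returning (distortion, dp_loss)."""
    distortion = abs(n_vg - 12) * 0.01 + abs(height_mm - 8.0) * 0.005
    dp_loss = 0.002 * n_vg + 0.001 * height_mm
    return distortion, dp_loss

def performance(n_vg, height_mm, alpha=1.0, beta=1.0):
    """Weighted sum of distortion and pressure loss over flight conditions."""
    total = 0.0
    for fc in FLIGHT_CONDITIONS:
        d, dp = evaluate(n_vg, height_mm, fc)
        total += WEIGHTS[fc] * (alpha * d + beta * dp)
    return total

# Exhaustive search over the integer variable, a coarse grid on the real one;
# the mixed-integer character is why the problem is hard for gradient methods.
best = min(((performance(n, h), n, h)
            for n in range(4, 25)                  # integer: VG count
            for h in [6.0, 7.0, 8.0, 9.0, 10.0]),  # continuous: VG height [mm]
           key=lambda x: x[0])
print("best objective %.4f with n_vg=%d, height=%.1f mm" % best)
```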
NASA Astrophysics Data System (ADS)
Jansen van Rensburg, Gerhardus J.; Kok, Schalk; Wilke, Daniel N.
2018-03-01
This paper presents the development and numerical implementation of a state-variable-based thermomechanical material model, intended for use within a fully implicit finite element formulation. Plastic hardening, thermal recovery and multiple cycles of recrystallisation can be tracked for single-peak as well as multiple-peak recrystallisation response. The numerical implementation of the state variable model extends a J2 isotropic hypo-elastoplastic modelling framework. The complete numerical implementation is presented as an Abaqus UMAT and linked subroutines. The implementation is discussed with a detailed explanation of the derivation and use of various sensitivities, internal state variable management and multiple recrystallisation cycle contributions. A flow chart explaining the proposed numerical implementation is provided, as well as verification of the convergence of the material subroutine. The material model is characterised using two high-temperature data sets for cobalt and copper. The results of finite element analyses using the material parameter values characterised on the copper data set are also presented.
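One ingredient of such models is a recrystallised volume fraction that softens the stored hardening state. The toy sketch below uses Avrami-type kinetics and a simple mixture rule; it is a schematic illustration only, not the authors' UMAT, and all constants are invented.

```python
import numpy as np

def recrystallised_fraction(t, t50, n=2.0):
    """Avrami kinetics: X = 1 - exp(-ln(2) * (t/t50)**n), X(t50) = 0.5."""
    return 1.0 - np.exp(-np.log(2.0) * (t / t50) ** n)

def hardening_history(times, rho0=1.0, growth=0.05, t50=30.0):
    """Scalar hardening variable softened by the recrystallised fraction."""
    rho, out = rho0, []
    for t in times:
        rho += growth                               # plastic hardening per step
        X = recrystallised_fraction(t, t50)
        out.append((1.0 - X) * rho + X * rho0)      # mixture rule: soft new grains
    return np.array(out)

print(hardening_history(np.linspace(0.0, 100.0, 11)))
# the variable first rises with hardening, then relaxes back toward rho0
```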
NASA Astrophysics Data System (ADS)
Grenier, Christophe; Anbergen, Hauke; Bense, Victor; Chanzy, Quentin; Coon, Ethan; Collier, Nathaniel; Costard, François; Ferry, Michel; Frampton, Andrew; Frederick, Jennifer; Gonçalvès, Julio; Holmén, Johann; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Mouche, Emmanuel; Orgogozo, Laurent; Pannetier, Romain; Rivière, Agnès; Roux, Nicolas; Rühaak, Wolfram; Scheidegger, Johanna; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik; Voss, Clifford
2018-04-01
In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. This issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.
Experimental verification of numerical calculations of railway passenger seats
NASA Astrophysics Data System (ADS)
Ligaj, B.; Wirwicki, M.; Karolewska, K.; Jasińska, A.
2018-04-01
The construction of railway seats is based on industry regulations and the requirements of end users, i.e. passengers. The two main documents in this context are UIC 566 (3rd Edition, dated 7 January 1994) and EN 12663-1:2010+A1:2014. The aim of the study was to carry out static load tests of passenger seat frames. The paper presents the construction of the test bench and the results of experimental and numerical studies of railway passenger seat frames. The test bench consists of a frame, a transverse beam, two electric cylinders with a force capacity of 6 kN, and a strain gauge amplifier. It has a modular structure that allows for its expansion depending on the structure of the seats. Comparing the experimental and numerical results for points A and B made it possible to determine the existing differences: the numerical calculations yield higher stress values, with differences in the range of 0.2 MPa to 35.9 MPa.
Numerical Investigations of Moisture Distribution in a Selected Anisotropic Soil Medium
NASA Astrophysics Data System (ADS)
Iwanek, M.
2018-01-01
The moisture of a soil profile changes both in time and space and depends on many factors. Changes in the quantity of water in soil can be determined on the basis of in situ measurements, but numerical methods are increasingly used for this purpose. The quality of the results obtained using pertinent software packages depends on an appropriate description and parameterization of the soil medium. Thus, accounting for the soil anisotropy phenomenon is of great importance. Although anisotropy can be taken into account in many numerical models, isotropic soil is often assumed in the research process. However, this assumption can be a reason for incorrect results in simulations of water changes in a soil medium. In this article, results of numerical simulations of the moisture distribution in a selected soil profile are presented. The calculations were conducted assuming isotropic and anisotropic conditions. Empirical verification of the results obtained in the numerical investigations indicated statistically significant discrepancies between the two analyzed conditions. However, a better fit between measured and calculated moisture values was obtained when anisotropy was included in the simulation model.
NASA Astrophysics Data System (ADS)
Khait, A.; Shemer, L.
2018-05-01
The evolution of unidirectional wave trains containing a wave that gradually becomes steep is evaluated experimentally and numerically using the Boundary Element Method (BEM). The boundary conditions for the nonlinear numerical simulations corresponded to the actual movements of the wavemaker paddle as recorded in the physical experiments, allowing direct comparison between the wave train characteristics measured in the experiments and the numerical predictions. The high level of qualitative and quantitative agreement between the measurements and simulations validated the kinematic criterion for the inception of breaking and the location of the spilling breaker on the basis of the BEM computations and associated experiments. The breaking inception is associated with the fluid particle at the crest of the steep wave that has been accelerated to match and surpass the crest velocity. The previously observed significant slow-down of the crest while approaching breaking is verified numerically; both narrow- and broad-banded wave trains are considered. Finally, the relative importance of linear and nonlinear contributions is analyzed.
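The kinematic criterion mentioned above compares the horizontal fluid velocity u at the crest with the crest propagation speed c: breaking inception occurs when u/c reaches 1. A minimal sketch of that check follows, using invented sampled values; in practice both quantities come from BEM output or measurements.

```python
import numpy as np

def crest_speed(x_crest, t):
    """Crest propagation speed from the tracked crest position x_c(t)."""
    return np.gradient(x_crest, t)

t = np.linspace(0.0, 2.0, 201)
x_crest = 1.5 * t - 0.2 * t**2          # decelerating crest (the slow-down)
u_crest = 1.0 + 0.35 * t                # accelerating crest fluid particle
ratio = u_crest / crest_speed(x_crest, t)
idx = np.argmax(ratio >= 1.0)           # first sample where u/c >= 1
print("breaking inception near t = %.2f s" % t[idx])
```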
Investigation of advanced phase-shifting projected fringe profilometry techniques
NASA Astrophysics Data System (ADS)
Liu, Hongyu
1999-11-01
The phase-shifting projected fringe profilometry (PSPFP) technique is a powerful tool in the profile measurement of rough engineering surfaces. Compared with other competing techniques, this technique is notable for its full-field measurement capacity, system simplicity, high measurement speed, and low environmental vulnerability. The main purpose of this dissertation is to tackle three important problems, which severely limit the capability and the accuracy of the PSPFP technique, with some new approaches. Chapter 1 briefly introduces background information on the PSPFP technique, including the measurement principles, basic features, and related techniques. The objectives and organization of the thesis are also outlined. Chapter 2 gives a theoretical treatment of absolute PSPFP measurement. The mathematical formulations and basic requirements of absolute PSPFP measurement and its supporting techniques are discussed in detail. Chapter 3 introduces the experimental verification of the proposed absolute PSPFP technique. Some design details of a prototype system are discussed as supplements to the previous theoretical analysis. Various fundamental experiments performed for concept verification and accuracy evaluation are introduced together with some brief comments. Chapter 4 presents a theoretical study of speckle-induced phase measurement errors. In this analysis, the expression for speckle-induced phase errors is first derived based on the multiplicative noise model of image-plane speckles. The statistics and the system dependence of speckle-induced phase errors are then thoroughly studied through numerical simulations and analytical derivations. Based on the analysis, some suggestions on system design are given to improve measurement accuracy. Chapter 5 discusses a new technique for combating surface reflectivity variations. The formula used for error compensation is first derived based on a simplified model of the detection process. The techniques coping with two major effects of surface reflectivity variations are then introduced. Some fundamental problems in the proposed technique are studied through simulations. Chapter 6 briefly summarizes the major contributions of the current work and provides some suggestions for future research.
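At the core of any PSPFP system is a phase-shifting calculation. The textbook four-step algorithm (phase steps of pi/2) recovers the wrapped phase as phi = atan2(I4 - I2, I1 - I3); this is the standard algorithm, not code from the dissertation, and the fringe images below are synthetic.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase (modulo 2*pi) from four pi/2-shifted fringe images."""
    return np.arctan2(I4 - I2, I1 - I3)

x = np.linspace(0, 4 * np.pi, 256)
phi_true = np.outer(np.ones(256), x)               # synthetic phase ramp
# I_k = A + B*cos(phi + k*pi/2) for k = 0..3
frames = [100 + 50 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = four_step_phase(*frames)
# recovered phase matches the truth up to 2*pi wrapping
print(np.allclose(np.angle(np.exp(1j * (phi - phi_true))), 0, atol=1e-9))
```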
Statistical Time Series Models of Pilot Control with Applications to Instrument Discrimination
NASA Technical Reports Server (NTRS)
Altschul, R. E.; Nagel, P. M.; Oliver, F.
1984-01-01
This report contains a general description of the methodology used in obtaining the transfer function models and verifying model fidelity, frequency domain plots of the modeled transfer functions, numerical results obtained from an analysis of poles and zeroes obtained from z-plane to s-plane conversions of the transfer functions, and the results of a study on the sequential introduction of other variables, both exogenous and endogenous, into the loop.
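The z-plane to s-plane conversion mentioned above maps a discrete-time pole or zero z to a continuous-time location via s = ln(z)/T, with T the sampling interval. A minimal sketch follows; the example pole and T are assumed values, not the report's data.

```python
import numpy as np

def z_to_s(z_pole, T):
    """Map a z-plane pole/zero to the s-plane: s = ln(z) / T."""
    return np.log(complex(z_pole)) / T

T = 0.1                                  # sampling interval [s]
z = 0.9 * np.exp(1j * 0.3)               # a stable discrete-time pole (|z| < 1)
s = z_to_s(z, T)
print("s = %.3f %+.3fj (damped if Re(s) < 0)" % (s.real, s.imag))
```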
Simulation and Experimental Study on Cavitating Water Jet Nozzle
NASA Astrophysics Data System (ADS)
Zhou, Wei; He, Kai; Cai, Jiannan; Hu, Shaojie; Li, Jiuhua; Du, Ruxu
2017-01-01
Cavitating water jet technology is a new kind of water jet technology with many advantages: it is energy-saving, efficient and environmentally friendly. Based on numerical simulation and experimental verification, research on the cavitating nozzle has been carried out, including a comparison of the cleaning ability of the cavitating jet and the ordinary jet, and a comparison of the cavitation effects of different cavitating nozzle structures.
Experimental setup for the measurement of induction motor cage currents
NASA Astrophysics Data System (ADS)
Bottauscio, Oriano; Chiampi, Mario; Donadio, Lorenzo; Zucca, Mauro
2005-04-01
An experimental setup for the measurement of the currents flowing in the rotor bars of induction motors during synchronous no-load tests is described in the paper. The experimental verification of the high-frequency phenomena in the rotor cage is fundamental for deep insight into the estimation of additional losses by numerical methods. The attention is mainly focused on the analysis and design of the transducers developed for the cage current measurement.
Verification and Validation of the Coastal Modeling System. Report 3: CMS-Flow: Hydrodynamics
2011-12-01
Model setup: TMA spectrum; cosine-power directional spreading distribution; directional spreading parameter γ = 3.3; bottom friction off (default); ramp duration 3 hr. The wave breaking formula applied was Battjes and Janssen (1978) because it is the recommended wave breaking formula when using ... Reference: Li, Z.H., K.D. Nguyen, J.C. Brun-Cottan and J.M. Martin. 1994. Numerical simulation of the turbidity maximum transport in the Gironde Estuary (France) ...
Adaption of space station technology for lunar operations
NASA Technical Reports Server (NTRS)
Garvey, J. M.
1992-01-01
Space Station Freedom technology will have the potential for numerous applications in an early lunar base program. The benefits of utilizing station technology in such a fashion include reduced development and facility costs for lunar base systems, shorter schedules, and verification of such technology through space station experience. This paper presents an assessment of opportunities for using station technology in a lunar base program, particularly in the lander/ascent vehicles and surface modules.
Ionospheric Modeling: Development, Verification and Validation
2007-08-15
The University of Massachusetts (UMass), Lowell, has introduced a new version of their ionogram autoscaling program, ARTIST Version 5. A very ... Associated reports: "Investigation of the Reliability of the ESIR Ionogram Autoscaling Method (Expert System for Ionogram Reduction)", ESIR.book.pdf, Dec 06; "Quality Figures and Error Bars for Autoscaled Vertical Incidence Ionograms. Background and User Documentation for QualScan V2007.2", AFRL_QualScan.book.pdf, Feb ...
Establish an Agent-Simulant Technology Relationship (ASTR)
2017-04-14
... for quantitative measures that characterize simulant performance in testing, such as the ability to be removed from surfaces. Component-level ASTRs ... Overall Test and Agent-Simulant Technology Relationship (ASTR) process. 1.2 Background. a. Historically, many tests did not develop quantitative ... methodology report [14]. The report provides a VX-TPP ASTR for post-decon contact hazard and off-gassing. In the Stryker production verification test (PVT) ...
Cohesion: The Vital Ingredient for Successful Army Units
1982-04-19
... responding in military life as well. A special problem of social cohesion directly related to social background was the integration of minority troops ... forces has been a powerful verification of sociological theory concerning social cohesion and organizational effectiveness. Sociological theory does not ... prevent the development of groups with social cohesion committed to the military hierarchy. Personality of Unit Members: among the characteristics ...
Cassini's RTGs undergo mechanical and electrical verification testing in the PHSF
NASA Technical Reports Server (NTRS)
1997-01-01
Jet Propulsion Laboratory (JPL) workers carefully roll into place a platform with a second radioisotope thermoelectric generator (RTG) for installation on the Cassini spacecraft. In background at left, the first of three RTGs already has been installed on Cassini. The RTGs will provide electrical power to Cassini on its 6.7-year trip to the Saturnian system and during its four-year mission at Saturn. The power units are undergoing mechanical and electrical verification testing in the Payload Hazardous Servicing Facility. RTGs use heat from the natural decay of plutonium to generate electric power. The generators enable spacecraft to operate far from the Sun where solar power systems are not feasible. The Cassini mission is scheduled for an Oct. 6 launch aboard a Titan IVB/Centaur expendable launch vehicle. Cassini is built and managed for NASA by JPL.
Mineral mapping in the Maherabad area, eastern Iran, using the HyMap remote sensing data
NASA Astrophysics Data System (ADS)
Molan, Yusuf Eshqi; Refahi, Davood; Tarashti, Ali Hoseinmardi
2014-04-01
This study applies matched filtering to the HyMap airborne hyperspectral data to obtain the distribution map of alteration minerals in the Maherabad area and uses virtual verification to verify the results. This paper also introduces the "moving threshold", which seeks an appropriate threshold value for converting the grey-scale images produced by the mapping methods into target and background pixels. The Maherabad area, located in the eastern part of the Lut block, is a Cu-Au porphyry system in which quartz-sericite-pyrite, argillic and propylitic alteration are most common. A minimum noise fraction transform coupled with a pixel purity index was applied to the HyMap images to extract the endmembers of the alteration minerals, including kaolinite, montmorillonite, sericite (muscovite/illite), calcite, chlorite, epidote, and goethite. Since there was no access to any portable spectrometer and/or lab spectral measurements for the verification of the remote sensing imagery results, virtual verification was achieved using the USGS spectral library and showed an agreement of 83.19%. The comparison between the results of the matched filtering and X-ray diffraction (XRD) analyses also showed an agreement of 56.13%.
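For orientation, matched filtering scores each pixel spectrum x against a target endmember t after background whitening, mf(x) = (x - mu)' S^-1 (t - mu) / ((t - mu)' S^-1 (t - mu)), and a threshold then splits the grey-scale score image into target and background pixels. The sketch below uses synthetic spectra, and the threshold sweep is only a stand-in for the paper's "moving threshold" procedure, whose details are not given in the abstract.

```python
import numpy as np

def matched_filter(pixels, target, eps=1e-6):
    """Matched-filter abundance score per pixel (1 = pure target)."""
    mu = pixels.mean(axis=0)
    S = np.cov(pixels, rowvar=False) + eps * np.eye(pixels.shape[1])
    Si_d = np.linalg.solve(S, target - mu)
    return (pixels - mu) @ Si_d / ((target - mu) @ Si_d)

rng = np.random.default_rng(1)
bg = rng.normal(0.3, 0.05, size=(2000, 20))          # background spectra
t = np.full(20, 0.6)                                 # target endmember
mix = np.vstack([bg, 0.5 * bg[:50] + 0.5 * t])       # a few 50% mixed pixels
scores = matched_filter(mix, t)
for thr in np.arange(0.1, 0.9, 0.1):                 # simple threshold sweep
    print("thr %.1f -> %d target pixels" % (thr, int((scores > thr).sum())))
```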
NASA Astrophysics Data System (ADS)
Capiński, Maciej J.; Gidea, Marian; de la Llave, Rafael
2017-01-01
We present a diffusion mechanism for time-dependent perturbations of autonomous Hamiltonian systems introduced in Gidea (2014 arXiv:1405.0866). This mechanism is based on shadowing of pseudo-orbits generated by two dynamics: an 'outer dynamics', given by homoclinic trajectories to a normally hyperbolic invariant manifold, and an 'inner dynamics', given by the restriction to that manifold. On the inner dynamics the only assumption is that it preserves area. Unlike other approaches, Gidea (2014 arXiv:1405.0866) does not rely on the KAM theory and/or Aubry-Mather theory to establish the existence of diffusion. Moreover, it does not require checking twist conditions or non-degeneracy conditions near resonances. The conditions are explicit and can be checked by finite precision calculations in concrete systems (roughly, they amount to checking that Melnikov-type integrals do not vanish and that some manifolds are transversal). As an application, we study the planar elliptic restricted three-body problem. We present a rigorous theorem that shows that if some concrete calculations yield a nonzero value, then for any sufficiently small, positive value of the eccentricity of the orbits of the main bodies, there are orbits of the infinitesimal body that exhibit a change of energy that is bigger than some fixed number, which is independent of the eccentricity. We verify these calculations numerically for values of the masses close to that of the Jupiter/Sun system. The numerical calculations are not completely rigorous, because we ignore issues of round-off error and do not estimate the truncations, but they are not delicate at all by the standard of numerical analysis. (Standard tests indicate that we get 7 or 8 figures of accuracy where 1 would be enough.) The code of these verifications is available. We hope that some full computer-assisted proofs will be obtained in the near future since there are packages (CAPD) designed for problems of this type.
Three-dimensional surface contouring of macroscopic objects by means of phase-difference images.
Velásquez Prieto, Daniel; Garcia-Sucerquia, Jorge
2006-09-01
We report a technique to determine the 3D contour of objects with dimensions at least 4 orders of magnitude larger than the illumination optical wavelength. Our proposal is based on the numerical reconstruction of the optical wave field of digitally recorded holograms. The modulo-2π phase map required in any contouring process is obtained by means of the direct subtraction of two phase-contrast images under different illumination angles to create a phase-difference image of a still object. Obtaining the phase-difference images is only possible by using the capability of numerical reconstruction of the complex optical field provided by digital holography. This unique characteristic leads to a robust, reliable, and fast procedure that requires only two images. A theoretical analysis of the contouring system is shown, with verification by means of numerical and experimental results.
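Given the two numerically reconstructed complex fields U1, U2 (one per illumination angle), the phase-difference image is simply delta_phi = arg(U1 * conj(U2)), which is the required modulo-2π contour map. A minimal sketch with synthetic fields (the object height map and sensitivities below are invented):

```python
import numpy as np

def phase_difference(U1, U2):
    """Wrapped phase difference between two complex wave fields."""
    return np.angle(U1 * np.conj(U2))

h = np.outer(np.hanning(128), np.hanning(128))   # synthetic object height map
k1, k2 = 8.0, 8.5                                # two effective sensitivities
U1 = np.exp(1j * k1 * h)                         # reconstructed field, angle 1
U2 = np.exp(1j * k2 * h)                         # reconstructed field, angle 2
contours = phase_difference(U1, U2)              # wraps every 2*pi/|k2 - k1| in h
print(contours.min(), contours.max())            # values lie in (-pi, pi]
```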
NASA Astrophysics Data System (ADS)
Sliseris, J.; Yan, L.; Kasal, B.
2017-09-01
Numerical methods for simulating hollow and foam-filled flax-fabric-reinforced epoxy tubular energy absorbers subjected to lateral crashing are presented. The crashing characteristics, such as the progressive failure, load-displacement response, absorbed energy, peak load, and failure modes, of the tubes were simulated and calculated numerically. A 3D nonlinear finite-element model is proposed that accounts for material plasticity using an isotropic hardening model with strain-rate dependence and failure. An explicit finite-element solver is used to address the lateral crashing of the tubes considering large displacements and strains, plasticity, and damage. The experimental nonlinear crashing load vs. displacement data are successfully described by the finite-element model proposed. The simulated peak loads and absorbed energy of the tubes are also in good agreement with experimental results.
NASA Astrophysics Data System (ADS)
Kut, Stanislaw; Ryzinska, Grazyna; Niedzialek, Bernadetta
2016-01-01
The article presents the results of tests carried out to verify the effectiveness of nine selected elastomeric material models (Neo-Hookean, Mooney with two and three constants, Signorini, Yeoh, Ogden, Arruda-Boyce, Gent and Marlow), whose material constants were determined from a single material test: uniaxial tension. The convergence of the nine analyzed models was assessed by comparing their performance in an experimental bending test of elastomer samples with the results of FEM numerical calculations for each material model. To calculate the material constants for the analyzed materials, the stress-strain characteristics obtained from experimental uniaxial tensile tests on elastomeric dumbbell samples were used, taking into account the parameters recorded in the 18th cycle. Using the material constants calculated in this way, numerical simulations of the bending of elastomeric parallelepipedic samples were carried out using the MARC/Mentat program.
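To illustrate how hyperelastic constants are identified from a single uniaxial test, the sketch below fits the two-constant Mooney model to nominal stress vs. stretch data using the standard incompressible uniaxial relation P = 2(L - L^-2)(C10 + C01/L). The "measured" data are synthetic, not the paper's 18th-cycle record.

```python
import numpy as np
from scipy.optimize import curve_fit

def mooney_uniaxial(lam, C10, C01):
    """Nominal stress in uniaxial tension for an incompressible Mooney solid."""
    return 2.0 * (lam - lam**-2) * (C10 + C01 / lam)

lam = np.linspace(1.05, 3.0, 40)                  # stretch ratios
P_data = mooney_uniaxial(lam, 0.30, 0.05)         # synthetic "test" data [MPa]
P_data += np.random.default_rng(2).normal(0, 0.005, lam.size)

(C10, C01), _ = curve_fit(mooney_uniaxial, lam, P_data, p0=(0.1, 0.1))
print("C10 = %.3f MPa, C01 = %.3f MPa" % (C10, C01))
```

The same least-squares approach, with a different stress function per model, applies to the other eight formulations listed in the abstract.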
Using Automation to Improve the Flight Software Testing Process
NASA Technical Reports Server (NTRS)
ODonnell, James R., Jr.; Andrews, Stephen F.; Morgenstern, Wendy M.; Bartholomew, Maureen O.; McComas, David C.; Bauer, Frank H. (Technical Monitor)
2001-01-01
One of the critical phases in the development of a spacecraft attitude control system (ACS) is the testing of its flight software. The testing (and test verification) of ACS flight software requires a mix of skills involving software, attitude control, data manipulation, and analysis. The process of analyzing and verifying flight software test results often creates a bottleneck which dictates the speed at which flight software verification can be conducted. In the development of the Microwave Anisotropy Probe (MAP) spacecraft ACS subsystem, an integrated design environment was used that included a MAP high fidelity (HiFi) simulation, a central database of spacecraft parameters, a script language for numeric and string processing, and plotting capability. In this integrated environment, it was possible to automate many of the steps involved in flight software testing, making the entire process more efficient and thorough than on previous missions. In this paper, we will compare the testing process used on MAP to that used on previous missions. The software tools that were developed to automate testing and test verification will be discussed, including the ability to import and process test data, synchronize test data and automatically generate HiFi script files used for test verification, and an automated capability for generating comparison plots. A summary of the perceived benefits of applying these test methods on MAP will be given. Finally, the paper will conclude with a discussion of re-use of the tools and techniques presented, and the ongoing effort to apply them to flight software testing of the Triana spacecraft ACS subsystem.
Using Automation to Improve the Flight Software Testing Process
NASA Technical Reports Server (NTRS)
ODonnell, James R., Jr.; Morgenstern, Wendy M.; Bartholomew, Maureen O.
2001-01-01
One of the critical phases in the development of a spacecraft attitude control system (ACS) is the testing of its flight software. The testing (and test verification) of ACS flight software requires a mix of skills involving software, knowledge of attitude control and attitude control hardware, data manipulation, and analysis. The process of analyzing and verifying flight software test results often creates a bottleneck which dictates the speed at which flight software verification can be conducted. In the development of the Microwave Anisotropy Probe (MAP) spacecraft ACS subsystem, an integrated design environment was used that included a MAP high fidelity (HiFi) simulation, a central database of spacecraft parameters, a script language for numeric and string processing, and plotting capability. In this integrated environment, it was possible to automate many of the steps involved in flight software testing, making the entire process more efficient and thorough than on previous missions. In this paper, we will compare the testing process used on MAP to that used on other missions. The software tools that were developed to automate testing and test verification will be discussed, including the ability to import and process test data, synchronize test data and automatically generate HiFi script files used for test verification, and an automated capability for generating comparison plots. A summary of the benefits of applying these test methods on MAP will be given. Finally, the paper will conclude with a discussion of re-use of the tools and techniques presented, and the ongoing effort to apply them to flight software testing of the Triana spacecraft ACS subsystem.
A Verification-Driven Approach to Control Analysis and Tuning
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2008-01-01
This paper proposes a methodology for the analysis and tuning of controllers using control verification metrics. These metrics, which are introduced in a companion paper, measure the size of the largest uncertainty set of a given class for which the closed-loop specifications are satisfied. This framework integrates deterministic and probabilistic uncertainty models into a setting that enables the deformation of sets in the parameter space, the control design space, and in the union of these two spaces. In regard to control analysis, we propose strategies that enable bounding regions of the design space where the specifications are satisfied by all the closed-loop systems associated with a prescribed uncertainty set. When this is infeasible, we bound regions where the probability of satisfying the requirements exceeds a prescribed value. In regard to control tuning, we propose strategies for the improvement of the robust characteristics of a baseline controller. Some of these strategies use multi-point approximations to the control verification metrics in order to alleviate the numerical burden of solving a min-max problem. Since this methodology targets non-linear systems having an arbitrary, possibly implicit, functional dependency on the uncertain parameters and for which high-fidelity simulations are available, they are applicable to realistic engineering problems.
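The underlying idea of such a metric can be sketched very simply: find the radius of the largest uncertainty box around the nominal parameters for which a closed-loop specification holds. Below, the "specification" is a toy stability test checked on sampled parameter values, and bisection finds the critical radius; the companion paper's actual metrics and set classes are more general than this stand-in.

```python
import numpy as np

def spec_satisfied(p):
    """Toy closed-loop spec: both roots of s^2 + p1*s + p2 are stable."""
    p1, p2 = p
    return p1 > 0.0 and p2 > 0.0

def spec_holds_on_box(nominal, r, n=200, seed=3):
    """Check the spec on parameter samples drawn from a box of radius r."""
    rng = np.random.default_rng(seed)
    samples = nominal + r * rng.uniform(-1, 1, size=(n, 2))
    return all(spec_satisfied(p) for p in samples)

def critical_radius(nominal, r_hi=5.0, tol=1e-3):
    r_lo = 0.0
    while r_hi - r_lo > tol:                 # bisection on the box radius
        r = 0.5 * (r_lo + r_hi)
        r_lo, r_hi = (r, r_hi) if spec_holds_on_box(nominal, r) else (r_lo, r)
    return r_lo

print("critical radius ~ %.3f" % critical_radius(np.array([1.0, 2.0])))
```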
NASA Astrophysics Data System (ADS)
Bonfiglio, D.; Chacón, L.; Cappello, S.
2010-08-01
With the increasing impact of scientific discovery via advanced computation, there is presently a strong emphasis on ensuring the mathematical correctness of computational simulation tools. Such endeavor, termed verification, is now at the center of most serious code development efforts. In this study, we address a cross-benchmark nonlinear verification study between two three-dimensional magnetohydrodynamics (3D MHD) codes for fluid modeling of fusion plasmas, SPECYL [S. Cappello and D. Biskamp, Nucl. Fusion 36, 571 (1996)] and PIXIE3D [L. Chacón, Phys. Plasmas 15, 056103 (2008)], in their common limit of application: the simple viscoresistive cylindrical approximation. SPECYL is a serial code in cylindrical geometry that features a spectral formulation in space and a semi-implicit temporal advance, and has been used extensively to date for reversed-field pinch studies. PIXIE3D is a massively parallel code in arbitrary curvilinear geometry that features a conservative, solenoidal finite-volume discretization in space, and a fully implicit temporal advance. The present study is, in our view, a first mandatory step in assessing the potential of any numerical 3D MHD code for fluid modeling of fusion plasmas. Excellent agreement is demonstrated over a wide range of parameters for several fusion-relevant cases in both two- and three-dimensional geometries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonfiglio, Daniele; Chacon, Luis; Cappello, Susanna
2010-01-01
With the increasing impact of scientific discovery via advanced computation, there is presently a strong emphasis on ensuring the mathematical correctness of computational simulation tools. Such endeavor, termed verification, is now at the center of most serious code development efforts. In this study, we address a cross-benchmark nonlinear verification study between two three-dimensional magnetohydrodynamics (3D MHD) codes for fluid modeling of fusion plasmas, SPECYL [S. Cappello and D. Biskamp, Nucl. Fusion 36, 571 (1996)] and PIXIE3D [L. Chacon, Phys. Plasmas 15, 056103 (2008)], in their common limit of application: the simple viscoresistive cylindrical approximation. SPECYL is a serial code in cylindrical geometry that features a spectral formulation in space and a semi-implicit temporal advance, and has been used extensively to date for reversed-field pinch studies. PIXIE3D is a massively parallel code in arbitrary curvilinear geometry that features a conservative, solenoidal finite-volume discretization in space, and a fully implicit temporal advance. The present study is, in our view, a first mandatory step in assessing the potential of any numerical 3D MHD code for fluid modeling of fusion plasmas. Excellent agreement is demonstrated over a wide range of parameters for several fusion-relevant cases in both two- and three-dimensional geometries.
Verification of Meteorological and Oceanographic Ensemble Forecasts in the U.S. Navy
NASA Astrophysics Data System (ADS)
Klotz, S.; Hansen, J.; Pauley, P.; Sestak, M.; Wittmann, P.; Skupniewicz, C.; Nelson, G.
2013-12-01
The Navy Ensemble Forecast Verification System (NEFVS) has been promoted recently to operational status at the U.S. Navy's Fleet Numerical Meteorology and Oceanography Center (FNMOC). NEFVS processes FNMOC and National Centers for Environmental Prediction (NCEP) meteorological and ocean wave ensemble forecasts, gridded forecast analyses, and innovation (observational) data output by FNMOC's data assimilation system. The NEFVS framework consists of statistical analysis routines, a variety of pre- and post-processing scripts to manage data and plot verification metrics, and a master script to control application workflow. NEFVS computes metrics that include forecast bias, mean-squared error, conditional error, conditional rank probability score, and Brier score. The system also generates reliability and Receiver Operating Characteristic diagrams. In this presentation we describe the operational framework of NEFVS and show examples of verification products computed from ensemble forecasts, meteorological observations, and forecast analyses. The construction and deployment of NEFVS addresses important operational and scientific requirements within Navy Meteorology and Oceanography. These include computational capabilities for assessing the reliability and accuracy of meteorological and ocean wave forecasts in an operational environment, for quantifying effects of changes and potential improvements to the Navy's forecast models, and for comparing the skill of forecasts from different forecast systems. NEFVS also supports the Navy's collaboration with the U.S. Air Force, NCEP, and Environment Canada in the North American Ensemble Forecast System (NAEFS) project and with the Air Force and the National Oceanic and Atmospheric Administration (NOAA) in the National Unified Operational Prediction Capability (NUOPC) program. This program is tasked with eliminating unnecessary duplication within the three agencies, accelerating the transition of new technology, such as multi-model ensemble forecasting, to U.S. Department of Defense use, and creating a superior U.S. global meteorological and oceanographic prediction capability. Forecast verification is an important component of NAEFS and NUOPC. Distribution Statement A: Approved for Public Release; distribution is unlimited
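Two of the ensemble metrics named above are easy to state concretely. The Brier score is the mean squared error of event probabilities against 0/1 outcomes, and a reliability tabulation compares forecast probability bins with observed frequencies. The sketch below uses invented toy arrays, not FNMOC data or NEFVS code.

```python
import numpy as np

def brier_score(prob_forecast, event_occurred):
    """Mean squared error of event probabilities vs 0/1 outcomes."""
    return np.mean((prob_forecast - event_occurred) ** 2)

rng = np.random.default_rng(4)
p = rng.uniform(0, 1, 1000)                        # forecast probabilities
o = (rng.uniform(0, 1, 1000) < p).astype(float)    # reliable by construction
print("Brier score: %.3f" % brier_score(p, o))

# Reliability: observed frequency in each probability bin should match p.
bins = np.digitize(p, np.linspace(0, 1, 11)) - 1
for b in range(10):
    m = bins == b
    if m.any():
        print("p in [%.1f,%.1f): obs freq %.2f" % (b/10, (b+1)/10, o[m].mean()))
```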
Verification of Meteorological and Oceanographic Ensemble Forecasts in the U.S. Navy
NASA Astrophysics Data System (ADS)
Klotz, S. P.; Hansen, J.; Pauley, P.; Sestak, M.; Wittmann, P.; Skupniewicz, C.; Nelson, G.
2012-12-01
The Navy Ensemble Forecast Verification System (NEFVS) has been promoted recently to operational status at the U.S. Navy's Fleet Numerical Meteorology and Oceanography Center (FNMOC). NEFVS processes FNMOC and National Centers for Environmental Prediction (NCEP) meteorological and ocean wave ensemble forecasts, gridded forecast analyses, and innovation (observational) data output by FNMOC's data assimilation system. The NEFVS framework consists of statistical analysis routines, a variety of pre- and post-processing scripts to manage data and plot verification metrics, and a master script to control application workflow. NEFVS computes metrics that include forecast bias, mean-squared error, conditional error, conditional rank probability score, and Brier score. The system also generates reliability and Receiver Operating Characteristic diagrams. In this presentation we describe the operational framework of NEFVS and show examples of verification products computed from ensemble forecasts, meteorological observations, and forecast analyses. The construction and deployment of NEFVS addresses important operational and scientific requirements within Navy Meteorology and Oceanography (METOC). These include computational capabilities for assessing the reliability and accuracy of meteorological and ocean wave forecasts in an operational environment, for quantifying effects of changes and potential improvements to the Navy's forecast models, and for comparing the skill of forecasts from different forecast systems. NEFVS also supports the Navy's collaboration with the U.S. Air Force, NCEP, and Environment Canada in the North American Ensemble Forecast System (NAEFS) project and with the Air Force and the National Oceanic and Atmospheric Administration (NOAA) in the National Unified Operational Prediction Capability (NUOPC) program. This program is tasked with eliminating unnecessary duplication within the three agencies, accelerating the transition of new technology, such as multi-model ensemble forecasting, to U.S. Department of Defense use, and creating a superior U.S. global meteorological and oceanographic prediction capability. Forecast verification is an important component of NAEFS and NUOPC.
Assessment of wind energy potential in Poland
NASA Astrophysics Data System (ADS)
Starosta, Katarzyna; Linkowska, Joanna; Mazur, Andrzej
2014-05-01
The aim of the presentation is to show the suitability of numerical model wind speed forecasts for wind power industry applications in Poland. In accordance with the guidelines of the European Union, the consumption of wind energy in Poland is rapidly increasing. According to the report of the Energy Regulatory Office from 30 March 2013, the installed capacity of wind power in Poland was 2807 MW from 765 wind power stations. Wind energy is strongly dependent on meteorological conditions. Based on climatological wind speed data, potential energy zones within the area of Poland have been developed (H. Lorenc). They are the first criterion for assessing the location of a wind farm. However, for exact monitoring of a given wind farm location, prognostic data from numerical model forecasts are necessary. For practical interpretation and further post-processing, verification of the model data is very important. The Polish Institute of Meteorology and Water Management - National Research Institute (IMWM-NRI) runs the operational COSMO model (Consortium for Small-scale Modelling, version 4.8) using two nested domains at horizontal resolutions of 7 km and 2.8 km. The model produces 36-hour and 78-hour forecasts from 00 UTC, for the 2.8 km and 7 km domains respectively. Numerical forecasts were compared with observations from 60 SYNOP and 3 TEMP stations in Poland, using VERSUS2 (Unified System Verification Survey 2) and the R package. For every zone, the set of statistical indices (ME, MAE, RMSE) was calculated. Forecast errors for aerological profiles are shown for the Polish TEMP stations at Wrocław, Legionowo and Łeba. The current studies are connected with the COST Action ES1002 WIRE (Weather Intelligence for Renewable Energies).
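The indices quoted above are the usual continuous-variable verification measures; a compact sketch follows (toy data, not the COSMO/VERSUS2 output):

```python
import numpy as np

def me(f, o):   return np.mean(f - o)             # mean error (bias)
def mae(f, o):  return np.mean(np.abs(f - o))     # mean absolute error
def rmse(f, o): return np.sqrt(np.mean((f - o) ** 2))

rng = np.random.default_rng(5)
obs = rng.gamma(2.0, 2.5, 500)                    # 10 m wind speed obs [m/s]
fcst = obs + rng.normal(0.3, 1.2, 500)            # forecasts with slight bias
print("ME %.2f  MAE %.2f  RMSE %.2f"
      % (me(fcst, obs), mae(fcst, obs), rmse(fcst, obs)))
```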
NASA Astrophysics Data System (ADS)
Gabriel, Alice; Pelties, Christian
2014-05-01
In this presentation we will demonstrate the benefits of using modern numerical methods to support physics-based ground motion modeling and research. For this purpose, we utilize SeisSol, an arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) scheme, to solve the spontaneous rupture problem with high-order accuracy in space and time using three-dimensional unstructured tetrahedral meshes. We recently verified the method in various advanced test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite, including branching and dipping fault systems, heterogeneous background stresses, bi-material faults and rate-and-state friction constitutive formulations. Now, we study the dynamic rupture process using 3D meshes of fault systems constructed from geological and geophysical constraints, such as high-resolution topography, 3D velocity models and fault geometries. Our starting point is a large-scale earthquake dynamic rupture scenario based on the 1994 Northridge blind thrust event in Southern California. Starting from this well-documented and extensively studied event, we intend to understand the ground motion, including the relevant high-frequency content, generated by complex fault systems and its variation arising from various physical constraints. For example, our results imply that the Northridge fault geometry favors a pulse-like rupture behavior.
Direct measurements of the Gibbs free energy of OH using a CW tunable laser
NASA Technical Reports Server (NTRS)
Killinger, D. K.; Wang, C. C.
1979-01-01
The paper describes an absorption measurement for determining the Gibbs free energy of OH generated in a mixture of water and oxygen vapor. These measurements afford a direct verification of the accuracy of thermochemical data of H2O at high temperatures and pressures. The results indicate that values for the heat capacity of H2O obtained through numerical computations are correct within an experimental uncertainty of 0.15 cal/mole K.
NASA Astrophysics Data System (ADS)
Eça, L.; Hoekstra, M.
2015-11-01
We have to start by thanking the authors of this comment for their interest in our work. While quality assurance is being consolidated in several business areas, the importance of Verification and Validation (V&V) is still not fully appreciated in the CFD community. More attention for the topic is welcome.
FY16 NRL DoD High Performance Computing Modernization Program
2017-09-15
... explored both wind and wave forcing in the numerical wave tank. The model uses high spatial and temporal resolution and a multi-phase formulation to ... Results: The ADVED_NS code was used to predict the effect of the standoff distance between micron-diameter wires and flow frequency on the total ... contours for a flow over a 3D wire mesh. Figure 2 shows verifications comparing computed and theoretical drag forces for the flow over two cylinders in an ...
Doty, Michael A.
1997-01-01
A system and method for simultaneously collecting serial number information reports from numerous colliding coded-radio-frequency identity tags. Each tag has a unique multi-digit serial number that is stored in non-volatile RAM. A reader transmits an ASCII coded "D" character on a carrier of about 900 MHz and a power illumination field having a frequency of about 1.6 GHz. A one MHz tone is modulated on the 1.6 GHz carrier as a timing clock for a microprocessor in each of the identity tags. Over a thousand such tags may be in the vicinity and each is powered-up and clocked by the 1.6 GHz power illumination field. Each identity tag looks for the "D" interrogator modulated on the 900 MHz carrier, and each uses a digit of its serial number to time a response. Clear responses received by the reader are repeated for verification. If no verification or a wrong number is received by any identity tag, it uses a second digit together with the first to time out a more extended period for response. Ultimately, the entire serial number will be used in the worst case collision environments; and since the serial numbers are defined as being unique, the final possibility will be successful because a clear time-slot channel will be available.
Doty, M.A.
1997-01-07
A system and method are disclosed for simultaneously collecting serial number information reports from numerous colliding coded-radio-frequency identity tags. Each tag has a unique multi-digit serial number that is stored in non-volatile RAM. A reader transmits an ASCII coded "D" character on a carrier of about 900 MHz and a power illumination field having a frequency of about 1.6 GHz. A one MHz tone is modulated on the 1.6 GHz carrier as a timing clock for a microprocessor in each of the identity tags. Over a thousand such tags may be in the vicinity and each is powered-up and clocked by the 1.6 GHz power illumination field. Each identity tag looks for the "D" interrogator modulated on the 900 MHz carrier, and each uses a digit of its serial number to time a response. Clear responses received by the reader are repeated for verification. If no verification or a wrong number is received by any identity tag, it uses a second digit together with the first to time out a more extended period for response. Ultimately, the entire serial number will be used in the worst case collision environments; and since the serial numbers are defined as being unique, the final possibility will be successful because a clear time-slot channel will be available. 5 figs.
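A simplified simulation of the digit-timed anti-collision scheme the patent describes: each tag delays its reply according to its next serial-number digit, extending to one more digit after every unresolved collision, so unique serial numbers eventually yield a clear time slot. This is a toy model; the radio details (900 MHz link, 1.6 GHz clock field, verification echoes) are abstracted away.

```python
import random

def read_all_tags(serials, max_rounds=12):
    """Resolve colliding tags by slotting on successively longer digit prefixes."""
    collected, pending = set(), list(serials)
    depth = 1
    while pending and depth <= max_rounds:
        slots = {}
        for s in pending:                       # slot = first `depth` digits
            slots.setdefault(s[:depth], []).append(s)
        pending = []
        for tags in slots.values():
            if len(tags) == 1:
                collected.add(tags[0])          # clear reply, verified by echo
            else:
                pending.extend(tags)            # collision: use one more digit
        depth += 1
    return collected

serials = {"".join(random.choices("0123456789", k=6)) for _ in range(1000)}
print(len(read_all_tags(serials)) == len(serials))   # all tags eventually read
```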
Defense Advanced Research Projects Agency Technology Transition
1997-01-01
... detection of nuclear testing in space, navigation, meteorological monitoring, and communication. These early activities were transferred to the Military ... used to detect nuclear tests in space and in the atmosphere as part of the overall basis for verification of a future nuclear test ban treaty. The first ... background data to detect nuclear explosions taking place in space, and eventually also in the earth's atmosphere. The program developed x-ray, neutron ...
Verification of Minimum Detectable Activity for Radiological Threat Source Search
NASA Astrophysics Data System (ADS)
Gardiner, Hannah; Myjak, Mitchell; Baciak, James; Detwiler, Rebecca; Seifert, Carolyn
2015-10-01
The Department of Homeland Security's Domestic Nuclear Detection Office is working to develop advanced technologies that will improve the ability to detect, localize, and identify radiological and nuclear sources from airborne platforms. The Airborne Radiological Enhanced-sensor System (ARES) program is developing advanced data fusion algorithms for analyzing data from a helicopter-mounted radiation detector. This detector platform provides a rapid, wide-area assessment of radiological conditions at ground level. The NSCRAD (Nuisance-rejection Spectral Comparison Ratios for Anomaly Detection) algorithm was developed to distinguish low-count sources of interest from benign naturally occurring radiation and irrelevant nuisance sources. It uses a number of broad, overlapping regions of interest to statistically compare each newly measured spectrum with the current estimate for the background to identify anomalies. We recently developed a method to estimate the minimum detectable activity (MDA) of NSCRAD in real time. We present this method here and report on the MDA verification using both laboratory measurements and simulated injects on measured backgrounds at or near the detection limits. This work is supported by the US Department of Homeland Security, Domestic Nuclear Detection Office, under competitively awarded contract/IAA HSHQDC-12-X-00376. This support does not constitute an express or implied endorsement on the part of the Gov't.
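The abstract does not spell out the real-time MDA estimate, but a common starting point for such calculations is the Currie formulation: with B expected background counts in a region of interest, detection efficiency eps, branching ratio f and live time t, MDA = (2.71 + 4.65*sqrt(B)) / (eps*f*t). Treat the sketch below as a generic stand-in, not NSCRAD's actual method, with invented example numbers.

```python
import math

def currie_mda(bkg_counts, efficiency, branching, live_time_s):
    """Minimum detectable activity [Bq] at the usual 95%/95% confidence levels."""
    ld = 2.71 + 4.65 * math.sqrt(bkg_counts)      # Currie detection limit [counts]
    return ld / (efficiency * branching * live_time_s)

# e.g. 400 background counts in a 2 s dwell, 1% absolute efficiency, f = 0.85
print("MDA ~ %.0f Bq" % currie_mda(400, 0.01, 0.85, 2.0))
```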
Three years of operational experience from Schauinsland CTBT monitoring station.
Zähringer, M; Bieringer, J; Schlosser, C
2008-04-01
Data from three years of operation of a low-level aerosol sampler and analyzer (RASA) at the Schauinsland monitoring station are reported. The system is part of the International Monitoring System (IMS) for verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The fully automatic system is capable of measuring aerosol-borne gamma emitters with high sensitivity and routinely quantifies 7Be and 212Pb. The system had a high level of data availability of 90% within the reporting period. A daily screening process yielded 66 tentative identifications of verification-relevant radionuclides since the system entered IMS operation in February 2004. Two of these were real events associated with a plausible source. The remaining 64 cases can consistently be explained by detector background and statistical phenomena. Inter-comparison with data from a weekly sampler operated at the same station shows instabilities of the calibration during the test phase and good agreement since certification of the system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul, J. N.; Chin, M. R.; Sjoden, G. E.
2013-07-01
A mobile 'drive by' passive radiation detection system to be applied in special nuclear materials (SNM) storage facilities for validation and compliance purposes has been designed through the use of computational modeling and new radiation detection methods. This project was the result of work over a 1-year period to create optimal design specifications, including the creation of 3D models using both Monte Carlo and deterministic codes to characterize the gamma and neutron leakage out of each surface of SNM-bearing canisters. Results were compared and agreement was demonstrated between both models. Container leakages were then used to determine the expected reaction rates, using transport theory, in the detectors when placed at varying distances from the can. A 'typical' background signature was incorporated to determine the minimum signatures versus the probability of detection to evaluate moving source protocols with collimation. This established the criteria for verification of source presence and time gating at a given vehicle speed. New methods for the passive detection of SNM were employed and shown to give reliable identification of age and material for highly enriched uranium (HEU) and weapons-grade plutonium (WGPu). The finalized 'Mobile Pit Verification System' (MPVS) design demonstrated that a 'drive-by' detection system, collimated and operating at nominally 2 mph, is capable of rapidly verifying each and every weapon pit stored in regularly spaced, shelved storage containers, using completely passive gamma and neutron signatures for HEU and WGPu. This system is ready for real evaluation to demonstrate passive total material accountability in storage facilities. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, R.M.; Harding, J.M.; Pollak, K.D.
1992-02-01
Global-scale analyses of ocean thermal structure produced operationally at the U.S. Navy's Fleet Numerical Oceanography Center are verified, along with an ocean thermal climatology, against unassimilated bathythermograph (bathy), satellite multichannel sea surface temperature (MCSST), and ship sea surface temperature (SST) data. Verification statistics are calculated from the three types of data for February-April of 1988 and February-April of 1990 in nine verification areas covering most of the open ocean in the Northern Hemisphere. The analyzed thermal fields were produced by version 1.0 of the Optimum Thermal Interpolation System (OTIS 1.0) in 1988, but by an upgraded version of this model, referred to as OTIS 1.1, in 1990. OTIS 1.1 employs exactly the same analysis methodology as OTIS 1.0. The principal difference is that OTIS 1.1 has twice the spatial resolution of OTIS 1.0 and consequently uses smaller spatial decorrelation scales and noise-to-signal ratios. As a result, OTIS 1.1 is able to represent more horizontal detail in the ocean thermal fields than its predecessor. Verification statistics for the SST fields derived from bathy and MCSST data are consistent with each other, showing similar trends and error levels. These data indicate that the analyzed SST fields are more accurate in 1990 than in 1988, and generally more accurate than climatology for both years. Verification statistics for the SST fields derived from ship data are inconsistent with those derived from the bathy and MCSST data, and show much higher error levels indicative of observational noise.
Benchmarking desktop and mobile handwriting across COTS devices: The e-BioSign biometric database
Tolosana, Ruben; Vera-Rodriguez, Ruben; Fierrez, Julian; Morales, Aythami; Ortega-Garcia, Javier
2017-01-01
This paper describes the design, acquisition process and baseline evaluation of the new e-BioSign database, which includes dynamic signature and handwriting information. Data is acquired from 5 different COTS devices: three Wacom devices (STU-500, STU-530 and DTU-1031) specifically designed to capture dynamic signatures and handwriting, and two general purpose tablets (Samsung Galaxy Note 10.1 and Samsung ATIV 7). For the two Samsung tablets, data is collected using both a pen stylus and the finger in order to study the performance of signature verification in a mobile scenario. Data was collected in two sessions for 65 subjects, and includes dynamic information of the signature, the full name and alphanumeric sequences. Skilled forgeries were also performed for signatures and full names. We also report a benchmark evaluation based on e-BioSign for person verification under three different real scenarios: 1) intra-device, 2) inter-device, and 3) mixed writing-tool. We have experimented with the proposed benchmark using the main existing approaches for signature verification: feature- and time functions-based. As a result, new insights into the problem of signature biometrics in sensor-interoperable scenarios have been obtained, namely: the importance of specific methods for dealing with device interoperability, and the necessity of a deeper analysis on signatures acquired using the finger as the writing tool. This e-BioSign public database allows the research community to: 1) further analyse and develop signature verification systems in realistic scenarios, and 2) work towards a better understanding of the nature of human handwriting when captured using electronic COTS devices in realistic conditions. PMID:28475590
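The "time functions-based" verification approach mentioned above is commonly implemented with dynamic time warping (DTW) between the time sequences of a questioned signature and an enrolled reference (e.g. pen x, y, pressure): a smaller warped distance suggests a genuine signature. A minimal DTW sketch on 1-D sequences follows; the features, test signals and any acceptance threshold are assumptions, not the paper's system.

```python
import numpy as np

def dtw_distance(a, b):
    """Length-normalised dynamic time warping distance between 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)

ref = np.sin(np.linspace(0, 6, 120))            # enrolled pen-x trajectory
genuine = np.sin(np.linspace(0, 6, 100))        # same shape, different pace
forgery = np.cos(np.linspace(0, 6, 110))        # different shape
print("genuine %.3f  forgery %.3f" % (dtw_distance(ref, genuine),
                                      dtw_distance(ref, forgery)))
# accept if the distance falls below a threshold tuned on development data
```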
Benchmarking desktop and mobile handwriting across COTS devices: The e-BioSign biometric database.
Tolosana, Ruben; Vera-Rodriguez, Ruben; Fierrez, Julian; Morales, Aythami; Ortega-Garcia, Javier
2017-01-01
This paper describes the design, acquisition process and baseline evaluation of the new e-BioSign database, which includes dynamic signature and handwriting information. Data is acquired from 5 different COTS devices: three Wacom devices (STU-500, STU-530 and DTU-1031) specifically designed to capture dynamic signatures and handwriting, and two general purpose tablets (Samsung Galaxy Note 10.1 and Samsung ATIV 7). For the two Samsung tablets, data is collected using both a pen stylus and the finger in order to study the performance of signature verification in a mobile scenario. Data was collected in two sessions for 65 subjects, and includes dynamic information of the signature, the full name and alphanumeric sequences. Skilled forgeries were also performed for signatures and full names. We also report a benchmark evaluation based on e-BioSign for person verification under three different real scenarios: 1) intra-device, 2) inter-device, and 3) mixed writing-tool. We have experimented with the proposed benchmark using the main existing approaches for signature verification: feature- and time functions-based. As a result, new insights into the problem of signature biometrics in sensor-interoperable scenarios have been obtained, namely: the importance of specific methods for dealing with device interoperability, and the necessity of a deeper analysis on signatures acquired using the finger as the writing tool. This e-BioSign public database allows the research community to: 1) further analyse and develop signature verification systems in realistic scenarios, and 2) work towards a better understanding of the nature of human handwriting when captured using electronic COTS devices in realistic conditions.
Verification of the NWP models operated at ICM, Poland
NASA Astrophysics Data System (ADS)
Melonek, Malgorzata
2010-05-01
Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw (ICM) started its activity in the field of NWP in May 1997. Since that time, numerical weather forecasts covering Central Europe have been routinely published on our publicly available website. The first NWP model used at ICM was the hydrostatic Unified Model developed by the UK Meteorological Office, a mesoscale version with a horizontal resolution of 17 km and 31 vertical levels. At present, two non-hydrostatic NWP models run in a quasi-operational regime. The main new UM model, with 4 km horizontal resolution, 38 vertical levels and a forecast range of 48 hours, runs four times a day. The second, the COAMPS model (Coupled Ocean/Atmosphere Mesoscale Prediction System) developed by the US Naval Research Laboratory, configured with three nested grids (with corresponding resolutions of 39 km, 13 km and 4.3 km, and 30 vertical levels), runs twice a day (for 00 and 12 UTC). The second grid covers Central Europe and has a forecast range of 84 hours. Results of both NWP models, i.e. COAMPS computed on the 13 km mesh and UM, are verified against observations from the Polish synoptic stations. Verification uses surface observations and nearest-grid-point forecasts. The following meteorological elements are verified: air temperature at 2 m, mean sea level pressure, wind speed and wind direction at 10 m, and 12-hour accumulated precipitation. Different statistical indices are presented. For continuous variables, Mean Error (ME), Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) are computed in 6-hour intervals. For precipitation, contingency tables for different thresholds are computed and some of the verification scores such as FBI, ETS, POD and FAR are graphically presented. The verification sample covers nearly one year.
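The indices listed above have standard definitions; a minimal sketch of how they are computed from paired forecast-observation series and a thresholded 2x2 contingency table (variable names are illustrative):

    import numpy as np

    def continuous_scores(forecast, observed):
        """ME, MAE and RMSE for a continuous variable such as 2 m temperature."""
        err = forecast - observed
        return {"ME": err.mean(),
                "MAE": np.abs(err).mean(),
                "RMSE": np.sqrt((err ** 2).mean())}

    def precipitation_scores(forecast, observed, threshold):
        """FBI, POD, FAR and ETS from the contingency table for one
        accumulation threshold (standard definitions)."""
        f, o = forecast >= threshold, observed >= threshold
        hits = np.sum(f & o)
        misses = np.sum(~f & o)
        false_alarms = np.sum(f & ~o)
        hits_random = (hits + misses) * (hits + false_alarms) / o.size
        return {"FBI": (hits + false_alarms) / (hits + misses),
                "POD": hits / (hits + misses),
                "FAR": false_alarms / (hits + false_alarms),
                "ETS": (hits - hits_random) /
                       (hits + misses + false_alarms - hits_random)}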
Grenier, Christophe; Anbergen, Hauke; Bense, Victor; ...
2018-02-26
In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. In this paper, this issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs, resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.
Faster and More Accurate Transport Procedures for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Blattnig, Steve R.; Badavi, Francis F.
2010-01-01
Several aspects of code verification are examined for HZETRN. First, a detailed derivation of the numerical marching algorithms is given. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of various coding errors is also given, and the impact of these errors on exposure quantities is shown. Finally, a coupled convergence study is conducted. From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is also determined that almost all of the discretization error in HZETRN is caused by charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons are given for three applications in which HZETRN is commonly used. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.
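A coupled convergence study of this kind typically reports an observed order of accuracy estimated from solutions on three systematically refined grids. A minimal sketch of that standard Richardson-style estimate (the numbers are illustrative, not HZETRN results):

    import math

    def observed_order(f_coarse, f_medium, f_fine, r):
        """Observed order of accuracy p from three grid levels with constant
        refinement ratio r: p = ln(|f1 - f2| / |f2 - f3|) / ln(r)."""
        return math.log(abs(f_coarse - f_medium) /
                        abs(f_medium - f_fine)) / math.log(r)

    # Illustrative exposure values on grids refined by r = 2; p -> 2.0 here
    print(observed_order(1.250, 1.062, 1.015, 2.0))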
Assessment of the National Combustion Code
NASA Technical Reports Server (NTRS)
Liu, Nan-Suey; Iannetti, Anthony; Shih, Tsan-Hsing
2007-01-01
The advancements made during the last decade in the areas of combustion modeling, numerical simulation, and computing platform have greatly facilitated the use of CFD based tools in the development of combustion technology. Further development of verification, validation and uncertainty quantification will have profound impact on the reliability and utility of these CFD based tools. The objectives of the present effort are to establish baseline for the National Combustion Code (NCC) and experimental data, as well as to document current capabilities and identify gaps for further improvements.
Thermal acoustic oscillations, volume 2. [cryogenic fluid storage
NASA Technical Reports Server (NTRS)
Spradley, L. W.; Sims, W. H.; Fan, C.
1975-01-01
A number of thermal acoustic oscillation phenomena and their effects on cryogenic systems were studied. The conditions which cause or suppress oscillations, the frequency, amplitude and intensity of oscillations when they exist, and the heat loss they induce are discussed. Methods of numerical analysis utilizing the digital computer were developed for use in cryogenic systems design. In addition, an experimental verification program was conducted to study oscillation wave characteristics and boiloff rate. The data were then reduced and compared with the analytical predictions.
FY16 NRL DoD High Performance Computing Modernization Program Annual Reports
2017-09-15
explored both wind and wave forcing in the numerical wave tank. The model uses high spatial and temporal resolution and a multi-phase formulation to... Results: The ADVED_NS code was used to predict the effect of the standoff distance between micron-diameter wires and flow frequency on the total... contours for a flow over a 3D wire mesh. Figure 2 shows verifications comparing computed and theoretical drag forces for the flow over two cylinders in an...
Aerothermal modeling program, phase 2
NASA Technical Reports Server (NTRS)
Mongia, H. C.; Patankar, S. V.; Murthy, S. N. B.; Sullivan, J. P.; Samuelsen, G. S.
1985-01-01
The main objectives of the Aerothermal Modeling Program, Phase 2 are: to develop an improved numerical scheme for incorporation in a 3-D combustor flow model; to conduct a benchmark-quality experiment to study the interaction of a primary jet with a confined swirling crossflow, and to assess current and advanced turbulence and scalar transport models; and to conduct an experimental evaluation of the air swirler interaction with fuel injectors, assessments of current two-phase models, and verification of the improved spray evaporation/dispersion models.
CFD Aerothermodynamic Characterization Of The IXV Hypersonic Vehicle
NASA Astrophysics Data System (ADS)
Roncioni, P.; Ranuzzi, G.; Marini, M.; Battista, F.; Rufolo, G. C.
2011-05-01
In this paper, in the framework of the ESA technical assistance activities for the IXV project, the numerical activities carried out by ASI/CIRA to support the development of Aerodynamic and Aerothermodynamic databases, independent from the ones developed by the IXV industrial consortium, are reported. A general characterization of the IXV aerothermodynamic environment has also been provided for cross-checking and verification purposes. The work covers the first-year activities of the Technical Assistance Contract agreed between the Italian Space Agency/CIRA and ESA.
NASA Technical Reports Server (NTRS)
Ross, H. D.; Schiller, D. N.; Disimile, P.; Sirignano, W. A.
1989-01-01
The temperature and velocity fields have been investigated for a single-phase gas system and a two-layer gas-and-liquid system enclosed in a circular cylinder being heated suddenly and nonuniformly from above. The transient response of the gas, liquid, and container walls was modelled numerically in normal and reduced gravity (10^-5 g). Verification of the model was accomplished via flow visualization experiments in 10 cm high by 10 cm diameter plexiglass cylinders.
NASA Technical Reports Server (NTRS)
Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.; Lo, R. Y.
1987-01-01
Modeling of SEU has been done in a CMOS static RAM containing 1-micron-channel-length transistors fabricated from a p-well epilayer process using both circuit-simulation and numerical-simulation techniques. The modeling results have been experimentally verified with the aid of heavy-ion beams obtained from a three-stage tandem van de Graaff accelerator. Experimental evidence for a novel SEU mode in an ON n-channel device is presented.
NASA Astrophysics Data System (ADS)
Jiang, Houshuo; Grosenbaugh, Mark A.
2002-11-01
Numerical simulations are used to study the laminar vortex ring formation in the presence of background flow. The numerical setup includes a round-headed axisymmetric body with a sharp-wedged opening at the posterior end where a column of fluid is pushed out by a piston inside the body. The piston motion is explicitly included into the simulations by using a deforming mesh. The numerical method is verified by simulating the standard vortex ring formation process in quiescent fluid for a wide range of piston stroke to cylinder diameter ratios (Lm/D). The results from these simulations confirm the existence of a universal formation time scale (formation number) found by others from experimental and numerical studies. For the case of vortex ring formation by the piston/cylinder arrangement in a constant background flow (i.e. the background flow is in the direction of the piston motion), the results show that a smaller fraction of the ejected circulation is delivered into the leading vortex ring, thereby decreasing the formation number. The mechanism behind this reduction is believed to be related to the modification of the shear layer profile between the jet flow and the background flow by the external boundary layer on the outer surface of the cylinder. In effect, the vorticity in the jet is cancelled by the opposite signed vorticity in the external boundary layer. Simulations using different end geometries confirm the general nature of the phenomenon. The thrust generated from the jet and the drag forces acting on the body are calculated with and without background flow for different piston programs. The implications of these results for squid propulsion are discussed.
2015-05-01
In the past decade, unmanned systems have significantly impacted warfare... environments at a speed and scale beyond manned capability. However, current unmanned systems operate with minimal autonomy. To meet warfighter needs and...
Navy DDG-51 and DDG-1000 Destroyer Programs: Background and Issues for Congress
2013-10-22
two technologies previously identified as the most challenging — digital beam-forming and transmit-receive modules — have been demonstrated in a... job of coming up with an affordable solution to a leap-ahead capability for the fleet." [31] In his presentation, Vandroff showed a slide comparing the... foreign ballistic missile data in support of international treaty verification. CJR represents an integrated mission solution: ship, radar suite, and...
Verification of Disarmament or Limitation of Armaments: Instruments, Negotiations, Proposals
1992-05-01
explosions and may complicate the process of detection. An even greater difficulty faced by seismologists is the ambient background of seismic "noise"... suspected event would be a complex operation. It would consist of surveys of the area of the presumed nuclear explosion in order to measure ambient... Draft Resolution to the OAS General Assembly, June 1991, and OAS Resolution "Cooperacion para la seguridad en el hemisferio. Limitacion de la..."
DOE Office of Scientific and Technical Information (OSTI.GOV)
ADAMS, WADE C
At Pennsylvania Department of Environmental Protection's request, ORAU's IEAV program conducted verification surveys on the excavated surfaces of Section 3, SUs 1, 4, and 5 at the Whittaker site on March 13 and 14, 2013. The survey activities included visual inspections, gamma radiation surface scans, gamma activity measurements, and soil sampling activities. Verification activities also included the review and assessment of the licensee's project documentation and methodologies. Surface scans identified four areas of elevated direct gamma radiation distinguishable from background; one area within SUs 1 and 4 and two areas within SU5. One area within SU5 was remediated by removing a golf-ball-size piece of slag while ORAU staff was onsite. With the exception of the golf-ball-size piece of slag within SU5, a review of the ESL Section 3 EXS data packages for SUs 1, 4, and 5 indicated that these locations of elevated gamma radiation were also identified by the ESL gamma scans and that ESL personnel performed additional investigations and soil sampling within these areas. The investigative results indicated that the areas met the release criteria.
Doebling, Scott William
2016-10-22
This paper documents the escape of high explosive (HE) products problem. The problem, first presented by Fickett & Rivard, tests the implementation and numerical behavior of a high explosive detonation and energy release model and its interaction with an associated compressible hydrodynamics simulation code. The problem simulates the detonation of a finite-length, one-dimensional piece of HE that is driven by a piston from one end and adjacent to a void at the other end. The HE equation of state is modeled as a polytropic ideal gas. The HE detonation is assumed to be instantaneous with an infinitesimal reaction zone. Via judicious selection of the material specific heat ratio, the problem has an exact solution with linear characteristics, enabling a straightforward calculation of the physical variables as a function of time and space. Lastly, implementation of the exact solution in the Python code ExactPack is discussed, as are verification cases for the exact solution code.
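In code verification against such an exact solution, the code result is sampled on the simulation grid and a grid-norm error is tracked under mesh refinement. The sketch below shows only that generic pattern; it does not use ExactPack's actual API, and the "numerical" profile is a synthetic stand-in.

    import numpy as np

    def l1_error(numerical, exact, dx):
        """Discrete L1 error norm between code output and the exact solution
        sampled at the same cell centers."""
        return np.sum(np.abs(numerical - exact)) * dx

    # Error should fall ~2x per refinement for a first-order scheme
    for n in (100, 200, 400):
        dx = 1.0 / n
        x = (np.arange(n) + 0.5) * dx                  # cell centers
        exact = np.sin(np.pi * x)                      # stand-in exact profile
        numerical = exact + dx * np.cos(np.pi * x)     # synthetic first-order error
        print(n, l1_error(numerical, exact, dx))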
Development of an Unstructured, Three-Dimensional Material Response Design Tool
NASA Technical Reports Server (NTRS)
Schulz, Joseph; Stern, Eric; Palmer, Grant; Muppidi, Suman; Schroeder, Olivia
2017-01-01
A preliminary verification and validation of a new material response model is presented. This model, Icarus, is intended to serve as a design tool for the thermal protection systems of re-entry vehicles. Currently, the capability of the model is limited to simulating the pyrolysis of a material as a result of the radiative and convective surface heating imposed on the material by the surrounding high-enthalpy gas. Since the major focus behind the development of Icarus has been model extensibility, the hope is that additional physics can be quickly added. The extensibility is critical since thermal protection systems are becoming increasingly complex, e.g. woven carbon polymers. Additionally, as a three-dimensional, unstructured, finite-volume model, Icarus is capable of modeling complex geometries as well as multi-dimensional physics, which have been shown to be important in some scenarios and are not captured by one-dimensional models. In this paper, the mathematical and numerical formulation is presented, followed by a discussion of the software architecture and some preliminary verification and validation studies.
Verification Test of the SURF and SURFplus Models in xRage: Part III Effect of Mesh Alignment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menikoff, Ralph
The previous studies used an underdriven detonation wave in 1 dimension (steady ZND reaction-zone profile followed by a scale-invariant rarefaction wave) for PBX 9502 as a verification test of the implementation of the SURF and SURFplus models in the xRage code. Since the SURF rate is a function of the lead shock pressure, the question arises as to the effect on accuracy of variations in the detected shock pressure due to the alignment of the shock front with the mesh. To study the effect of mesh alignment we simulate a cylindrically diverging detonation wave using a planar 2-D mesh. The leading issue is the magnitude of azimuthal asymmetries in the numerical solution. The 2-D test case does not have an exact analytic solution. To quantify the accuracy, the 2-D solution along rays through the origin is compared to a highly resolved 1-D simulation in cylindrical geometry.
Fracture mechanics life analytical methods verification testing
NASA Technical Reports Server (NTRS)
Favenesi, J. A.; Clemons, T. G.; Riddell, W. T.; Ingraffea, A. R.; Wawrzynek, P. A.
1994-01-01
The objective was to evaluate NASCRAC (trademark) version 2.0, a second generation fracture analysis code, for verification and validity. NASCRAC was evaluated using a combination of comparisons to the literature, closed-form solutions, numerical analyses, and tests. Several limitations and minor errors were detected. Additionally, a number of major flaws were discovered. These major flaws were generally due to application of a specific method or theory, not due to programming logic. Results are presented for the following program capabilities: K versus a, J versus a, crack opening area, life calculation due to fatigue crack growth, tolerable crack size, proof test logic, tearing instability, creep crack growth, crack transitioning, crack retardation due to overloads, and elastic-plastic stress redistribution. It is concluded that the code is an acceptable fracture tool for K solutions of simplified geometries, for a limited number of J and crack opening area solutions, and for fatigue crack propagation with the Paris equation and constant amplitude loads when the Paris equation is applicable.
Haddad, Matthew
2009-01-01
An alarming number of practicing medical professionals and healthcare staffers across the nation may have criminal backgrounds, jeopardizing the health of hundreds of millions of patients and compromising the integrity of healthcare in this country. An investigation conducted by The Los Angeles Times found that an extraordinary number of nurses in California with criminal backgrounds had been allowed to continue working in healthcare facilities for years--their crimes virtually swept under the rug. This article suggests that continuous monitoring of healthcare credentials can mitigate the potential harm posed by credentialing fraud, recommending 24/7 monitoring in real-time as opposed to once every year or two as is the current practice. This would include verification of provider licenses, Drug Enforcement Administration certification, Office of Inspector General status, and criminal offenses. Automatic and continuous monitoring of licenses and other databases for changes and lapses, and reports on issues that are uncovered, help to prevent harmful acts on the part of healthcare providers with questionable backgrounds.
An improved algorithm of laser spot center detection in strong noise background
NASA Astrophysics Data System (ADS)
Zhang, Le; Wang, Qianqian; Cui, Xutai; Zhao, Yu; Peng, Zhong
2018-01-01
Laser spot center detection is required in many applications. Common algorithms for laser spot center detection, such as the centroid and Hough transform methods, have poor anti-interference ability and low detection accuracy under strong background noise. In this paper, firstly, median filtering was used to remove noise while preserving the edge details of the image. Secondly, binarization of the laser facula image was carried out to extract the target image from the background. Then morphological filtering was performed to eliminate noise points inside and outside the spot. At last, the edge of the pretreated facula image was extracted and the laser spot center was obtained using a circle-fitting method. On the foundation of the circle-fitting algorithm, the improved algorithm adds median filtering, morphological filtering and other processing steps. Theoretical analysis and experimental verification show that this method effectively filters background noise, which enhances the anti-interference ability of laser spot center detection and also improves the detection accuracy.
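A minimal sketch of the pipeline described above using OpenCV, with an algebraic (Kasa) least-squares circle fit standing in for whichever circle-fitting variant the authors used; kernel sizes and thresholds are illustrative:

    import cv2
    import numpy as np

    def spot_center(image):
        """image: 8-bit grayscale frame containing one laser spot."""
        den = cv2.medianBlur(image, 5)                    # denoise, keep edges
        _, bw = cv2.threshold(den, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        bw = cv2.morphologyEx(bw, cv2.MORPH_OPEN, k)      # specks outside spot
        bw = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, k)     # holes inside spot
        edges = cv2.Canny(bw, 50, 150)
        ys, xs = np.nonzero(edges)
        xs, ys = xs.astype(float), ys.astype(float)
        # Kasa fit: minimise |x^2 + y^2 + d*x + e*y + f| over edge points
        A = np.column_stack([xs, ys, np.ones_like(xs)])
        rhs = -(xs ** 2 + ys ** 2)
        (d, e, _), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return -d / 2.0, -e / 2.0                         # spot center (x, y)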
NASA Astrophysics Data System (ADS)
Diehl, H. T.; Buckley-Geer, E. J.; Lindgren, K. A.; Nord, B.; Gaitsch, H.; Gaitsch, S.; Lin, H.; Allam, S.; Collett, T. E.; Furlanetto, C.; Gill, M. S. S.; More, A.; Nightingale, J.; Odden, C.; Pellico, A.; Tucker, D. L.; da Costa, L. N.; Fausti Neto, A.; Kuropatkin, N.; Soares-Santos, M.; Welch, B.; Zhang, Y.; Frieman, J. A.; Abdalla, F. B.; Annis, J.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Cunha, C. E.; D'Andrea, C. B.; Desai, S.; Dietrich, J. P.; Drlica-Wagner, A.; Evrard, A. E.; Finley, D. A.; Flaugher, B.; García-Bellido, J.; Gerdes, D. W.; Goldstein, D. A.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; James, D. J.; Kuehn, K.; Kuhlmann, S.; Lahav, O.; Li, T. S.; Lima, M.; Maia, M. A. G.; Marshall, J. L.; Menanteau, F.; Miquel, R.; Nichol, R. C.; Nugent, P.; Ogando, R. L. C.; Plazas, A. A.; Reil, K.; Romer, A. K.; Sako, M.; Sanchez, E.; Santiago, B.; Scarpine, V.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Sheldon, E.; Smith, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Walker, A. R.; DES Collaboration
2017-09-01
We report the results of searches for strong gravitational lens systems in the Dark Energy Survey (DES) Science Verification and Year 1 observations. The Science Verification data span approximately 250 sq. deg. with a median i-band limiting magnitude for extended objects (10σ) of 23.0. The Year 1 data span approximately 2000 sq. deg. and have an i-band limiting magnitude for extended objects (10σ) of 22.9. As these data sets are both wide and deep, they are particularly useful for identifying strong gravitational lens candidates. Potential strong gravitational lens candidate systems were initially identified based on a color and magnitude selection in the DES object catalogs or because the system is at the location of a previously identified galaxy cluster. Cutout images of potential candidates were then visually scanned using an object viewer and numerically ranked according to whether or not we judged them to be likely strong gravitational lens systems. Having scanned nearly 400,000 cutouts, we present 374 candidate strong lens systems, of which 348 are identified for the first time. We provide the R.A. and decl., the magnitudes and photometric properties of the lens and source objects, and the distance (radius) of the source(s) from the lens center for each system.
Methodologies for Verification and Validation of Space Launch System (SLS) Structural Dynamic Models
NASA Technical Reports Server (NTRS)
Coppolino, Robert N.
2018-01-01
Responses to challenges associated with verification and validation (V&V) of Space Launch System (SLS) structural dynamics models are presented in this paper. Four methodologies addressing specific requirements for V&V are discussed. (1) Residual Mode Augmentation (RMA), which has gained acceptance by various principals in the NASA community, defines efficient and accurate FEM modal sensitivity models that are useful in test-analysis correlation and reconciliation and parametric uncertainty studies. (2) Modified Guyan Reduction (MGR) and Harmonic Reduction (HR, introduced in 1976), developed to remedy difficulties encountered with the widely used Classical Guyan Reduction (CGR) method, are presented. MGR and HR are particularly relevant for estimation of "body dominant" target modes of shell-type SLS assemblies that have numerous "body", "breathing" and local component constituents. Realities associated with configuration features and "imperfections" cause "body" and "breathing" mode characteristics to mix resulting in a lack of clarity in the understanding and correlation of FEM- and test-derived modal data. (3) Mode Consolidation (MC) is a newly introduced procedure designed to effectively "de-feature" FEM and experimental modes of detailed structural shell assemblies for unambiguous estimation of "body" dominant target modes. Finally, (4) Experimental Mode Verification (EMV) is a procedure that addresses ambiguities associated with experimental modal analysis of complex structural systems. Specifically, EMV directly separates well-defined modal data from spurious and poorly excited modal data employing newly introduced graphical and coherence metrics.
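For context, Classical Guyan Reduction condenses K and M onto a set of master DOFs through the static transformation u_s = -K_ss^{-1} K_sm u_m, neglecting slave inertia. The numpy sketch below shows CGR itself, i.e. the baseline that MGR and HR refine, not the paper's modified formulations:

    import numpy as np

    def guyan_reduce(K, M, master, slave):
        """Classical Guyan (static) condensation of stiffness K and mass M
        onto the master DOFs; slave inertia is neglected, which is the
        approximation MGR/HR were introduced to improve."""
        Ksm = K[np.ix_(slave, master)]
        Kss = K[np.ix_(slave, slave)]
        T = np.vstack([np.eye(len(master)),
                       -np.linalg.solve(Kss, Ksm)])   # u = T u_master
        order = list(master) + list(slave)
        Kr = T.T @ K[np.ix_(order, order)] @ T
        Mr = T.T @ M[np.ix_(order, order)] @ T
        return Kr, Mr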
Szajek, Krzysztof; Wierszycki, Marcin
2016-01-01
Dental implant design is a complex process which considers many limitations, both biological and mechanical in nature. In earlier studies, a complete procedure for improvement of a two-component dental implant was proposed. However, the optimization tasks carried out required an assumption about the representative load case, which raised doubts about optimality for the other load cases. This paper deals with verification of the optimal design in the context of fatigue life, and its main goal is to answer the question of whether the assumed load scenario (solely horizontal occlusal load) leads to a design which is also "safe" for oblique occlusal loads regardless of the angle from the implant axis. The verification is carried out with a series of finite element analyses for a wide spectrum of physiologically justified loads. The design-of-experiments methodology with a full factorial technique is utilized. All computations are done in the Abaqus suite. The maximal Mises stress and normalized effective stress amplitude for various load cases are discussed and compared with the assumed "safe" limit (equivalent to a fatigue life of 5e6 cycles). The obtained results prove that the coronal-apical load component should be taken into consideration when the fatigue life of the two-component dental implant is optimized. However, its influence in the analyzed case is small and does not change the fact that fatigue life improvement is observed for all components within the whole range of analyzed loads.
Numerical Studies of an Array of Fluidic Diverter Actuators for Flow Control
NASA Technical Reports Server (NTRS)
Gokoglu, Suleyman A.; Kuczmarski, Maria A.; Culley, Dennis E.; Raghu, Surya
2011-01-01
In this paper, we study the effect of boundary conditions on the behavior of an array of uniformly-spaced fluidic diverters with an ultimate goal to passively control their output phase. This understanding will aid in the development of advanced designs of actuators for flow control applications in turbomachinery. Computations show that a potential design is capable of generating synchronous outputs for various inlet boundary conditions if the flow inside the array is initiated from quiescence. However, when the array operation is originally asynchronous, several approaches investigated numerically demonstrate that re-synchronization of the actuators in the array is not practical since it is very sensitive to asymmetric perturbations and imperfections. Experimental verification of the insights obtained from the present study is currently being pursued.
A fully-implicit high-order system thermal-hydraulics model for advanced non-LWR safety analyses
Hu, Rui
2016-11-19
An advanced system analysis tool is being developed for advanced reactor safety analysis. This paper describes the underlying physics and numerical models used in the code, including the governing equations, the stabilization schemes, the high-order spatial and temporal discretization schemes, and the Jacobian-Free Newton Krylov solution method. The effects of the spatial and temporal discretization schemes are investigated. Additionally, a series of verification test problems are presented to confirm the high-order schemes. Furthermore, it is demonstrated that the developed system thermal-hydraulics model can be strictly verified with the theoretical convergence rates, and that it performs very well for a wide range of flow problems with high accuracy, efficiency, and minimal numerical diffusion.
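The Jacobian-Free Newton Krylov method named above never forms the Jacobian: the Krylov solver only needs Jacobian-vector products, which are approximated by a finite difference of the residual. A minimal sketch with an illustrative scalar residual, not the code's governing equations:

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def jfnk_step(F, u):
        """One Newton step with a matrix-free finite-difference Jv product."""
        r = F(u)
        eps = np.sqrt(np.finfo(float).eps) * max(1.0, np.linalg.norm(u))

        def Jv(v):
            nv = np.linalg.norm(v)
            if nv == 0.0:
                return np.zeros_like(v)
            h = eps / nv
            return (F(u + h * v) - r) / h          # J v ~ (F(u+hv) - F(u)) / h

        J = LinearOperator((u.size, u.size), matvec=Jv)
        du, _ = gmres(J, -r)                       # solve J du = -r
        return u + du

    F = lambda u: u ** 3 - 2.0                     # illustrative residual
    u = np.array([1.0])
    for _ in range(8):
        u = jfnk_step(F, u)
    print(u)                                       # ~ 2**(1/3) = 1.2599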
Keeping the Momentum and Nuclear Forensics at Los Alamos National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steiner, Robert Ernest; Dion, Heather M.; Dry, Donald E.
LANL has 70 years of experience in nuclear forensics and supports the community through a wide variety of efforts and leveraged capabilities: expanding the understanding of nuclear forensics, providing training on nuclear forensics methods, and developing bilateral relationships to expand our understanding of nuclear forensic science. LANL remains highly supportive of several key organizations tasked with carrying forth the Nuclear Security Summit messages: IAEA, GICNT, and INTERPOL. Analytical chemistry measurements on plutonium and uranium matrices are critical to numerous programs including safeguards accountancy verification measurements. Los Alamos National Laboratory operates capable actinide analytical chemistry and material science laboratories suitable for nuclear material and environmental forensic characterization. Los Alamos National Laboratory uses numerous means to validate and independently verify that measurement data quality objectives are met. Numerous LANL nuclear facilities support the nuclear material handling, preparation, and analysis capabilities necessary to evaluate samples containing nearly any mass of an actinide (attogram to kilogram levels).
NASA Astrophysics Data System (ADS)
Krejsa, M.; Brozovsky, J.; Mikolasek, D.; Parenica, P.; Koubova, L.
2018-04-01
The paper is focused on the numerical modeling of welded steel bearing elements using the commercial software system ANSYS, which is based on the finite element method (FEM). It is important to check and compare the results of FEM analysis with the results of a physical verification test, in which the real behavior of the bearing element can be observed; the results of the comparison can be used for calibration of the computational model. The article deals with the physical testing of steel supporting elements, whose main purpose is to obtain the material, geometry and strength characteristics of the fillet and butt welds, including the heat-affected zone in the base material of the welded steel bearing element. A pressure test was performed during the experiment, wherein the total load value and the corresponding deformation of the specimens under load were monitored. The obtained data were used for the calibration of numerical models of the test samples and are necessary for further stress and strain analysis of steel supporting elements.
NASA Astrophysics Data System (ADS)
Wang, Hanxiong; Liu, Liping; Liu, Dong
2017-03-01
The equilibrium shape of a bubble/droplet in an electric field is important for electrowetting over dielectrics (EWOD), electrohydrodynamic (EHD) enhancement of heat transfer, and electro-deformation of a single biological cell, among others. In this work, we develop a general variational formulation accounting for electro-mechanical couplings. In the context of EHD, we identify the free energy functional and the associated energy minimization problem that determines the equilibrium shape of a bubble in an electric field. Based on this variational formulation, we implement a fixed-mesh level-set gradient method for computing the equilibrium shapes. This numerical scheme is efficient and validated by comparison with analytical solutions in the absence of an electric field and experimental results in the presence of an electric field. We also present simulation results for zero gravity which will be useful for space applications. The variational formulation and numerical scheme are anticipated to have broad applications in the areas of EWOD, EHD and electro-deformation in biomechanics.
Validation Database Based Thermal Analysis of an Advanced RPS Concept
NASA Technical Reports Server (NTRS)
Balint, Tibor S.; Emis, Nickolas D.
2006-01-01
Advanced RPS concepts can be conceived, designed and assessed using high-end computational analysis tools. These predictions may provide an initial insight into the potential performance of these models, but verification and validation are necessary and required steps to gain confidence in the numerical analysis results. This paper discusses the findings from a numerical validation exercise for a small advanced RPS concept, based on a thermal analysis methodology developed at JPL and on a validation database obtained from experiments performed at Oregon State University. Both the numerical and experimental configurations utilized a single GPHS module enabled design, resembling a Mod-RTG concept. The analysis focused on operating and environmental conditions during the storage phase only. This validation exercise helped to refine key thermal analysis and modeling parameters, such as heat transfer coefficients, and conductivity and radiation heat transfer values. Improved understanding of the Mod-RTG concept through validation of the thermal model allows for future improvements to this power system concept.
Miniaci, Marco; Marzani, Alessandro; Testoni, Nicola; De Marchi, Luca
2015-02-01
In this work the existence of band gaps in a phononic polyvinyl chloride (PVC) plate with a square lattice of cross-like holes is numerically and experimentally investigated. First, a parametric analysis is carried out to find the plate thickness and cross-like hole dimensions capable of nucleating complete band gaps. In this analysis the band structures of the unit cell in the first Brillouin zone are computed by exploiting the Bloch-Floquet theorem. Next, time-transient finite element analyses are performed to highlight the shielding effect of a finite-dimension phononic region, formed by unit cells arranged into four concentric square rings, on the propagation of guided waves. Finally, ultrasonic experimental tests in pitch-catch configuration across the phononic region, machined on a PVC plate, are executed and analyzed. Very good agreement between numerical and experimental results is found, confirming the existence of the predicted band gaps.
Springback evaluation of friction stir welded TWB automotive sheets
NASA Astrophysics Data System (ADS)
Kim, Junehyung; Lee, Wonoh; Chung, Kyung-Hwan; Kim, Daeyong; Kim, Chongmin; Okamoto, Kazutaka; Wagoner, R. H.; Chung, Kwansoo
2011-02-01
Springback behavior of automotive friction stir welded TWB (tailor welded blank) sheets was experimentally investigated and the springback prediction capability of the constitutive law was numerically validated. Four automotive sheets, aluminum alloy 6111-T4, 5083-H18, 5083-O and dual-phase DP590 steel sheets, each having one or two different thicknesses, were considered. To represent mechanical properties, the modified Chaboche type combined isotropic-kinematic hardening law was utilized along with the non-quadratic orthogonal anisotropic yield function, Yld2000-2d, while the anisotropy of the weld zone was ignored for simplicity. For numerical simulations, mechanical properties previously characterized [1] were applied. For validation purposes, three springback tests including the unconstrained cylindrical bending, 2-D draw bending and OSU draw-bend tests were carried out. The numerical method performed reasonably well in analyzing all verification tests and it was confirmed that the springback of TWB as well as of base samples is significantly affected by the ratio of the yield stress with respect to Young's modulus and thickness.
NASA Astrophysics Data System (ADS)
Tsivilskiy, I. V.; Nagulin, K. Yu.; Gilmutdinov, A. Kh.
2016-02-01
A full three-dimensional nonstationary numerical model of graphite electrothermal atomizers of various types is developed. The model is based on the solution of a heat equation within the solid walls of the atomizer, with radiative heat transfer, and on the numerical solution of the full set of Navier-Stokes equations with an energy equation for the gas. Governing equations for the behavior of the discrete phase, i.e., atomic particles suspended in the gas (including gas-phase processes of evaporation and condensation), are derived from the formal equations of molecular kinetics via numerical solution of the Hertz-Langmuir equation. The model was tested on the following atomizers: a Varian standard heated electrothermal vaporizer (ETV), a Perkin Elmer standard transversely heated graphite tube with integrated platform (THGA), and the original double-stage tube-helix atomizer (DSTHA). Experimental verification of the computer calculations was carried out by a method of shadow spectral visualization of the spatial distributions of atomic and molecular vapors in the analytical space of the atomizer.
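The Hertz-Langmuir relation mentioned above gives the net evaporation (or condensation) flux across a surface as J = alpha*(p_sat - p_v)/sqrt(2*pi*m*k_B*T). A minimal sketch, with illustrative inputs that are not the paper's values:

    import math

    K_B = 1.380649e-23       # Boltzmann constant, J/K
    AMU = 1.66053907e-27     # atomic mass unit, kg

    def hertz_langmuir_flux(p_sat, p_vapor, T_surface, m_atom, alpha=1.0):
        """Net evaporation (>0) or condensation (<0) flux, atoms m^-2 s^-1;
        alpha is the (often empirical) evaporation coefficient."""
        return alpha * (p_sat - p_vapor) / math.sqrt(
            2.0 * math.pi * m_atom * K_B * T_surface)

    # Illustrative: silver atoms near 2000 K (example numbers only)
    print(hertz_langmuir_flux(p_sat=50.0, p_vapor=10.0,
                              T_surface=2000.0, m_atom=107.87 * AMU))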
NASA Astrophysics Data System (ADS)
Nechaykina, T.; Nikulin, S.; Rozhnov, A.; Molotnikov, A.; Zavodchikov, S.; Estrin, Y.
2018-05-01
Vanadium alloys are promising structural materials for fuel cladding tubes for fast-neutron reactors. However, high solubility of oxygen and nitrogen in vanadium alloys at operating temperatures of 700 °C limits their application. In this work, we present a novel composite structure consisting of vanadium alloy V-4Ti-4Cr (provides high long-term strength of the material) and stainless steel Fe-0.2C-13Cr (as a corrosion resistant protective layer). It is produced by co-extrusion of these materials forming a three-layered tube. Finite element simulations were utilised to explore the influence of the various co-extrusion parameters on manufacturability of multi-layered tubes. Experimental verification of the numerical modelling was performed using co-extrusion with the process parameters suggested by the numerical simulations. Scanning electron microscopy and microhardness measurements revealed a defect-free diffusion layer at the interfaces between both materials indicating a good quality bonding for these co-extrusion conditions.
Numerical simulation and experimental verification of extended source interferometer
NASA Astrophysics Data System (ADS)
Hou, Yinlong; Li, Lin; Wang, Shanshan; Wang, Xiao; Zang, Haijun; Zhu, Qiudong
2013-12-01
Extended source interferometers, compared with classical point source interferometers, can suppress coherent noise of the environment and system, decrease dust scattering effects and reduce high-frequency error of the reference surface. Numerical simulation and experimental verification of an extended source interferometer are discussed in this paper. In order to provide guidance for the experiment, the modeling of the extended source interferometer was realized using the optical design software Zemax. Matlab code was written to rectify the field parameters of the optical system automatically and obtain a series of interferometric data conveniently; the DDE (Dynamic Data Exchange) communication technique was used to connect Zemax and Matlab. The visibility of the interference fringes can then be calculated by summing the collected interferometric data. Combined with the simulation, an experimental platform for the extended source interferometer was established, consisting of an extended source, an interference cavity and an image collection system. The decrease of high-frequency error of the reference surface and of coherent noise of the environment is verified. The relation between the spatial coherence and the size, shape and intensity distribution of the extended source is also verified through analysis of the visibility of the interference fringes. The simulation result is in line with the result given by the real extended source interferometer, showing that the model simulates the actual optical interference of the extended source interferometer quite well. Therefore, the simulation platform can be used to guide experiments with interferometers based on various extended sources.
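The fringe visibility used as the figure of merit above has the classic Michelson form V = (Imax - Imin)/(Imax + Imin). A minimal sketch; a robust implementation would use percentiles rather than raw extrema:

    import numpy as np

    def fringe_visibility(intensity):
        """Michelson visibility of a fringe profile or interferogram."""
        i_max, i_min = intensity.max(), intensity.min()
        return (i_max - i_min) / (i_max + i_min)

    # Synthetic fringe profile with 0.6 modulation depth
    x = np.linspace(0.0, 4.0 * np.pi, 1000)
    print(fringe_visibility(1.0 + 0.6 * np.cos(x)))   # ~ 0.6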
New analytical solutions to the two-phase water faucet problem
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-06-17
Here, the one-dimensional water faucet problem is one of the classical benchmark problems originally proposed by Ransom to study the two-fluid two-phase flow model. With certain simplifications, such as a massless gas phase and no wall or interfacial friction, analytical solutions had previously been obtained for the transient liquid velocity and void fraction distribution. The water faucet problem and its analytical solutions have been widely used for the purposes of code assessment, benchmarking and numerical verification. In our previous study, Ransom's solutions were used for the mesh convergence study of a high-resolution spatial discretization scheme. It was found that, at the steady state, the anticipated second-order spatial accuracy could not be achieved when compared to the existing Ransom's analytical solutions. A further investigation showed that the existing analytical solutions do not actually satisfy the commonly used two-fluid single-pressure two-phase flow equations. In this work, we present a new set of analytical solutions of the water faucet problem at the steady state, considering the gas phase density's effect on the pressure distribution. This new set of analytical solutions is used for mesh convergence studies, from which the anticipated second order of accuracy is achieved for the 2nd-order spatial discretization scheme. In addition, extended Ransom's transient solutions for the gas phase velocity and pressure are derived, under the assumption of decoupled liquid and gas pressures. Numerical verifications of the extended Ransom's solutions are also presented.
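For reference, the classical Ransom transient solution that the paper extends has a simple closed form for the liquid phase; a sketch with the standard benchmark parameters (10 m/s inlet velocity, inlet liquid fraction 0.8). The paper's new steady-state and gas-phase solutions are not reproduced here.

    import numpy as np

    G, V0, ALPHA_L0 = 9.81, 10.0, 0.8   # gravity, inlet velocity, inlet liquid fraction

    def ransom_liquid(x, t):
        """Classical transient solution: liquid velocity and liquid fraction
        at depth x below the inlet at time t."""
        x = np.asarray(x, dtype=float)
        front = V0 * t + 0.5 * G * t * t             # acceleration-wave front
        v_below = np.sqrt(V0 ** 2 + 2.0 * G * x)     # region the wave has passed
        v = np.where(x <= front, v_below, V0 + G * t)
        alpha = np.where(x <= front, ALPHA_L0 * V0 / v_below, ALPHA_L0)
        return v, alpha

    v, alpha = ransom_liquid(np.linspace(0.0, 12.0, 7), t=0.5)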
Manager's Role in Electromagnetic Interference (EMI) Control
NASA Technical Reports Server (NTRS)
Sargent, Noel B.; Lewis, Catherine C.
2013-01-01
This presentation captures the essence of electromagnetic compatibility (EMC) engineering from a project manager's perspective. It explains the basics of EMC and the benefits to the project of early incorporation of EMC best practices. The EMC requirement products during a project life cycle are identified, along with the requirement verification methods that should be utilized. The goal of the presentation is to raise awareness and simplify the mystique surrounding electromagnetic compatibility for managers who have little or no electromagnetics background.
Continuous Energy Photon Transport Implementation in MCATK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Terry R.; Trahan, Travis John; Sweezy, Jeremy Ed
2016-10-31
The Monte Carlo Application ToolKit (MCATK) code development team has implemented Monte Carlo photon transport into the MCATK software suite. The current particle transport capabilities in MCATK, which process the tracking and collision physics, have been extended to enable tracking of photons using the same continuous energy approximation. We describe the four photoatomic processes implemented, which are coherent scattering, incoherent scattering, pair-production, and photoelectric absorption. The accompanying background, implementation, and verification of these processes will be presented.
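At each collision, a continuous-energy photon Monte Carlo selects one of these four processes with probability proportional to its cross section at the photon's current energy. The sketch below shows that generic sampling step as standard Monte Carlo practice, not MCATK's actual internals; cross-section values would come from a photoatomic data library.

    import numpy as np

    rng = np.random.default_rng()

    PROCESSES = ("coherent", "incoherent", "pair_production", "photoelectric")

    def sample_process(sig_coh, sig_incoh, sig_pair, sig_pe):
        """Pick the photoatomic process for this collision, each with
        probability sigma_i / sigma_total at the current photon energy."""
        sig = np.array([sig_coh, sig_incoh, sig_pair, sig_pe])
        cdf = np.cumsum(sig) / sig.sum()
        return PROCESSES[np.searchsorted(cdf, rng.random())]

    # Illustrative cross sections at some energy -- not library values
    print(sample_process(0.5, 2.0, 0.1, 0.4))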
Menlove, Howard Olsen; Belian, Anthony P.; Geist, William H.; ...
2017-10-07
The purpose of this paper is to provide a solution to a decades-old safeguards problem in the verification of the fissile concentration in fresh light water reactor (LWR) fuel assemblies. The problem is that the burnable poison (e.g. Gd2O3) addition to the fuel rods decreases the active neutron assay for the fuel assemblies. This paper presents a new innovative method for the verification of the 235U linear mass density in fresh LEU fuel assemblies that is insensitive to the burnable poison content. The technique makes use of the 238U atoms in the fuel rods to self-interrogate the 235U mass. The innovation of the new approach is that the 238U spontaneous fission (SF) neutrons from the rods induce fission reactions (IF) in the 235U that are time-correlated with the SF source neutrons. Thus, the coincidence gate counting rate benefits from both the nu-bar of the 238U SF (2.07) and the 235U IF (2.44) for a fraction of the IF reactions, whereas the 238U SF background has no time-correlation boost. The higher the detection efficiency, the higher the correlated boost, because background neutron counts from the SF are being converted to signal doubles. This time-correlation in the IF signal increases the signal/background ratio, which provides a good precision for the net signal from the 235U mass. The hard neutron energy spectrum makes the technique insensitive to the burnable poison loading, where a Cd or Gd liner on the detector walls is used to prevent thermal-neutron reflection back into the fuel assembly from the detector. Here, we have named the system the fast-neutron passive collar (FNPC).
SU-C-207A-04: Accuracy of Acoustic-Based Proton Range Verification in Water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, KC; Sehgal, CM; Avery, S
2016-06-15
Purpose: To determine the accuracy and dose required for acoustic-based proton range verification (protoacoustics) in water. Methods: Proton pulses with 17 µs FWHM and instantaneous currents of 480 nA (5.6 × 10^7 protons/pulse, 8.9 cGy/pulse) were generated by a clinical, hospital-based cyclotron at the University of Pennsylvania. The protoacoustic signal generated in a water phantom by the 190 MeV proton pulses was measured with a hydrophone placed at multiple known positions surrounding the dose deposition. The background random noise was measured. The protoacoustic signal was simulated to compare to the experiments. Results: The maximum protoacoustic signal amplitude at 5 cm distance was 5.2 mPa per 1 × 10^7 protons (1.6 cGy at the Bragg peak). The background random noise of the measurement was 27 mPa. Comparison between simulation and experiment indicates that the hydrophone introduced a delay of 2.4 µs. For acoustic data collected with a signal-to-noise ratio (SNR) of 21, deconvolution of the protoacoustic signal with the proton pulse provided the most precise time-of-flight range measurement (standard deviation of 2.0 mm), but a systematic error (−4.5 mm) was observed. Conclusion: Based on water phantom measurements at a clinical hospital-based cyclotron, protoacoustics is a potential technique for measuring the proton Bragg peak range with 2.0 mm standard deviation. Simultaneous use of multiple detectors is expected to reduce the standard deviation, but calibration is required to remove systematic error. Based on the measured background noise and protoacoustic amplitude, an SNR of 5.3 is projected for a deposited dose of 2 Gy.
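The time-of-flight conversion behind the range estimate is distance = c_water × TOF, with the hydrophone delay subtracted. A minimal sketch using the lag of peak cross-correlation with the proton pulse shape as a simple stand-in for the deconvolution described above; all numbers are illustrative:

    import numpy as np

    C_WATER = 1480.0                      # m/s, approximate sound speed in water

    def tof_range(trace, pulse, fs, detector_delay=0.0):
        """Bragg-peak-to-hydrophone distance from the lag that maximises the
        cross-correlation of the pressure trace with the proton pulse shape."""
        lags = np.arange(-(len(pulse) - 1), len(trace))
        xc = np.correlate(trace, pulse, mode="full")
        tof = lags[np.argmax(xc)] / fs - detector_delay
        return C_WATER * tof

    # Synthetic check: pulse arriving 33.8 us after t = 0 -> ~5 cm
    fs = 10e6
    t = np.arange(400) / fs
    pulse = np.exp(-0.5 * ((t - 20e-6) / 7e-6) ** 2)
    trace = np.zeros(2000)
    trace[338:338 + 400] += pulse
    print(tof_range(trace, pulse, fs))    # ~ 0.050 m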
NASA Astrophysics Data System (ADS)
Menlove, Howard; Belian, Anthony; Geist, William; Rael, Carlos
2018-01-01
The purpose of this paper is to provide a solution to a decades old safeguards problem in the verification of the fissile concentration in fresh light water reactor (LWR) fuel assemblies. The problem is that the burnable poison (e.g. Gd2O3) addition to the fuel rods decreases the active neutron assay for the fuel assemblies. This paper presents a new innovative method for the verification of the 235U linear mass density in fresh LEU fuel assemblies that is insensitive to the burnable poison content. The technique makes use of the 238U atoms in the fuel rods to self-interrogate the 235U mass. The innovation for the new approach is that the 238U spontaneous fission (SF) neutrons from the rods induces fission reactions (IF) in the 235U that are time correlated with the SF source neutrons. Thus, the coincidence gate counting rate benefits from both the nu-bar of the 238U SF (2.07) and the 235U IF (2.44) for a fraction of the IF reactions. Whereas, the 238U SF background has no time-correlation boost. The higher the detection efficiency, the higher the correlated boost because background neutron counts from the SF are being converted to signal doubles. This time-correlation in the IF signal increases signal/background ratio that provides a good precision for the net signal from the 235U mass. The hard neutron energy spectrum makes the technique insensitive to the burnable poison loading where a Cd or Gd liner on the detector walls is used to prevent thermal-neutron reflection back into the fuel assembly from the detector. We have named the system the fast-neutron passive collar (FNPC).
2013-01-01
Background A growing number of online pharmacies have been established worldwide. Among them are numerous illegal websites selling medicine without valid medical prescriptions or distributing substandard or counterfeit drugs. Only a limited number of studies have been published on Internet pharmacies with regard to patient safety, professionalism, long-term follow-up, and pharmaceutical legitimacy verification. Objective In this study, we selected, evaluated, and followed 136 Internet pharmacy websites aiming to identify indicators of professional online pharmacy service and online medication safety. Methods An Internet search was performed by simulating the needs of potential customers of online pharmacies. A total of 136 Internet pharmacy websites were assessed and followed for four years. According to the LegitScript database, relevant characteristics such as longevity, time of continuous operation, geographical location, displayed contact information, prescription requirement, medical information exchange, and pharmaceutical legitimacy verification were recorded and evaluated. Results The number of active Internet pharmacy websites decreased; 23 of 136 (16.9%) online pharmacies ceased operating within 12 months and only 67 monitored websites (49.3%) were accessible at the end of the four-year observation period. However, not all operated continuously, as about one-fifth (31/136) of all observed online pharmacy websites were inaccessible provisionally. Thus, only 56 (41.2%) Internet-based pharmacies were continuously operational. Thirty-one of the 136 online pharmacies (22.8%) had not provided any contact details, while only 59 (43.4%) displayed all necessary contact information on the website. We found that the declared physical location claims did not correspond to the area of domain registration (according to IP address) for most websites. Although the majority (120/136, 88.2%) of the examined Internet pharmacies distributed various prescription-only medicines, only 9 (6.6%) requested prior medical prescriptions before purchase. Medical information exchange was generally ineffective as 52 sites (38.2%) did not require any medical information from patients. The product information about the medicines was generally (126/136, 92.6%) not displayed adequately, and the contents of the patient information leaflet were incomplete in most cases (104/136, 76.5%). Numerous online operators (60/136, 44.1%) were defined as rogue Internet pharmacies, but no legitimate Internet-based pharmacies were among them. One site (0.7%) was yet unverified, 23 (16.9%) were unapproved, while the remaining (52/136, 38.2%) websites were not available in the LegitScript database. Contrary to our prior assumptions, prescription or medical information requirement, or the indication of contact information on the website, does not seem to correlate with “rogue pharmacy” status using the LegitScript online pharmacy verification standards. Instead, long-term continuous operation strongly correlated (P<.001) with explicit illegal activity. Conclusions Most Internet pharmacies in our study sample were illegal sites within the definition of “rogue” Internet pharmacy. These websites violate professional, legal, and ethical standards and endanger patient safety. This work shows evidence that online pharmacies that act illegally appear to have greater longevity than others, presumably because there is no compelling reason for frequent change in order to survive. 
We also found that one in five websites was revived (closed down and reopened again within four years) and that no-prescription sites with limited medicine and patient information are flourishing. PMID:24021777
NASA Astrophysics Data System (ADS)
Parvasi, Seyed Mohammad; Xu, Changhang; Kong, Qingzhao; Song, Gangbing
2016-05-01
Ultrasonic vibrations in cracked structures generate heat at the location of defects, mainly due to frictional rubbing and viscoelastic losses at the defects. Vibrothermography is an effective nondestructive evaluation method that uses infrared (IR) imaging techniques to locate defects such as cracks and delaminations by detecting the heat generated at the defects. In this paper, a coupled thermo-electro-mechanical analysis using the implicit finite element method was performed to simulate a low-power (10 W) piezoceramic-based ultrasonic actuator and the corresponding heat generation in a metallic plate with multiple surface cracks. Numerical results show that the finite element software Abaqus can be used to simultaneously model the electrical properties of the actuator, the ultrasonic waves propagating within the plate, and the thermal properties of the plate. The obtained numerical results demonstrate the ability of these low-power transducers to detect multiple cracks in the simulated aluminum plate. The validity of the numerical simulations was verified through experimental studies on a physical aluminum plate with multiple surface cracks, in which the same low-power piezoceramic stack actuator was used to excite the plate and generate heat at the cracks. An excellent qualitative agreement exists between the experimental results and the numerical simulation results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parchevsky, K. V.; Zhao, J.; Hartlep, T.
We performed three-dimensional numerical simulations of the solar surface acoustic wave field for the quiet Sun and for three models with different localized sound-speed perturbations in the interior with deep, shallow, and two-layer structures. We used the simulated data generated by two solar acoustics codes that employ the same standard solar model as a background model, but utilize different integration techniques and different models of stochastic wave excitation. Acoustic travel times were measured using a time-distance helioseismology technique, and compared with predictions from ray theory frequently used for helioseismic travel-time inversions. It is found that the measured travel-time shifts agree well with the helioseismic theory for sound-speed perturbations, and for the measurement procedure with and without phase-speed filtering of the oscillation signals. This testing verifies the whole measuring-filtering-inversion procedure for static sound-speed anomalies with small amplitude inside the Sun outside regions of strong magnetic field. It is shown that the phase-speed filtering, frequently used to extract specific wave packets and improve the signal-to-noise ratio, does not introduce significant systematic errors. Results of the sound-speed inversion procedure show good agreement with the perturbation models in all cases. Due to its smoothing nature, the inversion procedure may overestimate sound-speed variations in regions with sharp gradients of the sound-speed profile.
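As an illustration of the travel-time measurement step, the sketch below recovers a known lag between two synthetic oscillation signals from the peak of their cross-correlation. The signal model, cadence, and parabolic sub-sample refinement are assumptions for illustration, not the pipeline used in the study.

```python
import numpy as np

# Measure a travel-time shift as the lag maximizing the cross-correlation
# of oscillation signals at two surface points (synthetic example).
dt = 45.0                        # cadence in seconds (assumed)
t = np.arange(512) * dt
true_lag = 3 * dt
wave = lambda tt: np.sin(2 * np.pi * 3.3e-3 * tt) * np.exp(-((tt - 8000) / 4000) ** 2)
rng = np.random.default_rng(1)
s1 = wave(t) + 0.1 * rng.standard_normal(t.size)
s2 = wave(t - true_lag) + 0.1 * rng.standard_normal(t.size)

cc = np.correlate(s2, s1, mode="full")
lags = (np.arange(cc.size) - (t.size - 1)) * dt
k = np.argmax(cc)
# parabolic interpolation around the peak for sub-sample precision
num = cc[k - 1] - cc[k + 1]
den = cc[k - 1] - 2 * cc[k] + cc[k + 1]
lag = lags[k] + 0.5 * num / den * dt
print(f"measured travel-time shift: {lag:.1f} s (true {true_lag:.1f} s)")
```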
NASA Astrophysics Data System (ADS)
Yako, Motoki; Ishikawa, Yasuhiko; Wada, Kazumi
2018-05-01
A method for reducing the threading dislocation density (TDD) in lattice-mismatched heteroepitaxy is proposed, and the reduction is experimentally verified for Ge on Si. Flat-top epitaxial layers are formed through coalescence of non-planar selectively grown epitaxial layers, enabling TDD reduction by means of the image force. Numerical calculations and experiments for Ge on Si verify the TDD reduction achieved by this method. The method should be applicable not only to Ge on Si but also to other lattice-mismatched heteroepitaxial systems such as III-V on Si.
Cymatics for the cloaking of flexural vibrations in a structured plate
Misseroni, D.; Colquitt, D. J.; Movchan, A. B.; Movchan, N. V.; Jones, I. S.
2016-01-01
Based on rigorous theoretical findings, we present a proof-of-concept design for a structured square cloak enclosing a void in an elastic lattice. We implement high-precision fabrication and experimental testing of an elastic invisibility cloak for flexural waves in a mechanical lattice. This is accompanied by verification and numerical modelling performed through finite element simulations. The primary advantage of our square lattice cloak over other designs is its straightforward implementation and ease of construction. The elastic lattice cloak, implemented experimentally, shows high efficiency. PMID:27068339
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2013-07-01
The Mathematics and Computation Division of the American Nuclear Society (ANS) and the Idaho Section of the ANS hosted the 2013 International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering (M and C 2013). This proceedings contains over 250 full papers with topics including reactor physics; radiation transport; materials science; nuclear fuels; core performance and optimization; reactor systems and safety; fluid dynamics; medical applications; analytical and numerical methods; algorithms for advanced architectures; and validation, verification, and uncertainty quantification.
NASA Technical Reports Server (NTRS)
Warren, Wayne H., Jr.; Ochsenbein, Francois; Rappaport, Barry N.
1990-01-01
The entire series of Durchmusterung (DM) catalogs (Bonner, Southern, Cordoba, Cape Photographic) has been computerized through a collaborative effort among institutions and individuals in France and the United States of America. Complete verification of the data, both manually and by computer, the inclusion of all supplemental stars (represented by lower case letters), complete representation of all numerical data, and a consistent format for all catalogs, should make this collection of machine-readable data a valuable addition to digitized astronomical archives.
Verification and Validation of the Spring Model Parachute Air Delivery System in Subsonic Flow
2015-02-27
...putational challenges in handling the geometric complexities of the parachute canopy and the contact between parachutes in a cluster. Kim and Peskin et... Runge-Kutta method with numerical flux evaluated by a 5th-order WENO scheme. The equations for k and ε are discretized with a Crank-Nicolson scheme to... construction formula u_i^{k+1} = f(u_{i-3}^k, u_{i-2}^k, u_{i-1}^k, u_i^k, u_{i+1}^{k,poro}, u_{i+2}^{k,poro}, u_{i+3}^{k,poro}). The diffusion part is solved using Crank...
Improvements and applications of COBRA-TF for stand-alone and coupled LWR safety analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avramova, M.; Cuervo, D.; Ivanov, K.
2006-07-01
The advanced thermal-hydraulic subchannel code COBRA-TF has recently been improved and applied to stand-alone and coupled LWR core calculations at the Pennsylvania State Univ. in cooperation with AREVA NP GmbH (Germany) and the Technical Univ. of Madrid. To prepare COBRA-TF for academic and industrial applications, including safety margin evaluations and LWR core design analyses, the code programming, numerics, and basic models were revised and substantially improved. The code has undergone an extensive validation, verification, and qualification program. (authors)
Design of ground test suspension systems for verification of flexible space structures
NASA Technical Reports Server (NTRS)
Cooley, V. M.; Juang, J. N.; Ghaemmaghami, P.
1988-01-01
A simple model demonstrates the frequency-increasing effects of a simple cable suspension on flexible test article/suspension systems. Two passive suspension designs, namely a negative spring mechanism and a rolling cart mechanism, are presented to alleviate the undesirable frequency-increasing effects. Analysis methods are provided for systems in which the augmentations are applied to both discrete and continuous representations of test articles. The damping analyses are based on friction equivalent viscous damping. Numerical examples are given for comparing the two augmentations with respect to minimizing frequency and damping increases.
Current Results and Proposed Activities in Microgravity Fluid Dynamics
NASA Technical Reports Server (NTRS)
Polezhaev, V. I.
1996-01-01
The Institute for Problems in Mechanics' laboratory for mathematical and physical modelling in fluid mechanics develops models, methods, and software for the analysis of fluid flow, instability analysis, direct numerical modelling and semi-empirical models of turbulence, as well as experimental research and verification of these models and their applications in technological fluid dynamics, microgravity fluid mechanics, geophysics, and a number of engineering problems. This paper presents an overview of the results in microgravity fluid dynamics research during the last two years. Nonlinear problems of weakly compressible and compressible fluid flows are discussed.
Micro Computer Tomography for medical device and pharmaceutical packaging analysis.
Hindelang, Florine; Zurbach, Raphael; Roggo, Yves
2015-04-10
Biomedical device and medicinal product manufacturing are long processes facing global competition. As technology evolves, the required levels of quality, safety, and reliability increase simultaneously. Micro Computer Tomography (Micro CT) is a tool allowing deep investigation of products: it can contribute to quality improvement. This article presents the numerous applications of Micro CT for medical device and pharmaceutical packaging analysis. The samples investigated confirmed the suitability of CT for verification of integrity, measurements, and defect detection in a non-destructive manner. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Austin, F.; Markowitz, J.; Goldenberg, S.; Zetkov, G. A.
1973-01-01
A mathematical model for predicting the dynamic behavior of rotating flexible space station configurations was formulated. The overall objectives of the study were: (1) to develop the theoretical techniques for determining the behavior of a realistically modeled rotating space station, (2) to provide a versatile computer program for the numerical analysis, and (3) to present practical concepts for experimental verification of the analytical results. The mathematical model and its associated computer program are described.
Analysis of human scream and its impact on text-independent speaker verification.
Hansen, John H L; Nandwana, Mahesh Kumar; Shokouhi, Navid
2017-04-01
A scream is defined as a sustained, high-energy vocalization that lacks phonological structure. The lack of phonological structure is what distinguishes a scream from other forms of loud vocalization, such as a "yell." This study investigates the acoustic aspects of screams and addresses those that are known to prevent standard speaker identification systems from recognizing the identity of screaming speakers. It is well established that speaker variability due to changes in vocal effort and the Lombard effect contributes to degraded performance in automatic speech systems (i.e., speech recognition, speaker identification, diarization, etc.). However, previous research in the general area of speaker variability has concentrated on human speech production, whereas less is known about non-speech vocalizations. The UT-NonSpeech corpus is developed here to investigate speaker verification from scream samples. This study considers a detailed analysis in terms of fundamental frequency, spectral peak shift, frame energy distribution, and spectral tilt. It is shown that traditional speaker recognition based on the Gaussian mixture model-universal background model framework is unreliable when evaluated with screams.
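For reference, a minimal sketch of GMM-UBM verification scoring is given below, assuming pre-computed acoustic feature vectors (the placeholder arrays stand in for MFCC frames). The standard MAP adaptation of the UBM to the speaker is replaced here by direct training, a simplification of the usual recipe.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# GMM-UBM scoring: average per-frame log-likelihood ratio between a
# speaker model and a universal background model (UBM).
rng = np.random.default_rng(0)
ubm_feats = rng.standard_normal((5000, 13))        # placeholder background frames
spk_feats = rng.standard_normal((500, 13)) + 0.5   # placeholder enrollment frames
test_feats = rng.standard_normal((300, 13)) + 0.5  # placeholder test utterance

ubm = GaussianMixture(n_components=16, covariance_type="diag", max_iter=50).fit(ubm_feats)
spk = GaussianMixture(n_components=16, covariance_type="diag", max_iter=50,
                      means_init=ubm.means_).fit(spk_feats)

# score() returns the average log-likelihood per frame; accept if the ratio
# exceeds a threshold tuned on development data (0.0 is a placeholder).
llr = spk.score(test_feats) - ubm.score(test_feats)
print("log-likelihood ratio:", llr, "->", "accept" if llr > 0.0 else "reject")
```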
Applications of a Fast Neutron Detector System to Verification of Special Nuclear Materials
NASA Astrophysics Data System (ADS)
Mayo, Douglas R.; Byrd, Roger C.; Ensslin, Norbert; Krick, Merlyn S.; Mercer, David J.; Miller, Michael C.; Prettyman, Thomas H.; Russo, Phyllis A.
1998-04-01
An array of boron-loaded plastic optically coupled to bismuth germanate scintillators has been developed to detect neutrons for the measurement of special nuclear materials. The phoswich detection system has the advantages of high neutron detection efficiency and a short die-away time. This is achieved by mixing the moderator (plastic) and the detector (^10B) at the molecular level. Simulations indicate that the neutron capture probabilities equal or exceed those of current thermal neutron multiplicity techniques, which keep the moderator (polyethylene) and detectors (^3He gas proportional tubes) macroscopically separate. Experiments have been performed to characterize the response of these detectors and validate the computer simulations. The fast neutron detection system may be applied to the quantitative assay of plutonium in high (α,n) backgrounds, with emphasis on safeguards and environmental scenarios. The instrument has also been tested in a non-quantitative mode for possible verification activities involving the dismantlement of nuclear weapons. A description of the detector system, simulations, and preliminary data will be presented.
Satellite detection of oil on the marine surface
NASA Technical Reports Server (NTRS)
Wilson, M. J.; Oneill, P. E.; Estes, J. E.
1981-01-01
The ability of two widely dissimilar spaceborne imaging sensors to detect surface oil accumulations in the marine environment has been evaluated using broadly different techniques. Digital Landsat multispectral scanner (MSS) data consisting of two visible and two near infrared channels has been processed to enhance contrast between areas of known oil coverage and background clean surface water. These enhanced images have then been compared to surface verification data gathered by aerial reconnaissance during the October 15, 1975, Landsat overpass. A similar evaluation of oil slick imaging potential has been made for digitally enhanced Seasat-A synthetic aperture radar (SAR) data from July 18, 1979. Due to the premature failure of this satellite, however, no concurrent surface verification data were collected. As a substitute, oil slick configuration information has been generated for the comparison using meteorological and oceanographic data. The test site utilized in both studies was the extensive area of natural seepage located off Coal Oil Point, adjacent to the University of California, Santa Barbara.
Research on registration algorithm for check seal verification
NASA Astrophysics Data System (ADS)
Wang, Shuang; Liu, Tiegen
2008-03-01
Nowadays seals play an important role in China. With the development of the social economy, the traditional method of manual check seal identification can no longer meet the needs of banking transactions. This paper focuses on pre-processing and registration algorithms for check seal verification using the theory of image processing and pattern recognition. First, the complex characteristics of check seals are analyzed. To eliminate differences in producing conditions and the disturbance caused by background and writing in the check image, many methods are used in the pre-processing stage of check seal verification, such as color component transformation, linear transformation to a gray-scale image, median filtering, Otsu thresholding, closing operations, and the labeling algorithm of mathematical morphology. After the processes above, a clean binary seal image is obtained. On the basis of the traditional registration algorithm, a two-level registration method comprising rough and precise registration is proposed. The deflection angle of the precise registration method is precise to 0.1°. This paper introduces the concepts of inside difference and outside difference and uses the percentages of inside and outside difference to judge whether a seal is genuine or fake. The experimental results on a large set of check seals are satisfactory. They show that the methods and algorithms presented have good robustness to noisy sealing conditions and a satisfactory tolerance of within-class variation.
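A minimal sketch of such a pre-processing chain is shown below, assuming a red seal on a scanned check ("check.png" is a placeholder path) and using standard OpenCV primitives; the red-channel heuristic and the parameter choices are illustrative assumptions, not the paper's exact procedure.

```python
import cv2

# Pre-processing chain: colour component transform, median filtering,
# Otsu binarisation, morphological closing, connected-component labelling.
img = cv2.imread("check.png")
b, g, r = cv2.split(img)
# emphasise red seal ink against background and handwriting (heuristic)
seal = cv2.subtract(r, cv2.max(b, g))
seal = cv2.medianBlur(seal, 5)                        # suppress speckle noise
_, binary = cv2.threshold(seal, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # bridge small gaps
n_labels, labels = cv2.connectedComponents(binary)    # isolate seal strokes
print(f"{n_labels - 1} connected components extracted")
```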
Standards and Guidelines for Numerical Models for Tsunami Hazard Mitigation
NASA Astrophysics Data System (ADS)
Titov, V.; Gonzalez, F.; Kanoglu, U.; Yalciner, A.; Synolakis, C. E.
2006-12-01
An increasing number of nations around the world need to develop tsunami mitigation plans, which invariably involve inundation maps for warning guidance and evacuation planning. There is the risk that inundation maps may be produced with older or untested methodology, as there are currently no standards for modeling tools. In the aftermath of the 2004 megatsunami, some models were used to model inundation for Cascadia events with results much larger than sediment records and existing state-of-the-art studies suggest, leading to confusion among emergency managers. Incorrectly assessing tsunami impact is hazardous, as recent events in 2006 in Tonga, Kythira, Greece, and Central Java have suggested (Synolakis and Bernard, 2006). To calculate tsunami currents, forces and runup on coastal structures, and inundation of coastlines, one must calculate the evolution of the tsunami wave from the deep ocean to its target site numerically. No matter what the numerical model, validation (the process of ensuring that the model solves the parent equations of motion accurately) and verification (the process of ensuring that the model used represents geophysical reality appropriately) are both essential. Validation ensures that the model performs well in a wide range of circumstances and is accomplished through comparison with analytical solutions. Verification ensures that the computational code performs well over a range of geophysical problems. A few analytic solutions have themselves been validated with laboratory data. Even fewer existing numerical models have been both validated with the analytical solutions and verified with laboratory and field measurements, thus establishing a gold standard for numerical codes for inundation mapping. While there is in principle no absolute certainty that a numerical code that has performed well in all the benchmark tests will also produce correct inundation predictions with any given source motions, validated codes reduce the level of uncertainty in their results to the uncertainty in the geophysical initial conditions. Further, when coupled with real-time free-field tsunami measurements from tsunameters, validated codes are the only choice for realistic forecasting of inundation; the consequences of failure are too ghastly to take chances with numerical procedures that have not been validated. We discuss a ten-step process of benchmark tests for models used for inundation mapping. The associated methodology and algorithms have first to be validated with analytical solutions, then verified with laboratory measurements and field data. The models need to be published in the scientific literature in peer-reviewed journals indexed by ISI. While this process may appear onerous, it reflects our state of knowledge, and is the only defensible methodology when human lives are at stake. Synolakis, C.E., and Bernard, E.N., Tsunami science before and beyond Boxing Day 2004, Phil. Trans. R. Soc. A 364(1845), 2231-2263, 2006.
NASA Astrophysics Data System (ADS)
Arshad, Muhammad; Lu, Dianchen; Wang, Jun
2017-07-01
In this paper, we extend the general form of the fractional reduced differential transform method (DTM) to the (N+1)-dimensional case, so that fractional-order partial differential equations (PDEs) can be solved effectively. The most distinctive aspect of this method is that no prescribed assumptions are required; the heavy computational effort is reduced, and round-off errors are avoided. We apply the proposed scheme to some initial value problems and obtain approximate numerical solutions of linear and nonlinear time-fractional PDEs, which shows that the method is highly accurate and simple to apply. The proposed technique is thus a powerful tool for solving fractional PDEs and fractional-order problems occurring in engineering, physics, etc. Numerical results are obtained for verification and demonstration purposes using the Mathematica software.
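For orientation, the recurrence below sketches the reduced DTM in the common special case of a time-fractional PDE D_t^α u = L[u] with a Caputo derivative of order 0 < α ≤ 1 and a linear spatial operator L; the paper's (N+1)-dimensional construction generalizes this form.

```latex
% Reduced DTM for $D_t^{\alpha}u = L[u]$ (Caputo, $0<\alpha\le 1$):
% expand the solution in powers of $t^{\alpha}$ and transform the PDE
% into an algebraic recurrence for the coefficient functions.
\[
u(\mathbf{x},t)=\sum_{k=0}^{\infty}U_k(\mathbf{x})\,t^{k\alpha},
\qquad
U_{k+1}(\mathbf{x})
  =\frac{\Gamma(k\alpha+1)}{\Gamma\big((k+1)\alpha+1\big)}\,
   L\!\left[U_k\right](\mathbf{x}),
\]
% with $U_0$ taken from the initial condition, so successive terms are
% obtained by differentiation and multiplication only.
```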
A Vibration-Based Strategy for Health Monitoring of Offshore Pipelines' Girth-Welds
Razi, Pejman; Taheri, Farid
2014-01-01
This study presents numerical simulations and experimental verification of a vibration-based damage detection technique. Health monitoring of a submerged pipe's girth-weld against an advancing notch is attempted. Piezoelectric transducers are bonded on the pipe for sensing or actuation purposes. Vibration of the pipe is excited by two means: (i) an impulsive force; and (ii) using one of the piezoelectric transducers as an actuator to propagate chirp waves into the pipe. The methodology adopts the empirical mode decomposition (EMD), which processes vibration data to establish energy-based damage indices. The results obtained from both the numerical and experimental studies confirm the integrity of the approach in identifying the existence and progression of the advancing notch. The study also discusses and compares the performance of the two vibration excitation means in damage detection. PMID:25225877
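A minimal sketch of an EMD-based damage index is given below using the PyEMD package; the specific index (relative change in first-IMF energy against a healthy baseline) and the synthetic signals are assumptions chosen to illustrate the idea, not the paper's exact formulation.

```python
import numpy as np
from PyEMD import EMD  # pip install EMD-signal

def first_imf_energy(signal):
    """Energy of the first intrinsic mode function of a vibration record."""
    imfs = EMD().emd(signal)
    return float(np.sum(imfs[0] ** 2))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
healthy = np.sin(2 * np.pi * 35 * t) + 0.05 * rng.standard_normal(t.size)
# an advancing notch is assumed to add high-frequency content to the response
damaged = (np.sin(2 * np.pi * 35 * t) + 0.4 * np.sin(2 * np.pi * 180 * t)
           + 0.05 * rng.standard_normal(t.size))

e_h, e_d = first_imf_energy(healthy), first_imf_energy(damaged)
damage_index = abs(e_d - e_h) / e_h   # grows as the notch progresses
print(f"damage index: {damage_index:.2f}")
```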
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geist, William H.
2015-12-01
This set of slides begins with background and a review of neutron counting; three attributes of a verification item are discussed: the 240Pu effective mass; α, the ratio of (α,n) neutrons to spontaneous fission neutrons; and the leakage multiplication. It then takes up neutron detector systems – theory and concepts (coincidence counting, moderation, die-away time); detector systems – some important details (deadtime, corrections); an introduction to multiplicity counting; multiplicity electronics and example distributions; singles, doubles, and triples from measured multiplicity distributions; and the point model: multiplicity mathematics.
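As a sketch of the "singles, doubles, and triples from measured multiplicity distributions" step, the snippet below forms S, D, and T from (R+A)- and A-gate histograms via reduced factorial moments, as recalled from standard shift-register analysis; the histogram values are placeholders, and gate-fraction and deadtime corrections are omitted.

```python
import numpy as np

def reduced_moments(hist):
    """First and second reduced factorial moments of a gate multiplicity histogram."""
    n = np.arange(hist.size)
    p = hist / hist.sum()
    m1 = np.sum(n * p)
    m2 = np.sum(n * (n - 1) / 2 * p)
    return m1, m2

trigger_rate = 1.0e4                                # triggers/s -> singles rate S
ra_hist = np.array([500., 300., 150., 40., 10.])    # (R+A) gate counts vs multiplicity
a_hist = np.array([700., 220., 65., 12., 3.])       # A (accidentals) gate counts

m1_ra, m2_ra = reduced_moments(ra_hist)
m1_a, m2_a = reduced_moments(a_hist)
S = trigger_rate
D = S * (m1_ra - m1_a)                              # accidentals-corrected doubles
T = S * (m2_ra - m2_a - m1_a * (m1_ra - m1_a))      # accidentals-corrected triples
print(f"S = {S:.3e}/s, D = {D:.3e}/s, T = {T:.3e}/s")
```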
2011-09-01
[Figure 25. Scatter plot of ...; panels (a) Kp 0-3 and (b) Kp 4-9; y-axis: Number of Occurrences.] ...dependent physics-based model that uses the Ionospheric Forecast Model (IFM) as a background model upon which perturbations are imposed via a Kalman filter... vertical output resolution as the IFM. GAIM-GM can also be run in a regional mode with a finer resolution (Scherliess et al., 2006). GAIM-GM is
Investigation of flow in data rack
NASA Astrophysics Data System (ADS)
Manoch, Lukáš; Nožička, Jiří; Pohan, Petr
2012-04-01
The main purpose of this paper was to set up a functioning numerical model of a data rack, verified by experimental measurement. The verification of the numerical model was carried out by means of the PIV (Particle Image Velocimetry) method. The numerical model was tuned using the assumed and preset values from the experimental measurement, which served as boundary conditions. The server model was conceived as a four-channel model with a controlled flow rate and without simulation of heat transfer. The flow rate in each channel was implemented by means of a pressure loss. The numerical model was further used for simulation of several phases and configurations of a data rack (21U rack space) fitted with two Dell Precision R5400 server workstations. The flow field at the inlet of the data rack in front of the workstations was observed and evaluated in configurations in which a 2U free space was left between the workstations and the remaining inlet space was blanked-off/fully opened. The results of this paper will serve for designing optimization treatments of the data rack from the viewpoint of cooling efficiency, both within the data rack and within the data center design.
Verifying and Validating Simulation Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemez, Francois M.
2015-02-23
This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state of knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack of knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.
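As an illustration of propagating variability and randomness by statistical sampling, the sketch below pushes assumed input distributions through a stand-in analytic model (a cantilever tip deflection) and summarizes the output scatter; in practice the model would be a full simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)
I_BEAM = 8.33e-6   # second moment of area, m^4 (fixed, assumed)

def tip_deflection(E, L, F):
    # cantilever with an end load: delta = F L^3 / (3 E I)
    return F * L**3 / (3.0 * E * I_BEAM)

n = 100_000
E = rng.normal(200e9, 10e9, n)   # Young's modulus, Pa (assumed 5% scatter)
L = rng.normal(2.0, 0.02, n)     # length, m
F = rng.normal(1.0e3, 50.0, n)   # end load, N

y = tip_deflection(E, L, F)      # Monte Carlo propagation of input variability
print(f"mean deflection {y.mean():.3e} m, std {y.std():.3e} m")
print("95% interval:", np.percentile(y, [2.5, 97.5]))
```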
Experimental quantum verification in the presence of temporally correlated noise
NASA Astrophysics Data System (ADS)
Mavadia, S.; Edmunds, C. L.; Hempel, C.; Ball, H.; Roy, F.; Stace, T. M.; Biercuk, M. J.
2018-02-01
Growth in the capabilities of quantum information hardware mandates access to techniques for performance verification that function under realistic laboratory conditions. Here we experimentally characterise the impact of common temporally correlated noise processes on both randomised benchmarking (RB) and gate-set tomography (GST). Our analysis highlights the role of sequence structure in enhancing or suppressing the sensitivity of quantum verification protocols to either slowly or rapidly varying noise, which we treat in the limiting cases of quasi-DC miscalibration and white noise power spectra. We perform experiments with a single trapped 171Yb+ ion-qubit and inject engineered noise (∝σ^z) to probe protocol performance. Experiments on RB validate predictions that measured fidelities over sequences are described by a gamma distribution varying between approximately Gaussian and a broad, highly skewed distribution for rapidly and slowly varying noise, respectively. Similarly, we find a strong gate-set dependence of default experimental GST procedures in the presence of correlated errors, leading to significant deviations between estimated and calculated diamond distances in the presence of correlated σ^z errors. Numerical simulations demonstrate that expansion of the gate set to include negative rotations can suppress these discrepancies and increase reported diamond distances by orders of magnitude for the same error processes. Similar effects do not occur for correlated σ^x or σ^y errors or depolarising noise processes, highlighting the critical interplay of the selected gate set and the gauge optimisation process on the meaning of the reported diamond norm in correlated noise environments.
Manipulation strategies for massive space payloads
NASA Technical Reports Server (NTRS)
Book, Wayne J.
1989-01-01
Control for the bracing strategy is being examined. It was concluded earlier that trajectory planning must be improved to best achieve the bracing motion. Very interesting results were achieved which enable the inverse dynamics of flexible arms to be calculated for linearized motion in a more efficient manner than previously published. The desired motion of the end point, beginning at t=0 and ending at t=t sub f, is used to calculate the required torque at the joint. The solution is separated into a causal function that is zero for t less than 0 and an acausal function that is zero for t greater than t sub f. A number of alternative end point trajectories were explored in terms of the peak torque required, the amount of anticipatory action, and other issues. The single-link case is the immediate subject, and an experimental verification of that case is being performed. Modeling with experimental verification of closed-chain dynamics continues. The modeling effort has pointed out inaccuracies that result from the choice of numerical techniques used to incorporate the closed-chain constraints when modeling our experimental prototype RALF (Robotic Arm Large and Flexible). Results were compared to TREETOPS, a multibody code. The experimental verification work is suggesting new ways to make comparisons with systems having structural linearity and joint and geometric nonlinearity. The generation of inertial forces was studied with a small arm that will damp the large arm's vibration.
Development of an Ultra-Low Background Liquid Scintillation Counter for Trace Level Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erchinger, Jennifer L.; Orrell, John L.; Aalseth, Craig E.
2015-09-01
Low-level liquid scintillation counting (LSC) has been established as one of the radiation detection techniques useful in elucidating environmental processes and environmental monitoring around nuclear facilities. The Ultra-Low Background Liquid Scintillation Counter (ULB-LSC) under construction in the Shallow Underground Laboratory at Pacific Northwest National Laboratory aims to further reduce the MDAs and/or required sample processing. Through layers of passive shielding in conjunction with an active veto and 30 meters water equivalent overburden, the background reduction is expected to be 10 to 100 times below typical analytic low-background liquid scintillation systems. Simulations have shown an expected background of around 14 counts per day. A novel approach to the light collection will use a coated hollow light guide cut into the inner copper shielding. Demonstration LSC measurements will show low-energy detection, spectral deconvolution, and alpha/beta discrimination capabilities, from trials with standards of tritium, strontium-90, and actinium-227, respectively. An overview of the system design and expected demonstration measurements will emphasize the potential applications of the ULB-LSC in environmental monitoring for treaty verification, reach-back sample analysis, and facility inspections.
Xia, Yidong; Podgorney, Robert; Huang, Hai
2016-03-17
FALCON (“Fracturing And Liquid CONvection”) is a hybrid continuous/discontinuous Galerkin finite element geothermal reservoir simulation code based on the MOOSE (“Multiphysics Object-Oriented Simulation Environment”) framework being developed and used for multiphysics applications. In the present work, a suite of verification and validation (“V&V”) test problems for FALCON was defined to meet the design requirements and solved in the interest of enhanced geothermal system (“EGS”) design. Furthermore, the intent of this test problem suite is to provide baseline comparison data that demonstrate the performance of the FALCON solution methods. The simulation problems vary in complexity from single mechanical or thermal processes to coupled thermo-hydro-mechanical processes in geological porous media. Numerical results obtained by FALCON agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these capabilities in FALCON. Some form of solution verification has been attempted to identify sensitivities in the solution methods, where possible, and to suggest best practices when using the FALCON code.
Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows
Xia, Yidong; Wang, Chuanjin; Luo, Hong; ...
2015-12-15
Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent for this test problem suite is to provide baseline comparison data that demonstrates the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models were used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, we have attempted some form of solution verification to identify sensitivities in the solution methods, and to suggest best practices when using the Hydra-TH code.
Validation and Verification (V&V) of Safety-Critical Systems Operating Under Off-Nominal Conditions
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.
2012-01-01
Loss of control (LOC) remains one of the largest contributors to aircraft fatal accidents worldwide. Aircraft LOC accidents are highly complex in that they can result from numerous causal and contributing factors acting alone or more often in combination. Hence, there is no single intervention strategy to prevent these accidents. Research is underway at the National Aeronautics and Space Administration (NASA) in the development of advanced onboard system technologies for preventing or recovering from loss of vehicle control and for assuring safe operation under off-nominal conditions associated with aircraft LOC accidents. The transition of these technologies into the commercial fleet will require their extensive validation and verification (V&V) and ultimate certification. The V&V of complex integrated systems poses highly significant technical challenges and is the subject of a parallel research effort at NASA. This chapter summarizes the V&V problem and presents a proposed process that could be applied to complex integrated safety-critical systems developed for preventing aircraft LOC accidents. A summary of recent research accomplishments in this effort is referenced.
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.
2010-01-01
Loss of control remains one of the largest contributors to aircraft fatal accidents worldwide. Aircraft loss-of-control accidents are highly complex in that they can result from numerous causal and contributing factors acting alone or (more often) in combination. Hence, there is no single intervention strategy to prevent these accidents and reducing them will require a holistic integrated intervention capability. Future onboard integrated system technologies developed for preventing loss of vehicle control accidents must be able to assure safe operation under the associated off-nominal conditions. The transition of these technologies into the commercial fleet will require their extensive validation and verification (V and V) and ultimate certification. The V and V of complex integrated systems poses major nontrivial technical challenges particularly for safety-critical operation under highly off-nominal conditions associated with aircraft loss-of-control events. This paper summarizes the V and V problem and presents a proposed process that could be applied to complex integrated safety-critical systems developed for preventing aircraft loss-of-control accidents. A summary of recent research accomplishments in this effort is also provided.
Application of additive laser technologies in the gas turbine blades design process
NASA Astrophysics Data System (ADS)
Shevchenko, I. V.; Rogalev, A. N.; Osipov, S. K.; Bychkov, N. M.; Komarov, I. I.
2017-11-01
The emergence of modern innovative technologies requires the delivery of new design and production processes and the modernization of existing ones. This is especially relevant to designing the high-temperature turbines of gas turbine engines, whose development is characterized by a transition to higher parameters of the working medium in order to improve efficiency. This article presents a design technique for gas turbine blades based on predictive verification of the thermal and hydraulic models of their cooling systems through testing of a blade prototype fabricated using selective laser melting. The technique was proven during the development of the first-stage blade cooling system for a high-pressure turbine. An experimental procedure for verification of the thermal model of blades with convective cooling systems was developed, based on comparing the heat-flux densities obtained from numerical simulation with the results of tests in a liquid-metal thermostat. The technique makes it possible to obtain an experimentally tested blade version and to exclude its experimental adjustment after the start of mass production.
Sensitivity of control-augmented structure obtained by a system decomposition method
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Bloebaum, Christina L.; Hajela, Prabhat
1988-01-01
The verification of a method for computing sensitivity derivatives of a coupled system is presented. The method deals with a system whose analysis can be partitioned into subsets that correspond to disciplines and/or physical subsystems that exchange input-output data with each other. The method uses the partial sensitivity derivatives of the output with respect to the input, obtained for each subset separately, to assemble a set of linear, simultaneous, algebraic equations that are solved for the derivatives of the coupled system response. This sensitivity analysis is verified using an example of a cantilever beam augmented with an active control system to limit the beam's dynamic displacements under an excitation force. The verification shows good agreement of the method with reference data obtained by a finite difference technique involving entire-system analysis. The usefulness of the system sensitivity method in optimization applications is demonstrated by employing a piecewise-linear approach to the same numerical example. The method's principal merits are its intrinsically superior accuracy in comparison with the finite difference technique and its compatibility with the traditional division of work in complex engineering tasks among specialty groups.
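A minimal sketch of the underlying system sensitivity equations is given below for two scalar coupled subsystems with made-up partial derivatives: the partials are assembled into one linear system whose solution is the total derivative of the coupled response, and the result is checked against the closed-form answer.

```python
import numpy as np

# Two coupled scalar subsystems (illustrative):
#   y1 = f1(x, y2) = x + 0.5*y2,   y2 = f2(x, y1) = 2*x + 0.25*y1
df1_dx, df1_dy2 = 1.0, 0.5
df2_dx, df2_dy1 = 2.0, 0.25

# Assemble the linear system for the coupled-system derivatives:
# [ 1        -df1/dy2 ] [dy1/dx]   [df1/dx]
# [ -df2/dy1     1    ] [dy2/dx] = [df2/dx]
A = np.array([[1.0, -df1_dy2],
              [-df2_dy1, 1.0]])
b = np.array([df1_dx, df2_dx])
dy_dx = np.linalg.solve(A, b)
print("coupled-system derivatives dy1/dx, dy2/dx:", dy_dx)

# Closed-form check: y1 = 2x / (1 - 0.125), so dy1/dx = 2 / 0.875.
assert np.allclose(dy_dx[0], 2.0 / 0.875)
```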
A tuberculosis biomarker database: the key to novel TB diagnostics.
Yerlikaya, Seda; Broger, Tobias; MacLean, Emily; Pai, Madhukar; Denkinger, Claudia M
2017-03-01
New diagnostic innovations for tuberculosis (TB), including point-of-care solutions, are critical to reach the goals of the End TB Strategy. However, despite decades of research, numerous reports on new biomarker candidates, and significant investment, no well-performing, simple and rapid TB diagnostic test is yet available on the market, and the search for accurate, non-DNA biomarkers remains a priority. To help overcome this 'biomarker pipeline problem', FIND and partners are working on the development of a well-curated and user-friendly TB biomarker database. The web-based database will enable the dynamic tracking of evidence surrounding biomarker candidates in relation to target product profiles (TPPs) for needed TB diagnostics. It will be able to accommodate raw datasets and facilitate the verification of promising biomarker candidates and the identification of novel biomarker combinations. As such, the database will simplify data and knowledge sharing, empower collaboration, help in the coordination of efforts and allocation of resources, streamline the verification and validation of biomarker candidates, and ultimately lead to an accelerated translation into clinically useful tools. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Blajer, W.; Dziewiecki, K.; Kołodziejczyk, K.; Mazur, Z.
2011-05-01
Underactuated systems feature fewer control inputs than degrees of freedom, m < n. The determination of an input control strategy that forces such a system to complete a set of m specified motion tasks is a challenging task, and the existence of an explicit solution is conditioned on the differential flatness of the problem. A flatness-based solution means that all 2n states and m control inputs can be algebraically expressed in terms of the m specified outputs and their time derivatives up to a certain order, which is in practice attainable only for simple systems. In this contribution the problem is posed in a more practical way as a set of index-three differential-algebraic equations, and the solution is obtained numerically. The formulation is then illustrated by a two-degree-of-freedom underactuated system composed of two rotating discs connected by a torsional spring, in which the pre-specified motion of one of the discs is actuated by the torque applied to the other disc, n = 2 and m = 1. Experimental verification of the inverse simulation control methodology is reported.
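For the disc example, the flatness-based inverse dynamics can be written down explicitly, which the sympy sketch below reproduces under assumed dynamics of the form J1*th1'' = u - k*(th1 - th2), J2*th2'' = k*(th1 - th2) with the motion th2(t) = γ(t) prescribed; the trajectory γ(t) = sin t and the symbolic parameters are illustrative. Note that the input involves γ up to its fourth derivative, illustrating the smoothness demands that make the explicit flatness route impractical for more complex systems.

```python
import sympy as sp

t = sp.symbols("t")
J1, J2, k = sp.symbols("J1 J2 k", positive=True)
gamma = sp.sin(t)                      # assumed specified output trajectory

# From the unactuated equation: th1 = gamma + (J2/k) * gamma''
th1 = gamma + J2 / k * sp.diff(gamma, t, 2)
# From the actuated equation: u = J1*th1'' + k*(th1 - gamma)
u = sp.simplify(J1 * sp.diff(th1, t, 2) + k * (th1 - gamma))
# Result equals (J1+J2)*gamma'' + (J1*J2/k)*gamma'''' for any smooth gamma.
print(u)
```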
30 CFR 250.913 - When must I resubmit Platform Verification Program plans?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Structures Platform Verification Program § 250.913 When must I resubmit Platform Verification Program plans? (a) You must resubmit any design verification, fabrication verification, or installation verification...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Jing-Jy; Flood, Paul E.; LePoire, David
In this report, the results generated by RESRAD-RDD version 2.01 are compared with those produced by RESRAD-RDD version 1.7 for different scenarios with different sets of input parameters. RESRAD-RDD version 1.7 is spreadsheet-driven, performing calculations with Microsoft Excel spreadsheets. RESRAD-RDD version 2.01 revamped version 1.7 by using command-driven programs designed with Visual Basic.NET to direct calculations with data saved in a Microsoft Access database, and by re-facing the graphical user interface (GUI) to provide more flexibility and choices in guideline derivation. Because version 1.7 and version 2.01 perform the same calculations, the comparison of their results serves as verification of both versions. The verification covered calculation results for 11 radionuclides included in both versions: Am-241, Cf-252, Cm-244, Co-60, Cs-137, Ir-192, Po-210, Pu-238, Pu-239, Ra-226, and Sr-90. First, all nuclide-specific data used in both versions were compared to ensure that they are identical. Then generic operational guidelines and measurement-based radiation doses or stay times associated with a specific operational guideline group were calculated with both versions using different sets of input parameters, and the results obtained with the same set of input parameters were compared. A total of 12 sets of input parameters were used for the verification, and the comparison was performed for each operational guideline group, from A to G, sequentially. The verification shows that RESRAD-RDD version 1.7 and RESRAD-RDD version 2.01 generate almost identical results; the slight differences could be attributed to differences in numerical precision between Microsoft Excel and Visual Basic.NET. RESRAD-RDD version 2.01 allows the selection of different units for use in reporting calculation results. Results in SI units were obtained and compared with the base results (in traditional units) used for comparison with version 1.7. The comparison shows that RESRAD-RDD version 2.01 correctly reports calculation results in the unit specified in the GUI.
Generic Verification Protocol for Verification of Online Turbidimeters
This protocol provides generic procedures for implementing a verification test for the performance of online turbidimeters. The verification tests described in this document will be conducted under the Environmental Technology Verification (ETV) Program. Verification tests will...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, Brian; Jackson, R. Brian
2017-03-08
The project, Toward a Longer Life Core: Thermal Hydraulic CFD Simulations and Experimental Investigation of Deformed Fuel Assemblies, DOE project code DE-NE0008321, was a verification and validation project for flow and heat transfer through wire-wrapped simulated liquid metal fuel assemblies that included both experiments and computational fluid dynamics simulations of those experiments. This project was a two-year collaboration between AREVA, TerraPower, Argonne National Laboratory, and Texas A&M University. Experiments were performed by AREVA and Texas A&M University. Numerical simulations of these experiments were performed by TerraPower and Argonne National Lab. Project management was performed by AREVA Federal Services. The first-of-a-kind project resulted in the production of both local point temperature measurements and local flow mixing experimental data, paired with numerical simulation benchmarking of the experiments. The project experiments included the largest wire-wrapped pin assembly Mass Index of Refraction (MIR) experiment in the world, the first known wire-wrapped assembly experiment with deformed duct geometries, and the largest numerical simulations ever produced for wire-wrapped bundles.
Numerical reconstruction and injury biomechanism in a car-pedestrian crash accident.
Zou, Dong-Hua; Li, Zheng-Dong; Shao, Yu; Feng, Hao; Chen, Jian-Guo; Liu, Ning-Guo; Huang, Ping; Chen, Yi-Jiu
2012-12-01
To reconstruct a car-pedestrian crash accident using numerical simulation technology and explore the injury biomechanism as forensic evidence for injury identification, an integration of multi-body dynamics, finite element (FE), and classical methods was applied to a car-pedestrian crash accident. The location of the collision and the details of the traffic accident were determined by vehicle trace verification and autopsy. The accident reconstruction was performed by coupling the three-dimensional car behavior from PC-CRASH with a MADYMO dummy model. The collision FE models of the head and leg, developed from CT scans of human remains, were loaded with the calculated dummy collision parameters. The data on the impact biomechanical responses were extracted in terms of von Mises stress, relative displacement, and strain and stress fringes. The accident reconstruction results were consistent with the findings of the examination, and the biomechanisms of the head and leg injuries, illustrated through the FE method, were consistent with classical injury theories. Numerical simulation technology is thus shown to be effective in identifying traffic accidents and exploring injury biomechanisms.
The calculating brain: an fMRI study.
Rickard, T C; Romero, S G; Basso, G; Wharton, C; Flitman, S; Grafman, J
2000-01-01
To explore brain areas involved in basic numerical computation, functional magnetic resonance imaging (fMRI) scanning was performed on college students during performance of three tasks: simple arithmetic, numerical magnitude judgment, and a perceptual-motor control task. For the arithmetic task relative to the other tasks, results for all eight subjects revealed bilateral activation in Brodmann's area 44, in dorsolateral prefrontal cortex (areas 9 and 10), in inferior and superior parietal areas, and in the lingual and fusiform gyri. Activation was stronger on the left for all subjects, but only at Brodmann's area 44 and the parietal cortices. No activation was observed in the arithmetic task in several other areas previously implicated for arithmetic, including the angular and supramarginal gyri and the basal ganglia. In fact, the angular and supramarginal gyri were significantly deactivated by the arithmetic verification task relative to both the magnitude judgment and control tasks for every subject. Areas activated by the magnitude task relative to the control were more variable, but in five subjects included bilateral inferior parietal cortex. These results confirm some existing hypotheses regarding the neural basis of numerical processes, invite revision of others, and suggest productive lines for future investigation.
A novel numerical framework for self-similarity in plasticity: Wedge indentation in single crystals
NASA Astrophysics Data System (ADS)
Juul, K. J.; Niordson, C. F.; Nielsen, K. L.; Kysar, J. W.
2018-03-01
A novel numerical framework for analyzing self-similar problems in plasticity is developed and demonstrated. Self-similar problems of this kind include processes such as stationary cracks, void growth, indentation etc. The proposed technique offers a simple and efficient method for handling this class of complex problems by avoiding issues related to traditional Lagrangian procedures. Moreover, the proposed technique allows for focusing the mesh in the region of interest. In the present paper, the technique is exploited to analyze the well-known wedge indentation problem of an elastic-viscoplastic single crystal. However, the framework may be readily adapted to any constitutive law of interest. The main focus herein is the development of the self-similar framework, while the indentation study serves primarily as verification of the technique by comparing to existing numerical and analytical studies. In this study, the three most common metal crystal structures will be investigated, namely the face-centered cubic (FCC), body-centered cubic (BCC), and hexagonal close packed (HCP) crystal structures, where the stress and slip rate fields around the moving contact point singularity are presented.
Thamareerat, N; Luadsong, A; Aschariyaphotha, N
2016-01-01
In this paper, we present a numerical scheme used to solve the nonlinear time fractional Navier-Stokes equations in two dimensions. We first employ the meshless local Petrov-Galerkin (MLPG) method based on a local weak formulation to form the system of discretized equations and then we will approximate the time fractional derivative interpreted in the sense of Caputo by a simple quadrature formula. The moving Kriging interpolation which possesses the Kronecker delta property is applied to construct shape functions. This research aims to extend and develop further the applicability of the truly MLPG method to the generalized incompressible Navier-Stokes equations. Two numerical examples are provided to illustrate the accuracy and efficiency of the proposed algorithm. Very good agreement between the numerically and analytically computed solutions can be observed in the verification. The present MLPG method has proved its efficiency and reliability for solving the two-dimensional time fractional Navier-Stokes equations arising in fluid dynamics as well as several other problems in science and engineering.
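One simple quadrature of the kind mentioned, shown below, is the classical L1 scheme for the Caputo derivative of order 0 < α < 1; the abstract does not give the paper's exact formula, so this is an assumed stand-in, verified against the exact Caputo derivative of t².

```python
import numpy as np
from math import gamma

def caputo_l1(u, dt, a):
    """L1 approximation of the Caputo derivative of samples u[0..n] at the last grid point."""
    n = len(u) - 1
    j = np.arange(n)
    b = (j + 1) ** (1 - a) - j ** (1 - a)   # L1 weights
    du = np.diff(u)                          # u[k+1] - u[k]
    # sum_j b_j * (u[n-j] - u[n-j-1]) / (Gamma(2-a) * dt^a)
    return np.sum(b * du[::-1]) / (gamma(2 - a) * dt ** a)

a, dt = 0.5, 1e-3
tgrid = np.arange(0, 1 + dt, dt)
u = tgrid ** 2
exact = 2 * tgrid[-1] ** (2 - a) / gamma(3 - a)   # D^a t^2 = 2 t^(2-a)/Gamma(3-a)
print(caputo_l1(u, dt, a), "vs exact", exact)
```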
CFD study on the effects of boundary conditions on air flow through an air-cooled condenser
NASA Astrophysics Data System (ADS)
Sumara, Zdeněk; Šochman, Michal
2018-06-01
This study focuses on the effects of boundary conditions on the effectiveness of an air-cooled condenser (ACC). The heat duty of an ACC is very often calculated for an ideal uniform velocity field, which does not correspond to reality. Therefore, this study examines the effect of wind and different landscapes on the air flow through the ACC. The OpenFOAM software was used, and the flow was simulated using the RANS equations. For verification of the numerical setup, a model of one ACC cell with a platform of 1.5×1.5 m was used. In this experiment, static pressures behind the fan and air flows through a model of the condenser surface were measured for different fan speeds. A virtual clone of this experiment was built in OpenFOAM, and different meshes, turbulence models, and numerical schemes were tested. After tuning the numerical setup, a virtual model of the real ACC system was built. The influence of wind, landscape, and ACC height on the air flow through the ACC was investigated.
Numerical and Experimental Studies on Impact Loaded Concrete Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saarenheimo, Arja; Hakola, Ilkka; Karna, Tuomo
2006-07-01
An experimental set-up has been constructed for medium-scale impact tests. The main objective of this effort is to provide data for the calibration and verification of numerical models of a loading scenario in which an aircraft impacts a nuclear power plant. One goal is to develop and take into use numerical methods for predicting the response of reinforced concrete structures to impacts of deformable projectiles that may contain combustible liquid ('fuel'). The loading and structural behaviour, such as the collapse mechanism and the damage grade, are predicted by simple analytical methods and using the non-linear FE method. In the so-called Riera method, the behavior of the missile material is assumed to be rigid-plastic or rigid visco-plastic. Using elastic-plastic and elastic visco-plastic material models, calculations are carried out with the ABAQUS/Explicit finite element code, assuming an axisymmetric deformation mode for the missile. With both methods, typically, the impact force time history, the velocity of the missile rear end, and the missile shortening during the impact were recorded for comparison. (authors)
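For orientation, the sketch below integrates the rigid-plastic Riera model for a uniform missile striking a rigid target, where the transmitted force is F(t) = Pc + μ·v(t)²; the crushing force, mass per unit length, length, and impact velocity are assumed illustrative values, not data from the tests described.

```python
# Rigid-plastic Riera model: the uncrushed portion of the missile
# decelerates under the crushing force Pc, while the target feels
# F(t) = Pc + mu * v(t)^2 from crushing plus momentum transfer.
PC = 5.0e6             # crushing force, N (assumed)
MU = 300.0             # mass per unit length, kg/m (assumed)
L0, V0 = 20.0, 100.0   # missile length (m) and impact velocity (m/s), assumed

dt, t, x, v = 1e-4, 0.0, 0.0, V0
forces = []
while v > 0.0 and x < L0:
    m_rigid = MU * (L0 - x)        # mass of the still-uncrushed portion
    forces.append(PC + MU * v**2)  # Riera force transmitted to the target
    v += (-PC / m_rigid) * dt      # rigid portion decelerates under Pc
    x += v * dt                    # crushed length advances
    t += dt
print(f"peak force {max(forces):.3e} N; impact duration ~{t:.3f} s")
```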
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hager, Robert, E-mail: rhager@pppl.gov; Yoon, E.S., E-mail: yoone@rpi.edu; Ku, S., E-mail: sku@pppl.gov
2016-06-15
Fusion edge plasmas can be far from thermal equilibrium and require the use of a non-linear collision operator for accurate numerical simulations. In this article, the non-linear single-species Fokker–Planck–Landau collision operator developed by Yoon and Chang (2014) [9] is generalized to include multiple particle species. The finite volume discretization used in this work naturally yields exact conservation of mass, momentum, and energy. The implementation of this new non-linear Fokker–Planck–Landau operator in the gyrokinetic particle-in-cell codes XGC1 and XGCa is described and results of a verification study are discussed. Finally, the numerical techniques that make our non-linear collision operator viable on high-performance computing systems are described, including specialized load balancing algorithms and nested OpenMP parallelization. The collision operator's good weak and strong scaling behavior are shown.
Logarithmic Superdiffusion in Two Dimensional Driven Lattice Gases
NASA Astrophysics Data System (ADS)
Krug, J.; Neiss, R. A.; Schadschneider, A.; Schmidt, J.
2018-03-01
The spreading of density fluctuations in two-dimensional driven diffusive systems is marginally anomalous. Mode coupling theory predicts that the diffusivity in the direction of the drive diverges with time as (ln t)^{2/3} with a prefactor depending on the macroscopic current-density relation and the diffusion tensor of the fluctuating hydrodynamic field equation. Here we present the first numerical verification of this behavior for a particular version of the two-dimensional asymmetric exclusion process. Particles jump strictly asymmetrically along one of the lattice directions and symmetrically along the other, and an anisotropy parameter p governs the ratio between the two rates. Using a novel massively parallel coupling algorithm that strongly reduces the fluctuations in the numerical estimate of the two-point correlation function, we are able to accurately determine the exponent of the logarithmic correction. In addition, the variation of the prefactor with p provides a stringent test of mode coupling theory.
NASA Technical Reports Server (NTRS)
Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.
1981-01-01
The component element method was used to develop a transient dynamic analysis computer program which is essentially based on modal synthesis combined with a central finite difference numerical integration scheme. The methodology leads to a modular or building-block technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis (TETRA) program was applied to two blade-out test vehicles that had been previously instrumented and tested. Comparison of the time-dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of the working equations, their discretization, the numerical solution scheme, the modular concept of engine modelling, the program logical structure, and some illustrative results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are described.
APPLICATION OF FLOW SIMULATION FOR EVALUATION OF FILLING-ABILITY OF SELF-COMPACTING CONCRETE
NASA Astrophysics Data System (ADS)
Urano, Shinji; Nemoto, Hiroshi; Sakihara, Kohei
In this paper, the MPS method was applied to fluid analysis of self-compacting concrete. The MPS method is one of the particle methods, and it is suitable for the simulation of moving-boundary or free-surface problems and large-deformation problems. The constitutive equation of self-compacting concrete is assumed to be the Bingham model. In order to investigate flow stoppage and flow speed of self-compacting concrete, numerical analysis examples of the slump-flow and L-flow tests were performed. In addition, to verify the compactability of self-compacting concrete, numerical analysis examples of compaction at the CFT diaphragm part were performed. As a result, it was found that the MPS method is suitable for the simulation of compaction of self-compacting concrete, and an accurate appraisal was obtained by setting the flow-limit shear strain rate πc and the limitation point of segregation.
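A common way to realize the Bingham constitutive assumption inside a particle solver is through a regularized effective viscosity; the sketch below is a generic illustration (the yield stress and plastic viscosity values are invented), with the steep growth of the effective viscosity at low shear rate being what produces flow stoppage.

```python
import numpy as np

def bingham_effective_viscosity(gamma_dot, tau_y=200.0, eta_p=50.0, eps=1e-3):
    """Regularized Bingham model: eta_eff = eta_p + tau_y / |gamma_dot|.
    As gamma_dot -> 0 the effective viscosity diverges, so unyielded
    material effectively stops flowing; eps avoids division by zero."""
    return eta_p + tau_y / (np.abs(gamma_dot) + eps)

print(bingham_effective_viscosity(np.array([1e-3, 0.1, 10.0])))
```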
The Guderley problem revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramsey, Scott D; Kamm, James R; Bolstad, John H
2009-01-01
The self-similar converging-diverging shock wave problem introduced by Guderley in 1942 has been the source of numerous investigations since its publication. In this paper, we review the simplifications and group invariance properties that lead to a self-similar formulation of this problem from the compressible flow equations for a polytropic gas. The complete solution to the self-similar problem reduces to two coupled nonlinear eigenvalue problems: the eigenvalue of the first is the so-called similarity exponent for the converging flow, and that of the second is a trajectory multiplier for the diverging regime. We provide a clear exposition concerning the reflected shock configuration. Additionally, we introduce a new approximation for the similarity exponent, which we compare with other estimates and numerically computed values. Lastly, we use the Guderley problem as the basis of a quantitative verification analysis of a cell-centered, finite volume, Eulerian compressible flow algorithm.
Karaton, Muhammet
2014-01-01
A beam-column element based on the Euler-Bernoulli beam theory is investigated for nonlinear dynamic analysis of reinforced concrete (RC) structural elements. The stiffness matrix of this element is obtained by the rigidity method. A solution technique that includes a nonlinear dynamic substructure procedure is developed for dynamic analyses of RC frames. A predicted-corrected form of the Bossak-α method is applied as the dynamic integration scheme. To verify the numerical solutions, experimental data for an RC column element are compared with numerical results obtained from the proposed solution technique. Furthermore, nonlinear cyclic analysis results for a portal reinforced concrete frame are obtained to compare the proposed solution technique with a fibre element approach based on the flexibility method. In addition, seismic damage analyses of an 8-story RC frame structure with a soft story are investigated for cases of lumped/distributed mass and load. Damage regions, propagation, and intensities according to both approaches are examined.
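For reference, a predicted-corrected Bossak-α step for a linear single-degree-of-freedom system can be sketched as follows; the parameter choice β = (1 − α)²/4, γ = 1/2 − α is the usual second-order-accurate one, and the paper's RC-specific nonlinear iteration is omitted.

```python
import numpy as np

def bossak_alpha_sdof(M, C, K, F, d0, v0, dt, alpha=-0.1):
    """Predicted-corrected Bossak-alpha scheme for M*a + C*v + K*d = F(t),
    with Newmark parameters tied to alpha for second-order accuracy."""
    beta, gamma = 0.25 * (1.0 - alpha)**2, 0.5 - alpha
    n = len(F)
    d, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    d[0], v[0] = d0, v0
    a[0] = (F[0] - C * v0 - K * d0) / M
    for i in range(n - 1):
        dp = d[i] + dt * v[i] + dt**2 * (0.5 - beta) * a[i]   # predictors
        vp = v[i] + dt * (1.0 - gamma) * a[i]
        lhs = (1.0 - alpha) * M + gamma * dt * C + beta * dt**2 * K
        rhs = F[i + 1] - alpha * M * a[i] - C * vp - K * dp
        a[i + 1] = rhs / lhs                                  # corrector
        d[i + 1] = dp + beta * dt**2 * a[i + 1]
        v[i + 1] = vp + gamma * dt * a[i + 1]
    return d, v, a

# free-vibration check: lightly damped oscillator, unit initial displacement
d, v, a = bossak_alpha_sdof(M=1.0, C=0.1, K=10.0, F=np.zeros(2000),
                            d0=1.0, v0=0.0, dt=0.01)
```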
Simulations of 6-DOF Motion with a Cartesian Method
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)
2003-01-01
Coupled 6-DOF/CFD trajectory predictions using an automated Cartesian method are demonstrated by simulating a GBU-32/JDAM store separating from an F-18C aircraft. Numerical simulations are performed at two Mach numbers near the sonic speed and compared with flight-test telemetry and photographic-derived data. Simulation results obtained with a sequential-static series of flow solutions are contrasted with results using a time-dependent flow solver. Both numerical methods show good agreement with the flight-test data through the first half of the simulations. The sequential-static and time-dependent methods diverge over the last half of the trajectory prediction, after the store produces peak angular rates. A cost comparison for the Cartesian method is included, in terms of absolute cost and relative to computing uncoupled 6-DOF trajectories. A detailed description of the 6-DOF method, as well as a verification of its accuracy, is provided in an appendix.
In-line phase contrast micro-CT reconstruction for biomedical specimens.
Fu, Jian; Tan, Renbo
2014-01-01
X-ray phase contrast micro computed tomography (micro-CT) can non-destructively provide the internal structure information of soft tissues and low atomic number materials. It has become an invaluable analysis tool for biomedical specimens. Here an in-line phase contrast micro-CT reconstruction technique is reported, which consists of a projection extraction method and the conventional filtered back-projection (FBP) reconstruction algorithm. The projection extraction is implemented by applying the Fourier transform to the forward projections of in-line phase contrast micro-CT. This work comprises a numerical study of the method and its experimental verification using a biomedical specimen dataset measured at an X-ray tube source micro-CT setup. The numerical and experimental results demonstrate that the presented technique can improve the imaging contrast of biomedical specimens. It will be of interest for a wide range of in-line phase contrast micro-CT applications in medicine and biology.
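The FBP half of the pipeline is conventional; as a sketch, the ramp-filtering step applied row-by-row to the extracted projections can be written as below (the projection extraction itself, which is the paper's contribution, is not reproduced).

```python
import numpy as np

def ramp_filter(sinogram):
    """Ram-Lak (ramp) filtering of each projection row in Fourier space,
    the filtering half of conventional filtered back-projection."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))   # |frequency| response of the ramp filter
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
```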
The application of multilayer elastic beam in MEMS safe and arming system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Guozhong, E-mail: liguozhong-bit@bit.edu.cn; Shi, Gengchen; Sui, Li
In this paper, a new approach for a multilayer elastic beam to provide the driving force and driving distance for a MEMS safe and arming system is presented. In particular, this applies where a monolayer elastic beam cannot provide adequate driving force and driving distance at the same time in a limited space. Compared with thicker elastic beams, the bilayer elastic beam can provide twice the driving force of a monolayer beam to guarantee that MEMS safe and arming systems work reliably without decreasing the driving distance. In this paper, the theoretical analysis, numerical simulation, and experimental verification of the multilayer elastic beam are presented. The numerical simulation and experimental results show that the bilayer elastic beam provides 1.8-2 times the driving force of a monolayer beam, offering a method that improves the driving force without reducing the driving distance.
NASA Astrophysics Data System (ADS)
Yahiaoui, R.; Burrow, J. A.; Mekonen, S. M.; Sarangan, A.; Mathews, J.; Agha, I.; Searles, T. A.
2018-04-01
We demonstrate a classical analog of electromagnetically induced transparency (EIT) in a highly flexible planar terahertz metamaterial (MM) comprised of three-gap split-ring resonators. The keys to achieving EIT in this system are the frequency detuning and hybridization processes between two bright modes coexisting in the same unit cell, as opposed to bright-dark modes. We present experimental verification of two-bright-mode coupling for a terahertz EIT-MM in the context of numerical results and theoretical analysis based on a coupled Lorentz oscillator model. In addition, a hybrid variation of the EIT-MM is proposed and implemented numerically to dynamically tune the EIT window by incorporating photosensitive silicon pads in the split-gap region of the resonators. As a result, this hybrid MM enables the active optical control of a transition from the on state (EIT mode) to the off state (dipole mode).
Streaming and particle motion in acoustically-actuated leaky systems
NASA Astrophysics Data System (ADS)
Nama, Nitesh; Barnkob, Rune; Jun Huang, Tony; Kahler, Christian; Costanzo, Francesco
2017-11-01
The integration of acoustics with microfluidics has shown great promise for applications within biology, chemistry, and medicine. A commonly employed system to achieve this integration consists of a fluid-filled, polymer-walled microchannel that is acoustically actuated via standing surface acoustic waves. However, despite significant experimental advancements, the precise physical understanding of such systems remains a work in progress. In this work, we investigate the nature of the acoustic fields that are set up inside the microchannel as well as the fundamental driving mechanism governing the fluid and particle motion in these systems. We provide an experimental benchmark using state-of-the-art 3D measurements of fluid and particle motion and present a Lagrangian-velocity-based temporal multiscale numerical framework to explain the experimental observations. Following verification and validation, we employ our numerical model to reveal the presence of a pseudo-standing acoustic wave that drives the acoustic streaming and particle motion in these systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, David E.
The process by which super-thermal ions slow down against background Coulomb potentials arises in many fields of study. In particular, this is one of the main mechanisms by which the mass and energy of the reaction products of fusion reactions are deposited back into the background. Many of these fields are characterized by length and time scales of the same magnitude as the range and duration of the trajectories of these particles before they thermalize into the background. This requires numerical simulation of the slowing-down process through numerical integration of the velocities and energies of these particles. This paper first presents a simple introduction to the required plasma physics, followed by a description of the numerical integration used to integrate a beam of particles. This algorithm is unique in that it combines, in an integrated manner, a second-order integration of the slowing down with the particle beam dispersion. These two processes are typically computed in isolation from each other. A simple test problem of a beam of alpha particles slowing down against an inert background of deuterium and tritium, with varying properties of both the beam and the background, illustrates the utility of the algorithm. This is followed by conclusions and appendices. The appendices define the notation, units, and several useful identities.
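A stripped-down example of a second-order slowing-down integration is sketched below; the drag law and units are invented for illustration, and the paper's coupled treatment of beam dispersion is not included.

```python
# Illustrative only: midpoint (second-order) integration of a test-particle
# slowing-down law dv/dt = -nu(v) * v with an invented drag frequency.
def nu(v):
    # hypothetical drag frequency, falling off like 1/v^3 at high speed
    return 1.0 / (1.0 + v**3)

v, dt, t = 10.0, 1e-2, 0.0
while v > 0.1:                           # integrate until near-thermalization
    v_half = v - 0.5 * dt * nu(v) * v    # midpoint predictor
    v -= dt * nu(v_half) * v_half        # second-order corrector
    t += dt
print(f"reached v < 0.1 at t = {t:.2f}")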
Verification Games: Crowd-Sourced Formal Verification
2016-03-01
Final technical report of the Verification Games: Crowd-Sourced Formal Verification project, University of Washington, March 2016; period of performance June 2012 - September 2015.
Towards high fidelity numerical wave tanks for modelling coastal and ocean engineering processes
NASA Astrophysics Data System (ADS)
Cozzuto, G.; Dimakopoulos, A.; de Lataillade, T.; Kees, C. E.
2017-12-01
With the increasing availability of computational resources, the engineering and research community is gradually moving towards using high-fidelity Computational Fluid Dynamics (CFD) models to perform numerical tests for improving the understanding of physical processes pertaining to wave propagation and interaction with the coastal environment and morphology, whether physical or man-made. It is therefore important to be able to reproduce in these models the conditions that drive these processes. So far, the norm in CFD models has been to use regular (linear or nonlinear) waves for numerical tests; however, only random waves exist in nature. In this work, we initially present the verification and validation of numerical wave tanks based on Proteus, an open-source computational toolkit based on finite element analysis, with respect to the generation, propagation, and absorption of random sea states comprising long non-repeating wave sequences. Statistical and spectral processing of the results demonstrates that the methodologies employed (including relaxation zone methods and moving wave paddles) are capable of producing results of similar quality to the wave tanks used in laboratories (Figure 1). Subsequently, case studies of modelling complex processes relevant to coastal defences and floating structures, such as sliding and overturning of composite breakwaters and the heave and roll response of floating caissons, are presented. Figure 1: Wave spectra in the numerical wave tank (coloured symbols), compared against the JONSWAP distribution
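A minimal random-phase synthesis of a non-repeating sea state from a JONSWAP spectrum, of the kind such wave tanks must reproduce, might look like the following; the significant wave height, peak period, and discretization are placeholders.

```python
import numpy as np

def jonswap(f, Hs, Tp, gamma=3.3):
    """JONSWAP variance density spectrum, rescaled so that 4*sqrt(m0) = Hs."""
    fp = 1.0 / Tp
    sigma = np.where(f <= fp, 0.07, 0.09)
    peak = gamma ** np.exp(-(f - fp)**2 / (2 * sigma**2 * fp**2))
    S = f**-5.0 * np.exp(-1.25 * (fp / f)**4) * peak
    return S * (Hs / (4 * np.sqrt(np.trapz(S, f))))**2

rng = np.random.default_rng(1)
f = np.linspace(0.02, 1.0, 1000)                 # frequency bins [Hz]
amp = np.sqrt(2 * jonswap(f, Hs=2.0, Tp=8.0) * (f[1] - f[0]))
phase = rng.uniform(0, 2 * np.pi, f.size)        # random phases: non-repeating
t = np.linspace(0.0, 600.0, 3000)
eta = (amp[:, None] * np.cos(2 * np.pi * f[:, None] * t + phase[:, None])).sum(0)
```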
Saur, Sigrun; Frengen, Jomar
2008-07-01
Film dosimetry using radiochromic EBT film in combination with a flatbed charge coupled device scanner is a useful method both for two-dimensional verification of intensity-modulated radiation treatment plans and for general quality assurance of treatment planning systems and linear accelerators. Unfortunately, the response over the scanner area is nonuniform, and when not corrected for, this results in a systematic error in the measured dose which is both dose and position dependent. In this study a novel method for background correction is presented. The method is based on the subtraction of a correction matrix, a matrix that is based on scans of films that are irradiated to nine dose levels in the range 0.08-2.93 Gy. Because the response of the film is dependent on the film's orientation with respect to the scanner, correction matrices for both landscape oriented and portrait oriented scans were made. In addition to the background correction method, a full dose uncertainty analysis of the film dosimetry procedure was performed. This analysis takes into account the fit uncertainty of the calibration curve, the variation in response for different film sheets, the nonuniformity after background correction, and the noise in the scanned films. The film analysis was performed for film pieces of size 16 x 16 cm, all with the same lot number, and all irradiations were done perpendicular onto the films. The results show that the 2-sigma dose uncertainty at 2 Gy is about 5% and 3.5% for landscape and portrait scans, respectively. The uncertainty gradually increases as the dose decreases, but at 1 Gy the 2-sigma dose uncertainty is still as good as 6% and 4% for landscape and portrait scans, respectively. The study shows that film dosimetry using GafChromic EBT film, an Epson Expression 1680 Professional scanner and a dedicated background correction technique gives precise and accurate results. For the purpose of dosimetric verification, the calculated dose distribution can be compared with the film-measured dose distribution using a dose constraint of 4% (relative to the measured dose) for doses between 1 and 3 Gy. At lower doses, the dose constraint must be relaxed.
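The quoted uncertainty budget is a quadrature combination of independent components; schematically (the component magnitudes below are placeholders, not the paper's fitted values):

```python
import numpy as np

# Placeholder 1-sigma components in % of dose (not the paper's values)
components = {"calibration_fit": 1.2, "film_sheet_variation": 1.5,
              "residual_nonuniformity": 1.0, "scan_noise": 0.8}
two_sigma = 2 * np.sqrt(sum(u**2 for u in components.values()))
print(f"combined 2-sigma dose uncertainty: {two_sigma:.1f}%")
```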
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Y. S.; Joo, H. G.; Yoon, J. I.
The nTRACER direct whole-core transport code, employing a planar MOC solution-based 3-D calculation method, the subgroup method for resonance treatment, the Krylov matrix exponential method for depletion, and a subchannel thermal/hydraulic calculation solver, was developed for practical high-fidelity simulation of power reactors. Its accuracy and performance are verified by comparison with measurement data obtained for three pressurized water reactor cores. It is demonstrated that accurate and detailed multi-physics simulation of power reactors is practically realizable without any prior calculations or adjustments. (authors)
Verification of RRA and CMC in OpenSim
NASA Astrophysics Data System (ADS)
Ieshiro, Yuma; Itoh, Toshiaki
2013-10-01
OpenSim is free software that can handle various analyses and simulations of musculoskeletal dynamics on a PC. This study treated the RRA and CMC tools in OpenSim. It is remarkable that human motion can be simulated with respect to the nerve signals of muscles using these tools. However, these tools still seem to be in a developmental stage. In order to verify the applicability of these tools, we analyzed bending and stretching motion data obtained from a motion capture device. In this study, we checked the consistency between real muscle behavior and the numerical results from these tools.
Partially filled electrodes for digital microfluidic devices
NASA Astrophysics Data System (ADS)
Pyne, D. G.; Salman, W. M.; Abdelgawad, M.; Sun, Y.
2013-07-01
As digital microfluidics technology evolves, the need for integrating additional elements (e.g., sensing/detection and heating elements) on the electrode increases. Consequently, the electrode area for droplet actuation is reduced to create space for accommodating these additional elements, which undesirably affects force generation. Electrodes cannot simply be scaled larger to compensate for this loss of force, as this would also increase droplet volume and thereby compromise the advantages sought in miniaturization. Here, we present a study evaluating, numerically with preliminary experimental verification, different partially filled electrode designs and suggesting designs that combine high actuation forces with a large reduction in electrode area.
Simulation of a manual electric-arc welding in a working gas pipeline. 1. Formulation of the problem
NASA Astrophysics Data System (ADS)
Baikov, V. I.; Gishkelyuk, I. A.; Rus', A. M.; Sidorovich, T. V.; Tonkonogov, B. A.
2010-11-01
Problems of mathematical simulation of the temperature stresses arising in the wall of a pipe of a cross-country gas pipeline in the process of electric-arc welding of defects in it have been considered. Mathematical models of formation of temperatures, deformations, and stresses in a gas pipe subjected to phase transformations have been developed. These models were numerically realized in the form of algorithms representing a part of an application-program package. Results of verification of the computational complex and calculation results obtained with it are presented.
A gradient based algorithm to solve inverse plane bimodular problems of identification
NASA Astrophysics Data System (ADS)
Ran, Chunjiang; Yang, Haitian; Zhang, Guoqing
2018-02-01
This paper presents a gradient-based algorithm to solve inverse plane bimodular problems of identifying constitutive parameters, including tensile/compressive moduli and tensile/compressive Poisson's ratios. For the forward bimodular problem, an FE tangent stiffness matrix is derived, facilitating the implementation of gradient-based algorithms; for the inverse bimodular problem of identification, a two-level sensitivity-analysis-based strategy is proposed. Numerical verification in terms of accuracy and efficiency is provided, and the impacts of the initial guess, the number of measurement points, regional inhomogeneity, and noisy data on the identification are taken into account.
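As a generic stand-in for such an identification loop, the sketch below performs gradient descent on a least-squares misfit with a finite-difference Jacobian in place of the paper's two-level analytical sensitivities; the forward model is a toy linear function, not a bimodular FE solver.

```python
import numpy as np

def identify(forward, u_meas, theta0, lr=0.2, iters=500, h=1e-6):
    """Gradient descent on 0.5*||forward(theta) - u_meas||^2 with a
    finite-difference Jacobian standing in for analytical sensitivities."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        u = forward(theta)
        r = u - u_meas                      # residual at measurement points
        J = np.empty((r.size, theta.size))  # finite-difference Jacobian
        for k in range(theta.size):
            tp = theta.copy(); tp[k] += h
            J[:, k] = (forward(tp) - u) / h
        theta -= lr * (J.T @ r) / r.size    # gradient step
    return theta

# toy forward model: "displacements" linear in two moduli-like parameters
A = np.array([[2.0, 1.0], [1.0, 3.0], [0.5, 2.0]])
true_theta = np.array([1.5, 0.8])
print(identify(lambda th: A @ th, A @ true_theta, [1.0, 1.0]))  # ~[1.5, 0.8]
```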
Meteorological and Environmental Inputs to Aviation Systems
NASA Technical Reports Server (NTRS)
Camp, Dennis W. (Editor); Frost, Walter (Editor)
1988-01-01
Reports on aviation meteorology, most of them informal, are presented by representatives of the National Weather Service, the Bracknell (England) Meteorological Office, the NOAA Wave Propagation Lab., the Fleet Numerical Oceanography Center, and the Aircraft Owners and Pilots Association. Additional presentations are included on aircraft/lidar turbulence comparison, lightning detection and locating systems, objective detection and forecasting of clear air turbulence, comparative verification between the Generalized Exponential Markov (GEM) Model and official aviation terminal forecasts, the evaluation of the Prototype Regional Observation and Forecast System (PROFS) mesoscale weather products, and the FAA/MIT Lincoln Lab. Doppler Weather Radar Program.
Existence and Stability of Viscoelastic Shock Profiles
NASA Astrophysics Data System (ADS)
Barker, Blake; Lewicka, Marta; Zumbrun, Kevin
2011-05-01
We investigate existence and stability of viscoelastic shock profiles for a class of planar models including the incompressible shear case studied by Antman and Malek-Madani. We establish that the resulting equations fall into the class of symmetrizable hyperbolic-parabolic systems, hence spectral stability implies linearized and nonlinear stability with sharp rates of decay. The new contributions are treatment of the compressible case, formulation of a rigorous nonlinear stability theory, including verification of stability of small-amplitude Lax shocks, and the systematic incorporation in our investigations of numerical Evans function computations determining stability of large-amplitude and nonclassical type shock profiles.
Quantification of uncertainties for application in detonation simulation
NASA Astrophysics Data System (ADS)
Zheng, Miao; Ma, Zhibo
2016-06-01
Numerical simulation has become an important means of designing detonation systems, and the quantification of its uncertainty is also necessary for reliability certification. In quantifying the uncertainty, it is most important to analyze how the uncertainties occur and develop, and how the simulations develop from benchmark models to new models. Based on the practical needs of engineering and the technology of verification & validation, a framework of QU (quantification of uncertainty) is put forward for the case in which simulation is used on a detonation system for scientific prediction. An example is offered to describe the general idea of quantifying simulation uncertainties.
NASA Technical Reports Server (NTRS)
Kalluri, Sreeramesh
2013-01-01
Structural materials used in engineering applications are routinely subjected to repetitive mechanical loads in multiple directions under non-isothermal conditions. Over the past few decades, several multiaxial fatigue life estimation models (stress- and strain-based) have been developed for isothermal conditions. Historically, numerous fatigue life prediction models have also been developed for thermomechanical fatigue (TMF) life prediction, predominantly for uniaxial mechanical loading conditions. Realistic structural components encounter multiaxial loads and non-isothermal loading conditions, which increase the potential for interaction of damage modes. A need exists for mechanical testing and for the development and verification of life prediction models under such conditions.
NASA Technical Reports Server (NTRS)
Lee, Henry C.; Klopfer, Goetz H.; Onufer, Jeff T.
2011-01-01
The non-uniform flow angularity effects on the Ares I DAC-1 in the Langley Unitary Plan Wind Tunnel are explored through simulations with OVERFLOW. Verification of the wind tunnel results is needed to ensure that the standard wind tunnel calibration procedures for large models are valid. The expectation is that the systematic error can be quantified and thus be used to correct the wind tunnel data. The corrected wind tunnel data can then be used to quantify the CFD uncertainties.
Signal detection using support vector machines in the presence of ultrasonic speckle
NASA Astrophysics Data System (ADS)
Kotropoulos, Constantine L.; Pitas, Ioannis
2002-04-01
Support Vector Machines are a general algorithm based on guaranteed risk bounds of statistical learning theory. They have found numerous applications, such as in classification of brain PET images, optical character recognition, object detection, face verification, text categorization and so on. In this paper we propose the use of support vector machines to segment lesions in ultrasound images and we assess thoroughly their lesion detection ability. We demonstrate that trained support vector machines with a Radial Basis Function kernel segment satisfactorily (unseen) ultrasound B-mode images as well as clinical ultrasonic images.
NASA Technical Reports Server (NTRS)
Price, J. M.; Steeve, B. E.; Swanson, G. R.
1999-01-01
The analytical prediction of stress, strain, and fatigue life at locations experiencing local plasticity is full of uncertainties. Much of this uncertainty arises from the material models and their use in the numerical techniques used to solve plasticity problems. Experimental measurements of actual plastic strains would allow the validity of these models and solutions to be tested. This memorandum describes how experimental plastic residual strain measurements were used to verify the results of a thermally induced plastic fatigue failure analysis of a space shuttle main engine fuel pump component.
A Radiation-Triggered Surveillance System for UF6 Cylinder Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, Michael M.; Myjak, Mitchell J.
This report provides background information and representative scenarios for testing a prototype radiation-triggered surveillance system at an operating facility that handles uranium hexafluoride (UF6) cylinders. The safeguards objective is to trigger cameras using radiation, or radiation and motion, rather than motion alone, to reduce significantly the number of image files generated by a motion-triggered system. The authors recommend the use of radiation-triggered surveillance at all facilities where cylinder paths are heavily traversed by personnel. The International Atomic Energy Agency (IAEA) has begun using surveillance cameras in the feed and withdrawal areas of gas centrifuge enrichment plants (GCEPs). The cameras generate imagery using elapsed time or motion, but this creates problems in areas occupied 24/7 by personnel. Either motion- or interval-based triggering generates thousands of review files over the course of a month. Since inspectors must review the files to verify operator material flow declarations, a plethora of files significantly extends the review process. The primary advantage of radiation-triggered surveillance is the opportunity to obtain full-time cylinder throughput verification versus what presently amounts to part-time verification. Cost savings should be substantial, as the IAEA presently uses frequent unannounced inspections to verify cylinder-throughput declarations. The use of radiation-triggered surveillance allows the IAEA to implement less frequent unannounced inspections for the purpose of flow verification, but its principal advantage is significantly shorter and more effective inspector video reviews.
Near-real-time Estimation and Forecast of Total Precipitable Water in Europe
NASA Astrophysics Data System (ADS)
Bartholy, J.; Kern, A.; Barcza, Z.; Pongracz, R.; Ihasz, I.; Kovacs, R.; Ferencz, C.
2013-12-01
Information about the amount and spatial distribution of atmospheric water vapor (or total precipitable water) is essential for understanding weather and the environment, including the greenhouse effect, the climate system with its feedbacks, and the hydrological cycle. Numerical weather prediction (NWP) models need accurate estimates of water vapor content to provide realistic forecasts, including the representation of clouds and precipitation. In the present study we introduce our research activity on the estimation and forecasting of atmospheric water vapor in Central Europe using both observations and models. The Eötvös Loránd University (Hungary) has operated a polar orbiting satellite receiving station in Budapest since 2002. This station receives Earth observation data from polar orbiting satellites, including the MODerate resolution Imaging Spectroradiometer (MODIS) Direct Broadcast (DB) data stream from the satellites Terra and Aqua. The received DB MODIS data are automatically processed using freely distributed software packages. Using the IMAPP Level 2 software, total precipitable water is calculated operationally using two different methods. The quality of the TPW estimates is a crucial question for further application of the results, and thus validation of the remotely sensed total precipitable water fields is presented using radiosonde data. In a current research project in Hungary we aim to compare different estimates of atmospheric water vapor content. Within the frame of the project we use an NWP model (DBCRAS; Direct Broadcast CIMSS Regional Assimilation System numerical weather prediction software developed by the University of Wisconsin, Madison) to forecast TPW. DBCRAS uses near-real-time Level 2 products from the MODIS data processing chain. From the wide range of derived Level 2 products, the MODIS TPW parameter found within the so-called mod07 results (Atmospheric Profiles Product) and the cloud top pressure and cloud effective emissivity parameters from the so-called mod06 results (Cloud Product) are assimilated twice a day (at 00 and 12 UTC) by DBCRAS. DBCRAS creates 72-hour weather forecasts at 48 km horizontal resolution. DBCRAS has been operational at the university since 2009, which means that by now sufficient data are available for the verification of the model. In the present study, verification results for the DBCRAS total precipitable water forecasts are presented based on analysis data from the European Centre for Medium-Range Weather Forecasts (ECMWF). Numerical indices are calculated to quantify the performance of DBCRAS. During a limited time period DBCRAS was also run without assimilating MODIS products, which makes it possible to quantify the effect of assimilating MODIS physical products on the quality of the forecasts. For this limited time period, verification indices are compared to decide whether MODIS data improve forecast quality or not.
Numerical Simulation Applications in the Design of EGS Collab Experiment 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, Henry; White, Mark D.; Fu, Pengcheng
The United States Department of Energy, Geothermal Technologies Office (GTO) is funding a collaborative investigation of enhanced geothermal systems (EGS) processes at the meso-scale. This study, referred to as the EGS Collab project, is a unique opportunity for scientists and engineers to investigate the creation of fracture networks and circulation of fluids across those networks under in-situ stress conditions. The EGS Collab project is envisioned to comprise three experiments and the site for the first experiment is on the 4850 Level (4,850 feet below ground surface) in phyllite of the Precambrian Poorman formation, at the Sanford Underground Research Facility, located at the former Homestake Gold Mine, in Lead, South Dakota. Principal objectives of the project are to develop a number of intermediate-scale field sites and to conduct well-controlled in situ experiments focused on rock fracture behavior and permeability enhancement. Data generated during these experiments will be compared against predictions of a suite of computer codes specifically designed to solve problems involving coupled thermal, hydrological, geomechanical, and geochemical processes. Comparisons between experimental and numerical simulation results will provide code developers with direction for improvements and verification of process models, build confidence in the suite of available numerical tools, and ultimately identify critical future development needs for the geothermal modeling community. Moreover, conducting thorough comparisons of models, modelling approaches, measurement approaches and measured data, via the EGS Collab project, will serve to identify techniques that are most likely to succeed at the Frontier Observatory for Research in Geothermal Energy (FORGE), the GTO's flagship EGS research effort. As noted, outcomes from the EGS Collab project experiments will serve as benchmarks for computer code verification, but numerical simulation additionally plays an essential role in designing these meso-scale experiments. This paper describes specific numerical simulations supporting the design of Experiment 1, a field test involving hydraulic stimulation of two fractures from notched sections of the injection borehole and fluid circulation between sub-horizontal injection and production boreholes in each fracture individually and collectively, including the circulation of chilled water. Whereas the mine drift allows for accurate and close placement of monitoring instrumentation to the developed fractures, active ventilation in the drift cooled the rock mass within the experimental volume. Numerical simulations were executed to predict seismic events and magnitudes during stimulation, initial fracture orientations for smooth horizontal wellbores, pressure requirements for fracture initiation from notched wellbores, fracture propagation during stimulation between the injection and production boreholes, tracer travel times between the injection and production boreholes, produced fluid temperatures with chilled water injections, pressure limits on fluid circulation to avoid fracture growth, temperature environment surrounding the 4850 Level drift, and fracture propagation within a stress field altered by drift excavation, ventilation cooling, and dewatering.
The SCEC/USGS dynamic earthquake rupture code verification exercise
Harris, R.A.; Barall, M.; Archuleta, R.; Dunham, E.; Aagaard, Brad T.; Ampuero, J.-P.; Bhat, H.; Cruz-Atienza, Victor M.; Dalguer, L.; Dawson, P.; Day, S.; Duan, B.; Ely, G.; Kaneko, Y.; Kase, Y.; Lapusta, N.; Liu, Yajing; Ma, S.; Oglesby, D.; Olsen, K.; Pitarka, A.; Song, S.; Templeton, E.
2009-01-01
Numerical simulations of earthquake rupture dynamics are now common, yet it has been difficult to test the validity of these simulations because there have been few field observations and no analytic solutions with which to compare the results. This paper describes the Southern California Earthquake Center/U.S. Geological Survey (SCEC/USGS) Dynamic Earthquake Rupture Code Verification Exercise, where codes that simulate spontaneous rupture dynamics in three dimensions are evaluated and the results produced by these codes are compared using Web-based tools. This is the first time that a broad and rigorous examination of numerous spontaneous rupture codes has been performed, a significant advance in this science. The automated process developed to attain this achievement provides for a future where testing of codes is easily accomplished. Scientists who use computer simulations to understand earthquakes utilize a range of techniques. Most of these assume that earthquakes are caused by slip at depth on faults in the Earth, but hereafter the strategies vary. Among the methods used in earthquake mechanics studies are kinematic approaches and dynamic approaches. The kinematic approach uses a computer code that prescribes the spatial and temporal evolution of slip on the causative fault (or faults). These types of simulations are very helpful, especially since they can be used in seismic data inversions to relate the ground motions recorded in the field to slip on the fault(s) at depth. However, these kinematic solutions generally provide no insight into the physics driving the fault slip or information about why the involved fault(s) slipped that much (or that little). In other words, these kinematic solutions may lack information about the physical dynamics of earthquake rupture that will be most helpful in forecasting future events. To help address this issue, some researchers use computer codes to numerically simulate earthquakes and construct dynamic, spontaneous rupture (hereafter called “spontaneous rupture”) solutions. For these types of numerical simulations, rather than prescribing the slip function at each location on the fault(s), just the friction constitutive properties and initial stress conditions are prescribed. The subsequent stresses and fault slip spontaneously evolve over time as part of the elasto-dynamic solution. Therefore, spontaneous rupture computer simulations of earthquakes allow us to include everything that we know, or think that we know, about earthquake dynamics and to test these ideas against earthquake observations.
NASA Astrophysics Data System (ADS)
Velioglu Sogut, Deniz; Yalciner, Ahmet Cevdet
2018-06-01
Field observations provide valuable data regarding nearshore tsunami impact, yet only in inundation areas where tsunami waves have already flooded. Therefore, tsunami modeling is essential to understand tsunami behavior and prepare for tsunami inundation. It is necessary that all numerical models used in tsunami emergency planning be subject to benchmark tests for validation and verification. This study focuses on two numerical codes, NAMI DANCE and FLOW-3D®, for validation and performance comparison. NAMI DANCE is an in-house tsunami numerical model developed by the Ocean Engineering Research Center of Middle East Technical University, Turkey, and the Laboratory of Special Research Bureau for Automation of Marine Research, Russia. FLOW-3D® is a general-purpose computational fluid dynamics software package developed by scientists who pioneered the design of the Volume-of-Fluid technique. The codes are validated and their performances are compared via analytical, experimental, and field benchmark problems, which are documented in the "Proceedings and Results of the 2011 National Tsunami Hazard Mitigation Program (NTHMP) Model Benchmarking Workshop" and the "Proceedings and Results of the NTHMP 2015 Tsunami Current Modeling Workshop". The variations between the numerical solutions of these two models are evaluated through statistical error analysis.
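One typical ingredient of such a statistical error analysis is a normalized root-mean-square error between a modeled and a reference signal; the metric below is illustrative and not necessarily the exact set used in the benchmarking workshops.

```python
import numpy as np

def nrmse(model, ref):
    """Root-mean-square error normalized by the reference data range."""
    model, ref = np.asarray(model, float), np.asarray(ref, float)
    return np.sqrt(np.mean((model - ref)**2)) / (np.max(ref) - np.min(ref))
```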
NASA Astrophysics Data System (ADS)
Şahan, Mehmet Fatih
2017-11-01
In this paper, the viscoelastic damped response of cross-ply laminated shallow spherical shells is investigated numerically in a transformed Laplace space. In the proposed approach, the governing differential equations of a cross-ply laminated shallow spherical shell are derived using the dynamic version of the principle of virtual displacements. Following this, the Laplace transform is employed in the transient analysis of the viscoelastic laminated shell problem. Damping can also be incorporated with ease in the transformed domain. The transformed time-independent equations in the spatial coordinate are solved numerically by Gauss elimination. Numerical inverse transformation of the results into the real domain is performed with the modified Durbin transform method. Verification of the presented method is carried out by comparing the results with those obtained by the Newmark method and the ANSYS finite element software. Furthermore, the developed solution approach is applied to problems with several impulsive loads. The novelty of the present study lies in the fact that a combination of the Navier method and the Laplace transform is employed in the analysis of cross-ply laminated shallow spherical viscoelastic shells. The numerical sample results have proved that the presented method constitutes a highly accurate and efficient solution, which can be easily applied to laminated viscoelastic shell problems.
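For orientation, a basic (unmodified) Durbin-type Fourier-series inversion can be sketched as follows; the contour parameter aT ≈ 6 and the truncation N are conventional placeholder choices, and the modified variant used in the paper refines this scheme.

```python
import numpy as np

def durbin_invert(F, t, T=10.0, aT=6.0, N=2000):
    """Basic Durbin/Fourier-series Laplace inversion, valid for 0 < t < 2T.
    F is the Laplace transform as a vectorized callable of complex s."""
    a = aT / T
    k = np.arange(1, N + 1)
    Fk = F(a + 1j * k * np.pi / T)          # transform sampled along Re(s) = a
    t = np.atleast_1d(np.asarray(t, float))
    series = np.real(Fk[None, :] * np.exp(1j * np.pi * np.outer(t, k) / T)).sum(1)
    return np.exp(a * t) / T * (0.5 * np.real(F(a + 0j)) + series)

# check against a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
print(durbin_invert(lambda s: 1.0 / (s + 1.0), [0.5, 1.0]))  # ~[0.6065, 0.3679]
```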
Neutron Source Facility Training Simulator Based on EPICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Young Soo; Wei, Thomas Y.; Vilim, Richard B.
A plant operator training simulator has been developed for training plant operators as well as for design verification of the plant control system (PCS) and plant protection system (PPS) for the Kharkov Institute of Physics and Technology Neutron Source Facility. The simulator provides the operator interface for the whole plant, including the sub-critical assembly coolant loop, target coolant loop, secondary coolant loop, and other facility systems. The operator interface is implemented based on the Experimental Physics and Industrial Control System (EPICS), which is a comprehensive software development platform for distributed control systems. Since its development at Argonne National Laboratory, it has been widely adopted in the experimental physics community, e.g., for control of accelerator facilities. This work is the first implementation for a nuclear facility. The main parts of the operator interface are the plant control panel and the plant protection panel. The development involved implementation of the process variable database, sequence logic, and graphical user interface (GUI) for the PCS and PPS utilizing EPICS and related software tools, e.g., the sequencer for sequence logic and Control System Studio (CSS-BOY) for the graphical user interface. For functional verification of the PCS and PPS, a plant model is interfaced, which is a physics-based model of the facility coolant loops implemented as a numerical computer code. The training simulator was tested and demonstrated its effectiveness in various plant operation sequences, e.g., start-up, shut-down, maintenance, and refueling. It was also tested for verification of the plant protection system under various trip conditions.
VEG-01: Veggie Hardware Verification Testing
NASA Technical Reports Server (NTRS)
Massa, Gioia; Newsham, Gary; Hummerick, Mary; Morrow, Robert; Wheeler, Raymond
2013-01-01
The Veggie plant/vegetable production system is scheduled to fly on ISS at the end of 2013. Since much of the technology associated with Veggie has not been previously tested in microgravity, a hardware validation flight was initiated. This test will allow data to be collected about Veggie hardware functionality on ISS, allow crew interactions to be vetted for future improvements, validate the ability of the hardware to grow and sustain plants, and collect data that will be helpful to future Veggie investigators as they develop their payloads. Additionally, food safety data on the lettuce plants grown will be collected to help support the development of a pathway for the crew to safely consume produce grown on orbit. Significant background research has been performed on the Veggie plant growth system, with early tests focusing on the development of the rooting pillow concept and the selection of fertilizer, rooting medium, and plant species. More recent testing has been conducted to integrate the pillow concept into the Veggie hardware and to ensure that adequate water is provided throughout the growth cycle. Seed sanitation protocols have been established for flight, and hardware sanitation between experiments has been studied. Methods for shipping and storage of rooting pillows and the development of crew procedures and crew training videos for plant activities on orbit have been established. Science verification testing was conducted, and lettuce plants were successfully grown in prototype Veggie hardware; microbial samples were taken, and plants were harvested, frozen, stored, and later analyzed for microbial growth, nutrients, and ATP levels. An additional verification test, prior to the final payload verification testing, is desired to demonstrate similar growth in the flight hardware and also to test a second set of pillows containing zinnia seeds. Issues with root mat water supply are being resolved, with final testing and flight scheduled for later in 2013.
Scenarios for exercising technical approaches to verified nuclear reductions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doyle, James
2010-01-01
Presidents Obama and Medvedev in April 2009 committed to a continuing process of step-by-step nuclear arms reductions beyond the new START treaty that was signed April 8, 2010, and to the eventual goal of a world free of nuclear weapons. In addition, the US Nuclear Posture Review released April 6, 2010 commits the US to initiate a comprehensive national research and development program to support continued progress toward a world free of nuclear weapons, including expanded work on verification technologies and the development of transparency measures. It is impossible to predict the specific directions that US-RU nuclear arms reductions will take over the next 5-10 years. Additional bilateral treaties could be reached requiring effective verification, as indicated by statements made by the Obama administration. There could also be transparency agreements or other initiatives (unilateral, bilateral, or multilateral) that require monitoring with a standard of verification lower than formal arms control, but still needing to establish confidence for domestic, bilateral, and multilateral audiences that declared actions are implemented. The US Nuclear Posture Review and other statements give some indication of the kinds of actions and declarations that may need to be confirmed in a bilateral or multilateral setting. Several new elements of the nuclear arsenals could be directly limited. For example, it is likely that both strategic and nonstrategic nuclear warheads (deployed and in storage), warhead components, and aggregate stocks of such items could be accountable under a future treaty or transparency agreement. In addition, new initiatives or agreements may require the verified dismantlement of a certain number of nuclear warheads over a specified time period. Eventually, procedures for confirming the elimination of nuclear warheads, components, and fissile materials from military stocks will need to be established. This paper is intended to provide useful background information for establishing a conceptual approach to a five-year technical program plan for research and development of nuclear arms reduction verification and transparency technologies and procedures.
Critical Parameters of the Initiation Zone for Spontaneous Dynamic Rupture Propagation
NASA Astrophysics Data System (ADS)
Galis, M.; Pelties, C.; Kristek, J.; Moczo, P.; Ampuero, J. P.; Mai, P. M.
2014-12-01
Numerical simulations of rupture propagation are used to study both earthquake source physics and earthquake ground motion. Under linear slip-weakening friction, artificial procedures are needed to initiate a self-sustained rupture. The concept of an overstressed asperity is often applied, in which the asperity is characterized by its size, shape and overstress. The physical properties of the initiation zone may have significant impact on the resulting dynamic rupture propagation. A trial-and-error approach is often necessary for successful initiation because 2D and 3D theoretical criteria for estimating the critical size of the initiation zone do not provide general rules for designing 3D numerical simulations. Therefore, it is desirable to define guidelines for efficient initiation with minimal artificial effects on rupture propagation. We perform an extensive parameter study using numerical simulations of 3D dynamic rupture propagation assuming a planar fault to examine the critical size of square, circular and elliptical initiation zones as a function of asperity overstress and background stress. For a fixed overstress, we discover that the area of the initiation zone is more important for the nucleation process than its shape. Comparing our numerical results with published theoretical estimates, we find that the estimates by Uenishi & Rice (2004) are applicable to configurations with low background stress and small overstress. None of the published estimates are consistent with numerical results for configurations with high background stress. We therefore derive new equations to estimate the initiation zone size in environments with high background stress. Our results provide guidelines for defining the size of the initiation zone and overstress with minimal effects on the subsequent spontaneous rupture propagation.
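For orientation, estimates of the Uenishi & Rice type referenced above scale the critical nucleation half-length as an effective shear modulus over the slip-weakening rate; the sketch below uses the commonly quoted proportionality constant of about 0.579 and purely illustrative parameter values.

```python
def critical_half_length(mu_star, tau_s, tau_d, d_c, beta=0.579):
    """Uenishi & Rice style estimate a_c = beta * mu* / W, where
    W = (tau_s - tau_d) / d_c is the slip-weakening rate.
    All inputs below are illustrative, not values from this study."""
    W = (tau_s - tau_d) / d_c
    return beta * mu_star / W

# e.g. mu* = 30 GPa, 5 MPa strength drop over Dc = 0.4 m
print(critical_half_length(30e9, 10e6, 5e6, 0.4), "m")
```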
A sharp image or a sharp knife: norms for the modality-exclusivity of 774 concept-property items.
van Dantzig, Saskia; Cowell, Rosemary A; Zeelenberg, René; Pecher, Diane
2011-03-01
According to recent embodied cognition theories, mental concepts are represented by modality-specific sensory-motor systems. Much of the evidence for modality-specificity in conceptual processing comes from the property-verification task. When applying this and other tasks, it is important to select items based on their modality-exclusivity. We collected modality ratings for a set of 387 properties, each of which was paired with two different concepts, yielding a total of 774 concept-property items. For each item, participants rated the degree to which the property could be experienced through five perceptual modalities (vision, audition, touch, smell, and taste). Based on these ratings, we computed a measure of modality exclusivity, the degree to which a property is perceived exclusively through one sensory modality. In this paper, we briefly sketch the theoretical background of conceptual knowledge, discuss the use of the property-verification task in cognitive research, provide our norms and statistics, and validate the norms in a memory experiment. We conclude that our norms are important for researchers studying modality-specific effects in conceptual processing.
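The exclusivity measure lends itself to a one-line computation; the sketch below uses the common range-over-sum operationalization (the ratings are invented), which yields 0 for a fully multimodal property and 1 for a fully unimodal one.

```python
import numpy as np

def modality_exclusivity(ratings):
    """ratings: mean rating per modality [vision, audition, touch, smell, taste].
    Range-over-sum operationalization: 0 = fully multimodal, 1 = fully unimodal."""
    r = np.asarray(ratings, float)
    return (r.max() - r.min()) / r.sum()

print(modality_exclusivity([4.8, 0.3, 1.1, 0.2, 0.1]))  # strongly visual property
```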
Cross-verification of the GENE and XGC codes in preparation for their coupling
NASA Astrophysics Data System (ADS)
Jenko, Frank; Merlo, Gabriele; Bhattacharjee, Amitava; Chang, C. S.; Dominski, Julien; Ku, Seunghoe; Parker, Scott; Lanti, Emmanuel
2017-10-01
A high-fidelity Whole Device Model (WDM) of a magnetically confined plasma is a crucial tool for planning and optimizing the design of future fusion reactors, including ITER. Aiming at building such a tool, in the framework of the Exascale Computing Project (ECP) the two existing gyrokinetic codes GENE (Eulerian delta-f) and XGC (PIC full-f) will be coupled, thus enabling first-principles kinetic WDM simulations. In preparation for this ultimate goal, a benchmark between the two codes is carried out, looking at ITG modes in the adiabatic electron limit. This verification exercise is also joined by the global Lagrangian PIC code ORB5. Linear and nonlinear comparisons have been carried out, neglecting for simplicity collisions and sources. Very good agreement is recovered on the frequency, growth rate, and mode structure of linear modes. A similarly excellent agreement is also observed when comparing the evolution of the heat flux and of the background temperature profile during nonlinear simulations. Work supported by the US DOE under the Exascale Computing Project (17-SC-20-SC).
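In linear benchmarks of this kind, growth rates are typically extracted by fitting the logarithm of a mode amplitude over the exponential-growth window; a minimal sketch with synthetic data follows.

```python
import numpy as np

def growth_rate(t, amplitude):
    """Slope of log|amplitude(t)| over a window of clean exponential growth."""
    return np.polyfit(t, np.log(np.abs(amplitude)), 1)[0]

t = np.linspace(0.0, 10.0, 200)
envelope = 1e-6 * np.exp(0.35 * t)   # synthetic linearly-growing mode envelope
print(growth_rate(t, envelope))      # recovers ~0.35
```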
Meteor localization via statistical analysis of spatially temporal fluctuations in image sequences
NASA Astrophysics Data System (ADS)
Kukal, Jaromír.; Klimt, Martin; Šihlík, Jan; Fliegel, Karel
2015-09-01
Meteor detection is one of the most important procedures in astronomical imaging. A meteor path in the Earth's atmosphere is traditionally reconstructed from a double-station video observation system generating 2D image sequences. However, atmospheric turbulence and other factors cause spatio-temporal fluctuations of the image background, which makes the localization of the meteor path more difficult. Our approach is based on nonlinear preprocessing of the image intensity using the Box-Cox transform, with the logarithmic transform as its particular case. The transformed image sequences are then differentiated along the discrete coordinates to obtain a statistical description of the sky background fluctuations, which can be modeled by a multivariate normal distribution. After verification and hypothesis testing, we use the statistical model for outlier detection. While isolated outlier points are ignored, a compact cluster of outliers indicates the presence of meteoroids after ignition.
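A compressed version of this pipeline is sketched below, with a univariate Gaussian outlier rule standing in for the paper's multivariate normal model and synthetic frames in place of real observations.

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform; lam -> 0 recovers the logarithmic transform."""
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

rng = np.random.default_rng(2)
frames = rng.gamma(5.0, 10.0, (50, 64, 64))   # synthetic positive image sequence
z = boxcox(frames, 0.0)                       # variance-stabilizing preprocessing
d = np.diff(z, axis=0)                        # temporal background fluctuations
mu, sigma = d.mean(), d.std()
outliers = np.abs(d - mu) > 4 * sigma         # pointwise outlier mask
# a compact cluster of outliers within one frame flags a meteor candidate;
# isolated outliers are expected from noise alone and are ignored
counts = outliers.reshape(outliers.shape[0], -1).sum(1)
print("candidate frames:", np.where(counts > 20)[0])
```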
NASA Astrophysics Data System (ADS)
Ohno, M.; Kawano, T.; Edahiro, I.; Shirakawa, H.; Ohashi, N.; Okada, C.; Habata, S.; Katsuta, J.; Tanaka, Y.; Takahashi, H.; Mizuno, T.; Fukazawa, Y.; Murakami, H.; Kobayashi, S.; Miyake, K.; Ono, K.; Kato, Y.; Furuta, Y.; Murota, Y.; Okuda, K.; Wada, Y.; Nakazawa, K.; Mimura, T.; Kataoka, J.; Ichinohe, Y.; Uchida, Y.; Katsuragawa, M.; Yoneda, H.; Sato, G.; Sato, R.; Kawaharada, M.; Harayama, A.; Odaka, H.; Hayashi, K.; Ohta, M.; Watanabe, S.; Kokubun, M.; Takahashi, T.; Takeda, S.; Kinoshita, M.; Yamaoka, K.; Tajima, H.; Yatsu, Y.; Uchiyama, H.; Saito, S.; Yuasa, T.; Makishima, K.; ASTRO-H HXI/SGD Team
2016-09-01
The Hard X-ray Imager (HXI) and Soft Gamma-ray Detector (SGD) onboard ASTRO-H demonstrate high sensitivity to hard X-rays (5-80 keV) and soft gamma-rays (60-600 keV), respectively. To reduce the background, both instruments are actively shielded by large, thick bismuth germanate (BGO) scintillators. We have developed the signal processing system of the avalanche photodiodes in the BGO active shields and have demonstrated its effectiveness after assembly in the flight models of the HXI/SGD sensors and after integration into the satellite. The energy threshold achieved is about 150 keV, and the anti-coincidence efficiency for cosmic-ray events is almost 100%. Installed in the BGO active shield, the developed signal processing system successfully reduces the room background level of the main detector.
The MiniCLEAN Dark Matter Experiment
NASA Astrophysics Data System (ADS)
Schnee, Richard; Deap/Clean Collaboration
2011-10-01
The MiniCLEAN dark matter experiment exploits a single-phase liquid argon (LAr) detector instrumented with photomultiplier tubes submerged in the cryogen, with nearly 4π coverage of a 500 kg target (150 kg fiducial) mass. The high light yield and the large difference between the singlet and triplet scintillation time profiles in LAr provide an effective defense against radioactive backgrounds through pulse-shape discrimination and event position reconstruction. The detector is also designed for a liquid neon target which, in the event of a positive signal in LAr, will enable an independent verification of backgrounds and provide a unique test of the expected A² dependence of the WIMP interaction rate. The conceptually simple design can be scaled to target masses in excess of 10 tons in a relatively straightforward and economical manner. The experimental technique and current status of MiniCLEAN are summarized.
The SeaHorn Verification Framework
NASA Technical Reports Server (NTRS)
Gurfinkel, Arie; Kahsai, Temesghen; Komuravelli, Anvesh; Navas, Jorge A.
2015-01-01
In this paper, we present SeaHorn, a software verification framework. The key distinguishing feature of SeaHorn is its modular design, which separates the concerns of the syntax of the programming language, its operational semantics, and the verification semantics. SeaHorn encompasses several novelties: it (a) encodes verification conditions using an efficient yet precise inter-procedural technique, (b) provides flexibility in the verification semantics to allow different levels of precision, (c) leverages the state of the art in software model checking and abstract interpretation for verification, and (d) uses Horn clauses as an intermediate language to represent verification conditions, which simplifies interfacing with multiple verification tools based on Horn clauses. SeaHorn provides users with a powerful verification tool and researchers with an extensible and customizable framework for experimenting with new software verification techniques. The effectiveness and scalability of SeaHorn are demonstrated by an extensive experimental evaluation using benchmarks from SV-COMP 2015 and real avionics code.
"Gaining Power through Education": Experiences of Honduran Students from High Poverty Backgrounds
ERIC Educational Resources Information Center
Mather, Peter C.; Zempter, Christy; Ngumbi, Elizabeth; Nakama, Yuki; Manley, David; Cox, Haley
2017-01-01
This is a study of students from high-poverty backgrounds attending universities in Honduras. Based on a series of individual and focus group interviews, the researchers found students from high-poverty backgrounds face numerous practical challenges in persisting in higher education. Despite these challenges, participants succeeded due to a…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.D.; Kornreich, D.E.
Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation, as modified in the second-year renewal application, includes the following three primary tasks. Task 1 on two-dimensional neutron transport is divided into (a) the single-medium searchlight problem (SLP) and (b) the two-adjacent-half-space SLP. Task 2 on three-dimensional neutron transport covers (a) a point source in arbitrary geometry, (b) the single-medium SLP, and (c) the two-adjacent-half-space SLP. Task 3 on code verification includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, extensions of the benchmarks to multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rider, William J.; Witkowski, Walter R.; Mousseau, Vincent Andrew
2016-04-13
The importance of credible, trustworthy numerical simulations is obvious, especially when the results are used for making high-consequence decisions. Determining the credibility of such numerical predictions is much more difficult and requires a systematic approach to assessing predictive capability, associated uncertainties, and overall confidence in the computational simulation process for the intended use of the model. This process begins with an evaluation of the computational modeling of the identified, important physics of the simulation for its intended use, commonly done through a Phenomena Identification Ranking Table (PIRT). Then an assessment of the evidence basis supporting the ability to computationally simulate these physics can be performed using frameworks such as the Predictive Capability Maturity Model (PCMM). Several critical activities follow in the areas of code and solution verification, validation, and uncertainty quantification, which are described in detail in the following sections. Here we introduce the subject matter for general applications, but specifics are given for the failure prediction project. In addition, the first task that must be completed in the verification and validation procedure is a credibility assessment to fully understand the requirements and limitations of the current computational simulation capability for the specific intended use. The PIRT and PCMM are tools used at Sandia National Laboratories (SNL) to provide a consistent manner of performing such an assessment. Ideally, all stakeholders should be represented and contribute to an accurate credibility assessment. PIRTs and PCMMs are both described in brief detail below, and the resulting assessments for an example project are given.
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2011-01-01
The purpose of this study was to assess calculation methods for quantifying the relationships among bearing geometry, material properties, load, deflection, stiffness, and stress. The scope of the work was limited to two-dimensional modeling of straight cylindrical roller bearings. Preparations for studies of the dynamic response of bearings with damaged surfaces motivated this work. Studies were selected to exercise and build confidence in the numerical tools. Three calculation methods were used in this work: two were numerical solutions of the Hertz contact approach, and the third was a combined finite element and surface integral method. Example calculations were done for a single roller loaded between an inner and outer raceway for code verification. Next, a bearing with 13 rollers and all-steel construction was used as an example for additional code verification, including an assessment of the leading order of accuracy of the finite element and surface integral method. Results from that study show that the method is at least first-order accurate. Those results also show that contact grid refinement has a more significant influence on precision than finite element grid refinement. To explore the influence of material properties, the 13-roller bearing was modeled as made from Nitinol 60, a material with very different properties from steel that shows some potential for bearing applications. The codes were exercised to compare contact areas and stress levels for steel and Nitinol 60 bearings operating at equivalent power density. As a step toward modeling the dynamic response of bearings having surface damage, static analyses were completed to simulate a bearing with a spall or similar damage.
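For orientation, the Hertz line-contact relations underlying the first two calculation methods can be sketched as follows; this is a generic textbook formulation with illustrative numbers, not the specific geometries or material data of the report.

import math

def hertz_line_contact(w, r1, r2, e1, nu1, e2, nu2):
    """Hertz line contact: half-width b and peak pressure p0.
    w is the load per unit roller length [N/m]; pass math.inf for a flat raceway."""
    e_star = 1.0 / ((1 - nu1**2) / e1 + (1 - nu2**2) / e2)   # effective modulus
    r_eff = 1.0 / (1.0 / r1 + (0.0 if math.isinf(r2) else 1.0 / r2))
    b = math.sqrt(4.0 * w * r_eff / (math.pi * e_star))      # contact half-width
    p0 = 2.0 * w / (math.pi * b)                             # peak contact pressure
    return b, p0

# steel roller (R = 5 mm) on a flat raceway under a 100 kN/m line load
b, p0 = hertz_line_contact(1.0e5, 5e-3, math.inf, 210e9, 0.3, 210e9, 0.3)
print(f"b = {b*1e6:.1f} um, p0 = {p0/1e9:.2f} GPa")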
HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCann, R.A.; Lowery, P.S.
1987-10-01
HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.
Validation of the FEA of a deep drawing process with additional force transmission
NASA Astrophysics Data System (ADS)
Behrens, B.-A.; Bouguecha, A.; Bonk, C.; Grbic, N.; Vucetic, M.
2017-10-01
In order to meet automotive industry requirements such as lower CO2 emissions, which translate into reduced vehicle mass in the car body, chassis, and powertrain, continuous innovation and further development of existing production processes are required. In sheet metal forming processes, the process limits and component characteristics are defined by the process-specific loads. When the load limits are exceeded, material failure occurs; this can be avoided by an additional force transmission activated in the deep drawing process before the process limit is reached. This contribution deals with experimental investigations of a forming process with additional force transmission with regard to the extension of the process limits. Based on FEA, a tool system is designed and developed by IFUM. For this purpose, the steel material HCT600 is analyzed numerically. Within the experimental investigations, the deep drawing processes with and without the additional force transmission are carried out, and the produced rectangular cups are compared. Subsequently, the identical deep drawing processes are investigated numerically. Thereby, the values of the punch reaction force and displacement are estimated and compared with the experimental results; the validation of the material model at the process scale is thus successfully carried out. For further quantitative verification of the FEA results, the experimentally determined geometry of the rectangular cup is measured optically with the ATOS system from GOM mbH and digitally compared using the external software Geomagic® Qualify™. The goal of this paper is the verification of the transferability of the FEA model for a conventional deep drawing process to a deep drawing process with additional force transmission with a counter punch.
A New Objective Technique for Verifying Mesoscale Numerical Weather Prediction Models
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; Manobianco, John; Lane, John E.; Immer, Christopher D.
2003-01-01
This report presents a new objective technique to verify predictions of the sea-breeze phenomenon over east-central Florida by the Regional Atmospheric Modeling System (RAMS) mesoscale numerical weather prediction (NWP) model. The Contour Error Map (CEM) technique identifies sea-breeze transition times in objectively-analyzed grids of observed and forecast wind, verifies the forecast sea-breeze transition times against the observed times, and computes the mean post-sea-breeze wind direction and speed to compare the observed and forecast winds behind the sea-breeze front. The CEM technique is superior to traditional objective verification techniques and previously-used subjective verification methodologies because it is automated, requiring little manual intervention; it accounts for both spatial and temporal scales and variations; it accurately identifies and verifies the sea-breeze transition times; and it provides verification contour maps and simple statistical parameters for easy interpretation. The CEM uses a parallel lowpass boxcar filter and a high-order bandpass filter to identify the sea-breeze transition times in the observed and model grid points. Once the transition times are identified, the CEM fits a Gaussian histogram function to the actual histogram of transition time differences between the model and observations. The fitted parameters of the Gaussian function subsequently explain the timing bias and variance of the timing differences across the valid comparison domain. Once the transition times are all identified at each grid point, the CEM computes the mean wind direction and speed during the remainder of the day for all times and grid points after the sea-breeze transition time. The CEM technique performed quite well when compared to independent meteorological assessments of the sea-breeze transition times and results from a previously published subjective evaluation. The algorithm correctly identified a forecast or observed sea-breeze occurrence or absence 93% of the time during the two-month evaluation period from July and August 2000. Nearly all failures in the CEM were the result of complex precipitation features (observed or forecast) that contaminated the wind field, resulting in a false identification of a sea-breeze transition. A qualitative comparison between the CEM timing errors and the subjectively determined observed and forecast transition times indicates that the algorithm performed very well overall. Most discrepancies between the CEM results and the subjective analysis were again caused by observed or forecast areas of precipitation that led to complex wind patterns. The CEM also failed on a day when the observed sea-breeze transition affected only a very small portion of the verification domain. Based on the results of the CEM, the RAMS tended to predict the onset and movement of the sea-breeze transition too early and/or too quickly. The domain-wide timing biases provided by the CEM indicated an early bias on 30 out of 37 days when both an observed and forecast sea breeze occurred over the same portions of the analysis domain. These results are consistent with previous subjective verifications of the RAMS sea-breeze predictions. A comparison of the mean post-sea-breeze winds indicates that RAMS has a positive wind-speed bias for all days, which is also consistent with the early bias in the sea-breeze transition time, since the higher wind speeds resulted in a faster inland penetration of the sea breeze compared to reality.
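A minimal sketch of the Gaussian-histogram fit at the core of the CEM, assuming synthetic timing differences in minutes and scipy's curve_fit (illustrative stand-ins, not the report's data or code):

import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# forecast-minus-observed transition times; synthetic stand-in for CEM output
dt = np.random.default_rng(0).normal(loc=-20.0, scale=35.0, size=500)
counts, edges = np.histogram(dt, bins=25)
centers = 0.5 * (edges[:-1] + edges[1:])
(amp, mu, sigma), _ = curve_fit(gaussian, centers, counts,
                                p0=[counts.max(), dt.mean(), dt.std()])
print(f"timing bias = {mu:.1f} min, spread = {abs(sigma):.1f} min")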
GHG MITIGATION TECHNOLOGY PERFORMANCE EVALUATIONS UNDERWAY AT THE GHG TECHNOLOGY VERIFICATION CENTER
The paper outlines the verification approach and activities of the Greenhouse Gas (GHG) Technology Verification Center, one of 12 independent verification entities operating under the U.S. EPA-sponsored Environmental Technology Verification (ETV) program. (NOTE: The ETV program...
Empirical verification of evolutionary theories of aging.
Kyryakov, Pavlo; Gomez-Perez, Alejandra; Glebov, Anastasia; Asbah, Nimara; Bruno, Luigi; Meunier, Carolynne; Iouk, Tatiana; Titorenko, Vladimir I
2016-10-25
We recently selected 3 long-lived mutant strains of Saccharomyces cerevisiae by a lasting exposure to exogenous lithocholic acid. Each mutant strain can maintain the extended chronological lifespan after numerous passages in medium without lithocholic acid. In this study, we used these long-lived yeast mutants for empirical verification of evolutionary theories of aging. We provide evidence that the dominant polygenic trait extending longevity of each of these mutants 1) does not affect such key features of early-life fitness as the exponential growth rate, efficacy of post-exponential growth and fecundity; and 2) enhances such features of early-life fitness as susceptibility to chronic exogenous stresses, and the resistance to apoptotic and liponecrotic forms of programmed cell death. These findings validate evolutionary theories of programmed aging. We also demonstrate that under laboratory conditions that imitate the process of natural selection within an ecosystem, each of these long-lived mutant strains is forced out of the ecosystem by the parental wild-type strain exhibiting shorter lifespan. We therefore concluded that yeast cells have evolved some mechanisms for limiting their lifespan upon reaching a certain chronological age. These mechanisms drive the evolution of yeast longevity towards maintaining a finite yeast chronological lifespan within ecosystems.
NASA Astrophysics Data System (ADS)
Czirjak, Daniel
2017-04-01
Remote sensing platforms have consistently demonstrated the ability to detect, and in some cases identify, specific targets of interest, and photovoltaic solar panels are shown to have a unique spectral signature that is consistent across multiple manufacturers and construction methods. Solar panels are proven to be detectable in hyperspectral imagery using common statistical target detection methods such as the adaptive cosine estimator, and false alarms can be mitigated through the use of a spectral verification process that eliminates pixels lacking the key spectral features of the photovoltaic solar panel reflectance spectrum. The normalized solar panel index is described and is a key component of the false-alarm mitigation process. After spectral verification, these solar panel arrays are confirmed in openly available literal imagery and can be measured using numerous open-source algorithms and tools. The measurements allow for an assessment of overall solar power generation capacity using an equation that accounts for the solar insolation, the area of the solar panels, and the efficiency of the panels' conversion of solar energy to power. Using a known location with readily available information, the methods outlined in this paper estimate the power generation capabilities to within 6% of the rated power.
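The capacity estimate reduces to a product of insolation, panel area, and conversion efficiency. A minimal sketch with generic placeholder values follows; the paper's actual insolation model and site data are not reproduced here.

def solar_capacity_watts(panel_area_m2, insolation_w_m2=1000.0, efficiency=0.18):
    """Estimated generation capacity; the insolation and efficiency values
    here are generic assumptions, not those used in the paper."""
    return insolation_w_m2 * panel_area_m2 * efficiency

print(solar_capacity_watts(250.0))  # 250 m^2 of panels -> 45 kW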
Software Testing and Verification in Climate Model Development
NASA Technical Reports Server (NTRS)
Clune, Thomas L.; Rood, RIchard B.
2011-01-01
Over the past 30 years most climate models have grown from relatively simple representations of a few atmospheric processes to complex multi-disciplinary systems. Computer infrastructure over that period has gone from punch-card mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Existing verification processes for model implementations rely almost exclusively upon some combination of detailed analysis of output from full climate simulations and system-level regression tests. In addition to being quite costly in terms of developer time and computing resources, these testing methodologies are limited in the types of defects that can be detected, isolated, and diagnosed. Mitigating these weaknesses of coarse-grained testing with finer-grained "unit" tests has been perceived as cumbersome and counter-productive. In the commercial software sector, recent advances in tools and methodology have led to a renaissance for systematic fine-grained testing. We discuss the availability of analogous tools for scientific software and examine the benefits that similar testing methodologies could bring to climate modeling software. We describe the unique challenges faced when testing complex numerical algorithms and suggest techniques to minimize and/or eliminate the difficulties.
Beam Loss Monitoring for LHC Machine Protection
NASA Astrophysics Data System (ADS)
Holzer, Eva Barbara; Dehning, Bernd; Effinger, Ewald; Emery, Jonathan; Grishin, Viatcheslav; Hajdu, Csaba; Jackson, Stephen; Kurfuerst, Christoph; Marsili, Aurelien; Misiowiec, Marek; Nagel, Markus; Busto, Eduardo Nebot Del; Nordt, Annika; Roderick, Chris; Sapinski, Mariusz; Zamantzas, Christos
The energy stored in the nominal LHC beams is two times 362 MJ, 100 times the energy of the Tevatron. As little as 1 mJ/cm³ of deposited energy quenches a magnet at 7 TeV, and 1 J/cm³ causes magnet damage. The beam dumps are the only places to safely dispose of this beam. One of the key systems for machine protection is the beam loss monitoring (BLM) system. About 3600 ionization chambers are installed at likely or critical loss locations around the LHC ring. The losses are integrated in 12 time intervals ranging from 40 μs to 84 s and compared to threshold values defined in 32 energy ranges. A beam abort is requested when potentially dangerous losses are detected or when any of the numerous internal system validation tests fails. In addition, loss data are used for machine set-up and operational verifications. The collimation system, for example, uses the loss data for set-up and regular performance verification. Commissioning and operational experience of the BLM system are presented: the machine protection functionality of the BLM system has been fully reliable, and the LHC availability has not been compromised by false beam aborts.
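Schematically, the protection logic maintains running sums of the measured losses over several window lengths and compares each sum against an energy-dependent threshold. The sketch below is a simplified illustration with placeholder window lengths and threshold values, not the real LHC tables or firmware.

import numpy as np

WINDOWS = [1, 2, 8, 64, 2048]      # running-sum lengths in 40 us ticks (subset of the 12)
N_ENERGY_BINS = 32                 # thresholds are defined per energy range

# placeholder threshold table: thresholds[window][energy_bin]
thresholds = np.full((len(WINDOWS), N_ENERGY_BINS), 1.0)

def beam_abort_requested(loss_history, energy_bin):
    """Request an abort if any running sum of recent losses exceeds its threshold."""
    for i, n in enumerate(WINDOWS):
        if len(loss_history) >= n and sum(loss_history[-n:]) > thresholds[i][energy_bin]:
            return True
    return False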
18 CFR 281.213 - Data Verification Committee.
Code of Federal Regulations, 2011 CFR
2011-04-01
... preparation. (e) The Data Verification Committee shall prepare a report concerning the proposed index of... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Data Verification....213 Data Verification Committee. (a) Each interstate pipeline shall establish a Data Verification...
18 CFR 281.213 - Data Verification Committee.
Code of Federal Regulations, 2010 CFR
2010-04-01
... preparation. (e) The Data Verification Committee shall prepare a report concerning the proposed index of... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Data Verification....213 Data Verification Committee. (a) Each interstate pipeline shall establish a Data Verification...
The politics of verification and the control of nuclear tests, 1945-1980
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallagher, N.W.
1990-01-01
This dissertation addresses two questions: (1) why has agreement been reached on verification regimes to support some arms control accords but not others; and (2) what determines the extent to which verification arrangements promote stable cooperation. This study develops an alternative framework for analysis by examining the politics of verification at two levels. The logical politics of verification are shaped by the structure of the problem of evaluating cooperation under semi-anarchical conditions. The practical politics of verification are driven by players' attempts to use verification arguments to promote their desired security outcome. The historical material shows that agreements on verification regimes are reached when key domestic and international players desire an arms control accord and believe that workable verification will not have intolerable costs. A clearer understanding of how verification is itself a political problem, and how players manipulate it to promote other goals, is necessary if the politics of verification are to support rather than undermine the development of stable cooperation.
Numerical analysis of the Anderson localization
NASA Astrophysics Data System (ADS)
Markoš, P.
2006-10-01
The aim of this paper is to demonstrate, by simple numerical simulations, the main transport properties of disordered electron systems. These systems undergo a metal-insulator transition when either the Fermi energy crosses the mobility edge or the strength of the disorder increases beyond a critical value. We study how disorder affects the energy spectrum and the spatial distribution of electronic eigenstates in the diffusive and insulating regimes, as well as in the critical region of the metal-insulator transition. Then, we introduce the transfer matrix and conductance, and we discuss how the quantum character of electron propagation influences the transport properties of disordered samples. In weakly disordered systems, weak localization and anti-localization as well as universal conductance fluctuations are numerically simulated and discussed. Localization in the one-dimensional system is described and interpreted as a purely quantum effect. Statistical properties of the conductance in the critical and localized regimes are demonstrated. Special attention is given to the numerical study of the transport properties of the critical regime and to the numerical verification of the single-parameter scaling theory of localization. Numerical data for the critical exponent in the orthogonal models in dimensions 2 < d ≤ 5 are compared with theoretical predictions. We argue that the discrepancy between the theory and the numerical data is due to the absence of self-averaging of the transmission quantities, which complicates the analytical analysis of disordered systems. Finally, theoretical methods for the description of weakly disordered systems are explained, and their possible generalization to the localized regime is discussed. Since we concentrate on one-electron propagation at zero temperature, no effects of electron-electron interaction or incoherent scattering are discussed in the paper.
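The transfer-matrix approach mentioned above can be illustrated on the 1D Anderson model, where the Lyapunov exponent of a product of random 2x2 matrices gives the inverse localization length. This is a minimal sketch with illustrative parameters, not the paper's production code.

import numpy as np

def localization_length(energy, disorder, n_steps=200_000, seed=1):
    """Inverse Lyapunov exponent of the 1D Anderson model,
    psi_{n+1} = (E - eps_n) psi_n - psi_{n-1}, via transfer matrices."""
    rng = np.random.default_rng(seed)
    v = np.array([1.0, 0.0])
    log_norm = 0.0
    for _ in range(n_steps):
        eps = rng.uniform(-disorder / 2, disorder / 2)    # random site energy
        t = np.array([[energy - eps, -1.0], [1.0, 0.0]])  # transfer matrix
        v = t @ v
        norm = np.linalg.norm(v)
        v /= norm                                         # renormalize to avoid overflow
        log_norm += np.log(norm)
    return n_steps / log_norm

print(localization_length(energy=0.0, disorder=1.0))  # roughly 100 lattice sites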
DOE Office of Scientific and Technical Information (OSTI.GOV)
Babich, L. P., E-mail: babich@elph.vniief.ru; Bochkov, E. I.; Kutsyk, I. M.
2011-05-15
The mechanism of lightning initiation due to electric field enhancement by the polarization of a conducting channel produced by relativistic runaway electron avalanches triggered by background cosmic radiation has been simulated numerically. It is shown that fields at which the start of a lightning leader is possible, even in the absence of precipitation, are locally realized for realistic thundercloud configurations and charges. The computational results agree with in-situ observations of penetrating radiation enhancement in thunderclouds.
Hyperboloidal evolution of test fields in three spatial dimensions
NASA Astrophysics Data System (ADS)
Zenginoǧlu, Anıl; Kidder, Lawrence E.
2010-06-01
We present the numerical implementation of a clean solution to the outer boundary and radiation extraction problems within the 3+1 formalism for hyperbolic partial differential equations on a given background. Our approach is based on compactification at null infinity in hyperboloidal scri-fixing coordinates. We report numerical tests for the particular example of a scalar wave equation on Minkowski and Schwarzschild backgrounds. We address issues related to the implementation of the hyperboloidal approach for the Einstein equations, such as nonlinear source functions, matching, and evaluation of formally singular terms at null infinity.
40 CFR 1065.920 - PEMS calibrations and verifications.
Code of Federal Regulations, 2014 CFR
2014-07-01
....920 PEMS calibrations and verifications. (a) Subsystem calibrations and verifications. Use all the... verifications and analysis. It may also be necessary to limit the range of conditions under which the PEMS can... additional information or analysis to support your conclusions. (b) Overall verification. This paragraph (b...
This ETV program generic verification protocol was prepared and reviewed for the Verification of Pesticide Drift Reduction Technologies project. The protocol provides a detailed methodology for conducting and reporting results from a verification test of pesticide drift reductio...
30 CFR 250.913 - When must I resubmit Platform Verification Program plans?
Code of Federal Regulations, 2011 CFR
2011-07-01
... CONTINENTAL SHELF Platforms and Structures Platform Verification Program § 250.913 When must I resubmit Platform Verification Program plans? (a) You must resubmit any design verification, fabrication... 30 Mineral Resources 2 2011-07-01 2011-07-01 false When must I resubmit Platform Verification...
30 CFR 250.909 - What is the Platform Verification Program?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 2 2011-07-01 2011-07-01 false What is the Platform Verification Program? 250... Platforms and Structures Platform Verification Program § 250.909 What is the Platform Verification Program? The Platform Verification Program is the MMS approval process for ensuring that floating platforms...
Study of techniques for redundancy verification without disrupting systems, phases 1-3
NASA Technical Reports Server (NTRS)
1970-01-01
The problem of verifying the operational integrity of redundant equipment and the impact of a requirement for verification on such equipment are considered. Redundant circuits are examined and the characteristics which determine adaptability to verification are identified. Mutually exclusive and exhaustive categories for verification approaches are established. The range of applicability of these techniques is defined in terms of signal characteristics and redundancy features. Verification approaches are discussed and a methodology for the design of redundancy verification is developed. A case study is presented which involves the design of a verification system for a hypothetical communications system. Design criteria for redundant equipment are presented. Recommendations for the development of technological areas pertinent to the goal of increased verification capabilities are given.
Requirement Assurance: A Verification Process
NASA Technical Reports Server (NTRS)
Alexander, Michael G.
2011-01-01
Requirement Assurance is an act of requirement verification which assures the stakeholder or customer that a product requirement has produced its "as realized product" and has been verified with conclusive evidence. Product requirement verification answers the question, "Did the product meet the stated specification, performance, or design documentation?" In order to ensure the system was built correctly, the practicing system engineer must verify each product requirement using the verification methods of inspection, analysis, demonstration, or test. The products of these methods are the "verification artifacts" or "closure artifacts", which are the objective evidence needed to prove the product requirements meet the verification success criteria. Institutional direction is given to the system engineer in NPR 7123.1A, NASA Systems Engineering Processes and Requirements, with regard to the requirement verification process. In response, the verification methodology offered in this report meets both the institutional process and requirement verification best practices.
NASA Astrophysics Data System (ADS)
Liu, Jinxin; Chen, Xuefeng; Gao, Jiawei; Zhang, Xingwu
2016-12-01
Air vehicles, space vehicles, and underwater vehicles, the cabins of which can be viewed as variable-section cylindrical structures, have multiple rotational vibration sources (e.g., engines, propellers, compressors, and motors), making the noise spectrum multiple-harmonic. The suppression of such noise has been a focus of interest in the field of active vibration control (AVC). In this paper, a multiple-source multiple-harmonic (MSMH) active vibration suppression algorithm with a feed-forward structure is proposed based on reference amplitude rectification and the conjugate gradient method (CGM). An AVC simulation scheme called finite element model in-loop simulation (FEMILS) is also proposed for rapid algorithm verification. Numerical studies of AVC are conducted on a variable-section cylindrical structure based on the proposed MSMH algorithm and FEMILS scheme. The numerical studies show that: (1) the proposed MSMH algorithm can individually suppress each component of the multiple-harmonic noise with a unified and improved convergence rate; and (2) the FEMILS scheme is convenient and straightforward for multiple-source simulations with an acceptable loop time. Moreover, the simulations follow a procedure similar to real-life control and can be easily extended to a physical model platform.
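The conjugate gradient core of such an algorithm can be sketched generically as below: this is the standard CG iteration for a quadratic cost 0.5*w'Aw - b'w, not the authors' full MSMH update with reference amplitude rectification.

import numpy as np

def conjugate_gradient(a, b, tol=1e-10, max_iter=200):
    """Minimize 0.5*w'Aw - b'w for symmetric positive definite A."""
    w = np.zeros_like(b)
    r = b - a @ w                      # residual = negative gradient
    p = r.copy()                       # initial search direction
    for _ in range(max_iter):
        ap = a @ p
        alpha = (r @ r) / (p @ ap)     # optimal step along p
        w = w + alpha * p
        r_new = r - alpha * ap
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p           # conjugate direction update
        r = r_new
    return w

a = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
print(conjugate_gradient(a, b))  # approximately [0.4, -0.2]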
Numerical simulation of supersonic water vapor jet impinging on a flat plate
NASA Astrophysics Data System (ADS)
Kuzuu, Kazuto; Aono, Junya; Shima, Eiji
2012-11-01
We investigated a supersonic water vapor jet impinging on a flat plate through numerical simulation. This simulation is for estimating the heating effect on a reusable sounding rocket during vertical landing. The jet from the rocket base is supersonic (M = 2 to 3), high temperature (T = 2000 K), and over-expanded. The ambient condition is stationary standard air. The simulation is based on the full Navier-Stokes equations, and the flow is numerically solved by an unstructured compressible flow solver, the in-house code LS-FLOW-RG. In this solver, the transport properties of the multi-species gas and the mass conservation equations of those species are considered. We employed the DDES method as a turbulence model. For verification and validation, we also carried out a simulation under the condition of air and compared it with the experimental data; the agreement between our results and the experimental data is satisfactory. Through this simulation, we calculated the flow under several exit pressure conditions, and we discuss the effects of the pressure ratio on flow structures, heat transfer, and so on. Furthermore, we also investigated the diffusion effects of water vapor, and we confirmed that these phenomena are generated by the interaction with atmospheric air and affect the heat transfer to the surrounding environment.
Computational Modeling to Predict Fatigue Behavior of NiTi Stents: What Do We Need?
Dordoni, Elena; Petrini, Lorenza; Wu, Wei; Migliavacca, Francesco; Dubini, Gabriele; Pennati, Giancarlo
2015-01-01
NiTi (nickel-titanium) stents are nowadays commonly used for the percutaneous treatment of peripheral arterial disease. However, their effectiveness is still debated in the clinical field. In fact, a peculiar cyclic biomechanical environment is created before and after stent implantation, with the risk of device fatigue failure. An accurate study of the device fatigue behavior is of primary importance to ensure a successful stenting procedure. Regulatory authorities recognize the possibility of performing computational analyses instead of experimental tests for the assessment of medical devices. However, confidence in numerical methods is only possible after verification and validation of the models used. In the case of NiTi stents, mechanical properties are strongly dependent on the device dimensions and the treatments undergone during the manufacturing process. Hence, special attention should be paid to the accuracy of the description of the device geometry and the implementation of the material properties in the numerical code, as well as to the definition of the fatigue limit. In this paper, a path for setting up an effective numerical model for NiTi stent fatigue assessment is proposed, and the results of its application in a specific case study are illustrated. PMID:26011245
A Second Law Based Unstructured Finite Volume Procedure for Generalized Flow Simulation
NASA Technical Reports Server (NTRS)
Majumdar, Alok
1998-01-01
An unstructured finite volume procedure has been developed for steady and transient thermo-fluid dynamic analysis of fluid systems and components. The procedure is applicable for a flow network consisting of pipes and various fittings where flow is assumed to be one dimensional. It can also be used to simulate flow in a component by modeling a multi-dimensional flow using the same numerical scheme. The flow domain is discretized into a number of interconnected control volumes located arbitrarily in space. The conservation equations for each control volume account for the transport of mass, momentum and entropy from the neighboring control volumes. In addition, they also include the sources of each conserved variable and time dependent terms. The source term of entropy equation contains entropy generation due to heat transfer and fluid friction. Thermodynamic properties are computed from the equation of state of a real fluid. The system of equations is solved by a hybrid numerical method which is a combination of simultaneous Newton-Raphson and successive substitution schemes. The paper also describes the application and verification of the procedure by comparing its predictions with the analytical and numerical solution of several benchmark problems.
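The Newton-Raphson half of such a hybrid scheme can be sketched on a toy two-branch flow network with hypothetical friction coefficients and a finite-difference Jacobian; the successive substitution part of the actual solver, used for property updates, is omitted here.

import numpy as np

def newton_raphson(residual, x0, tol=1e-9, max_iter=50, h=1e-6):
    """Solve residual(x) = 0 with a finite-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = residual(x)
        if np.linalg.norm(f) < tol:
            break
        jac = np.empty((x.size, x.size))
        for j in range(x.size):          # one-sided difference per column
            xp = x.copy()
            xp[j] += h
            jac[:, j] = (residual(xp) - f) / h
        x = x - np.linalg.solve(jac, f)
    return x

# toy network: split a unit flow between two branches so the friction
# pressure drops k_i * q_i**2 balance (illustrative only)
k1, k2, q_total = 2.0, 1.0, 1.0
res = lambda q: np.array([q[0] + q[1] - q_total,
                          k1 * q[0]**2 - k2 * q[1]**2])
print(newton_raphson(res, [0.5, 0.5]))  # approximately [0.414, 0.586]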
NASA Astrophysics Data System (ADS)
Lee, Ho-Young; Lee, Se-Hee
2017-08-01
Mechanical deformation, bending deformation, and distributed magnetic loads were evaluated numerically and experimentally for conducting materials excited by high current. Many research works have extensively studied magnetic force and mechanical deformation using coupled approaches such as multiphysics solvers. In coupled analyses of magnetoelastic problems, some articles and commercial software have presented the resultant mechanical deformation and stress on the body. To evaluate the mechanical deformation, the Lorentz force density method (LZ) and the Maxwell stress tensor method (MX) have been widely used for conducting materials. However, it is difficult to find any experimental verification of mechanical or bending deformation due to magnetic force density. Therefore, we compared our numerical results to those from experiments with two parallel conducting bars to verify our numerical setup for bending deformation. Before this, a basic coupled simulation was conducted to test the mechanical deformations predicted by the LZ (body force density) and MX (surface force density) methods. The result was that MX gave the same total force as LZ, but the local force distribution in MX introduced an incorrect mechanical deformation in the simulation of a solid conductor.
NASA Astrophysics Data System (ADS)
Herrmann, M.; Velikovich, A. L.; Abarzhi, S. I.
2014-10-01
A study of the incompressible two-dimensional Richtmyer-Meshkov instability (RMI) by means of high-order Eulerian perturbation theory and numerical simulations is reported. Nonlinear corrections to Richtmyer's impulsive formula for the bubble and spike growth rates have been calculated analytically for arbitrary Atwood number, and an explicit formula has been obtained in the Boussinesq limit. Conditions for early-time acceleration and deceleration of the bubble and the spike have been derived. In our simulations we have solved the 2D unsteady Navier-Stokes equations for immiscible incompressible fluids using the finite volume fractional step flow solver NGA, coupled to the level-set-based interface solver LIT. The impact of small amounts of viscosity and surface tension on the RMI flow dynamics is studied numerically. Simulation results are compared to the theory to demonstrate successful code verification and to highlight the influence of the theory's ideal inviscid flow assumption. Theoretical time histories of the interface curvature at the bubble and spike tips and the profiles of vertical and horizontal velocities compare favorably to simulation results, which converge to the theoretical predictions as the Reynolds and Weber numbers are increased. Work supported by the US DOE/NNSA.
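For reference, the impulsive formula to which these nonlinear corrections apply is the classical Richtmyer estimate for the initial growth rate,

\dot{a} = k \, A \, \Delta u \, a_0,

where k is the perturbation wavenumber, A the Atwood number, \Delta u the impulsive velocity jump imparted to the interface, and a_0 the initial perturbation amplitude (standard notation; the authors' corrected expressions are not reproduced here).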
NASA Astrophysics Data System (ADS)
Chen, Zhen; Xiang, Yu; Wei, Zhengying; Wei, Pei; Lu, Bingheng; Zhang, Lijuan; Du, Jun
2018-04-01
During selective laser melting (SLM) of K418 powder, the influence of process parameters, such as laser power P and scanning speed v, on the dynamic thermal behavior and morphology of the melted tracks was investigated numerically. A 3D finite difference method was established to predict the dynamic thermal behavior and flow mechanism of K418 powder irradiated by a Gaussian laser beam. A three-dimensional randomly packed powder bed composed of spherical particles was generated by the discrete element method. The powder particle information, including the particle size distribution and packing density, was taken into account. The volume shrinkage and temperature-dependent thermophysical parameters such as thermal conductivity, specific heat, and other physical properties were also considered. The volume of fluid method was applied to reconstruct the free surface of the molten pool during SLM. The geometrical features, continuity boundaries, and irregularities of the molten pool were shown to be largely determined by the laser energy density. The numerical results are in good agreement with the experiments, proving the model to be reasonable and effective. The results provide in-depth insight into the complex physical behavior during SLM and guide the optimization of process parameters.
Artificial tektites: an experimental technique for capturing the shapes of spinning drops
NASA Astrophysics Data System (ADS)
Baldwin, Kyle A.; Butler, Samuel L.; Hill, Richard J. A.
2015-01-01
Determining the shape of a rotating liquid droplet bound by surface tension is an archetypal problem in the study of the equilibrium shapes of spinning and charged droplets, a problem that unites models of the stability of the atomic nucleus with the shapes of astronomical-scale, gravitationally-bound masses. The shapes of highly deformed droplets and their stability must be calculated numerically. Although the accuracy of such models has increased with the use of progressively more sophisticated computational techniques and increases in computing power, direct experimental verification has been lacking. Here we present an experimental technique for making wax models of these shapes using diamagnetic levitation. The wax models resemble splash-form tektites, glassy stones formed from molten rock ejected by asteroid impacts. Many tektites have elongated or 'dumb-bell' shapes due to their rotation mid-flight before solidification, just as we observe here. Measurements of the dimensions of our wax 'artificial tektites' show good agreement with the equilibrium shapes calculated by our numerical model and by previous models. These wax models provide the first direct experimental validation of numerical models of the equilibrium shapes of spinning droplets, of importance to fundamental physics and also to studies of tektite formation.
Investigation of micromixing by acoustically oscillated sharp-edges
Nama, Nitesh; Huang, Po-Hsun; Huang, Tony Jun; Costanzo, Francesco
2016-01-01
Recently, acoustically oscillated sharp-edges have been utilized to achieve rapid and homogeneous mixing in microchannels. Here, we present a numerical model to investigate acoustic mixing inside a sharp-edge-based micromixer in the presence of a background flow. We extend our previously reported numerical model to include the mixing phenomena by using perturbation analysis and the Generalized Lagrangian Mean (GLM) theory in conjunction with the convection-diffusion equation. We divide the flow variables into zeroth-order, first-order, and second-order variables. This results in three sets of equations representing the background flow, acoustic response, and the time-averaged streaming flow, respectively. These equations are then solved successively to obtain the mean Lagrangian velocity which is combined with the convection-diffusion equation to predict the concentration profile. We validate our numerical model via a comparison of the numerical results with the experimentally obtained values of the mixing index for different flow rates. Further, we employ our model to study the effect of the applied input power and the background flow on the mixing performance of the sharp-edge-based micromixer. We also suggest potential design changes to the previously reported sharp-edge-based micromixer to improve its performance. Finally, we investigate the generation of a tunable concentration gradient by a linear arrangement of the sharp-edge structures inside the microchannel. PMID:27158292
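Schematically, the expansion underlying the three sets of equations reads

\mathbf{u} = \mathbf{u}_0 + \epsilon \mathbf{u}_1 + \epsilon^2 \mathbf{u}_2 + \dots,

where \mathbf{u}_0 is the steady background flow, the first-order fields oscillate at the acoustic forcing frequency, and the time average of the second-order fields yields the streaming velocity that enters the convection-diffusion equation. This is a generic statement of the perturbation ansatz, not the paper's full equation set.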
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melchior, P.; Suchyta, E.; Huff, E.
2015-03-31
We measure the weak-lensing masses and galaxy distributions of four massive galaxy clusters observed during the Science Verification phase of the Dark Energy Survey. This pathfinder study is meant to 1) validate the DECam imager for the task of measuring weak-lensing shapes, and 2) utilize DECam's large field of view to map out the clusters and their environments over 90 arcmin. We conduct a series of rigorous tests on astrometry, photometry, image quality, PSF modeling, and shear measurement accuracy to single out flaws in the data and also to identify the optimal data processing steps and parameters. We find Science Verification data from DECam to be suitable for the lensing analysis described in this paper. The PSF is generally well-behaved, but the modeling is rendered difficult by a flux-dependent PSF width and ellipticity. We employ photometric redshifts to distinguish between foreground and background galaxies, and a red-sequence cluster finder to provide cluster richness estimates and cluster-galaxy distributions. By fitting NFW profiles to the clusters in this study, we determine weak-lensing masses that are in agreement with previous work. For Abell 3261, we provide the first estimates of redshift, weak-lensing mass, and richness. In addition, the cluster-galaxy distributions indicate the presence of filamentary structures attached to 1E 0657-56 and RXC J2248.7-4431, stretching out as far as 1 degree (approximately 20 Mpc), showcasing the potential of DECam and DES for detailed studies of degree-scale features on the sky.
Antonelli, Giorgia; Padoan, Andrea; Aita, Ada; Sciacovelli, Laura; Plebani, Mario
2017-08-28
Background The International Standard ISO 15189 is recognized as a valuable guide in ensuring high quality clinical laboratory services and promoting the harmonization of accreditation programmes in laboratory medicine. Examination procedures must be verified in order to guarantee that their performance characteristics are congruent with the intended scope of the test. The aim of the present study was to propose a practical model for implementing procedures employed for the verification of validated examination procedures already used for at least 2 years in our laboratory, in agreement with the ISO 15189 requirement in Section 5.5.1.2. Methods In order to identify the operative procedure to be used, approved documents were identified, together with the definition of the performance characteristics to be evaluated for the different methods; the examination procedures used in the laboratory were analyzed and checked against the performance specifications reported by manufacturers. Then, operative flow charts were identified to compare the laboratory performance characteristics with those declared by manufacturers. Results The choice of performance characteristics for verification was based on approved documents used as guidance and on the specific purpose of the tests undertaken, considering imprecision and trueness for quantitative methods, diagnostic accuracy for qualitative methods, and imprecision together with diagnostic accuracy for semi-quantitative methods. Conclusions The described approach, balancing technological possibilities, risks, and costs, and assuring the fundamental component of result accuracy, appears promising as an easily applicable and flexible procedure helping laboratories to comply with ISO 15189 requirements.
Plasma protein absolute quantification by nano-LC Q-TOF UDMSE for clinical biomarker verification
ILIES, MARIA; IUGA, CRISTINA ADELA; LOGHIN, FELICIA; DHOPLE, VISHNU MUKUND; HAMMER, ELKE
2017-01-01
Background and aims Proteome-based biomarker studies target proteins that could serve as diagnostic, prognostic, and predictive molecules. In the clinical routine, immunoassays are currently used for the absolute quantification of such biomarkers, with the major limitation that only one molecule can be targeted per assay. The aim of our study was to test a mass spectrometry based absolute quantification method for the verification of plasma protein sets which might serve as reliable biomarker panels for clinical practice. Methods Six EDTA plasma samples were analyzed after tryptic digestion using a high throughput data independent acquisition nano-LC Q-TOF UDMSE proteomics approach. Synthetic Escherichia coli standard peptides were spiked into each sample for the absolute quantification. Data analysis was performed using ProgenesisQI v2.0 software (Waters Corporation). Results Our method ensured absolute quantification of 242 non-redundant plasma proteins in a single-run analysis, covering a dynamic range of 10^5; 86% were classical plasma proteins. The overall median coefficient of variation was 0.36, and a set of 63 proteins was found to be highly stable. Absolute protein concentrations strongly correlated with values reported in the literature. Conclusions Nano-LC Q-TOF UDMSE proteomic analysis can be used for a simple and rapid determination of absolute amounts of plasma proteins. A large number of plasma proteins could be analyzed, and a wide dynamic range was covered with a low coefficient of variation at the protein level. The method proved to be a reliable tool for the quantification of protein panels for biomarker verification in clinical practice. PMID:29151793
Verification of Plutonium Content in PuBe Sources Using MCNP® 6.2.0 Beta with TENDL 2012 Libraries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lockhart, Madeline Louise; McMath, Garrett Earl
Although the production of PuBe neutron sources has been discontinued, hundreds of sources with unknown or inaccurately declared plutonium content exist around the world. Institutions have undertaken the task of assaying these sources, measuring and calculating the isotopic composition, plutonium content, and neutron yield. The nominal plutonium content, based on the neutron yield per gram of pure 239Pu, has been shown to be highly inaccurate. New methods of measuring the plutonium content allow a more accurate estimate of the true Pu content, but these measurements need verification. Using the TENDL 2012 nuclear data libraries, MCNP6 has the capability to simulate the (α, n) interactions in a PuBe source. Theoretically, if the source is modeled according to the plutonium content, isotopic composition, and other source characteristics, the calculated neutron yield in MCNP can be compared to the experimental yield, offering an indication of the accuracy of the declared plutonium content. In this study, three sets of PuBe sources from various backgrounds were modeled in MCNP6 1.2 Beta, according to the source specifications dictated by the individuals who assayed the sources. Verification of the source parameters with MCNP6 also serves as a means to test the alpha transport capabilities of MCNP6 1.2 Beta with the TENDL 2012 alpha transport libraries. Finally, good agreement in the comparison would indicate the accuracy of the source parameters, in addition to demonstrating MCNP's capabilities in simulating (α, n) interactions.
Shielded-Twisted-Pair Cable Model for Chafe Fault Detection via Time-Domain Reflectometry
NASA Technical Reports Server (NTRS)
Schuet, Stefan R.; Timucin, Dogan A.; Wheeler, Kevin R.
2012-01-01
This report details the development, verification, and validation of an innovative physics-based model of electrical signal propagation through shielded-twisted-pair cable, which is commonly found on aircraft and offers an ideal proving ground for detection of small holes in a shield well before catastrophic damage occurs. The accuracy of this model is verified through numerical electromagnetic simulations using a commercially available software tool. The model is shown to be representative of more realistic (analytically intractable) cable configurations as well. A probabilistic framework is developed for validating the model accuracy with reflectometry data obtained from real aircraft-grade cables chafed in the laboratory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, Amy B.; Zyvoloski, George Anthony; Weaver, Douglas James
The simulation work presented in this report supports DOE-NE Used Fuel Disposition Campaign (UFDC) goals related to the development of drift-scale in-situ field testing of heat-generating nuclear waste (HGNW) in salt formations. Numerical code verification and validation is an important part of the lead-up to field testing, allowing exploration of potential heater emplacement designs and monitoring locations and, perhaps most importantly, the ability to predict heat and mass transfer around an evolving test. Such predictions are crucial for the design and location of sampling and monitoring that can be used to validate our understanding of a drift-scale test that is likely to span several years.
Experimental verification and simulation of negative index of refraction using Snell's law.
Parazzoli, C G; Greegor, R B; Li, K; Koltenbah, B E C; Tanielian, M
2003-03-14
We report the results of a Snell's law experiment on a negative index of refraction material in free space from 12.6 to 13.2 GHz. Numerical simulations using Maxwell's equations solvers show good agreement with the experimental results, confirming the existence of negative index of refraction materials. The index of refraction is a function of frequency. At 12.6 GHz we measure and compute the real part of the index of refraction to be -1.05. The measurements and simulations of the electromagnetic field profiles were performed at distances of 14λ and 28λ from the sample; the fields were also computed at 100λ.
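For readers unfamiliar with refraction at a negative-index interface, the minimal sketch below applies Snell's law with the reported real index of -1.05, treated (as a simplification) as constant over the band; the transmitted angle comes out negative, i.e., on the same side of the normal as the incident ray.

```python
import numpy as np

# Snell's law, n1 * sin(theta_i) = n2 * sin(theta_t), with n1 = 1 and a
# negative n2: the refraction angle is negative, meaning the ray bends
# to the same side of the normal as the incident ray.
n_index = -1.05                                   # real part at 12.6 GHz
theta_i = np.radians(np.arange(5.0, 50.0, 5.0))   # incidence angles

theta_t = np.arcsin(np.sin(theta_i) / n_index)    # refraction angles

for ti, tt in zip(np.degrees(theta_i), np.degrees(theta_t)):
    print(f"incidence {ti:5.1f} deg -> refraction {tt:6.1f} deg")
```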
Measurement of Surface Interfacial Tension as a Function of Temperature Using Pendant Drop Images
NASA Astrophysics Data System (ADS)
Yakhshi-Tafti, Ehsan; Kumar, Ranganathan; Cho, Hyoung J.
2011-10-01
Accurate and reliable measurements of surface tension at the interface of immiscible phases are crucial to understanding the various physico-chemical reactions taking place between them. Based on the pendant drop method, an optical (graphical)-numerical procedure was developed to determine surface tension and its dependence on the surrounding temperature. For modeling and experimental verification, chemically inert and thermally stable perfluorocarbon (PFC) oil and water were used. Starting with a geometrical force balance, governing equations were derived to provide non-dimensional parameters, which were later used to extract values for surface tension. A comparative study verified the accuracy and reliability of the proposed method.
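A minimal sketch of the kind of shape integration pendant-drop methods rely on, using a generic dimensionless form of the axisymmetric Young-Laplace (Bashforth-Adams) equations rather than the authors' exact formulation; the shape parameter beta, sign conventions, and integration span are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic dimensionless drop-shape equations: arc length s, profile
# (x, z), tangent angle phi, all scaled by the apex radius b. The shape
# parameter beta ~ drho * g * b**2 / gamma; fitting beta (and b) to the
# imaged drop contour yields the surface tension gamma.

def drop_shape(s, y, beta):
    x, z, phi = y
    # At the apex x -> 0 and sin(phi)/x -> 1 by symmetry.
    curv = np.sin(phi) / x if x > 1e-12 else 1.0
    return [np.cos(phi), np.sin(phi), 2.0 + beta * z - curv]

beta = 0.4  # hypothetical shape parameter
sol = solve_ivp(drop_shape, [0.0, 3.5], [1e-12, 0.0, 0.0],
                args=(beta,), max_step=0.01)

x, z = sol.y[0], sol.y[1]
# In practice beta and b are adjusted until (x, z) matches the extracted
# contour; then gamma = drho * g * b**2 / beta.
print(f"profile computed at {x.size} points, max radius {x.max():.3f}")
```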
Sliding Mode Control of a Slewing Flexible Beam
NASA Technical Reports Server (NTRS)
Wilson, David G.; Parker, Gordon G.; Starr, Gregory P.; Robinett, Rush D., III
1997-01-01
An output feedback sliding mode controller (SMC) is proposed to minimize the effects of vibrations of slewing flexible manipulators. A spline trajectory is used to generate ideal position and velocity commands. Constrained nonlinear optimization techniques are used to both calibrate nonlinear models and determine optimized gains to produce a rest-to-rest, residual vibration-free maneuver. Vibration-free maneuvers are important for current and future NASA space missions. This study required the development of the nonlinear dynamic system equations of motion; robust control law design; numerical implementation; system identification; and verification using the Sandia National Laboratories flexible robot testbed. Results are shown for a slewing flexible beam.
Making the Hubble Space Telescope servicing mission safe
NASA Technical Reports Server (NTRS)
Bahr, N. J.; Depalo, S. V.
1992-01-01
The implementation of the HST system safety program is detailed. Numerous safety analyses are conducted through various phases of design, test, and fabrication, and results are presented to NASA management for discussion during dedicated safety reviews. Attention is given to the system safety assessment and risk analysis methodologies used, i.e., hazard analysis, fault tree analysis, and failure modes and effects analysis, and to how they are coupled with engineering and test analysis for a 'synergistic picture' of the system. Some preliminary safety analysis results, showing the relationship between hazard identification, control or abatement, and finally control verification, are presented as examples of this safety process.
Noise-Aided Logic in an Electronic Analog of Synthetic Genetic Networks
Hellen, Edward H.; Dana, Syamal K.; Kurths, Jürgen; Kehler, Elizabeth; Sinha, Sudeshna
2013-01-01
We report the experimental verification of noise-enhanced logic behaviour in an electronic analog of a synthetic genetic network composed of two repressors and two constitutive promoters. We observe good agreement between circuit measurements and numerical predictions, with the circuit allowing for robust logic operations in an optimal window of noise. Namely, the input-output characteristics of a logic gate are reproduced faithfully under moderate noise, a manifestation of the phenomenon known as Logical Stochastic Resonance. The two dynamical variables in the system yield complementary logic behaviour simultaneously. The system is easily morphed from AND/NAND to OR/NOR logic. PMID:24124531
Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis
NASA Technical Reports Server (NTRS)
Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.
2017-01-01
This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
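The flavor of such an analysis can be sketched with a toy first-order propagation of worst-case rounding error through an expression tree. This is only an illustration of the idea, not PRECiSA's certified denotational semantics; the expression and input point are arbitrary.

```python
# First-order propagation of worst-case round-off: each binary64
# operation may add a relative error of at most u = 2**-53 to its exact
# result. The evaluator carries (value, error_bound) pairs.

U = 2.0 ** -53  # unit roundoff for IEEE 754 binary64

def const(v):
    return (v, 0.0)

def add(a, b):
    v = a[0] + b[0]
    # propagated errors plus one new rounding of the result
    return (v, a[1] + b[1] + abs(v) * U)

def mul(a, b):
    v = a[0] * b[0]
    # |da|*|b| + |a|*|db| (first order) plus one new rounding
    return (v, a[1] * abs(b[0]) + b[1] * abs(a[0]) + abs(v) * U)

# bound for x * (y + z) at a concrete input point
x, y, z = const(1.5), const(0.1), const(2.75)
val, err = mul(x, add(y, z))
print(f"value ~ {val}, round-off bound <= {err:.3e}")
```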
Massive black hole and gas dynamics in galaxy nuclei mergers - I. Numerical implementation
NASA Astrophysics Data System (ADS)
Lupi, Alessandro; Haardt, Francesco; Dotti, Massimo
2015-01-01
Numerical effects are known to plague adaptive mesh refinement (AMR) codes when treating massive particles, e.g. those representing massive black holes (MBHs). In an evolving background, they can experience strong, spurious perturbations and then follow unphysical orbits. We study by means of numerical simulations the dynamical evolution of a pair of MBHs in the rapidly and violently evolving gaseous and stellar background that follows a galaxy major merger. We confirm that spurious numerical effects alter the MBH orbits in AMR simulations, and show that these numerical issues are ultimately due to a drop in the spatial resolution during the simulation, which drastically reduces the accuracy of the gravitational force computation. We therefore propose a new refinement criterion suited to massive particles, able to solve for their orbits in highly dynamical backgrounds in a fast and precise way. The new refinement criterion enforces the region around each massive particle to remain at the maximum resolution allowed, independently of the local gas density. Such maximally resolved regions then follow the MBHs along their orbits, effectively avoiding all spurious effects caused by resolution changes. Our suite of high-resolution AMR hydrodynamic simulations, including different prescriptions for the sub-grid gas physics, shows that the new refinement implementation does not alter the physical evolution of the MBHs, while accounting for all the non-trivial physical processes taking place in violent dynamical scenarios, such as the final stages of a galaxy major merger.
NASA Astrophysics Data System (ADS)
González-Rojí, Santos J.; Sáenz, Jon; Ibarra-Berastegi, Gabriel
2016-04-01
A numerical downscaling exercise over the Iberian Peninsula has been run by nesting the WRF model inside ERA Interim. The Iberian Peninsula is covered by a 15 km × 15 km grid with 51 vertical levels. Two model configurations were tested in two experiments spanning the period 2010-2014, after a one-year spin-up (2009). In both cases, the model uses high-resolution daily-varying SST fields and the Noah land surface model. In the first experiment (N), after the model is initialised, boundary conditions drive the model, as is usual in numerical downscaling experiments. The second experiment (D) is configured the same way as the N case, but 3DVAR data assimilation is run every six hours (00Z, 06Z, 12Z and 18Z) using observations from the PREPBUFR dataset (NCEP ADP Global Upper Air and Surface Weather Observations) within a 120-minute window around the analysis times. For the data assimilation experiment (D), seasonally (monthly) varying background error covariance matrices were prepared according to the parameterisations used and the mesoscale model domain. For both the N and D runs, the moisture balance has been evaluated over the Iberian Peninsula, both internally according to the model results (moisture balance in the model) and in terms of observed moisture fields from observational datasets (particularly precipitable water and precipitation). Verification has been performed at both the daily and monthly time scales, and also for ERA Interim, the coarse-scale dataset used to drive the regional model. Results show that the leading terms that must be considered over the area are the tendency in the precipitable water column, the divergence of the moisture flux, evaporation (computed from the latent heat flux at the surface), and precipitation. In the case of ERA Interim, the divergence of Qc is also relevant, although still a minor player in the moisture balance. Both mesoscale model runs are more effective at closing the moisture balance over the whole Iberian Peninsula than ERA Interim. The N experiment (no data assimilation) shows a better closure than the D case, as could be expected from the lack of analysis increments in it. This result is robust at both the daily and monthly time scales. Both ERA Interim and the D experiment produce a negative residual in the balance equation (compatible with excess evaporation or increased convergence of moisture over the Iberian Peninsula). This is a result of the data assimilation process in the D dataset, since in the N experiment the residual is mainly positive. The seasonal cycle of evaporation in the D experiment is much closer to that of ERA Interim than in the N case, with higher evaporation during the summer months. However, both regional climate model runs show a lower evaporation rate than ERA Interim, particularly during the summer months.
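The moisture-balance check at the heart of the experiment can be written as a residual of the column budget, dPW/dt + div(Q) = E - P. The sketch below evaluates such a residual on hypothetical daily, area-averaged diagnostics; the random arrays are stand-ins for the WRF/ERA Interim output.

```python
import numpy as np

# Residual of the column moisture budget dPW/dt + div(Q) = E - P,
# evaluated on hypothetical daily, area-averaged diagnostics (mm/day).

rng = np.random.default_rng(0)
ndays = 365
dpw_dt = rng.normal(0.0, 0.5, ndays)   # precipitable-water tendency
div_q = rng.normal(0.2, 1.0, ndays)    # moisture-flux divergence
evap = rng.normal(2.0, 0.5, ndays)     # evaporation (from latent heat flux)
precip = rng.normal(1.8, 1.0, ndays)   # precipitation

residual = dpw_dt + div_q - (evap - precip)

print(f"mean residual: {residual.mean():+.3f} mm/day")
print(f"rms residual:  {np.sqrt((residual ** 2).mean()):.3f} mm/day")
# A persistently negative mean residual would correspond to the
# excess-evaporation / moisture-convergence signature discussed above.
```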
30 CFR 250.909 - What is the Platform Verification Program?
Code of Federal Regulations, 2010 CFR
2010-07-01
The Platform Verification Program is the MMS approval process for ensuring that floating platforms; platforms of a new or unique design...
Property-driven functional verification technique for high-speed vision system-on-chip processor
NASA Astrophysics Data System (ADS)
Nshunguyimfura, Victor; Yang, Jie; Liu, Liyuan; Wu, Nanjian
2017-04-01
The implementation of functional verification in a fast, reliable, and effective manner is a challenging task in a vision chip verification process. The main reason for this challenge is the stepwise nature of existing functional verification techniques. The complexity of vision chip verification is also related to the fact that in most vision chip design cycles, extensive effort is focused on optimizing chip metrics such as performance, power, and area, while design functional verification is not explicitly considered at the earlier stages at which the soundest decisions are made. In this paper, we propose a semi-automatic property-driven verification technique in which the implementation of all verification components is based on design properties. We introduce a low-dimensional property space between the specification space and the implementation space. The aim of this technique is to speed up the verification process for high-performance parallel-processing vision chips. Our experimental results show that the proposed technique can reduce the verification effort by up to 20% for a complex vision chip design while also reducing the simulation and debugging overheads.
Hydrologic data-verification management program plan
Alexander, C.W.
1982-01-01
Data verification refers to the performance of quality control on hydrologic data that have been retrieved from the field and are being prepared for dissemination to water-data users. Water-data users now have access to computerized data files containing unpublished, unverified hydrologic data. Therefore, it is necessary to develop techniques and systems whereby the computer can perform some data-verification functions before the data are stored in user-accessible files. Computerized data-verification routines can be developed for this purpose. A single, unified concept, describing a master data-verification program that uses multiple special-purpose subroutines and a screen file containing verification criteria, can probably be adapted to any type and size of computer-processing system. Some traditional manual-verification procedures can be adapted for computerized verification, but new procedures can also be developed that take advantage of the powerful statistical tools and data-handling procedures available to the computer. Prototype data-verification systems should be developed for all three data-processing environments as soon as possible. The WATSTORE system probably affords the greatest opportunity for long-range research and testing of new verification subroutines. (USGS)
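A minimal sketch of a screen-file-driven verification subroutine of the kind the plan envisions: range and rate-of-change checks applied before records reach user-accessible files. The criteria and records below are hypothetical.

```python
# Screen-file-style criteria: acceptable range and maximum step between
# successive records for each parameter. Values are hypothetical.
criteria = {
    "discharge_cfs": {"min": 0.0, "max": 50000.0, "max_step": 5000.0},
}

def verify(series, crit):
    """Flag values outside the screen-file range or changing too fast."""
    flags = []
    prev = None
    for i, value in enumerate(series):
        if not crit["min"] <= value <= crit["max"]:
            flags.append((i, value, "out of range"))
        elif prev is not None and abs(value - prev) > crit["max_step"]:
            flags.append((i, value, "rate-of-change"))
        prev = value
    return flags

daily_discharge = [120.0, 130.0, 128.0, 9600.0, 131.0, -4.0]
for idx, val, reason in verify(daily_discharge, criteria["discharge_cfs"]):
    print(f"record {idx}: {val} flagged ({reason})")
```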
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akerib, DS; Alsum, S; Araújo, HM
The LUX experiment has performed searches for dark matter particles scattering elastically on xenon nuclei, leading to stringent upper limits on the nuclear scattering cross sections for dark matter. Here, for results derived from 1.4 × 10^4 kg days of target exposure in 2013, details of the calibration, event-reconstruction, modeling, and statistical tests that underlie the results are presented. Detector performance is characterized, including measured efficiencies, stability of response, position resolution, and discrimination between electron- and nuclear-recoil populations. Models are developed for the drift field, optical properties, background populations, the electron- and nuclear-recoil responses, and the absolute rate of low-energy background events. Innovations in the analysis include in situ measurement of the photomultipliers' response to xenon scintillation photons, verification of the fiducial mass with a low-energy internal calibration source, and new empirical models for low-energy signal yield based on large-sample, in situ calibrations.
NASA Astrophysics Data System (ADS)
Akerib, D. S.; Alsum, S.; Araújo, H. M.; Bai, X.; Bailey, A. J.; Balajthy, J.; Beltrame, P.; Bernard, E. P.; Bernstein, A.; Biesiadzinski, T. P.; Boulton, E. M.; Brás, P.; Byram, D.; Cahn, S. B.; Carmona-Benitez, M. C.; Chan, C.; Currie, A.; Cutter, J. E.; Davison, T. J. R.; Dobi, A.; Dobson, J. E. Y.; Druszkiewicz, E.; Edwards, B. N.; Faham, C. H.; Fallon, S. R.; Fan, A.; Fiorucci, S.; Gaitskell, R. J.; Gehman, V. M.; Genovesi, J.; Ghag, C.; Gilchriese, M. G. D.; Hall, C. R.; Hanhardt, M.; Haselschwardt, S. J.; Hertel, S. A.; Hogan, D. P.; Horn, M.; Huang, D. Q.; Ignarra, C. M.; Jacobsen, R. G.; Ji, W.; Kamdin, K.; Kazkaz, K.; Khaitan, D.; Knoche, R.; Larsen, N. A.; Lee, C.; Lenardo, B. G.; Lesko, K. T.; Lindote, A.; Lopes, M. I.; Manalaysay, A.; Mannino, R. L.; Marzioni, M. F.; McKinsey, D. N.; Mei, D.-M.; Mock, J.; Moongweluwan, M.; Morad, J. A.; Murphy, A. St. J.; Nehrkorn, C.; Nelson, H. N.; Neves, F.; O'Sullivan, K.; Oliver-Mallory, K. C.; Palladino, K. J.; Pease, E. K.; Reichhart, L.; Rhyne, C.; Shaw, S.; Shutt, T. A.; Silva, C.; Solmaz, M.; Solovov, V. N.; Sorensen, P.; Sumner, T. J.; Szydagis, M.; Taylor, D. J.; Taylor, W. C.; Tennyson, B. P.; Terman, P. A.; Tiedt, D. R.; To, W. H.; Tripathi, M.; Tvrznikova, L.; Uvarov, S.; Velan, V.; Verbus, J. R.; Webb, R. C.; White, J. T.; Whitis, T. J.; Witherell, M. S.; Wolfs, F. L. H.; Xu, J.; Yazdani, K.; Young, S. K.; Zhang, C.; LUX Collaboration
2018-05-01
The LUX experiment has performed searches for dark-matter particles scattering elastically on xenon nuclei, leading to stringent upper limits on the nuclear scattering cross sections for dark matter. Here, for results derived from 1.4 × 10^4 kg days of target exposure in 2013, details of the calibration, event-reconstruction, modeling, and statistical tests that underlie the results are presented. Detector performance is characterized, including measured efficiencies, stability of response, position resolution, and discrimination between electron- and nuclear-recoil populations. Models are developed for the drift field, optical properties, background populations, the electron- and nuclear-recoil responses, and the absolute rate of low-energy background events. Innovations in the analysis include in situ measurement of the photomultipliers' response to xenon scintillation photons, verification of the fiducial mass with a low-energy internal calibration source, and new empirical models for low-energy signal yield based on large-sample, in situ calibrations.
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Arthur, Grant E.; Koch, Grady J.; Kavaya, Michael J.
2012-01-01
Two different noise whitening methods in airborne wind profiling with a pulsed 2-micron coherent Doppler lidar system at NASA Langley Research Center in Virginia are presented. In order to provide accurate wind parameter estimates from the airborne lidar data acquired during the NASA Genesis and Rapid Intensification Processes (GRIP) campaign in 2010, the adverse effects of background instrument noise must be compensated properly in the early stage of data processing. The results of the two methods are presented using selected GRIP data and compared with the dropsonde data for verification purposes.
Guidelines for preparing software user documentation
NASA Technical Reports Server (NTRS)
Miller, Diane F.
1987-01-01
Clear, easy-to-use software user's manuals make strong demands on special technical communication techniques. Principles and guidelines are given for analyzing the audience and dealing with the wide-ranging backgrounds of potential users. Types of information to be included in a complete manual are suggested, with a technique for creating a user-oriented rather than process-oriented organization. Accuracy verification is emphasized. Simple tips are given for formatting for quick comprehension and reference, for deciding on packaging, for creating helpful illustrations and examples, and for setting up clear and consistent conventions. Simple guidelines are offered for writing clearly and concisely and for editing.
FAST Modularization Framework for Wind Turbine Simulation: Full-System Linearization
Jonkman, Jason M.; Jonkman, Bonnie J.
2016-10-03
The wind engineering community relies on multiphysics engineering software to run nonlinear time-domain simulations, e.g., for design-standards-based loads analysis. Although most physics involved in wind energy are nonlinear, linearization of the underlying nonlinear system equations is often advantageous for understanding the system response and exploiting well-established methods and tools for analyzing linear systems. This paper presents the development and verification of the new linearization functionality of the open-source engineering tool FAST v8 for land-based wind turbines, as well as the concepts and mathematical background needed to understand and apply it correctly.
FAST modularization framework for wind turbine simulation: full-system linearization
NASA Astrophysics Data System (ADS)
Jonkman, J. M.; Jonkman, B. J.
2016-09-01
The wind engineering community relies on multiphysics engineering software to run nonlinear time-domain simulations, e.g., for design-standards-based loads analysis. Although most physics involved in wind energy are nonlinear, linearization of the underlying nonlinear system equations is often advantageous for understanding the system response and exploiting well-established methods and tools for analyzing linear systems. This paper presents the development and verification of the new linearization functionality of the open-source engineering tool FAST v8 for land-based wind turbines, as well as the concepts and mathematical background needed to understand and apply it correctly.
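The general idea of linearizing a nonlinear system about an operating point can be sketched with a finite-difference Jacobian. FAST's actual procedure (periodic operating points, multi-blade coordinate transformation, etc.) is considerably more involved; the toy pendulum model below is an assumption for illustration only.

```python
import numpy as np

# Central-difference linearization of dx/dt = f(x, u) about (x0, u0),
# yielding the A and B matrices of dx/dt = A dx + B du.

def linearize(f, x0, u0, eps=1e-6):
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# toy example: damped pendulum with a torque input
def f(x, u):
    theta, omega = x
    return np.array([omega, -9.81 * np.sin(theta) - 0.1 * omega + u[0]])

A, B = linearize(f, np.array([0.0, 0.0]), np.array([0.0]))
print("A =\n", A, "\nB =\n", B)
```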
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toltz, Allison; Hoesl, Michaela; Schuemann, Jan
Purpose: A method to refine the implementation of an in vivo, adaptive proton therapy range verification methodology was investigated. Simulation experiments and in-phantom measurements were compared to validate the calibration procedure of a time-resolved diode dosimetry technique. Methods: A silicon diode array system has been developed and experimentally tested in phantom for passively scattered proton beam range verification by correlating properties of the detector signal to the water equivalent path length (WEPL). The implementation of this system requires a set of calibration measurements to establish a beam-specific diode response to WEPL fit for the selected 'scout' beam in a solid water phantom. This process is both tedious, as it necessitates a separate set of measurements for every 'scout' beam that may be appropriate to the clinical case, and inconvenient, owing to limited access to the clinical beamline. The diode response to WEPL relationship for a given 'scout' beam may instead be determined within a simulation environment, facilitating the applicability of this dosimetry technique. Measurements for three 'scout' beams were compared against detector response simulated with Monte Carlo methods using the Tool for Particle Simulation (TOPAS). Results: Detector response in water-equivalent plastic was successfully validated against simulation for spread-out Bragg peaks of range 10 cm, 15 cm, and 21 cm (168 MeV, 177 MeV, and 210 MeV) with an adjusted R² of 0.998. Conclusion: Feasibility has been shown for performing calibration of detector response for a given 'scout' beam through simulation for the time-resolved diode dosimetry technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barahona, B.; Jonkman, J.; Damiani, R.
2014-12-01
Coupled dynamic analysis has an important role in the design of offshore wind turbines because the systems are subject to complex operating conditions from the combined action of waves and wind. The aero-hydro-servo-elastic tool FAST v8 is framed in a novel modularization scheme that facilitates such analysis. Here, we present the verification of new capabilities of FAST v8 to model fixed-bottom offshore wind turbines. We analyze a series of load cases with both wind and wave loads and compare the results against those from the previous international code comparison projects: the International Energy Agency (IEA) Wind Task 23 Subtask 2 Offshore Code Comparison Collaboration (OC3) and the IEA Wind Task 30 OC3 Continued (OC4) projects. The verification is performed using the NREL 5-MW reference turbine supported by monopile, tripod, and jacket substructures. The substructure structural-dynamics models are built within the new SubDyn module of FAST v8, which uses a linear finite-element beam model with Craig-Bampton dynamic system reduction. This allows the modal properties of the substructure to be synthesized and coupled to hydrodynamic loads and tower dynamics. The hydrodynamic loads are calculated using a new strip-theory approach for multimember substructures in the updated HydroDyn module of FAST v8. These modules are linked to the rest of FAST through the new coupling scheme involving mapping between module-independent spatial discretizations and a numerically rigorous implicit solver. The results show that the new structural dynamics, hydrodynamics, and coupled solutions compare well to the results from the previous code comparison projects.
NASA Astrophysics Data System (ADS)
Romanelli, Fabio; Vaccari, Franco; Altin, Giorgio; Panza, Giuliano
2016-04-01
The procedure we developed, and applied to a few relevant cases, leads to the seismic verification of a building by: a) use of a scenario-based neodeterministic approach (NDSHA) for the calculation of the seismic input, and b) control of the numerical modeling of an existing building using free-vibration measurements of the real structure. The key point of this approach is the close collaboration of the seismologist and the civil engineer, from the definition of the seismic input to the monitoring of the building's response in the calculation phase. The vibrometry study allows the engineer to adjust the computational model in the direction suggested by the experimental result of a physical measurement. Once the model has been calibrated by vibrometric analysis, one can select in the design spectrum the proper range of periods of interest for the structure. Then, the realistic values of spectral acceleration, which include the appropriate amplification obtained through the modeling of a "scenario" input applied to the final model, can be selected. Generally, but not necessarily, the "scenario" spectra lead to higher accelerations than those deduced from the spectra of the national codes (i.e. NTC 2008 for Italy). The task of the verifying engineer is to ensure that the outcome of the verification is conservative and realistic. We show some examples of the application of the procedure to relevant buildings (e.g. schools) of the Trieste Province. The adoption of the scenario input has, in most cases, increased the number of critical elements that must be taken into account in the design of reinforcements. However, the higher cost associated with the increased number of elements to reinforce is reasonable, especially considering the important reduction of the risk level.
The ShakeOut earthquake scenario: Verification of three simulation sets
Bielak, J.; Graves, R.W.; Olsen, K.B.; Taborda, R.; Ramirez-Guzman, L.; Day, S.M.; Ely, G.P.; Roten, D.; Jordan, T.H.; Maechling, P.J.; Urbanic, J.; Cui, Y.; Juve, G.
2010-01-01
This paper presents a verification of three simulations of the ShakeOut scenario, an Mw 7.8 earthquake on a portion of the San Andreas fault in southern California, conducted by three different groups at the Southern California Earthquake Center using the SCEC Community Velocity Model for this region. We conducted two simulations using the finite difference method, and one by the finite element method, and performed qualitative and quantitative comparisons between the corresponding results. The results are in good agreement with each other; only small differences occur, both in amplitude and phase, between the various synthetics at ten observation points located near and away from the fault, as far as 150 km from it. Using an available goodness-of-fit criterion, all the comparisons scored above 8, with most above 9.2. This score would be regarded as excellent if the measurements were between recorded and synthetic seismograms. We also report results of comparisons based on time-frequency misfit criteria. Results from these two criteria can be used for calibrating the two methods for comparing seismograms. In those cases in which noticeable discrepancies occurred between the seismograms generated by the three groups, we found that they were the product of inherent characteristics of the various numerical methods used and their implementations. In particular, we found that the major source of discrepancy lies in the difference between mesh and grid representations of the same material model. Overall, however, even the largest differences in the synthetic seismograms are small. Thus, given the complexity of the simulations used in this verification, it appears that the three schemes are consistent, reliable and sufficiently accurate and robust for use in future large-scale simulations. © 2009 The Authors. Journal compilation © 2009 RAS.
NASA Astrophysics Data System (ADS)
Ament, F.; Weusthoff, T.; Arpagaus, M.; Rotach, M.
2009-04-01
The main aim of the WWRP Forecast Demonstration Project MAP D-PHASE is to demonstrate the performance of today's models in forecasting heavy precipitation and flood events in the Alpine region. To this end, an end-to-end, real-time forecasting system was installed and operated during the D-PHASE Operations Period from June to November 2007. This system includes 30 numerical weather prediction models (deterministic as well as ensemble systems) operated by weather services and research institutes, which issue alerts if predicted precipitation accumulations exceed critical thresholds. In addition to the real-time alerts, all relevant model fields of these simulations are stored in a central data archive. This comprehensive data set allows a detailed assessment of today's quantitative precipitation forecast (QPF) performance in the Alpine region. We will present results of QPF verification against Swiss radar and rain gauge data, both from a qualitative point of view, in terms of alerts, and from a quantitative perspective, in terms of precipitation rate. Various influencing factors such as lead time, accumulation time, selection of warning thresholds, and bias corrections will be discussed. In addition to traditional verification of area-average precipitation amounts, the ability of the models to predict the correct precipitation statistics without requiring a point-to-point match will be assessed using modern fuzzy verification techniques. Both analyses reveal significant advantages of deep-convection-resolving models compared to coarser models with parameterized convection. An intercomparison of the model forecasts themselves reveals remarkably high variability between different models, making it worthwhile to evaluate the potential of a multi-model ensemble. Various multi-model ensemble strategies will be tested by combining D-PHASE models into virtual ensemble systems.
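One widely used fuzzy verification technique is the Fractions Skill Score (FSS), which compares fractional event coverage in spatial neighborhoods instead of requiring point-to-point matches. The sketch below runs on synthetic fields and is illustrative only; the verification setup used in D-PHASE may differ in detail.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Fractions Skill Score: forecasts are rewarded for putting rain in
# roughly the right place. FSS -> 1 for a perfect forecast, -> 0 for no
# overlap at the chosen scale.

def fss(forecast, observed, threshold, window):
    fo = (forecast >= threshold).astype(float)
    ob = (observed >= threshold).astype(float)
    # fractional event coverage in window x window neighborhoods
    pf = uniform_filter(fo, size=window, mode="constant")
    po = uniform_filter(ob, size=window, mode="constant")
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

rng = np.random.default_rng(1)
obs = rng.gamma(0.5, 2.0, size=(200, 200))    # synthetic "radar" field
fcst = np.roll(obs, 5, axis=0)                # same field, displaced

for win in (1, 5, 25):
    print(f"window {win:2d}: FSS = {fss(fcst, obs, 5.0, win):.3f}")
```

As expected for a displaced but otherwise correct forecast, the score improves as the neighborhood grows, which is exactly the behavior that makes neighborhood scores useful for convection-resolving models.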
Morris, K
2017-06-01
The dose of radiotherapy is often verified by measuring the dose of radiation at specific points within a phantom. The presence of high-density implant materials such as titanium, however, may cause complications both during calculation and delivery of the dose. Numerous studies have reported photon/electron backscatter and alteration of the dose by high-density implants, but we know of no evidence of a dosimetry phantom that incorporates high density implants or fixtures. The aim of the study was to design and manufacture a tissue-equivalent head phantom for use in verification of the dose in radiotherapy using a combination of traditional laboratory materials and techniques and 3-dimensional technology that can incorporate titanium maxillofacial devices. Digital designs were used together with Mimics® 18.0 (Materialise NV) and FreeForm® software. DICOM data were downloaded and manipulated into the final pieces of the phantom mould. Three-dimensional digital objects were converted into STL files and exported for additional stereolithography. Phantoms were constructed in four stages: material testing and selection, design of a 3-dimensional mould, manufacture of implants, and final fabrication of the phantom using traditional laboratory techniques. Three tissue-equivalent materials were found and used to successfully manufacture a suitable phantom with interchangeable sections that contained three versions of titanium maxillofacial implants. Maxillofacial and other materials can be used to successfully construct a head phantom with interchangeable titanium implant sections for use in verification of doses of radiotherapy. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
The effect of different methods to compute N on estimates of mixing in stratified flows
NASA Astrophysics Data System (ADS)
Fringer, Oliver; Arthur, Robert; Venayagamoorthy, Subhas; Koseff, Jeffrey
2017-11-01
The background stratification is typically well defined in idealized numerical models of stratified flows, although it is more difficult to define in observations. This may have important ramifications for estimates of mixing which rely on knowledge of the background stratification against which turbulence must work to mix the density field. Using direct numerical simulation data of breaking internal waves on slopes, we demonstrate a discrepancy in ocean mixing estimates depending on the method in which the background stratification is computed. Two common methods are employed to calculate the buoyancy frequency N, namely a three-dimensionally resorted density field (often used in numerical models) and a locally-resorted vertical density profile (often used in the field). We show that how N is calculated has a significant effect on the flux Richardson number Rf, which is often used to parameterize turbulent mixing, and the turbulence activity number Gi, which leads to errors when estimating the mixing efficiency using Gi-based parameterizations. Supported by ONR Grant N00014-08-1-0904 and LLNL Contract DE-AC52-07NA27344.
How we compute N matters to estimates of mixing in stratified flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arthur, Robert S.; Venayagamoorthy, Subhas K.; Koseff, Jeffrey R.
We know that most commonly used models for turbulent mixing in the ocean rely on a background stratification against which turbulence must work to stir the fluid. While this background stratification is typically well defined in idealized numerical models, it is more difficult to capture in observations. Here, a potential discrepancy in ocean mixing estimates due to the chosen calculation of the background stratification is explored using direct numerical simulation data of breaking internal waves on slopes. Two different methods for computing the buoyancy frequency N, one based on a three-dimensionally sorted density field (often used in numerical models) and the other based on locally sorted vertical density profiles (often used in the field), are used to quantify the effect of N on turbulence quantities. It is shown that how N is calculated changes not only the flux Richardson number R_f, which is often used to parameterize turbulent mixing, but also the turbulence activity number or Gibson number Gi, leading to potential errors in estimates of the mixing efficiency using Gi-based parameterizations.
How we compute N matters to estimates of mixing in stratified flows
Arthur, Robert S.; Venayagamoorthy, Subhas K.; Koseff, Jeffrey R.; ...
2017-10-13
We know that most commonly used models for turbulent mixing in the ocean rely on a background stratification against which turbulence must work to stir the fluid. While this background stratification is typically well defined in idealized numerical models, it is more difficult to capture in observations. Here, a potential discrepancy in ocean mixing estimates due to the chosen calculation of the background stratification is explored using direct numerical simulation data of breaking internal waves on slopes. Two different methods for computing the buoyancy frequency N, one based on a three-dimensionally sorted density field (often used in numerical models) and the other based on locally sorted vertical density profiles (often used in the field), are used to quantify the effect of N on turbulence quantities. It is shown that how N is calculated changes not only the flux Richardson number R_f, which is often used to parameterize turbulent mixing, but also the turbulence activity number or Gibson number Gi, leading to potential errors in estimates of the mixing efficiency using Gi-based parameterizations.
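The two ways of computing N described in the records above can be sketched on a synthetic density field: a global resort of the full field versus a local resort of each vertical column, each differentiated to give N^2 = -(g/rho0) d(rho)/dz. The field, constants, and equal-volume assumption below are placeholders, not the DNS data.

```python
import numpy as np

# Background buoyancy frequency from (a) a globally resorted 3-D density
# field and (b) locally resorted vertical columns.

g, rho0 = 9.81, 1000.0
nz, nx = 64, 128
z = np.linspace(0.0, 10.0, nz)   # height above the bottom (m)
rng = np.random.default_rng(2)
# stably stratified mean profile plus overturning-like perturbations
rho = (1028.0 - 0.5 * z)[:, None] + rng.normal(0.0, 0.3, (nz, nx))

def n2_from_profile(rho_prof, z):
    """N^2 = -(g/rho0) * d(rho)/dz for a sorted background profile."""
    return -(g / rho0) * np.gradient(rho_prof, z)

# (a) global resort (assumes equal cell volumes): heaviest fluid at the
# bottom, one background profile for the whole domain
global_bg = np.sort(rho.ravel())[::-1].reshape(nz, nx).mean(axis=1)
n2_global = n2_from_profile(global_bg, z)

# (b) local resort: each column sorted independently, then averaged
local_bg = np.sort(rho, axis=0)[::-1].mean(axis=1)
n2_local = n2_from_profile(local_bg, z)

print(f"mean N^2 (global resort): {n2_global.mean():.2e} 1/s^2")
print(f"mean N^2 (local resort):  {n2_local.mean():.2e} 1/s^2")
```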
Inertia-gravity wave radiation from the elliptical vortex in the f-plane shallow water system
NASA Astrophysics Data System (ADS)
Sugimoto, Norihiko
2017-04-01
Inertia-gravity wave (IGW) radiation from an elliptical vortex is investigated in the f-plane shallow water system. The far field of the IGW is analytically derived for the case of an almost circular Kirchhoff vortex with a small aspect ratio. Cyclone-anticyclone asymmetry appears at finite values of the Rossby number (Ro), caused by the source originating in the Coriolis acceleration. While the intensity of IGWs from the cyclone monotonically decreases as f increases, that from the anticyclone increases with f for relatively small f and has a local maximum at intermediate f. A numerical experiment is conducted on a model using a spectral method in an unbounded domain. The numerical results agree quite well with the analytical ones for elliptical vortices with small aspect ratios, implying that the derived analytical forms are useful for the verification of numerical models. For elliptical vortices with larger aspect ratios, however, significant deviation from the analytical estimates appears: the intensity of IGWs radiated in the numerical simulation is larger than that estimated analytically, because the source of IGWs is amplified during the time evolution as the shape of the vortex changes from an ideal ellipse to an elongated shape with filaments. Nevertheless, cyclone-anticyclone asymmetry similar to the analytical estimate appears over the whole range of aspect ratios, suggesting that this asymmetry is a robust feature.
Karaton, Muhammet
2014-01-01
A beam-column element based on the Euler-Bernoulli beam theory is investigated for the nonlinear dynamic analysis of reinforced concrete (RC) structural elements. The stiffness matrix of this element is obtained using the rigidity method. A solution technique that includes a nonlinear dynamic substructure procedure is developed for dynamic analyses of RC frames. A predictor-corrector form of the Bossak-α method is applied as the dynamic integration scheme. Experimental data for an RC column element are compared with numerical results obtained from the proposed solution technique to verify the numerical solutions. Furthermore, nonlinear cyclic analysis results for a portal reinforced concrete frame are obtained to compare the proposed solution technique with a fibre element based on the flexibility method. In addition, seismic damage analyses of an 8-story RC frame structure with a soft story are investigated for cases of lumped/distributed mass and load. Damage regions, propagation, and intensities according to both approaches are examined. PMID:24578667
Park, BuSik; Neuberger, Thomas; Webb, Andrew G.; Bigler, Don C.; Collins, Christopher M.
2009-01-01
A comparison of methods to decrease RF power dissipation and related heating in conductive samples using passive conductors surrounding a sample in a solenoid coil is presented. Full-Maxwell finite difference time domain numerical calculations were performed to evaluate the effect of the passive conductors by calculating conservative and magnetically-induced electric field and magnetic field distributions. To validate the simulation method, experimental measurements of temperature increase were conducted using a solenoidal coil (diameter 3 mm), a saline sample (10 mM NaCl) and passive copper shielding wires (50 μm diameter). The temperature increase was 58% lower with the copper wires present for several different input powers to the coil. This was in good agreement with simulation for the same geometry, which indicated 57% lower power dissipated in the sample with conductors present. Simulations indicate that some designs should be capable of reducing temperature increase by more than 85%. PMID:19879784
Simplified and refined structural modeling for economical flutter analysis and design
NASA Technical Reports Server (NTRS)
Ricketts, R. H.; Sobieszczanski, J.
1977-01-01
A coordinated use of two finite-element models of different levels of refinement is presented to reduce the computer cost of the repetitive flutter analyses commonly encountered in structural resizing to meet flutter requirements. One model, termed the refined model (RM), represents the high degree of detail needed for strength-sizing and flutter analysis of an airframe. The other model, called the simplified model (SM), has a much smaller number of elements and degrees of freedom. A systematic method of deriving an SM from a given RM is described. The method consists of judgmental and numerical operations that make the stiffness and mass of the SM elements equivalent to those of the corresponding substructures of the RM. The structural data are automatically transferred between the two models. The bulk of the analysis is performed on the SM, with periodic verification carried out by analysis of the RM. In a numerical example of a supersonic cruise aircraft with an arrow wing, this approach permitted substantial savings in computer costs and faster job turn-around.
Mars Exploration Rover Terminal Descent Mission Modeling and Simulation
NASA Technical Reports Server (NTRS)
Raiszadeh, Behzad; Queen, Eric M.
2004-01-01
Because of NASA's increased reliance on simulation for successful interplanetary missions, the MER mission has developed a detailed EDL trajectory modeling and simulation capability. This paper summarizes how the MER EDL sequence of events is modeled, the verification of the methods used, and the inputs. The simulation is built upon a multibody parachute trajectory simulation tool, developed in POST II, that accurately simulates the trajectory of multiple vehicles in flight with interacting forces. In this model the parachute and the suspended bodies are treated as 6-Degree-of-Freedom (6 DOF) bodies. The terminal descent phase of the mission consists of several Entry, Descent, and Landing (EDL) events, such as parachute deployment, heatshield separation, deployment of the lander from the backshell, deployment of the airbags, RAD firings, TIRS firings, etc. For an accurate, reliable simulation, these events need to be modeled seamlessly and robustly so that the simulations remain numerically stable during Monte Carlo runs. This paper also summarizes how the events have been modeled, the numerical issues, and the modeling challenges.
Höfler, K; Schwarzer, S
2000-06-01
Building on an idea of Fogelson and Peskin [J. Comput. Phys. 79, 50 (1988)], we describe the implementation and verification of a simulation technique for systems of non-Brownian particles in fluids at Reynolds numbers up to about 20 on the particle scale. This direct simulation technique fills a gap between simulations in the viscous regime and high-Reynolds-number modeling. It combines sufficient computational accuracy with numerical efficiency and allows studies of several thousand, in principle arbitrarily shaped, extended and hydrodynamically interacting particles on regular workstations. We verify the algorithm in two and three dimensions for (i) single falling particles and (ii) a fluid flowing through a bed of fixed spheres. In the context of sedimentation, we compute the volume-fraction dependence of the mean sedimentation velocity. The results are compared with experimental and other numerical results in both the viscous and inertial regimes, and we find very satisfactory agreement.
Numerical proof of stability of roll waves in the small-amplitude limit for inclined thin film flow
NASA Astrophysics Data System (ADS)
Barker, Blake
2014-10-01
We present a rigorous numerical proof based on interval arithmetic computations categorizing the linearized and nonlinear stability of periodic viscous roll waves of the KdV-KS equation modeling weakly unstable flow of a thin fluid film on an incline in the small-amplitude KdV limit. The argument proceeds by verification of a stability condition derived by Bar-Nepomnyashchy and Johnson-Noble-Rodrigues-Zumbrun involving inner products of various elliptic functions arising through the KdV equation. One key point in the analysis is a bootstrap argument balancing the extremely poor sup norm bounds for these functions against the extremely good convergence properties for analytic interpolation in order to obtain a feasible computation time. Another is the way of handling analytic interpolation in several variables by a two-step process carving up the parameter space into manageable pieces for rigorous evaluation. These and other general aspects of the analysis should serve as blueprints for more general analyses of spectral stability.
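The rigorous-enclosure ingredient of such computer-assisted proofs can be sketched with a toy outward-rounded interval class evaluating an inner product. Real interval packages control rounding modes exactly; the ulp-based widening below is a simplified stand-in for that machinery.

```python
import math

class Interval:
    """Toy outward-rounded interval; a stand-in for a rigorous package."""

    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    @staticmethod
    def _outward(lo, hi):
        # widen by one ulp each side so the true result stays enclosed
        return Interval(lo - math.ulp(abs(lo)), hi + math.ulp(abs(hi)))

    def __add__(self, other):
        return Interval._outward(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval._outward(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo!r}, {self.hi!r}]"

def dot(xs, ys):
    """Enclosure of the inner product of two interval vectors."""
    acc = Interval(0.0)
    for x, y in zip(xs, ys):
        acc = acc + x * y
    return acc

xs = [Interval(0.1), Interval(0.2), Interval(0.3)]
ys = [Interval(1.0), Interval(-2.0), Interval(3.0)]
print("enclosure of <x, y> =", dot(xs, ys))
```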
NASA Technical Reports Server (NTRS)
Pierson, W. J.; Salfi, R. E.
1978-01-01
Significant wave heights estimated from the shape of the return pulse waveform of the altimeter on GEOS-3, for forty-four orbit segments obtained during 1975 and 1976, are compared with the significant wave heights specified by the Spectral Ocean Wave Model (SOWM), the presently operational numerical wave forecasting model at the Fleet Numerical Weather Central. Except for a number of orbit segments with poor agreement and larger errors, the SOWM specifications tended to be biased from 0.5 to 1.0 meters too low and to have RMS errors of 1.0 to 1.4 meters. The less frequent larger errors can be attributed to poor wind data for some parts of the Northern Hemisphere oceans. The bias can be attributed to the somewhat too light winds used to generate the waves in the model. Other sources of error are identified in the equatorial and trade wind areas.
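Bias and RMS error, the two summary statistics quoted above, are computed as follows; the arrays are hypothetical collocated model-altimeter pairs, not the GEOS-3 data.

```python
import numpy as np

# Bias and RMS error of model significant wave height against
# altimeter-derived values (metres); a negative bias means the model
# runs too low, as reported above.

swh_altimeter = np.array([2.1, 3.4, 1.8, 4.2, 2.9, 5.0])
swh_model = np.array([1.5, 2.6, 1.4, 3.3, 2.2, 4.1])

bias = np.mean(swh_model - swh_altimeter)
rmse = np.sqrt(np.mean((swh_model - swh_altimeter) ** 2))
print(f"bias = {bias:+.2f} m, RMS error = {rmse:.2f} m")
```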
Development of hybrid method for the prediction of underwater propeller noise
NASA Astrophysics Data System (ADS)
Seol, Hanshin; Suh, Jung-Chun; Lee, Soogab
2005-11-01
Noise reduction and control is an important problem for the performance of underwater acoustic systems and for the habitability of passenger ships for crew and passengers. Furthermore, sound generated by a propeller is critical in underwater detection and is often related to the survivability of the vessel, especially for military purposes. This paper presents a numerical study on the non-cavitating and blade-sheet-cavitation noise of an underwater propeller. A brief summary of the numerical method with verification and results is presented. The noise is predicted using a time-domain acoustic analogy. The flow field is analyzed with a potential-based panel method, and the time-dependent pressure and sheet cavity volume data are then used as the input to the Ffowcs Williams-Hawkings formulation to predict the far-field acoustics. Noise characteristics are presented according to noise sources and conditions. Through this study, the dominant noise source of the underwater propeller is identified, providing a basis for proper noise control strategies.
NASA Astrophysics Data System (ADS)
Lui, E. W.; Palanisamy, S.; Dargusch, M. S.; Xia, K.
2017-12-01
The oxide dissolution and oxygen diffusion during annealing of Ti-6Al-4V solid-state recycled from machining chips by equal-channel angular pressing (ECAP) have been investigated using nanoindentation and numerical modeling. The hardness profile from nanoindentation was converted into the oxygen concentration distribution using the Fleischer and Friedel model. An iterative fitting method was then employed to revise the ideal model proposed previously, leading to correct predictions of the oxide dissolution times and oxygen concentration profiles and verifying nanoindentation as an effective method for measuring local oxygen concentrations. Recrystallization started at the prior oxide boundaries, where local strains were high from the severe plastic deformation incurred in the ECAP recycling process, forming a band of ultrafine grains whose growth was retarded by solute drag owing to the high oxygen concentrations. The recrystallized fine-grained region would advance with time to eventually replace the lamellar structure formed during ECAP.
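The diffusion side of such an analysis is often approximated by the textbook semi-infinite solution c(x, t) = c0 + (cs - c0) erfc(x / (2 sqrt(D t))). The sketch below evaluates this profile with hypothetical values of D, cs, and anneal time, not the paper's fitted parameters.

```python
import numpy as np
from scipy.special import erfc

# One-dimensional diffusion from a surface (prior oxide boundary) held
# at concentration c_s into a semi-infinite matrix at c0.

D = 1.0e-16          # oxygen diffusivity (m^2/s), hypothetical
c_s, c0 = 3.0, 0.2   # surface and far-field oxygen contents (wt.%)
t = 4.0 * 3600.0     # anneal time (s), hypothetical

x = np.linspace(0.0, 2.0e-6, 9)   # depth from the oxide boundary (m)
c = c0 + (c_s - c0) * erfc(x / (2.0 * np.sqrt(D * t)))

for xi, ci in zip(x, c):
    print(f"x = {xi * 1e6:5.2f} um : c = {ci:.2f} wt.%")
```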
A Comparison of Computed and Experimental Flowfields of the RAH-66 Helicopter
NASA Technical Reports Server (NTRS)
vanDam, C. P.; Budge, A. M.; Duque, E. P. N.
1996-01-01
This paper compares and evaluates numerical and experimental flowfields of the RAH-66 Comanche helicopter. The numerical predictions were obtained by solving the Thin-Layer Navier-Stokes equations. The computations use actuator disks to investigate the main- and tail-rotor effects upon the fuselage flowfield. The wind tunnel experiment was performed in the 14 x 22 foot facility located at NASA Langley. A suite of flow conditions, rotor thrusts, and fuselage-rotor-tail configurations was tested. In addition, the tunnel model and the computational geometry were based upon the same CAD definition. Computations were performed for an isolated-fuselage configuration and for a rotor-on configuration. Comparisons between the measured and computed surface pressures show areas of good correlation and some discrepancies; local areas of poor computational grid quality and local geometry differences account for the discrepancies. These calculations demonstrate the use of advanced computational fluid dynamics methodologies on a flight vehicle currently under development and serve as an important verification for future computed results.
NASA Astrophysics Data System (ADS)
Wang, Fang; Liu, Chang; Liu, Xiaoning; Niu, Tiaoming; Wang, Jing; Mei, Zhonglei; Qin, Jiayong
2017-06-01
In this paper, a flat, incident-angle-independent absorbing material is proposed and numerically verified in the optical spectrum. A homogeneous and anisotropic dielectric slab acting as a non-reflecting layer is first reviewed, and a feasible realization strategy for the slab is then given using layered isotropic materials. When the loss components of the constitutive materials are non-zero, the slab works as an angle-insensitive absorbing layer, and the absorption rate increases with the losses. As numerical verification, the field distributions of a metallic cylinder and a triangular metallic object, each covered by the designed absorbing layer, are demonstrated. The simulation results show that the designed absorbing layer can efficiently absorb the incident waves independently of the incident angle at the operating frequency. This homogeneous slab can be used in one- and two-dimensional situations for the realization of an invisibility cloak, a carpet cloak, and even a skin cloak, if it is used to conformally cover target objects.
McCarthy, J. Daniel; Barnes, Lianne N.; Alvarez, Bryan D.; Caplovitz, Gideon Paul
2013-01-01
In grapheme-color synesthesia, graphemes (e.g., numbers or letters) evoke color experiences. It is generally reported that the opposite is not true: colors will not generate experiences of graphemes or their associated information. However, recent research has provided evidence that colors can implicitly elicit symbolic representations of associated graphemes. Here, we examine whether these representations can be cognitively accessed. Using a mathematical verification task replacing graphemes with color patches, we find that synesthetes can verify such problems with colors as accurately as with graphemes. Doing so, however, takes time: ~250 ms per color. Moreover, we find minimal reaction-time switch costs for switching between computing with graphemes and with colors. This demonstrates that, given specific task demands, synesthetes can cognitively access numerical information elicited by physical colors, and they do so as accurately as with graphemes. We discuss these results in the context of possible cognitive strategies used to access the information. PMID:24100131
The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.
Kumar, Mohit; Yadav, Shiv Prasad
2012-07-01
In this paper, a new approach to intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical system component affecting system reliability. A weakest-t-norm-based intuitionistic fuzzy fault-tree analysis is presented to calculate the fault intervals of system components by integrating experts' knowledge and experience, expressed as the possibility of failure of bottom events. It applies fault-tree analysis, α-cuts of intuitionistic fuzzy sets, and Tω (the weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. For numerical verification, a malfunction of the weapon system "automatic gun" is presented as an example, and the result of the proposed method is compared with those of existing reliability analysis approaches. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
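A sketch of the building blocks named above: the weakest t-norm Tω (the drastic product), α-cuts of plain triangular fuzzy numbers, and interval evaluation of AND/OR gates. The paper's full construction on triangular intuitionistic fuzzy sets is richer than this; the event possibilities below are hypothetical.

```python
def t_omega(a, b):
    """Weakest t-norm (drastic product): nonzero only if one input is 1."""
    if a == 1.0:
        return b
    if b == 1.0:
        return a
    return 0.0

def alpha_cut(tri, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number (l, m, r)."""
    l, m, r = tri
    return (l + alpha * (m - l), r - alpha * (r - m))

def gate_and(q1, q2):   # top event fails only if both basic events fail
    return (q1[0] * q2[0], q1[1] * q2[1])

def gate_or(q1, q2):    # top event fails if at least one basic event fails
    return (1 - (1 - q1[0]) * (1 - q2[0]),
            1 - (1 - q1[1]) * (1 - q2[1]))

# hypothetical failure possibilities (l, m, r) of two bottom events
event_a = (0.02, 0.05, 0.09)
event_b = (0.01, 0.03, 0.06)

for alpha in (0.0, 0.5, 1.0):
    qa, qb = alpha_cut(event_a, alpha), alpha_cut(event_b, alpha)
    lo, hi = gate_or(qa, qb)
    print(f"alpha={alpha:.1f}: top-event fault interval [{lo:.4f}, {hi:.4f}]")

print("T_omega(0.7, 1.0) =", t_omega(0.7, 1.0),
      "; T_omega(0.7, 0.9) =", t_omega(0.7, 0.9))
```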
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.; Nwadike, E. V.
1980-01-01
A one-dimensional model for studying the thermal dynamics of cooling lakes was developed and verified. The model is essentially a set of partial differential equations which are solved by finite difference methods. The model includes the effects of variation of area with depth, surface heating due to solar radiation absorbed at the upper layer, and internal heating due to the transmission of solar radiation to the sub-surface layers. The exchange of mechanical energy between the lake and the atmosphere is included through the coupling of thermal diffusivity and wind speed. The effects of discharge and intake by power plants are also included. The numerical model was calibrated by applying it to Cayuga Lake and then verified through a long-term simulation using the Lake Keowee database. The comparison between measured and predicted vertical temperature profiles over the nine years is good. The physical limnology of Lake Keowee is presented through a set of graphical representations of the measured database.
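A minimal explicit finite-difference sketch of such a vertical heat balance, with surface heating applied in the top layer and exponentially decaying subsurface solar absorption. The grid, coefficients, and forcing are hypothetical; the actual model additionally couples the diffusivity to wind speed and accounts for depth-varying area and plant discharge/intake.

```python
import numpy as np

# Explicit scheme for dT/dt = K * d2T/dz2 + q(z), stable here because
# K * dt / dz**2 = 6e-3 << 0.5.

nz, dz, dt = 50, 1.0, 600.0          # layers, layer thickness (m), step (s)
K = 1.0e-5                           # thermal diffusivity (m^2/s)
T = np.full(nz, 10.0)                # initial temperature (deg C)
q_surface = 2.0e-5                   # surface heating of top layer (K/s)
eta = 0.5                            # light extinction coefficient (1/m)
z = (np.arange(nz) + 0.5) * dz
q_solar = 1.0e-5 * np.exp(-eta * z)  # subsurface solar absorption (K/s)

for _ in range(int(86400 / dt)):     # integrate one day
    lap = np.zeros(nz)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
    T += dt * (K * lap + q_solar)    # diffusion + internal heating
    T[0] += dt * q_surface           # surface flux into the top layer

print(f"surface T after one day: {T[0]:.2f} C, bottom T: {T[-1]:.2f} C")
```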
Detailed numerical simulations of laser cooling processes
NASA Technical Reports Server (NTRS)
Ramirez-Serrano, J.; Kohel, J.; Thompson, R.; Yu, N.
2001-01-01
We developed a detailed semiclassical numerical code for the forces applied to atoms in optical and magnetic fields, to increase understanding of the different roles that light, atomic collisions, background pressure, and the number of particles play in experiments with laser-cooled and trapped atoms.