Lance, Blake W.; Smith, Barton L.
2016-06-23
Transient convection has been investigated experimentally for the purpose of providing Computational Fluid Dynamics (CFD) validation benchmark data. A specialized facility for validation benchmark experiments called the Rotatable Buoyancy Tunnel was used to acquire thermal and velocity measurements of flow over a smooth, vertical heated plate. The initial condition was forced convection downward with subsequent transition to mixed convection, ending with natural convection upward after a flow reversal. Data acquisition through the transient was repeated for ensemble-averaged results. With simple flow geometry, validation data were acquired at the benchmark level. All boundary conditions (BCs) were measured and their uncertainties quantified.more » Temperature profiles on all four walls and the inlet were measured, as well as as-built test section geometry. Inlet velocity profiles and turbulence levels were quantified using Particle Image Velocimetry. System Response Quantities (SRQs) were measured for comparison with CFD outputs and include velocity profiles, wall heat flux, and wall shear stress. Extra effort was invested in documenting and preserving the validation data. Details about the experimental facility, instrumentation, experimental procedure, materials, BCs, and SRQs are made available through this paper. As a result, the latter two are available for download and the other details are included in this work.« less
A One-group, One-dimensional Transport Benchmark in Cylindrical Geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barry Ganapol; Abderrafi M. Ougouag
A 1-D, 1-group computational benchmark in cylindrical geometry is described. This neutron transport benchmark is useful for evaluating reactor concepts that possess azimuthal symmetry such as a pebble-bed reactor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, Timothy P.; Martz, Roger L.; Kiedrowski, Brian C.
New unstructured mesh capabilities in MCNP6 (developmental version during summer 2012) show potential for conducting multi-physics analyses by coupling MCNP to a finite element solver such as Abaqus/CAE[2]. Before these new capabilities can be utilized, the ability of MCNP to accurately estimate eigenvalues and pin powers using an unstructured mesh must first be verified. Previous work to verify the unstructured mesh capabilities in MCNP was accomplished using the Godiva sphere [1], and this work attempts to build on that. To accomplish this, a criticality benchmark and a fuel assembly benchmark were used for calculations in MCNP using both the Constructive Solid Geometry (CSG) native to MCNP and the unstructured mesh geometry generated using Abaqus/CAE. The Big Ten criticality benchmark [3] was modeled due to its geometry being similar to that of a reactor fuel pin. The C5G7 3-D Mixed Oxide (MOX) Fuel Assembly Benchmark [4] was modeled to test the unstructured mesh capabilities on a reactor-type problem.
Simulations of hypervelocity impacts for asteroid deflection studies
NASA Astrophysics Data System (ADS)
Heberling, T.; Ferguson, J. M.; Gisler, G. R.; Plesko, C. S.; Weaver, R.
2016-12-01
The possibility of kinetic-impact deflection of threatening near-earth asteroids will be tested for the first time in the proposed AIDA (Asteroid Impact Deflection Assessment) mission, involving two independent spacecraft, NASA's DART (Double Asteroid Redirection Test) and ESA's AIM (Asteroid Impact Mission). The impact of the DART spacecraft onto the secondary of the binary asteroid 65803 Didymos, at a speed of 5 to 7 km/s, is expected to alter the mutual orbit by an observable amount. The velocity imparted to the secondary depends on the geometry and dynamics of the impact, and especially on the momentum enhancement factor, conventionally called beta. We use the Los Alamos hydrocodes Rage and Pagosa to estimate beta in laboratory-scale benchmark experiments and in the large-scale asteroid deflection test. Simulations are performed in two and three dimensions, using a variety of equations of state and strength models for both the lab-scale and large-scale cases. This work is being performed as part of a systematic benchmarking study for the AIDA mission that includes other hydrocodes.
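For orientation on the quantity discussed above, the momentum enhancement factor is conventionally defined so that the momentum delivered to the target exceeds the impactor momentum by the ejecta contribution; the idealized head-on relation below is a standard textbook form, not a result from the cited simulations.

\[
\beta = 1 + \frac{p_{\mathrm{ejecta}}}{m\,U}, \qquad \Delta v \approx \beta\,\frac{m\,U}{M},
\]

where m and U are the impactor mass and speed, M is the mass of the target (the secondary), and p_ejecta is the momentum carried off by ejecta escaping the target; beta = 1 corresponds to a perfectly inelastic impact with no ejecta.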
MatchingLand, geospatial data testbed for the assessment of matching methods.
Xavier, Emerson M A; Ariza-López, Francisco J; Ureña-Cámara, Manuel A
2017-12-05
This article presents datasets prepared with the aim of helping the evaluation of geospatial matching methods for vector data. These datasets were built up from mapping data produced by official Spanish mapping agencies. The testbed supplied encompasses the three geometry types: point, line and area. Initial datasets were submitted to geometric transformations in order to generate synthetic datasets. These transformations represent factors that might influence the performance of geospatial matching methods, like the morphology of linear or areal features, systematic transformations, and random disturbance over initial data. We call our 11 GiB benchmark data 'MatchingLand' and we hope it can be useful for the geographic information science research community.
SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output
Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.
2011-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297
SCOUT: a fast Monte-Carlo modeling tool of scintillation camera output†
Hunter, William C J; Barrett, Harrison H.; Muzi, John P.; McDougald, Wendy; MacDonald, Lawrence R.; Miyaoka, Robert S.; Lewellen, Thomas K.
2013-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout of a scintillation camera. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:23640136
I/O-Efficient Scientific Computation Using TPIE
NASA Technical Reports Server (NTRS)
Vengroff, Darren Erik; Vitter, Jeffrey Scott
1996-01-01
In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also with the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if we use a number of disks in parallel this overhead can be all but eliminated.
NASA Astrophysics Data System (ADS)
Sagert, I.; Fann, G. I.; Fattoyev, F. J.; Postnikov, S.; Horowitz, C. J.
2016-05-01
Background: Neutron star and supernova matter at densities just below the nuclear matter saturation density is expected to form a lattice of exotic shapes. These so-called nuclear pasta phases are caused by Coulomb frustration. Their elastic and transport properties are believed to play an important role for thermal and magnetic field evolution, rotation, and oscillation of neutron stars. Furthermore, they can impact neutrino opacities in core-collapse supernovae. Purpose: In this work, we present proof-of-principle three-dimensional (3D) Skyrme Hartree-Fock (SHF) simulations of nuclear pasta with the Multi-resolution ADaptive Numerical Environment for Scientific Simulations (MADNESS). Methods: We perform benchmark studies of 16O, 208Pb, and 238U nuclear ground states and calculate binding energies via 3D SHF simulations. Results are compared with experimentally measured binding energies as well as with theoretically predicted values from an established SHF code. The nuclear pasta simulation is initialized in the so-called waffle geometry as obtained by the Indiana University Molecular Dynamics (IUMD) code. The size of the unit cell is 24 fm with an average density of about ρ = 0.05 fm⁻³, proton fraction of Yp = 0.3, and temperature of T = 0 MeV. Results: Our calculations reproduce the binding energies and shapes of light and heavy nuclei with different geometries. For the pasta simulation, we find that the final geometry is very similar to the initial waffle state. We compare calculations with and without spin-orbit forces. We find that while subtle differences are present, the pasta phase remains in the waffle geometry. Conclusions: Within the MADNESS framework, we can successfully perform calculations of inhomogeneous nuclear matter. By using pasta configurations from IUMD it is possible to explore different geometries and test the impact of self-consistent calculations on the latter.
Evaluation of the Pool Critical Assembly Benchmark with Explicitly-Modeled Geometry using MCNP6
Kulesza, Joel A.; Martz, Roger Lee
2017-03-01
Despite being one of the most widely used benchmarks for qualifying light water reactor (LWR) radiation transport methods and data, no benchmark calculation of the Oak Ridge National Laboratory (ORNL) Pool Critical Assembly (PCA) pressure vessel wall benchmark facility (PVWBF) using MCNP6 with explicitly modeled core geometry exists. As such, this paper provides results for such an analysis. First, a criticality calculation is used to construct the fixed source term. Next, ADVANTG-generated variance reduction parameters are used within the final MCNP6 fixed source calculations. These calculations provide unadjusted dosimetry results using three sets of dosimetry reaction cross sections of varying ages (those packaged with MCNP6, from the IRDF-2002 multi-group library, and from the ACE-formatted IRDFF v1.05 library). These results are then compared to two different sets of measured reaction rates. The comparison agrees in an overall sense within 2% and on a specific reaction- and dosimetry location-basis within 5%. Except for the neptunium dosimetry, the individual foil raw calculation-to-experiment comparisons usually agree within 10% but are typically greater than unity. Finally, in the course of developing these calculations, geometry that has previously not been completely specified is provided herein for the convenience of future analysts.
A call for benchmarking transposable element annotation methods.
Hoen, Douglas R; Hickey, Glenn; Bourque, Guillaume; Casacuberta, Josep; Cordaux, Richard; Feschotte, Cédric; Fiston-Lavier, Anna-Sophie; Hua-Van, Aurélie; Hubley, Robert; Kapusta, Aurélie; Lerat, Emmanuelle; Maumus, Florian; Pollock, David D; Quesneville, Hadi; Smit, Arian; Wheeler, Travis J; Bureau, Thomas E; Blanchette, Mathieu
2015-01-01
DNA derived from transposable elements (TEs) constitutes large parts of the genomes of complex eukaryotes, with major impacts not only on genomic research but also on how organisms evolve and function. Although a variety of methods and tools have been developed to detect and annotate TEs, there are as yet no standard benchmarks-that is, no standard way to measure or compare their accuracy. This lack of accuracy assessment calls into question conclusions from a wide range of research that depends explicitly or implicitly on TE annotation. In the absence of standard benchmarks, toolmakers are impeded in improving their tools, annotators cannot properly assess which tools might best suit their needs, and downstream researchers cannot judge how accuracy limitations might impact their studies. We therefore propose that the TE research community create and adopt standard TE annotation benchmarks, and we call for other researchers to join the authors in making this long-overdue effort a success.
The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example
ERIC Educational Resources Information Center
Steyn, H. J.
2015-01-01
Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…
Uav Cameras: Overview and Geometric Calibration Benchmark
NASA Astrophysics Data System (ADS)
Cramer, M.; Przybilla, H.-J.; Zurhorst, A.
2017-08-01
Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes modified) cameras known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus here is on geometry, which is tightly linked to the process of geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep constant geometry and thus cannot be regarded as metric cameras. Still, some of the commercial systems are quite stable over time, as proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. For such scenarios, close-to-metric UAV cameras may be particularly advantageous. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.
Comment on ‘egs_brachy: a versatile and fast Monte Carlo code for brachytherapy’
NASA Astrophysics Data System (ADS)
Yegin, Gultekin
2018-02-01
In a recent paper, Chamberland et al (2016 Phys. Med. Biol. 61 8214) develop a new Monte Carlo code called egs_brachy for brachytherapy treatments. It is based on EGSnrc, and written in the C++ programming language. In order to benchmark the egs_brachy code, the authors use it in various test case scenarios in which complex geometry conditions exist. Another EGSnrc based brachytherapy dose calculation engine, BrachyDose, is used for dose comparisons. The authors fail to prove that egs_brachy can produce reasonable dose values for brachytherapy sources in a given medium. The dose comparisons in the paper are erroneous and misleading. egs_brachy should not be used in any further research studies unless and until all the potential bugs are fixed in the code.
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
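One of the acceleration techniques mentioned above, replacing a linear search with a binary search in a sorted table (e.g., an energy-grid lookup), can be sketched as follows; the table contents are hypothetical and this is not the ITS Fortran code.

```python
import bisect

# Hypothetical sorted energy grid (MeV), as used in cross-section lookups.
energy_grid = [0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0]

def find_bin_linear(e):
    # O(n): scan until the bracketing interval is found.
    for i in range(len(energy_grid) - 1):
        if energy_grid[i] <= e < energy_grid[i + 1]:
            return i
    return len(energy_grid) - 2

def find_bin_binary(e):
    # O(log n): same result via bisection, the kind of replacement that gives the speed-up.
    i = bisect.bisect_right(energy_grid, e) - 1
    return min(max(i, 0), len(energy_grid) - 2)

assert find_bin_linear(0.7) == find_bin_binary(0.7)
```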
A Methodology for Benchmarking Relational Database Machines,
1984-01-01
user benchmarks is to compare the multiple users to the best-case performance The data for each query classification coll and the performance...called a benchmark. The term benchmark originates from the markers used by surveyors in establishing common reference points for their measure...formatted databases. In order to further simplify the problem, we restrict our study to those DBMs which support the relational model. A survey
Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; ...
2015-06-04
Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
2015-01-01
Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. In addition, the same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies. PMID:26113956
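A minimal sketch of the kind of machine-learning baseline discussed above, pairing a simple bag-of-bonds-style featurization with kernel ridge regression; the molecules, the crude feature construction, and the hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from itertools import combinations
from sklearn.kernel_ridge import KernelRidge

def bag_of_bonds(symbols, coords, charges, max_len=16):
    """Crude bag-of-bonds-style vector: sorted Z_i*Z_j/r_ij terms, zero-padded."""
    terms = []
    for i, j in combinations(range(len(symbols)), 2):
        r = np.linalg.norm(np.asarray(coords[i]) - np.asarray(coords[j]))
        terms.append(charges[symbols[i]] * charges[symbols[j]] / r)
    terms = sorted(terms, reverse=True)[:max_len]
    return np.array(terms + [0.0] * (max_len - len(terms)))

Z = {"H": 1, "C": 6, "O": 8}  # nuclear charges for a toy element set

# Hypothetical training set: (symbols, coordinates in Angstrom, energy in hartree).
molecules = [
    (["O", "H", "H"], [[0, 0, 0], [0.96, 0, 0], [-0.24, 0.93, 0]], -76.4),
    (["C", "H", "H", "H", "H"], [[0, 0, 0], [0.63, 0.63, 0.63],
                                 [-0.63, -0.63, 0.63], [0.63, -0.63, -0.63],
                                 [-0.63, 0.63, -0.63]], -40.5),
]
X = np.array([bag_of_bonds(s, c, Z) for s, c, _ in molecules])
y = np.array([e for _, _, e in molecules])

# Laplacian-kernel ridge regression, a common choice with such descriptors.
model = KernelRidge(kernel="laplacian", alpha=1e-6, gamma=1e-3).fit(X, y)
print(model.predict(X))
```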
NASA Technical Reports Server (NTRS)
Contreras, Michael T.; Peng, Chia-Yen; Wang, Dongdong; Chen, Jiun-Shyan
2012-01-01
A wheel experiencing sinkage and slippage events poses a high risk to rover missions as evidenced by recent mobility challenges on the Mars Exploration Rover (MER) project. Because several factors contribute to wheel sinkage and slippage conditions such as soil composition, large deformation soil behavior, wheel geometry, nonlinear contact forces, terrain irregularity, etc., there are significant benefits to modeling these events to a sufficient degree of complexity. For the purposes of modeling wheel sinkage and slippage at an engineering scale, meshfree finite element approaches enable simulations that capture sufficient detail of wheel-soil interaction while remaining computationally feasible. This study demonstrates some of the large deformation modeling capability of meshfree methods and the realistic solutions obtained by accounting for the soil material properties. A benchmark wheel-soil interaction problem is developed and analyzed using a specific class of meshfree methods called Reproducing Kernel Particle Method (RKPM). The benchmark problem is also analyzed using a commercially available finite element approach with Lagrangian meshing for comparison. RKPM results are comparable to classical pressure-sinkage terramechanics relationships proposed by Bekker-Wong. Pending experimental calibration by future work, the meshfree modeling technique will be a viable simulation tool for trade studies assisting rover wheel design.
Present Status and Extensions of the Monte Carlo Performance Benchmark
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.
2014-06-01
The NEA Monte Carlo Performance benchmark started in 2011 aiming to monitor over the years the abilities to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the contributed results thus far. It shows that reaching a statistical accuracy of 1 % for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common type computer nodes. However, using true supercomputers the speedup of parallel calculations is increasing up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict if the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations and a need is felt for testing other issues than its computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities and to study the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.
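The scale of the history count quoted above can be motivated by simple counting statistics: the relative error of a tally falls roughly as one over the square root of the number of contributing scores. The zone count and scores-per-history assumptions below are illustrative, not the benchmark specification.

```python
# Assumed full-core discretization: ~50,000 fuel pins x 100 axial zones.
n_zones = 50_000 * 100
target_rel_err = 0.01                               # 1% statistical accuracy per zone
scores_needed_per_zone = 1.0 / target_rel_err**2    # ~1e4 effective scores per zone

# Assume each history contributes, on average, one score to one zone,
# spread uniformly over all zones (a crude but illustrative model).
histories = scores_needed_per_zone * n_zones
print(f"~{histories:.1e} histories")                # ~5e10, i.e. tens of billions
```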
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopes, M. L.
2014-07-01
SolCalc is a software suite that computes and displays magnetic fields generated by a three dimensional (3D) solenoid system. Examples of such systems are the Mu2e magnet system and Helical Solenoids for muon cooling systems. SolCalc was originally coded in Matlab, and later upgraded to a compiled version (called MEX) to improve solving speed. Matlab was chosen because its graphical capabilities represent an attractive feature over other computer languages. Solenoid geometries can be created using any text editor or spread sheets and can be displayed dynamically in 3D. Fields are computed from any given list of coordinates. The field distribution on the surfaces of the coils can be displayed as well. SolCalc was benchmarked against a well-known commercial software for speed and accuracy and the results compared favorably.
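For orientation, the kind of field evaluation such a tool performs can be illustrated with the textbook on-axis field of a solenoid approximated as a stack of circular current loops; the geometry and current values below are hypothetical and this sketch is not SolCalc's algorithm.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def bz_on_axis(z, loop_z, radius, current):
    """On-axis axial field of one circular current loop (exact textbook formula)."""
    return MU0 * current * radius**2 / (2.0 * (radius**2 + (z - loop_z)**2) ** 1.5)

# Hypothetical solenoid: 200 turns over 1 m, radius 0.5 m, 1000 A per turn.
loops = np.linspace(0.0, 1.0, 200)
z_axis = np.linspace(-0.5, 1.5, 11)
bz = [sum(bz_on_axis(z, zl, 0.5, 1000.0) for zl in loops) for z in z_axis]
print([f"{b:.3f} T" for b in bz])
```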
Hagen, Espen; Ness, Torbjørn V; Khosrowshahi, Amir; Sørensen, Christina; Fyhn, Marianne; Hafting, Torkel; Franke, Felix; Einevoll, Gaute T
2015-04-30
New, silicon-based multielectrodes comprising hundreds or more electrode contacts offer the possibility to record spike trains from thousands of neurons simultaneously. This potential cannot be realized unless accurate, reliable automated methods for spike sorting are developed, in turn requiring benchmarking data sets with known ground-truth spike times. We here present a general simulation tool for computing benchmarking data for evaluation of spike-sorting algorithms entitled ViSAPy (Virtual Spiking Activity in Python). The tool is based on a well-established biophysical forward-modeling scheme and is implemented as a Python package built on top of the neuronal simulator NEURON and the Python tool LFPy. ViSAPy allows for arbitrary combinations of multicompartmental neuron models and geometries of recording multielectrodes. Three example benchmarking data sets are generated, i.e., tetrode and polytrode data mimicking in vivo cortical recordings and microelectrode array (MEA) recordings of in vitro activity in salamander retinas. The synthesized example benchmarking data mimics salient features of typical experimental recordings, for example, spike waveforms depending on interspike interval. ViSAPy goes beyond existing methods as it includes biologically realistic model noise, synaptic activation by recurrent spiking networks, finite-sized electrode contacts, and allows for inhomogeneous electrical conductivities. ViSAPy is optimized to allow for generation of long time series of benchmarking data, spanning minutes of biological time, by parallel execution on multi-core computers. ViSAPy is an open-ended tool as it can be generalized to produce benchmarking data for arbitrary recording-electrode geometries and with various levels of complexity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valdes, Haydee; Pluhackova, Kristyna; Hobza, Pavel
The performance of a wide range of quantum chemical calculations for the ab initio study of realistic model systems of aromatic-aromatic side chain interactions in proteins (in particular those π-π interactions occurring between adjacent residues along the protein sequence) is here assessed on the phenylalanyl-glycyl-phenylalanine (FGF) tripeptide. Energies and geometries obtained at different levels of theory are compared with CCSD(T)/CBS benchmark energies and RI-MP2/cc-pVTZ benchmark geometries, respectively. Consequently, a protocol of calculation alternative to the very expensive CCSD(T)/CBS is proposed. In addition to this, the preferred orientation of the Phe aromatic side chains is discussed and compared with previous results on the topic.
The LS-STAG immersed boundary/cut-cell method for non-Newtonian flows in 3D extruded geometries
NASA Astrophysics Data System (ADS)
Nikfarjam, F.; Cheny, Y.; Botella, O.
2018-05-01
The LS-STAG method is an immersed boundary/cut-cell method for viscous incompressible flows based on the staggered MAC arrangement for Cartesian grids, where the irregular boundary is sharply represented by its level-set function, resulting in a significant gain in computer resources (wall time, memory usage) compared to commercial body-fitted CFD codes. The 2D version of the LS-STAG method is now well established (Cheny and Botella, 2010), and this paper presents its extension to 3D geometries with translational symmetry in the z direction (hereinafter called 3D extruded configurations). This intermediate step towards the fully 3D implementation can be applied to a wide variety of canonical flows and will be regarded as the keystone for the full 3D solver, since both discretization and implementation issues on distributed memory machines are tackled at this stage of development. The LS-STAG method is then applied to various Newtonian and non-Newtonian flows in 3D extruded geometries (axisymmetric pipe, circular cylinder, duct with an abrupt expansion) for which benchmark results and experimental data are available. The purpose of these investigations is (a) to investigate the formal order of accuracy of the LS-STAG method, (b) to assess the versatility of the method for flow applications at various regimes (Newtonian and shear-thinning fluids, steady and unsteady laminar to turbulent flows), and (c) to compare its performance with well-established numerical methods (body-fitted and immersed boundary methods).
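The level-set representation mentioned above can be sketched in a few lines: the sign of the level-set function at cell corners classifies each Cartesian cell as fluid, solid, or cut. The grid, the circular obstacle, and the sign convention below are assumptions for illustration, not the LS-STAG discretization itself.

```python
import numpy as np

# Level-set for a circular obstacle of radius 0.3 centered at (0.5, 0.5):
# phi < 0 inside the solid, phi > 0 in the fluid (assumed convention).
phi = lambda x, y: np.hypot(x - 0.5, y - 0.5) - 0.3

nx = ny = 8
xs = np.linspace(0.0, 1.0, nx + 1)
ys = np.linspace(0.0, 1.0, ny + 1)

def classify(i, j):
    corners = [phi(xs[i], ys[j]), phi(xs[i + 1], ys[j]),
               phi(xs[i + 1], ys[j + 1]), phi(xs[i], ys[j + 1])]
    if all(c > 0 for c in corners):
        return "fluid"
    if all(c < 0 for c in corners):
        return "solid"
    return "cut"   # the irregular boundary passes through this cell

cells = [[classify(i, j) for i in range(nx)] for j in range(ny)]
print(sum(row.count("cut") for row in cells), "cut cells")
```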
Accountability for Information Flow via Explicit Formal Proof
2009-10-01
macrobenchmarks. The first (called OpenSSL in the table below) unpacks the OpenSSL source code, compiles it and deletes it. The other (called Fuse in...penalty for PCFS as compared to Fuse/Null is approximately 10% for OpenSSL, and 2.5% for Fuse. The difference arises because the OpenSSL benchmark depends...Macrobenchmarks (Benchmark, PCFS, Fuse/Null, Ext3): OpenSSL: 126, 114, 94; Fuse x 5: 79, 77, 70. In summary, assuming a low rate of cache misses, the
U.S. EPA'S ACUTE REFERENCE EXPOSURE METHODOLOGY FOR ACUTE INHALATION EXPOSURES
The US EPA National Center for Environmental Assessment has developed a methodology to derive acute inhalation toxicity benchmarks, called acute reference exposures (AREs), for noncancer effects. The methodology provides guidance for the derivation of chemical-specific benchmark...
3D Modeling of Industrial Heritage Building Using COTSs System: Test, Limits and Performances
NASA Astrophysics Data System (ADS)
Piras, M.; Di Pietra, V.; Visintini, D.
2017-08-01
The role of UAV systems in applied geomatics is continuously increasing in several applications such as inspection, surveying and geospatial data acquisition. This evolution is mainly due to two factors: new technologies and new algorithms for data processing. Regarding technologies, for some years there has been very wide use of commercial UAVs, even COTS (Commercial Off-The-Shelf) systems. Moreover, these UAVs allow oblique images to be acquired easily, giving the possibility to overcome the limitations of the nadir approach related to field of view and occlusions. In order to test the potential and issues of COTS systems, the Italian Society of Photogrammetry and Topography (SIFET) has organised the SBM2017, a benchmark in which everyone can participate in a shared experience. This benchmark, called "Photogrammetry with oblique images from UAV: potentialities and challenges", collects considerations from the users, highlights the potential of these systems, defines the critical aspects and the technological challenges, and compares distinct approaches and software. The case study is the "Fornace Penna" in Scicli (Ragusa, Italy), an inaccessible monument of industrial architecture from the early 1900s. The datasets (images and video) have been acquired from three different UAV systems: Parrot Bebop 2, DJI Phantom 4 and Flytop Flynovex. The aim of this benchmark is to generate the 3D model of the "Fornace Penna", analysing different software, imaging geometries and processing strategies. This paper describes the surveying strategies, the methodologies and five different photogrammetric results obtained (sensor calibration, external orientation, dense point cloud and two orthophotos), using separately the single images and the frames extracted from the video acquired with the DJI system.
High-order continuum kinetic method for modeling plasma dynamics in phase space
Vogman, G. V.; Colella, P.; Shumlak, U.
2014-12-15
Continuum methods offer a high-fidelity means of simulating plasma kinetics. While computationally intensive, these methods are advantageous because they can be cast in conservation-law form, are not susceptible to noise, and can be implemented using high-order numerical methods. Advances in continuum method capabilities for modeling kinetic phenomena in plasmas require the development of validation tools in higher dimensional phase space and an ability to handle non-Cartesian geometries. To that end, a new benchmark for validating Vlasov-Poisson simulations in 3D (x, vx, vy) is presented. The benchmark is based on the Dory-Guest-Harris instability and is successfully used to validate a continuum finite volume algorithm. To address challenges associated with non-Cartesian geometries, unique features of cylindrical phase space coordinates are described. Preliminary results of continuum kinetic simulations in 4D (r, z, vr, vz) phase space are presented.
Experimental Criticality Benchmarks for SNAP 10A/2 Reactor Cores
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krass, A.W.
2005-12-19
This report describes computational benchmark models for nuclear criticality derived from descriptions of the Systems for Nuclear Auxiliary Power (SNAP) Critical Assembly (SCA)-4B experimental criticality program conducted by Atomics International during the early 1960's. The selected experimental configurations consist of fueled SNAP 10A/2-type reactor cores subject to varied conditions of water immersion and reflection under experimental control to measure neutron multiplication. SNAP 10A/2-type reactor cores are compact volumes fueled and moderated with the hydride of highly enriched uranium-zirconium alloy. Specifications for the materials and geometry needed to describe a given experimental configuration for a model using MCNP5 are provided. The material and geometry specifications are adequate to permit user development of input for alternative nuclear safety codes, such as KENO. A total of 73 distinct experimental configurations are described.
Modeling Urban Scenarios & Experiments: Fort Indiantown Gap Data Collections Summary and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archer, Daniel E.; Bandstra, Mark S.; Davidson, Gregory G.
This report summarizes experimental radiation detector, contextual sensor, weather, and global positioning system (GPS) data collected to inform and validate a comprehensive, operational radiation transport modeling framework to evaluate radiation detector system and algorithm performance. This framework will be used to study the influence of systematic effects (such as geometry, background activity, background variability, environmental shielding, etc.) on detector responses and algorithm performance using synthetic time series data. This work consists of performing data collection campaigns at a canonical, controlled environment for complete radiological characterization to help construct and benchmark a high-fidelity model with quantified system geometries, detector response functions, and source terms for background and threat objects. This data also provides an archival, benchmark dataset that can be used by the radiation detection community. The data reported here spans four data collection campaigns conducted between May 2015 and September 2016.
RCQ-GA: RDF Chain Query Optimization Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Hogenboom, Alexander; Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay
The application of Semantic Web technologies in an Electronic Commerce environment implies a need for good support tools. Fast query engines are needed for efficient querying of large amounts of data, usually represented using RDF. We focus on optimizing a special class of SPARQL queries, the so-called RDF chain queries. For this purpose, we devise a genetic algorithm called RCQ-GA that determines the order in which joins need to be performed for an efficient evaluation of RDF chain queries. The approach is benchmarked against a two-phase optimization algorithm, previously proposed in the literature. The more complex a query is, the more RCQ-GA outperforms the benchmark in solution quality, execution time needed, and consistency of solution quality. When the algorithms are constrained by a time limit, the overall performance of RCQ-GA compared to the benchmark further improves.
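A minimal sketch of the idea described above: a genetic algorithm searching over join orders (permutations) under a toy cost model. The cost model, the operators, and the parameters are hypothetical stand-ins, not the RCQ-GA algorithm or its benchmark settings.

```python
import random

# Toy cost model: estimated sizes of the base relations joined by a
# chain query over predicates p0..p5 (hypothetical values).
sizes = [1000, 800, 1200, 500, 900, 700]

def cost(order):
    # Accumulate "work" as the sum of intermediate result sizes (toy model).
    inter, total = sizes[order[0]], 0
    for idx in order[1:]:
        inter = 0.001 * inter * sizes[idx]   # assumed join selectivity
        total += inter
    return total

def crossover(a, b):
    cut = random.randrange(1, len(a))
    head = a[:cut]
    return head + [x for x in b if x not in head]

def mutate(p):
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]

pop = [random.sample(range(len(sizes)), len(sizes)) for _ in range(30)]
for _ in range(100):                        # generations
    pop.sort(key=cost)
    parents = pop[:10]                      # truncation selection
    children = []
    while len(children) < 20:
        child = crossover(*random.sample(parents, 2))
        if random.random() < 0.2:
            mutate(child)
        children.append(child)
    pop = parents + children
print("best join order:", min(pop, key=cost))
```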
MCNP modelling of scintillation-detector gamma-ray spectra from natural radionuclides.
Hendriks, P H G M; Maucec, M; de Meijer, R J
2002-09-01
Gamma-ray spectra of natural radionuclides are simulated for a BGO detector in a borehole geometry using the Monte Carlo code MCNP. All gamma-ray emissions of the decay of 40K and the series of 232Th and 238U are used to describe the source. A procedure is proposed which excludes the time-consuming electron tracking in less relevant areas of the geometry. The simulated gamma-ray spectra are benchmarked against laboratory data.
A benchmark study of the sea-level equation in GIA modelling
NASA Astrophysics Data System (ADS)
Martinec, Zdenek; Klemann, Volker; van der Wal, Wouter; Riva, Riccardo; Spada, Giorgio; Simon, Karen; Blank, Bas; Sun, Yu; Melini, Daniele; James, Tom; Bradley, Sarah
2017-04-01
The sea-level load in glacial isostatic adjustment (GIA) is described by the so-called sea-level equation (SLE), which represents the mass redistribution between ice sheets and oceans on a deforming earth. Various levels of complexity of the SLE have been proposed in the past, ranging from a simple mean global sea level (the so-called eustatic sea level) to the load with a deforming ocean bottom, migrating coastlines and a changing shape of the geoid. Several approaches to solve the SLE have been derived, from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, there has been no systematic intercomparison amongst the solvers through which the methods may be validated. The goal of this paper is to present a series of benchmark experiments designed for testing and comparing numerical implementations of the SLE. Our approach starts with simple load cases even though the benchmark will not result in GIA predictions for a realistic loading scenario. In the longer term we aim for a benchmark with a realistic loading scenario, and also for benchmark solutions with rotational feedback. The current benchmark uses an earth model for which Love numbers have been computed and benchmarked in Spada et al (2011). In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found can often be attributed to the different approximations inherent to the various algorithms. Literature: G. Spada, V. R. Barletta, V. Klemann, R. E. M. Riva, Z. Martinec, P. Gasperini, B. Lund, D. Wolf, L. L. A. Vermeersen, and M. A. King, 2011. A benchmark study for glacial isostatic adjustment codes. Geophys. J. Int. 185: 106-132 doi:10.1111/j.1365-
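For reference, the simplest level of the hierarchy described above, the eustatic sea level, is just the uniform ocean-layer change implied by mass conservation; the relation below is the standard textbook form, not one of the benchmark solutions.

\[
\Delta S_{\mathrm{eust}}(t) = -\frac{\Delta M_{\mathrm{ice}}(t)}{\rho_{w}\,A_{\mathrm{ocean}}},
\]

where ΔM_ice is the change in grounded ice mass, ρ_w the density of sea water, and A_ocean the (here fixed) ocean area; the full SLE generalizes this by letting the ocean bottom deform, the coastlines migrate, and the geoid change shape.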
Short-Term Field Study Programs: A Holistic and Experiential Approach to Learning
ERIC Educational Resources Information Center
Long, Mary M.; Sandler, Dennis M.; Topol, Martin T.
2017-01-01
For business schools, AACSB and Middle States' call for more experiential learning is one reason to provide study abroad programs. Universities must attend to the demand for continuous improvement and employ metrics to benchmark and evaluate their relative standing among peer institutions. One such benchmark is the National Survey of Student…
NAS Grid Benchmarks: A Tool for Grid Space Exploration
NASA Technical Reports Server (NTRS)
Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)
2001-01-01
We present an approach for benchmarking services provided by computational Grids. It is based on the NAS Parallel Benchmarks (NPB) and is called NAS Grid Benchmark (NGB) in this paper. We present NGB as a data flow graph encapsulating an instance of an NPB code in each graph node, which communicates with other nodes by sending/receiving initialization data. These nodes may be mapped to the same or different Grid machines. Like NPB, NGB will specify several different classes (problem sizes). NGB also specifies the generic Grid services sufficient for running the benchmark. The implementor has the freedom to choose any specific Grid environment. However, we describe a reference implementation in Java, and present some scenarios for using NGB.
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.
Radiation breakage of DNA: a model based on random-walk chromatin structure
NASA Technical Reports Server (NTRS)
Ponomarev, A. L.; Sachs, R. K.
2001-01-01
Monte Carlo computer software, called DNAbreak, has recently been developed to analyze observed non-random clustering of DNA double strand breaks in chromatin after exposure to densely ionizing radiation. The software models coarse-grained configurations of chromatin and radiation tracks, small-scale details being suppressed in order to obtain statistical results for larger scales, up to the size of a whole chromosome. We here give an analytic counterpart of the numerical model, useful for benchmarks, for elucidating the numerical results, for analyzing the assumptions of a more general but less mechanistic "randomly-located-clusters" formalism, and, potentially, for speeding up the calculations. The equations characterize multi-track DNA fragment-size distributions in terms of one-track action; an important step in extrapolating high-dose laboratory results to the much lower doses of main interest in environmental or occupational risk estimation. The approach can utilize the experimental information on DNA fragment-size distributions to draw inferences about large-scale chromatin geometry during cell-cycle interphase.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClenaghan, J.; Lin, Z.; Holod, I.
The gyrokinetic toroidal code (GTC) capability has been extended for simulating internal kink instability with kinetic effects in toroidal geometry. The global simulation domain covers the magnetic axis, which is necessary for simulating current-driven instabilities. GTC simulation in the fluid limit of the kink modes in cylindrical geometry is verified by benchmarking with a magnetohydrodynamic eigenvalue code. Gyrokinetic simulations of the kink modes in the toroidal geometry find that ion kinetic effects significantly reduce the growth rate even when the banana orbit width is much smaller than the radial width of the perturbed current layer at the mode rational surface.
The Next Generation Heated Halo for Blackbody Emissivity Measurement
NASA Astrophysics Data System (ADS)
Gero, P.; Taylor, J. K.; Best, F. A.; Revercomb, H. E.; Knuteson, R. O.; Tobin, D. C.; Adler, D. P.; Ciganovich, N. N.; Dutcher, S. T.; Garcia, R. K.
2011-12-01
The accuracy of radiance measurements from space-based infrared spectrometers is contingent on the quality of the calibration subsystem, as well as knowledge of its uncertainty. Future climate benchmarking missions call for measurement uncertainties better than 0.1 K (k=3) in radiance temperature for the detection of spectral climate signatures. Blackbody cavities impart the most accurate calibration for spaceborne infrared sensors, provided that their temperature and emissivity are traceably determined on-orbit. The On-Orbit Absolute Radiance Standard (OARS) has been developed at the University of Wisconsin to meet the stringent requirements of the next generation of infrared remote sensing instruments. It provides on-orbit determination of both traceable temperature and emissivity for calibration blackbodies. The Heated Halo is the component of the OARS that provides a robust and compact method to measure the spectral emissivity of a blackbody in situ. A carefully baffled thermal source is placed in front of a blackbody in an infrared spectrometer system, and the combined radiance of the blackbody and Heated Halo reflection is observed. Knowledge of key temperatures and the viewing geometry allows the blackbody cavity spectral emissivity to be calculated. We present the results from the Heated Halo methodology implemented with a new Absolute Radiance Interferometer (ARI), which is a prototype space-based infrared spectrometer designed for climate benchmarking that was developed under the NASA Instrument Incubator Program (IIP). We compare our findings to models and other experimental methods of emissivity determination.
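The Heated Halo measurement principle described above can be summarized, under idealized assumptions (opaque cavity, diffuse reflection, halo filling the reflected field of view), by a simple radiance balance; the expression below is an illustrative simplification, not the flight algorithm.

\[
L_{\mathrm{obs}}(\nu) = \varepsilon(\nu)\,B(\nu, T_{\mathrm{bb}}) + \bigl[1-\varepsilon(\nu)\bigr]\,B(\nu, T_{\mathrm{halo}})
\quad\Longrightarrow\quad
\varepsilon(\nu) = \frac{L_{\mathrm{obs}}(\nu)-B(\nu, T_{\mathrm{halo}})}{B(\nu, T_{\mathrm{bb}})-B(\nu, T_{\mathrm{halo}})},
\]

where B is the Planck radiance, so that observing the cavity with the halo at a known elevated temperature (versus ambient) isolates the cavity spectral emissivity from known temperatures alone.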
Memory-Intensive Benchmarks: IRAM vs. Cache-Based Machines
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Gaeke, Brian R.; Husbands, Parry; Li, Xiaoye S.; Oliker, Leonid; Yelick, Katherine A.; Biegel, Bryan (Technical Monitor)
2002-01-01
The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic control structures, and the ratio of computation to memory operation.
NASA Astrophysics Data System (ADS)
Dorschner, B.; Chikatamarla, S. S.; Karlin, I. V.
2017-06-01
Entropic lattice Boltzmann methods have been developed to alleviate intrinsic stability issues of lattice Boltzmann models for under-resolved simulations. Their reliability in combination with moving objects was established for various laminar benchmark flows in two dimensions in our previous work [B. Dorschner, S. Chikatamarla, F. Bösch, and I. Karlin, J. Comput. Phys. 295, 340 (2015), 10.1016/j.jcp.2015.04.017] as well as for three-dimensional one-way coupled simulations of engine-type geometries in B. Dorschner, F. Bösch, S. Chikatamarla, K. Boulouchos, and I. Karlin [J. Fluid Mech. 801, 623 (2016), 10.1017/jfm.2016.448] for flat moving walls. The present contribution aims to fully exploit the advantages of entropic lattice Boltzmann models in terms of stability and accuracy and extends the methodology to three-dimensional cases, including two-way coupling between fluid and structure and then turbulence and deforming geometries. To cover this wide range of applications, the classical benchmark of a sedimenting sphere is chosen first to validate the general two-way coupling algorithm. Increasing the complexity, we subsequently consider the simulation of a plunging SD7003 airfoil in the transitional regime at a Reynolds number of Re = 40 000 and, finally, to assess the model's performance for deforming geometries, we conduct a two-way coupled simulation of a self-propelled anguilliform swimmer. These simulations confirm the viability of the new fluid-structure interaction lattice Boltzmann algorithm to simulate flows of engineering relevance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joshi, Jay Prakash
The objectives of this project are to calibrate the Advanced Experimental Fuel Counter (AEFC), benchmark MCNP simulations using experimental results, investigate the effects of change in fuel assembly geometry, and finally to show the boost in doubles count rates with 252Cf active sources due to the time correlated induced fission (TCIF) effect.
Enhancement Approachof Object Constraint Language Generation
NASA Astrophysics Data System (ADS)
Salemi, Samin; Selamat, Ali
2018-01-01
OCL is the most prevalent language for documenting system constraints annotated in UML. Writing OCL specifications is not an easy task due to the complexity of the OCL syntax. Therefore, an approach to assist developers in writing OCL specifications is needed. There are two existing approaches: first, creating OCL specifications with a tool called COPACABANA; second, an MDA-based approach that helps developers write OCL specifications with another tool called NL2OCLviaSBVR, which generates OCL specifications automatically. This study presents another MDA-based approach called En2OCL, and its objective is twofold: 1) to improve the precision of the existing works, and 2) to present a benchmark of these approaches. The benchmark shows that the accuracies of COPACABANA, NL2OCLviaSBVR, and En2OCL are 69.23, 84.64, and 88.40, respectively.
Souris, Kevin; Lee, John Aldo; Sterpin, Edmond
2016-04-01
Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the gate/geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with gate/geant4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
Giménez-Alventosa, V; Ballester, F; Vijande, J
2016-12-01
The design and construction of geometries for Monte Carlo calculations is an error-prone, time-consuming, and complex step in simulations describing particle interactions and transport in the field of medical physics. The software VoxelMages has been developed to help the user in this task. It allows the user to design complex geometries and to process DICOM image files for simulations with the general-purpose Monte Carlo code PENELOPE in an easy and straightforward way. VoxelMages also allows importing DICOM-RT structure contour information as delivered by a treatment planning system. Its main characteristics, usage and performance benchmarking are described in detail. Copyright © 2016 Elsevier Ltd. All rights reserved.
Influence of particle geometry and PEGylation on phagocytosis of particulate carriers.
Mathaes, Roman; Winter, Gerhard; Besheer, Ahmed; Engert, Julia
2014-04-25
Particle geometry of micro- and nanoparticles has been identified as an important design parameter to influence the interaction with cells such as macrophages. A head-to-head comparison of elongated, non-spherical and spherical micro- and nanoparticles with and without PEGylation was carried out to benchmark two phagocytosis-inhibiting techniques. J774.A1 macrophages were incubated with fluorescently labeled PLGA micro- and nanoparticles and analyzed by confocal laser scanning microscopy (CLSM) and flow cytometry (FACS). Particle uptake into macrophages was significantly reduced upon PEGylation or elongated particle geometry. A combination of both, an elongated shape and PEGylation, had the strongest phagocytosis-inhibiting effect for nanoparticles. Copyright © 2014 Elsevier B.V. All rights reserved.
Optimally Stopped Optimization
NASA Astrophysics Data System (ADS)
Vinci, Walter; Lidar, Daniel
We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is between one and two orders of magnitude faster than the HFS solver.
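A minimal numerical sketch of the expected-cost idea above: with a cost per call c, the total expected cost of making R independent calls and keeping the best result is c·R plus the expected best objective value, and the benchmark score is its minimum over R. The solver and its outcome distribution below are synthetic placeholders, not the D-Wave or HFS data.

```python
import random

COST_PER_CALL = 0.05   # assumed cost charged per solver invocation

def solver_call():
    # Placeholder randomized solver: returns an objective value to be minimized.
    return random.gauss(1.0, 0.3)

def expected_total_cost(n_calls, n_trials=5000):
    # Monte Carlo estimate of  c * R + E[ min of R independent calls ].
    best_sum = 0.0
    for _ in range(n_trials):
        best_sum += min(solver_call() for _ in range(n_calls))
    return COST_PER_CALL * n_calls + best_sum / n_trials

costs = {r: expected_total_cost(r) for r in range(1, 31)}
r_star = min(costs, key=costs.get)
print(f"optimal number of calls ~ {r_star}, expected cost {costs[r_star]:.3f}")
```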
Fracture Capabilities in Grizzly with the extended Finite Element Method (X-FEM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolbow, John; Zhang, Ziyu; Spencer, Benjamin
Efforts are underway to develop fracture mechanics capabilities in the Grizzly code to enable it to be used to perform deterministic fracture assessments of degraded reactor pressure vessels (RPVs). A capability was previously developed to calculate three-dimensional interaction integrals to extract mixed-mode stress-intensity factors. This capability requires the use of a finite element mesh that conforms to the crack geometry. The eXtended Finite Element Method (X-FEM) provides a means to represent a crack geometry without explicitly fitting the finite element mesh to it. This is effected by enhancing the element kinematics to represent jump discontinuities at arbitrary locations inside of the element, as well as the incorporation of asymptotic near-tip fields to better capture crack singularities. In this work, use of only the discontinuous enrichment functions was examined to see how accurate stress intensity factors could still be calculated. This report documents the following work to enhance Grizzly’s engineering fracture capabilities by introducing arbitrary jump discontinuities for prescribed crack geometries; X-FEM Mesh Cutting in 3D: to enhance the kinematics of elements that are intersected by arbitrary crack geometries, a mesh cutting algorithm was implemented in Grizzly. The algorithm introduces new virtual nodes and creates partial elements, and then creates a new mesh connectivity; Interaction Integral Modifications: the existing code for evaluating the interaction integral in Grizzly was based on the assumption of a mesh that was fitted to the crack geometry. Modifications were made to allow for the possibility of a crack front that passes arbitrarily through the mesh; and Benchmarking for 3D Fracture: the new capabilities were benchmarked against mixed-mode three-dimensional fracture problems with known analytical solutions.
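The discontinuous (Heaviside-only) enrichment referred to above takes the standard X-FEM form; the approximation below is the textbook expression written here for context, with the near-tip branch functions omitted as in the report.

\[
\mathbf{u}_h(\mathbf{x}) = \sum_{i \in \mathcal{N}} N_i(\mathbf{x})\,\mathbf{u}_i + \sum_{j \in \mathcal{N}_{\mathrm{cut}}} N_j(\mathbf{x})\,H(\mathbf{x})\,\mathbf{a}_j,
\]

where N_i are the standard finite element shape functions, H is the Heaviside (jump) function that changes sign across the crack surface, and the extra degrees of freedom a_j are carried only by nodes whose support is intersected by the crack.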
Status of BOUT fluid turbulence code: improvements and verification
NASA Astrophysics Data System (ADS)
Umansky, M. V.; Lodestro, L. L.; Xu, X. Q.
2006-10-01
BOUT is an electromagnetic fluid turbulence code for the tokamak edge plasma [1]. BOUT performs time integration of reduced Braginskii plasma fluid equations, using spatial discretization in realistic geometry and the standard ODE integration package PVODE. BOUT has been applied to several tokamak experiments, and in some cases the calculated spectra of turbulent fluctuations compared favorably with experimental data. On the other hand, the desire to better understand the code results and to gain more confidence in them motivated investing effort in a rigorous verification of BOUT. In parallel with the testing, the code underwent substantial modification, mainly to improve its readability and the tractability of physical terms, with some algorithmic improvements as well. In the verification process, a series of linear and nonlinear test problems was applied to BOUT, targeting different subgroups of physical terms. The tests include reproducing basic electrostatic and electromagnetic plasma modes in simplified geometry, axisymmetric benchmarks against the 2D edge code UEDGE in real divertor geometry, and neutral fluid benchmarks against the hydrodynamic code LCPFCT. After completion of the testing, the new version of the code is being applied to actual tokamak edge turbulence problems, and the results will be presented. [1] X. Q. Xu et al., Contrib. Plasma Phys. 36, 158 (1998). *Work performed for USDOE by Univ. Calif. LLNL under contract W-7405-ENG-48.
A suite of exercises for verifying dynamic earthquake rupture codes
Harris, Ruth A.; Barall, Michael; Aagaard, Brad T.; Ma, Shuo; Roten, Daniel; Olsen, Kim B.; Duan, Benchun; Liu, Dunyu; Luo, Bin; Bai, Kangchen; Ampuero, Jean-Paul; Kaneko, Yoshihiro; Gabriel, Alice-Agnes; Duru, Kenneth; Ulrich, Thomas; Wollherr, Stephanie; Shi, Zheqiang; Dunham, Eric; Bydlon, Sam; Zhang, Zhenguo; Chen, Xiaofei; Somala, Surendra N.; Pelties, Christian; Tago, Josue; Cruz-Atienza, Victor Manuel; Kozdon, Jeremy; Daub, Eric; Aslam, Khurram; Kase, Yuko; Withers, Kyle; Dalguer, Luis
2018-01-01
We describe a set of benchmark exercises that are designed to test whether computer codes that simulate dynamic earthquake rupture are working as intended. These types of computer codes are often used to understand how earthquakes operate, and they produce simulation results that include earthquake size, amounts of fault slip, and the patterns of ground shaking and crustal deformation. The benchmark exercises examine a range of features that scientists incorporate in their dynamic earthquake rupture simulations. These include implementations of simple or complex fault geometry, off-fault rock response to an earthquake, stress conditions, and a variety of formulations for fault friction. Many of the benchmarks were designed to investigate scientific problems at the forefront of earthquake physics and strong ground motion research. The exercises are freely available on our website for use by the scientific community.
Benchmark of the local drift-kinetic models for neoclassical transport simulation in helical plasmas
NASA Astrophysics Data System (ADS)
Huang, B.; Satake, S.; Kanno, R.; Sugama, H.; Matsuoka, S.
2017-02-01
Benchmarks of neoclassical transport codes based on several local drift-kinetic models are reported here. The drift-kinetic models are zero orbit width (ZOW), zero magnetic drift, DKES-like, and global, as classified in Matsuoka et al. [Phys. Plasmas 22, 072511 (2015)]. The magnetic geometries of the Helically Symmetric Experiment, the Large Helical Device (LHD), and Wendelstein 7-X are employed in the benchmarks. It is found that the assumption of E×B incompressibility causes discrepancies in the neoclassical radial flux and parallel flow among the models when E×B is sufficiently large compared to the magnetic drift velocities, for example for Mp ≥ 0.4, where Mp is the poloidal Mach number. On the other hand, when E×B and the magnetic drift velocities are comparable, the tangential magnetic drift, which is included in both the global and ZOW models, fills the role of suppressing the unphysical peaking of neoclassical radial fluxes found in the other local models at Er ≃ 0. In low-collisionality plasmas in particular, the tangential drift effect works well to suppress such unphysical behavior of the radial transport in the simulations. It is demonstrated that the ZOW model has the advantage of mitigating this unphysical behavior in the several magnetic geometries, and that it also allows evaluation of the bootstrap current in LHD at low computational cost compared to the global model.
Curriculum Forms: On the Assumed Shapes of Knowing and Knowledge.
ERIC Educational Resources Information Center
Davis, Brent; Sumara, Dennis J.
2000-01-01
Draws on the new field of mathematical study called fractal geometry. Illustrates the pervasiveness and constraining tendencies of classical geometries. Suggests that fractal geometry is a mathematical analogue to fields such as post-modernism, post-structuralism, and ecological theory. Examines how fractal geometry can complement other emergent…
A benchmark for subduction zone modeling
NASA Astrophysics Data System (ADS)
van Keken, P.; King, S.; Peacock, S.
2003-04-01
Our understanding of subduction zones hinges critically on the ability to discern their thermal structure and dynamics. Computational modeling has become an essential complementary approach to observational and experimental studies. The accurate modeling of subduction zones is challenging due to the unique geometry, complicated rheological description, and influence of fluid and melt formation. The complicated physics causes problems for the accurate numerical solution of the governing equations. As a consequence, it is essential for the subduction zone community to be able to evaluate the abilities and limitations of various modeling approaches. The participants of a workshop on the modeling of subduction zones, held at the University of Michigan at Ann Arbor, MI, USA in 2002, formulated a number of case studies to be developed into a benchmark similar to previous mantle convection benchmarks (Blankenbach et al., 1989; Busse et al., 1991; Van Keken et al., 1997). Our initial benchmark focuses on the dynamics of the mantle wedge and investigates three different rheologies: constant viscosity, diffusion creep, and dislocation creep. In addition, we investigate the ability of codes to accurately model dynamic pressure and advection-dominated flows. Proceedings of the workshop and the formulation of the benchmark are available at www.geo.lsa.umich.edu/~keken/subduction02.html. We strongly encourage interested research groups to participate in this benchmark. At Nice 2003 we will provide an update and a first set of benchmark results. Interested researchers are encouraged to contact one of the authors for further details.
On the Geometry of the Hamilton-Jacobi Equation and Generating Functions
NASA Astrophysics Data System (ADS)
Ferraro, Sebastián; de León, Manuel; Marrero, Juan Carlos; Martín de Diego, David; Vaquero, Miguel
2017-10-01
In this paper we develop a geometric version of the Hamilton-Jacobi equation in the Poisson setting. Specifically, we "geometrize" what is usually called a complete solution of the Hamilton-Jacobi equation. We use some well-known results about symplectic groupoids, in particular cotangent groupoids, as a keystone for the construction of our framework. Our methodology follows the ambitious program proposed by Weinstein (in Mechanics Day (Waterloo, ON, 1992), volume 7 of Fields Institute Communications, American Mathematical Society, Providence, 1996) in order to develop geometric formulations of the dynamical behavior of Lagrangian and Hamiltonian systems on Lie algebroids and Lie groupoids. This procedure allows us to take symmetries into account and, as a by-product, we recover results from Channell and Scovel (Phys. D 50(1):80-88, 1991), Ge (Indiana Univ. Math. J. 39(3):859-876, 1990), and Ge and Marsden (Phys. Lett. A 133(3):134-139, 1988), although even in these situations our approach is new. A theory of generating functions for the Poisson structures considered here is also developed following the same pattern, solving a longstanding problem in the area: how to obtain a generating function for the identity transformation and the nearby Poisson automorphisms of Poisson manifolds. A direct application of our results gives the construction of a family of Poisson integrators, that is, integrators that conserve the underlying Poisson geometry. These integrators are applied in the paper to benchmark problems. Some conclusions and current and future directions of research are presented at the end of the paper.
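For orientation, the classical, time-dependent Hamilton-Jacobi equation and the usual non-degeneracy condition defining a complete solution S(t, q, α) can be written in standard textbook form as below; this is only the familiar symplectic special case, whereas the paper's contribution is its generalization to the Poisson/groupoid setting.

```latex
\frac{\partial S}{\partial t} + H\!\left(q^{i}, \frac{\partial S}{\partial q^{i}}\right) = 0,
\qquad
\det\!\left(\frac{\partial^{2} S}{\partial q^{i}\,\partial \alpha^{j}}\right) \neq 0 .
```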
Finite Element Modeling of the World Federation's Second MFL Benchmark Problem
NASA Astrophysics Data System (ADS)
Zeng, Zhiwei; Tian, Yong; Udpa, Satish; Udpa, Lalita
2004-02-01
This paper presents results obtained by simulating the second magnetic flux leakage benchmark problem proposed by the World Federation of NDE Centers. The geometry consists of notches machined on the internal and external surfaces of a rotating steel pipe that is placed between two yokes that are part of a magnetic circuit energized by an electromagnet. The model calculates the radial component of the leaked field at specific positions. The nonlinear material property of the ferromagnetic pipe is taken into account in simulating the problem. The velocity effect caused by the rotation of the pipe is, however, ignored for reasons of simplicity.
NASA Astrophysics Data System (ADS)
Jacques, Diederik
2017-04-01
As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models for these interacting processes are needed. Coupled reactive transport models are a typical example of such coupled tools, mainly focusing on hydrological and geochemical coupling (see e.g. Steefel et al., 2015). The mathematical and numerical complexity of both the tool itself and the specific conceptual model can increase rapidly. Therefore, numerical verification of these types of models is a prerequisite for guaranteeing reliability and confidence and for qualifying simulation tools and approaches for any further model application. In 2011, a first SeSBench (Subsurface Environmental Simulation Benchmarking) workshop was held in Berkeley (USA), followed by four more. The objective is to benchmark subsurface environmental simulation models and methods, with a current focus on reactive transport processes. The final outcome was a special issue in Computational Geosciences (2015, issue 3 - Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks proposed by the participants of the workshops should be relevant for environmental or geo-engineering applications (the latter mostly related to radioactive waste disposal issues), excluding benchmarks defined for purely mathematical reasons. Another important feature is the tiered approach within a benchmark, with the definition of a single principal problem and different subproblems. The latter typically benchmark individual or simplified processes (e.g. inert solute transport, simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, three codes should be involved in a benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes. Furthermore, it illustrates the use of these types of models for different environmental and geo-engineering applications. SeSBench will organize new workshops to add new benchmarks in a new special issue. Steefel, C. I., et al. (2015). "Reactive transport codes for subsurface environmental simulation." Computational Geosciences 19: 445-478.
NASA Astrophysics Data System (ADS)
Motta, Mario; Zhang, Shiwei
2018-05-01
We propose an algorithm for accurate, systematic, and scalable computation of interatomic forces within the auxiliary-field quantum Monte Carlo (AFQMC) method. The algorithm relies on the Hellmann-Feynman theorem and incorporates Pulay corrections in the presence of atomic orbital basis sets. We benchmark the method for small molecules by comparing the computed forces with the derivatives of the AFQMC potential energy surface and by direct comparison with other quantum chemistry methods. We then perform geometry optimizations using the steepest descent algorithm in larger molecules. With realistic basis sets, we obtain equilibrium geometries in agreement, within statistical error bars, with experimental values. The increase in computational cost for computing forces in this approach is only a small prefactor over that of calculating the total energy. This paves the way for a general and efficient approach for geometry optimization and molecular dynamics within AFQMC.
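A minimal sketch of the outer geometry-optimization loop described above, assuming only that the statistical forces behave like a gradient plus noise; the toy quadratic "potential", noise level, and step size are invented for illustration and are not actual AFQMC forces.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_forces(coords):
    # Hypothetical stand-in for AFQMC forces: the gradient of a toy quadratic
    # potential (minimum at the origin) plus Gaussian noise mimicking the
    # Monte Carlo statistical error bars.
    return -coords + rng.normal(0.0, 0.02, size=coords.shape)

def steepest_descent(coords, step=0.3, n_iter=50):
    # Follow the (noisy) force downhill, as in the geometry optimizations above.
    for _ in range(n_iter):
        coords = coords + step * noisy_forces(coords)
    return coords

x0 = np.array([0.8, -0.5, 0.3])
print(steepest_descent(x0))  # relaxes toward the toy equilibrium (origin) within the noise
```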
The Heated Halo for Space-Based Blackbody Emissivity Measurement
NASA Astrophysics Data System (ADS)
Gero, P.; Taylor, J. K.; Best, F. A.; Revercomb, H. E.; Garcia, R. K.; Adler, D. P.; Ciganovich, N. N.; Knuteson, R. O.; Tobin, D. C.
2012-12-01
The accuracy of radiance measurements with space-based infrared spectrometers is contingent on the quality of the calibration subsystem, as well as knowledge of its uncertainty. Upcoming climate benchmark missions call for measurement uncertainties better than 0.1 K (k=3) in radiance temperature for the detection of spectral climate signatures. Blackbody cavities impart the most accurate calibration for spaceborne infrared sensors, provided that their temperature and emissivity are traceably determined on-orbit. The On-Orbit Absolute Radiance Standard (OARS) has been developed at the University of Wisconsin and has undergone further refinement under the NASA Instrument Incubator Program (IIP) to meet the stringent requirements of the next generation of infrared remote sensing instruments. It provides on-orbit determination of both traceable temperature and emissivity for calibration blackbodies. The Heated Halo is the component of the OARS that provides a robust and compact method to measure the spectral emissivity of a blackbody in situ. A carefully baffled thermal source is placed in front of a blackbody in an infrared spectrometer system, and the combined radiance of the blackbody and Heated Halo reflection is observed. Knowledge of key temperatures and the viewing geometry allows the blackbody cavity spectral emissivity to be calculated. We present the results from the Heated Halo methodology implemented with a new Absolute Radiance Interferometer (ARI), which is a prototype space-based infrared spectrometer designed for climate benchmarking. We show the evolution of the technical readiness level of this technology and we compare our findings to models and other experimental methods of emissivity determination.
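An idealized sketch of how a reflected-source measurement can yield emissivity, under the simplifying assumption that the heated source fills the entire reflected field of view; this is my own two-component illustration, not the actual OARS/Heated Halo retrieval, and the temperatures and wavenumber are invented.

```python
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(wavenumber_cm, temp_k):
    # Planck radiance at a given wavenumber; absolute units cancel in the ratio below.
    nu = wavenumber_cm * 100.0  # cm^-1 -> m^-1
    return 2.0 * H * C**2 * nu**3 / np.expm1(H * C * nu / (KB * temp_k))

# Assumed scene: cavity at t_bb, heated source at t_halo, observation at 900 cm^-1.
wn, t_bb, t_halo, eps_true = 900.0, 300.0, 360.0, 0.999
l_obs = eps_true * planck(wn, t_bb) + (1.0 - eps_true) * planck(wn, t_halo)

# Invert the two-component radiance model for the cavity emissivity.
eps = (l_obs - planck(wn, t_halo)) / (planck(wn, t_bb) - planck(wn, t_halo))
print(f"retrieved emissivity = {eps:.4f}")
```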
Benchmarks for single-phase flow in fractured porous media
NASA Astrophysics Data System (ADS)
Flemisch, Bernd; Berre, Inga; Boon, Wietse; Fumagalli, Alessio; Schwenck, Nicolas; Scotti, Anna; Stefansson, Ivar; Tatomir, Alexandru
2018-01-01
This paper presents several test cases intended to be benchmarks for numerical schemes for single-phase fluid flow in fractured porous media. A number of solution strategies are compared, including a vertex and two cell-centred finite volume methods, a non-conforming embedded discrete fracture model, a primal and a dual extended finite element formulation, and a mortar discrete fracture model. The proposed benchmarks test the schemes by increasing the difficulties in terms of network geometry, e.g. intersecting fractures, and physical parameters, e.g. low and high fracture-matrix permeability ratio as well as heterogeneous fracture permeabilities. For each problem, the results presented are the number of unknowns, the approximation errors in the porous matrix and in the fractures with respect to a reference solution, and the sparsity and condition number of the discretized linear system. All data and meshes used in this study are publicly available for further comparisons.
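To make the reported metrics concrete, the following toy computation evaluates the kind of quantities tabulated for each scheme (number of unknowns, sparsity, condition number of the linear system) on a generic 1-D finite-volume Laplacian; this operator is only a stand-in and is not one of the benchmark discretizations.

```python
import numpy as np
import scipy.sparse as sp

# Generic 1-D cell-centred finite-volume Laplacian standing in for a discretized
# flow operator (illustrative only; not one of the benchmark schemes).
n = 50
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csr")

# The kind of quantities tabulated per scheme: unknowns, sparsity, condition number.
nnz_fraction = A.nnz / float(n * n)
cond = np.linalg.cond(A.toarray())
print(f"unknowns: {n}, nonzero fraction: {nnz_fraction:.4f}, condition number: {cond:.1f}")
```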
Assigned and unassigned distance geometry: applications to biological molecules and nanostructures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Billinge, Simon J. L.; Duxbury, Phillip M.; Gonçalves, Douglas S.
2016-04-04
Here, considering geometry based on the concept of distance, we recall that the results found by Menger and Blumenthal originated a body of knowledge called distance geometry. This survey covers some recent developments for assigned and unassigned distance geometry and focuses on two main applications: the determination of three-dimensional conformations of biological molecules and nanostructures.
Visco-Resistive MHD Modeling Benchmark of Forced Magnetic Reconnection
NASA Astrophysics Data System (ADS)
Beidler, M. T.; Hegna, C. C.; Sovinec, C. R.; Callen, J. D.; Ferraro, N. M.
2016-10-01
The presence of externally-applied 3D magnetic fields can affect important phenomena in tokamaks, including mode locking, disruptions, and edge localized modes. External fields penetrate into the plasma and can lead to forced magnetic reconnection (FMR), and hence magnetic islands, on resonant surfaces if the local plasma rotation relative to the external field is slow. Preliminary visco-resistive MHD simulations of FMR in a slab geometry are consistent with theory. Specifically, linear simulations exhibit proper scaling of the penetrated field with resistivity, viscosity, and flow, and nonlinear simulations exhibit a bifurcation from a flow-screened to a field-penetrated, magnetic island state as the external field is increased, due to the 3D electromagnetic force. These results will be compared to simulations of FMR in a circular cross-section, cylindrical geometry by way of a benchmark between the NIMROD and M3D-C1 extended-MHD codes. Because neither this geometry nor the MHD model has the physics of poloidal flow damping, the theory will be expanded to include poloidal flow effects. The resulting theory will be tested with linear and nonlinear simulations that vary the resistivity, viscosity, flow, and external field. Supported by OFES DoE Grants DE-FG02-92ER54139, DE-FG02-86ER53218, DE-AC02-09CH11466, and the SciDAC Center for Extended MHD Modeling.
First benchmark of the Unstructured Grid Adaptation Working Group
NASA Technical Reports Server (NTRS)
Ibanez, Daniel; Barral, Nicolas; Krakos, Joshua; Loseille, Adrien; Michal, Todd; Park, Mike
2017-01-01
Unstructured grid adaptation is a technology that holds the potential to improve the automation and accuracy of computational fluid dynamics and other computational disciplines. Difficulty producing the highly anisotropic elements necessary for simulation on complex curved geometries that satisfy a resolution request has limited this technology's widespread adoption. The Unstructured Grid Adaptation Working Group is an open gathering of researchers working on adapting simplicial meshes to conform to a metric field. Current members span a wide range of institutions including academia, industry, and national laboratories. The purpose of this group is to create a common basis for understanding and improving mesh adaptation. We present our first major contribution: a common set of benchmark cases, including input meshes and analytic metric specifications, that are publicly available to be used for evaluating any mesh adaptation code. We also present the results of several existing codes on these benchmark cases, to illustrate their utility in identifying key challenges common to all codes and important differences between available codes. Future directions are defined to expand this benchmark to mature the technology necessary to impact practical simulation workflows.
Land, Sander; Gurev, Viatcheslav; Arens, Sander; Augustin, Christoph M; Baron, Lukas; Blake, Robert; Bradley, Chris; Castro, Sebastian; Crozier, Andrew; Favino, Marco; Fastl, Thomas E; Fritz, Thomas; Gao, Hao; Gizzi, Alessio; Griffith, Boyce E; Hurtado, Daniel E; Krause, Rolf; Luo, Xiaoyu; Nash, Martyn P; Pezzuto, Simone; Plank, Gernot; Rossi, Simone; Ruprecht, Daniel; Seemann, Gunnar; Smith, Nicolas P; Sundnes, Joakim; Rice, J Jeremy; Trayanova, Natalia; Wang, Dafang; Jenny Wang, Zhinuo; Niederer, Steven A
2015-12-08
Models of cardiac mechanics are increasingly used to investigate cardiac physiology. These models are characterized by a high level of complexity, including the particular anisotropic material properties of biological tissue and the actively contracting material. A large number of independent simulation codes have been developed, but a consistent way of verifying the accuracy and replicability of simulations is lacking. To aid in the verification of current and future cardiac mechanics solvers, this study provides three benchmark problems for cardiac mechanics. These benchmark problems test the ability to accurately simulate pressure-type forces that depend on the deformed object's geometry, anisotropic and spatially varying material properties similar to those seen in the left ventricle, and active contractile forces. The benchmark was solved by 11 different groups to generate consensus solutions, with typical differences in higher-resolution solutions at approximately 0.5%, and consistent results between linear, quadratic and cubic finite elements as well as different approaches to simulating incompressible materials. Online tools and solutions are made available to allow these tests to be effectively used in verification of future cardiac mechanics software.
The US EPA National Center for Environmental Assessment has developed a methodology to derive acute inhalation toxicity benchmarks, called acute reference exposures (AREs), for noncancer effects. The methodology provides guidance for the derivation of chemical-specific benchmark...
Tortorella, Sara; Talamo, Maurizio Mastropasqua; Cardone, Antonio; Pastore, Mariachiara; De Angelis, Filippo
2016-02-24
A systematic computational investigation of the optical properties of a group of novel benzofulvene derivatives (Martinelli 2014 Org. Lett. 16 3424-7), proposed as possible donor materials in small-molecule organic photovoltaic (smOPV) devices, is presented. A benchmark evaluation against experimental results of the accuracy of different exchange-correlation functionals and semi-empirical methods in predicting both reliable ground-state equilibrium geometries and electronic absorption spectra is carried out. The benchmark of the geometry optimization level indicated that the best agreement with X-ray data is achieved with the B3LYP functional. Concerning the optical gap prediction, we found that, among the employed functionals, MPW1K provides the most accurate excitation energies over the entire set of benzofulvenes. Similarly reliable results were also obtained for range-separated hybrid functionals (CAM-B3LYP and wB97XD) and for global hybrid methods incorporating a large amount of non-local exchange (M06-2X and M06-HF). Density functional theory (DFT) hybrids with a moderate (about 20-30%) fraction of Hartree-Fock exchange (PBE0, B3LYP and M06) were also found to deliver HOMO-LUMO energy gaps that compare well with the experimental absorption maxima, thus representing a valuable alternative for a prompt and predictive estimation of the optical gap. The possibility of using completely semi-empirical approaches (AM1/ZINDO) is also discussed.
a Proposed Benchmark Problem for Scatter Calculations in Radiographic Modelling
NASA Astrophysics Data System (ADS)
Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.
2009-03-01
Code validation is a permanent concern in computer modelling and has been addressed repeatedly in eddy-current and ultrasonic modelling. A good benchmark problem is sufficiently simple to be taken into account by various codes without strong requirements on geometry representation capabilities, focuses on a few or even a single aspect of the problem at hand to facilitate interpretation and to avoid compound errors compensating one another, yields a quantitative result, and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modelling, the prediction of scattered radiation. Many NDT applications cannot neglect scattered radiation, and the scatter calculation is thus important for faithfully simulating the inspection situation. Our benchmark problem covers the wall thickness range of 10 to 50 mm for single-wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data are compared at the photon level, without a film model, which allows for comparisons with reference codes like MCNP. We compare results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first-order scattering code (VXI), and compare them with results obtained with MCNP. The comparison with an analytical scatter model provides insights into the application domain where this kind of approach can successfully replace Monte Carlo calculations.
TRIPOLI-4® - MCNP5 ITER A-lite neutronic model benchmarking
NASA Astrophysics Data System (ADS)
Jaboulay, J.-C.; Cayla, P.-Y.; Fausser, C.; Lee, Y.-K.; Trama, J.-C.; Li-Puma, A.
2014-06-01
The aim of this paper is to present the capability of TRIPOLI-4®, the CEA Monte Carlo code, to model a large-scale fusion reactor with a complex neutron source and geometry. In the past, numerous benchmarks were conducted to assess TRIPOLI-4® for fusion applications. Analyses of experiments (KANT, OKTAVIAN, FNG) and numerical benchmarks (between TRIPOLI-4® and MCNP5) on the HCLL DEMO2007 and ITER models were carried out successively. In that previous ITER benchmark, however, only the neutron wall loading was analyzed; its main purpose was to present the MCAM (the FDS Team CAD import tool) extension for TRIPOLI-4®. Starting from this work, a more extensive benchmark has been performed on the estimation of the neutron flux, the nuclear heating in the shielding blankets, and the tritium production rate in the European TBMs (HCLL and HCPB); it is presented in this paper. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model (version 4.1). Simplified TBMs (from KIT) have been integrated in the equatorial port. Comparisons of neutron wall loading, flux, nuclear heating and tritium production rate show good agreement between the two codes. Discrepancies lie mainly within the statistical errors of the Monte Carlo codes.
MIFT: GIFT Combinatorial Geometry Input to VCS Code
1977-03-01
BRL Report No. 1967 (final report). A component of the Vehicle Code System (VCS) called MORSE was modified to accept the GIFT combinatorial geometry package.
26 CFR 1.1092(c)-1 - Qualified covered calls.
Code of Federal Regulations, 2010 CFR
2010-04-01
... lowest qualified benchmark is determined using the adjusted applicable stock price, as defined in § 1... § 1.1092(c)-1 Qualified covered calls. (a) In... Under section 1092(d)(3)(B)(i)(I), stock is personal property if the stock is part of a straddle that...
Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems
NASA Technical Reports Server (NTRS)
Dahl, Milo D. (Editor)
2004-01-01
This publication contains the proceedings of the Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems. In this workshop, as in previous workshops, the problems were devised to gauge the technological advancement of computational techniques to calculate all aspects of sound generation and propagation in air directly from the fundamental governing equations. A variety of benchmark problems have been previously solved, ranging from simple geometries with idealized acoustic conditions to test the accuracy and effectiveness of computational algorithms and numerical boundary conditions; to sound radiation from a duct; to gust interaction with a cascade of airfoils; to the sound generated by a separating, turbulent viscous flow. By solving these and similar problems, workshop participants have shown the technical progress from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The fourth CAA workshop emphasized the application of CAA methods to the solution of realistic problems. The workshop was held at the Ohio Aerospace Institute in Cleveland, Ohio, on October 20 to 22, 2003. At that time, workshop participants presented their solutions to problems in one or more of five categories. Their solutions are presented in this proceedings along with comparisons to the benchmark solutions or experimental data. The five categories for the benchmark problems were as follows. Category 1: Basic Methods. The numerical computation of sound is affected by, among other issues, the choice of grid used and by the boundary conditions. Category 2: Complex Geometry. The ability to compute the sound in the presence of complex geometric surfaces is important in practical applications of CAA. Category 3: Sound Generation by Interacting With a Gust. The practical application of CAA for computing noise generated by turbomachinery involves the modeling of the noise source mechanism as a vortical gust interacting with an airfoil. Category 4: Sound Transmission and Radiation. Category 5: Sound Generation in Viscous Problems. Sound is generated under certain conditions by a viscous flow as the flow passes an object or a cavity.
Nonparametric estimation of benchmark doses in environmental risk assessment
Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen
2013-01-01
An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits' small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
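A rough sketch of the nonparametric idea: a plain isotonic fit with an extra-risk definition of the BMD. This is not the Bhattacharya-Kong estimator or the bootstrap confidence-limit procedure described above, and the dose-response numbers are invented.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Hypothetical quantal dose-response data: dose levels, responders, group sizes.
dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
responders = np.array([1, 2, 4, 7, 12, 18])
n = np.array([20, 20, 20, 20, 20, 20])

# Monotone (isotonic) estimate of the response probability as a function of dose.
iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
iso.fit(dose, responders / n, sample_weight=n)
p0 = iso.predict([0.0])[0]

# BMD for a 10% extra risk: smallest dose where (p(d) - p0) / (1 - p0) >= BMR,
# located on a fine dose grid.
BMR = 0.10
grid = np.linspace(dose.min(), dose.max(), 1001)
extra_risk = (iso.predict(grid) - p0) / (1.0 - p0)
bmd = grid[np.argmax(extra_risk >= BMR)]
print(f"estimated BMD ~ {bmd:.2f}")
```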
Algorithm and Architecture Independent Benchmarking with SEAK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.
2016-05-23
Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future-proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.
Neutron skyshine calculations with the integral line-beam method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gui, A.A.; Shultis, J.K.; Faw, R.E.
1997-10-01
Recently developed line- and conical-beam response functions are used to calculate neutron skyshine doses for four idealized source geometries. These calculations, which can serve as benchmarks, are compared with MCNP calculations, and the excellent agreement indicates that the integral conical- and line-beam method is an effective alternative to more computationally expensive transport calculations.
NASA Astrophysics Data System (ADS)
Alloui, Mebarka; Belaidi, Salah; Othmani, Hasna; Jaidane, Nejm-Eddine; Hochlaf, Majdi
2018-03-01
We performed benchmark studies on the molecular geometry, electronic properties and vibrational analysis of imidazole using semi-empirical, density functional theory and post-Hartree-Fock methods. These studies validated the use of AM1 for the treatment of larger systems. We then treated the structural, physical and chemical relationships for a series of imidazole derivatives acting as angiotensin II AT1 receptor blockers using AM1. QSAR studies were carried out for these imidazole derivatives using a combination of various physicochemical descriptors. A multiple linear regression procedure was used to derive the relationships between the molecular descriptors and the activity of the imidazole derivatives. The results validate the derived QSAR model.
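A minimal sketch of the multiple-linear-regression step used in such QSAR work; the descriptor names, values, and activities below are invented placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical descriptor matrix (columns could be, e.g., logP, polarizability,
# dipole moment) and activities for five derivatives; values are placeholders.
X = np.array([[2.1, 31.0, 4.2],
              [2.8, 33.5, 3.9],
              [3.0, 35.2, 4.5],
              [3.6, 37.8, 5.1],
              [4.1, 40.1, 4.8]])
y = np.array([5.2, 5.9, 6.1, 6.8, 7.3])

# Ordinary least squares: activity ~ b0 + b . descriptors (multiple linear regression).
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_pred = A @ coef
r2 = 1.0 - np.sum((y - y_pred) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients:", np.round(coef, 3), " R^2:", round(r2, 3))
```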
Benchmarking Data for the Proposed Signature of Used Fuel Casks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauch, Eric Benton
2016-09-23
A set of benchmarking measurements to test facets of the proposed extended storage signature was conducted on May 17, 2016. The measurements were designed to test the overall concept of how the proposed signature can be used to identify a used fuel cask based only on the distribution of neutron sources within the cask. To simulate the distribution, four Cf-252 sources were chosen and arranged on a 3x3 grid in three different patterns, and raw total neutron counts were taken at six locations around the grid. This is a very simplified test of the typical geometry studied previously in simulations with simulated used nuclear fuel.
Transient and steady state viscoelastic rolling contact
NASA Technical Reports Server (NTRS)
Padovan, J.; Paramadilok, O.
1985-01-01
Based on moving total Lagrangian coordinates, a so-called traveling Hughes-type contact strategy is developed. Employing the modified contact scheme in conjunction with a traveling finite element strategy, an overall solution methodology is developed to handle transient and steady viscoelastic rolling contact. To verify the scheme, the results of both experimental and analytical benchmarking are presented. The experimental benchmarking includes the handling of rolling tires up to their upper-bound behavior, namely the standing-wave response.
Optimally Stopped Optimization
NASA Astrophysics Data System (ADS)
Vinci, Walter; Lidar, Daniel A.
2016-11-01
We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N =1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
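As a toy illustration of the expected-cost idea (not the authors' actual formulation): suppose each call to a randomized solver returns an objective value drawn from some run-to-run distribution and incurs a fixed cost; a simple threshold stopping rule can then be tuned to minimize the expected total cost. The solver, its outcome distribution, the threshold rule, and the cost value below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_solver():
    # Hypothetical stand-in for one call to a randomized optimizer: returns the
    # objective value of the solution found in that run (lower is better).
    return rng.gamma(shape=2.0, scale=1.0)

def expected_cost(threshold, cost_per_call, n_trials=2000, max_calls=100):
    # Estimate E[objective at stopping] + cost_per_call * E[number of calls]
    # for the simple rule: stop at the first run whose result is <= threshold
    # (or after max_calls runs).
    totals = []
    for _ in range(n_trials):
        best = np.inf
        for calls in range(1, max_calls + 1):
            best = min(best, noisy_solver())
            if best <= threshold:
                break
        totals.append(best + cost_per_call * calls)
    return float(np.mean(totals))

# Sweep candidate thresholds and pick the stopping rule with the lowest expected cost.
thresholds = np.linspace(0.2, 3.0, 15)
costs = [expected_cost(t, cost_per_call=0.05) for t in thresholds]
print(f"best threshold ~ {thresholds[int(np.argmin(costs))]:.2f}, "
      f"expected total cost ~ {min(costs):.3f}")
```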
Do fungi need to be included within environmental radiation protection assessment models?
Guillén, J; Baeza, A; Beresford, N A; Wood, M D
2017-09-01
Fungi are used as biomonitors of forest ecosystems, having comparatively high uptakes of anthropogenic and naturally occurring radionuclides. However, whilst they are known to accumulate radionuclides, they are not typically considered in radiological assessment tools for environmental (non-human biota) assessment. In this paper the total dose rate to fungi is estimated using the ERICA Tool, assuming different fruiting-body geometries: a single ellipsoid, and more complex geometries considering the different components of the fruiting body and their differing radionuclide contents based upon measurement data. Anthropogenic and naturally occurring radionuclide concentrations from the Mediterranean ecosystem (Spain) were used in this assessment. The total estimated weighted dose rate was in the range 0.31-3.4 μGy/h (5th-95th percentile), similar to natural exposure rates reported for other wildlife groups. The total estimated dose was dominated by internal exposure, especially from 226Ra and 210Po. Differences in dose rate between the complex geometries and a simple ellipsoid model were negligible. Therefore, the simple ellipsoid model is recommended for assessing dose rates to fungal fruiting bodies. Fungal mycelium was also modelled, assuming a long filament. Using these geometries, assessments for fungal fruiting bodies and mycelium under different scenarios (post-accident, planned release and existing exposure) were conducted, each based on available monitoring data. The estimated total dose rate in each case was below the ERICA screening benchmark dose, except for the example post-accident existing exposure scenario (the Chernobyl Exclusion Zone), for which a dose rate in excess of 35 μGy/h was estimated for the fruiting body. The estimated mycelium dose rate in this post-accident existing exposure scenario was close to the 400 μGy/h benchmark for plants, although fungi are generally considered to be less radiosensitive than plants. Further research on appropriate mycelium geometries and their radionuclide content is required. Based on the assessments presented in this paper, there is no need to recommend that fungi be added to the existing assessment tools and frameworks; if required, some tools allow a geometry representing fungi to be created and used within a dose assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gaussian process regression for geometry optimization
NASA Astrophysics Data System (ADS)
Denzel, Alexander; Kästner, Johannes
2018-03-01
We implemented a geometry optimizer based on Gaussian process regression (GPR) to find minimum structures on potential energy surfaces. We tested both a twice-differentiable form of the Matérn kernel and the squared exponential kernel; the Matérn kernel performs much better. We give a detailed description of the optimization procedures, which include overshooting the step resulting from GPR in order to obtain a higher degree of interpolation versus extrapolation. In a benchmark against the limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer of the DL-FIND library on 26 test systems, we found the new optimizer to generally reduce the number of required optimization steps.
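A condensed sketch of the surrogate-driven loop, using scikit-learn's GPR with a Matern ν=5/2 kernel (which is twice differentiable). The one-dimensional toy potential, the plain grid search in place of the paper's overshooting strategy, and all numerical settings are my own simplifications, not the authors' optimizer.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def pes(x):
    # Toy 1-D potential energy surface standing in for ab initio energies.
    return 0.5 * (x - 1.3) ** 2 + 0.1 * np.sin(5.0 * x)

# Iteratively fit a GPR surrogate and move to the surrogate's current minimum on a grid.
X = np.array([[-1.0], [0.0], [2.5]])
y = pes(X).ravel()
gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-8, normalize_y=True)
grid = np.linspace(-2.0, 4.0, 400).reshape(-1, 1)
for _ in range(8):
    gpr.fit(X, y)
    x_next = grid[np.argmin(gpr.predict(grid))]  # step to the surrogate minimum
    X = np.vstack([X, [x_next]])
    y = np.append(y, pes(x_next))
print(f"GPR-assisted search ends near x = {X[-1, 0]:.3f}")
```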
Sub-Doppler Rovibrational Spectroscopy of the H_3^+ Cation and Isotopologues
NASA Astrophysics Data System (ADS)
Markus, Charles R.; McCollum, Jefferson E.; Dieter, Thomas S.; Kocheril, Philip A.; McCall, Benjamin J.
2017-06-01
Molecular ions play a central role in the chemistry of the interstellar medium (ISM) and act as benchmarks for state-of-the-art ab initio theory. The molecular ion H_3^+ initiates a chain of ion-neutral reactions that drives chemistry in the ISM, and observing it either directly or indirectly through its isotopologues is valuable for understanding interstellar chemistry. Improving the accuracy of laboratory measurements will assist future astronomical observations. H_3^+ is also one of a few systems whose rovibrational transitions can be predicted to spectroscopic accuracy (<1 cm^{-1}), and with careful treatment of adiabatic, nonadiabatic, and quantum electrodynamic corrections to the potential energy surface, predictions of low-lying rovibrational states can rival the uncertainty of experimental measurements. New experimental data will be needed to benchmark future treatments of these corrections. Previously we have reported 26 transitions within the fundamental band of H_3^+ with MHz-level uncertainties. With recent improvements to our overall sensitivity, we have expanded this survey to include additional transitions within the fundamental band and the first hot band. These new data will ultimately be used to predict ground-state rovibrational energy levels through combination differences, which will act as benchmarks for ab initio theory and predict forbidden rotational transitions of H_3^+. We will also discuss progress in measuring rovibrational transitions of the isotopologues H_2D^+ and D_2H^+, which will be used to assist future THz astronomical observations. J. N. Hodges, A. J. Perry, P. A. Jenkins II, B. M. Siller, and B. J. McCall, J. Chem. Phys. (2013), 139, 164201. A. J. Perry, J. N. Hodges, C. R. Markus, G. S. Kocheril, and B. J. McCall, J. Mol. Spectrosc. (2015), 317, 71-73. A. J. Perry, C. R. Markus, J. N. Hodges, G. S. Kocheril, and B. J. McCall, 71st International Symposium on Molecular Spectroscopy (2016), MH03. C. R. Markus, A. J. Perry, J. N. Hodges, and B. J. McCall, Opt. Express (2017), 25, 3709-3721.
NASA Technical Reports Server (NTRS)
James, John T.; Lam, Chiu-wing; Scully, Robert R.
2013-01-01
Brief exposures of Apollo astronauts to lunar dust occasionally elicited upper respiratory irritation; however, no limits were ever set for prolonged exposure to lunar dust. Habitats for exploration, whether mobile or fixed, must be designed to limit human exposure to lunar dust to safe levels. We have used a new technique we call Comparative Benchmark Dose Modeling to estimate safe exposure limits for lunar dust collected during the Apollo 14 mission.
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We describe a new problem size, called Class D, for the NAS Parallel Benchmarks (NPB), whose MPI source code implementation is being released as NPB 2.4. A brief rationale is given for how the new class is derived. We also describe the modifications made to the MPI (Message Passing Interface) implementation to allow the new class to be run on systems with 32-bit integers, and with moderate amounts of memory. Finally, we give the verification values for the new problem size.
To participate or not in the physician quality reporting initiative (PQRI); that is the question.
Elliott, Brett
2007-05-01
The Tax Relief and Health Care Act of 2006 authorized the establishment of a physician quality reporting system that would tie a reimbursement incentive to compliance with benchmarks considered proxies of quality patient care. The Centers for Medicare and Medicaid Services (CMS) has called this the Physician Quality Reporting Initiative (PQRI). A brief historical background on how this program evolved, how one participates in the initiative, and the strengths and weaknesses of current and new benchmarks is presented.
Brorsen, Kurt R; Yang, Yang; Hammes-Schiffer, Sharon
2017-08-03
Nuclear quantum effects such as zero-point energy play a critical role in computational chemistry and often are included as energetic corrections following geometry optimizations. The nuclear-electronic orbital (NEO) multicomponent density functional theory (DFT) method treats select nuclei, typically protons, quantum mechanically on the same level as the electrons. Electron-proton correlation is highly significant, and inadequate treatments lead to highly overlocalized nuclear densities. A recently developed electron-proton correlation functional, epc17, has been shown to provide accurate nuclear densities for molecular systems. Herein, the NEO-DFT/epc17 method is used to compute the proton affinities for a set of molecules and to examine the role of nuclear quantum effects on the equilibrium geometry of FHF-. The agreement of the computed results with experimental and benchmark values demonstrates the promise of this approach for including nuclear quantum effects in calculations of proton affinities, pKa's, optimized geometries, and reaction paths.
NASA Astrophysics Data System (ADS)
Zou, Z.; Scott, M. A.; Borden, M. J.; Thomas, D. C.; Dornisch, W.; Brivadis, E.
2018-05-01
In this paper we develop the isogeometric Bézier dual mortar method. It is based on Bézier extraction and projection and is applicable to any spline space which can be represented in Bézier form (i.e., NURBS, T-splines, LR-splines, etc.). The approach weakly enforces the continuity of the solution at patch interfaces, and the error can be adaptively controlled by leveraging the refineability of the underlying dual spline basis without introducing any additional degrees of freedom. We also develop weakly continuous geometry as a particular application of isogeometric Bézier dual mortaring. Weakly continuous geometry is a geometry description where the weak continuity constraints are built into properly modified Bézier extraction operators. As a result, multi-patch models can be processed in a solver directly without having to employ a mortaring solution strategy. We demonstrate the utility of the approach on several challenging benchmark problems. Keywords: Mortar methods, Isogeometric analysis, Bézier extraction, Bézier projection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Souris, Kevin, E-mail: kevin.souris@uclouvain.be; Lee, John Aldo; Sterpin, Edmond
2016-04-15
Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
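A schematic sketch of the class-II condensed-history structure only (continuous energy loss below a threshold, explicit sampling of rarer hard ionizations above it); every number, the stopping-power form, and the hard-event probability below are invented toy values and bear no relation to MCsquare's actual physics models.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_transport(e0=200.0, threshold=0.1, step=0.1):
    # Schematic class-II condensed-history loop: below `threshold` the energy
    # loss is lumped into a continuous (restricted) stopping power per step;
    # rarer hard ionizations above the threshold are sampled as discrete events.
    energy, depth = e0, 0.0
    while energy > 1.0:
        energy -= 0.5 * step * (200.0 / energy)          # toy restricted stopping power
        if rng.random() < 0.02:                          # toy hard-ionization probability
            energy -= threshold * (1.0 + rng.exponential(2.0))  # discrete delta-ray loss
        depth += step
    return depth

ranges = [toy_transport() for _ in range(200)]
print(f"toy mean range: {np.mean(ranges):.1f} +/- {np.std(ranges):.1f} (arbitrary units)")
```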
Voss, Clifford I.; Simmons, Craig T.; Robinson, Neville I.
2010-01-01
This benchmark for three-dimensional (3D) numerical simulators of variable-density groundwater flow and solute or energy transport consists of matching simulation results with the semi-analytical solution for the transition from one steady-state convective mode to another in a porous box. Previous experimental and analytical studies of natural convective flow in an inclined porous layer have shown that there are a variety of convective modes possible depending on system parameters, geometry and inclination. In particular, there is a well-defined transition from the helicoidal mode consisting of downslope longitudinal rolls superimposed upon an upslope unicellular roll to a mode consisting of purely an upslope unicellular roll. Three-dimensional benchmarks for variable-density simulators are currently (2009) lacking and comparison of simulation results with this transition locus provides an unambiguous means to test the ability of such simulators to represent steady-state unstable 3D variable-density physics.
Nonlinear 3D visco-resistive MHD modeling of fusion plasmas: a comparison between numerical codes
NASA Astrophysics Data System (ADS)
Bonfiglio, D.; Chacon, L.; Cappello, S.
2008-11-01
Fluid plasma models (and, in particular, the MHD model) are extensively used in the theoretical description of laboratory and astrophysical plasmas. We present here a successful benchmark between two nonlinear, three-dimensional, compressible visco-resistive MHD codes. One is the fully implicit, finite volume code PIXIE3D [1,2], which is characterized by many attractive features, notably its generalized curvilinear formulation (which makes the code applicable to different geometries) and the possibility of including in the computation the energy transport equation and the extended-MHD version of Ohm's law. In addition, the parallel version of the code features excellent scalability properties. Results from this code, obtained in cylindrical geometry, are compared with those produced by the semi-implicit cylindrical code SpeCyl, which uses finite differences radially and a spectral formulation in the other coordinates [3]. Both single-mode and multi-mode simulations are benchmarked, for both reversed-field pinch (RFP) and ohmic tokamak magnetic configurations. [1] L. Chacon, Computer Physics Communications 163, 143 (2004). [2] L. Chacon, Phys. Plasmas 15, 056103 (2008). [3] S. Cappello, Plasma Phys. Control. Fusion 46, B313 (2004) & references therein.
Benchmark for Numerical Models of Stented Coronary Bifurcation Flow.
García Carrascal, P; García García, J; Sierra Pallares, J; Castro Ruiz, F; Manuel Martín, F J
2018-09-01
In-stent restenosis affects many patients who have undergone stenting. When the stented artery is a bifurcation, the intervention is particularly critical because of the complex stent geometry involved in these structures. Computational fluid dynamics (CFD) has been shown to be an effective approach for modeling blood flow behavior and understanding the mechanisms that underlie in-stent restenosis. However, these CFD models require validation against experimental data in order to be reliable. It is with this purpose in mind that we performed particle image velocimetry (PIV) measurements of velocity fields within flows through a simplified coronary bifurcation. Although the flow in this simplified bifurcation differs from actual blood flow, it emulates the main fluid dynamic mechanisms found in hemodynamic flow. Experimental measurements were performed for several stenting techniques in both steady and unsteady flow conditions. The test conditions were strictly controlled, and uncertainty was accurately quantified. The results obtained in this research provide readily accessible, easy-to-emulate, detailed velocity fields and geometry, and they have been successfully used to validate our numerical model. These data can be used as a benchmark for further development of numerical CFD modeling in terms of comparison of the main flow pattern characteristics.
Benchmarked analyses of gamma skyshine using MORSE-CGA-PC and the DABL69 cross-section set
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reichert, P.T.; Golshani, M.
1991-01-01
Design for gamma-ray skyshine is a common consideration for a variety of nuclear and accelerator facilities. Many of these designs can benefit from a more accurate and complete treatment than can be provided by simple skyshine analysis tools, which typically require a number of conservative, simplifying assumptions in modeling the radiation source and shielding geometry. This paper considers the benchmarking of one analytical option. The MORSE-CGA Monte Carlo radiation transport code system provides the capability for detailed treatment of virtually any source and shielding geometry. Unfortunately, the mainframe computer costs of MORSE-CGA analyses can prevent cost-effective application to small projects. For this reason, the MORSE-CGA system was converted to run on IBM personal computer (PC)-compatible computers using the Intel 80386 or 80486 microprocessors. The DLC-130/DABL69 cross-section set (46n, 23g) was chosen as the most suitable, readily available, broad-group library, most importantly because of its relatively high (P5) Legendre order of expansion for the angular distribution. This is likely to be beneficial in the deep-penetration conditions modeled in some skyshine problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munro, J.F.; Kristal, J.; Thompson, G.
The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors in developing the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff, the ones closest to the work, must take ownership of the studies. This avoids the "check the box" mentality associated with some third-party studies. This workshop will provide participants with a basic understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team uses to conduct studies. The EM benchmarking team will also encourage discussion of ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.
Designing "Geometry 2.0" Learning Environments: A Preliminary Study with Primary School Students
ERIC Educational Resources Information Center
Prieto, Nuria Joglar; Sordo Juanena, José María; Star, Jon R.
2014-01-01
The information and communication technologies of Web 2.0 are arriving in our schools, allowing the design and implementation of new learning environments with great educational potential. This article proposes a pedagogical model based on a new geometry technology-integrated learning environment, called "Geometry 2.0," which was tested…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.D.; Kornreich, D.E.
Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation, as modified in the second-year renewal application, includes the following three primary tasks. Task 1 on two-dimensional neutron transport is divided into (a) the single medium searchlight problem (SLP) and (b) the two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) a point source in arbitrary geometry, (b) the single medium SLP, and (c) the two-adjacent half-space SLP. Task 3 on code verification includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP form a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.
Comparison of VRX CT scanners geometries
NASA Astrophysics Data System (ADS)
DiBianca, Frank A.; Melnyk, Roman; Duckworth, Christopher N.; Russ, Stephan; Jordan, Lawrence M.; Laughter, Joseph S.
2001-06-01
A technique called Variable-Resolution X-ray (VRX) detection greatly increases the spatial resolution in computed tomography (CT) and digital radiography (DR) as the field size decreases. The technique is based on a principle called `projective compression' that allows both the resolution element and the sampling distance of a CT detector to scale with the subject or field size. For very large (40 - 50 cm) field sizes, resolution exceeding 2 cy/mm is possible and for very small fields, microscopy is attainable with resolution exceeding 100 cy/mm. This paper compares the benefits obtainable with two different VRX detector geometries: the single-arm geometry and the dual-arm geometry. The analysis is based on Monte Carlo simulations and direct calculations. The results of this study indicate that the dual-arm system appears to have more advantages than the single-arm technique.
Parameter regimes for a single sequential quantum repeater
NASA Astrophysics Data System (ADS)
Rozpędek, F.; Goodenough, K.; Ribeiro, J.; Kalb, N.; Caprara Vivoli, V.; Reiserer, A.; Hanson, R.; Wehner, S.; Elkouss, D.
2018-07-01
Quantum key distribution allows for the generation of a secret key between distant parties connected by a quantum channel such as optical fibre or free space. Unfortunately, the rate of generation of a secret key by direct transmission is fundamentally limited by the distance. This limit can be overcome by the implementation of so-called quantum repeaters. Here, we assess the performance of a specific but very natural setup called a single sequential repeater for quantum key distribution. We offer a fine-grained assessment of the repeater by introducing a series of benchmarks. The benchmarks, which should be surpassed to claim a working repeater, are based on finite-energy considerations, thermal noise and the losses in the setup. In order to boost the performance of the studied repeaters we introduce two methods. The first one corresponds to the concept of a cut-off, which reduces the effect of decoherence during the storage of a quantum state by introducing a maximum storage time. Secondly, we supplement the standard classical post-processing with an advantage distillation procedure. Using these methods, we find realistic parameters for which it is possible to achieve rates greater than each of the benchmarks, guiding the way towards implementing quantum repeaters.
NASA Technical Reports Server (NTRS)
Lockard, David P.
2011-01-01
Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and experimental data from 2 facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details which would be necessary to compute the noise remains challenging. In particular, how to best simulate the effects of the experimental transition strip, and the associated high Reynolds number effects, was unclear. Furthermore, capturing the spanwise variation proved difficult.
A soft damping function for dispersion corrections with less overfitting
NASA Astrophysics Data System (ADS)
Ucak, Umit V.; Ji, Hyunjun; Singh, Yashpal; Jung, Yousung
2016-11-01
The use of damping functions in empirical dispersion correction schemes is common and widespread. These damping functions contain scaling and damping parameters, and they are usually optimized for the best performance in practical systems. In this study, it is shown that the overfitting problem can be present in current damping functions, which can sometimes yield erroneous results for real applications beyond the nature of training sets. To this end, we present a damping function called linear soft damping (lsd) that suffers less from this overfitting. This linear damping function damps the asymptotic curve more softly than existing damping functions, attempting to minimize the usual overcorrection. The performance of the proposed damping function was tested with benchmark sets for thermochemistry, reaction energies, and intramolecular interactions, as well as intermolecular interactions including nonequilibrium geometries. For noncovalent interactions, all three damping schemes considered in this study (lsd, lg, and BJ) roughly perform comparably (approximately within 1 kcal/mol), but for atomization energies, lsd clearly exhibits a better performance (up to 2-6 kcal/mol) compared to other schemes due to an overfitting in lg and BJ. The number of unphysical parameters resulting from global optimization also supports the overfitting symptoms shown in the latter numerical tests.
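As background, empirical dispersion corrections of the kind discussed here are usually written in the generic damped-asymptotic form (standard DFT-D notation, not a formula quoted from this paper):

    E_{\mathrm{disp}} \;=\; -\sum_{A<B} \sum_{n=6,8} s_n\, \frac{C_n^{AB}}{R_{AB}^{n}}\, f_{\mathrm{damp},n}(R_{AB}),

where the damping function f_damp (for example the BJ form mentioned in the abstract) switches the correction off at short interatomic separations; the scaling and damping parameters inside f_damp are precisely the quantities the authors argue can be overfitted to the training sets.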
A 1D-2D coupled SPH-SWE model applied to open channel flow simulations in complicated geometries
NASA Astrophysics Data System (ADS)
Chang, Kao-Hua; Sheu, Tony Wen-Hann; Chang, Tsang-Jung
2018-05-01
In this study, a one- and two-dimensional (1D-2D) coupled model is developed to solve the shallow water equations (SWEs). The solutions are obtained using a Lagrangian meshless method called smoothed particle hydrodynamics (SPH) to simulate shallow water flows in converging, diverging and curved channels. A buffer zone is introduced to exchange information between the 1D and 2D SPH-SWE models. Interpolated water discharge values and water surface levels at the internal boundaries are prescribed as the inflow/outflow boundary conditions in the two SPH-SWE models. In addition, instead of using the SPH summation operator, we directly solve the continuity equation by introducing a diffusive term to suppress oscillations in the predicted water depth. The performance of the two approaches in calculating the water depth is comprehensively compared through a case study of a straight channel. Additionally, three benchmark cases involving converging, diverging and curved channels are adopted to demonstrate the ability of the proposed 1D and 2D coupled SPH-SWE model through comparisons with measured data and predicted mesh-based numerical results. The proposed model provides satisfactory accuracy and guaranteed convergence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brady, M; Browand, F; Flowers, D
A Working Group Meeting on Heavy Vehicle Aerodynamic Drag was held at the University of Southern California, Los Angeles, California on July 30, 1999. The purpose of the meeting was to present technical details on the experimental and computational plans and approaches and provide an update on progress in obtaining experimental results, model developments, and simulations. The focus of the meeting was a review of the University of Southern California's (USC) experimental plans and results and the computational results from Lawrence Livermore National Laboratory (LLNL) and Sandia National Laboratories (SNL) for the integrated tractor-trailer benchmark geometry called the Sandia Model. Much of the meeting discussion involved the NASA Ames 7 ft x 10 ft wind tunnel tests and the need for documentation of the results. The present and projected budget and funding situation was also discussed. Presentations were given by representatives from the Department of Energy (DOE) Office of Transportation Technology Office of Heavy Vehicle Technology (OHVT), LLNL, SNL, USC, and the California Institute of Technology (Caltech). This report contains the technical presentations (viewgraphs) delivered at the Meeting, briefly summarizes the comments and conclusions, and outlines the future action items.
Geometry and experience: Einstein's 1921 paper and Hilbert's axiomatic system
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Gandt, Francois
2006-06-19
In his 1921 paper Geometrie und Erfahrung, Einstein describes the new epistemological status of geometry, divorced from any intuitive or a priori content. He calls that 'axiomatics', following Hilbert's theoretical developments on axiomatic systems, which started with the stimulus given by a talk by Hermann Wiener in 1891 and progressed until the Foundations of Geometry in 1899. Difficult questions arise: how is a theoretical system related to an intuitive empirical content?
Meier, Matthias; Jakub, Zdeněk; Balajka, Jan; Hulva, Jan; Bliem, Roland; Thakur, Pardeep K.; Lee, Tien-Lin; Franchini, Cesare; Schmid, Michael; Diebold, Ulrike; Allegretti, Francesco; Parkinson, Gareth S.
2018-01-01
Accurately modelling the structure of a catalyst is a fundamental prerequisite for correctly predicting reaction pathways, but a lack of clear experimental benchmarks makes it difficult to determine the optimal theoretical approach. Here, we utilize the normal incidence X-ray standing wave (NIXSW) technique to precisely determine the three dimensional geometry of Ag1 and Cu1 adatoms on Fe3O4(001). Both adatoms occupy bulk-continuation cation sites, but with a markedly different height above the surface (0.43 ± 0.03 Å (Cu1) and 0.96 ± 0.03 Å (Ag1)). HSE-based calculations accurately predict the experimental geometry, but the more common PBE + U and PBEsol + U approaches perform poorly. PMID:29334395
Fast immersed interface Poisson solver for 3D unbounded problems around arbitrary geometries
NASA Astrophysics Data System (ADS)
Gillis, T.; Winckelmans, G.; Chatelain, P.
2018-02-01
We present a fast and efficient Fourier-based solver for the Poisson problem around an arbitrary geometry in an unbounded 3D domain. This solver merges two rewarding approaches, the lattice Green's function method and the immersed interface method, using the Sherman-Morrison-Woodbury decomposition formula. The method is intended to be second order up to the boundary. This is verified on two potential flow benchmarks. We also further analyse the iterative process and the convergence behavior of the proposed algorithm. The method is applicable to a wide range of problems involving a Poisson equation around inner bodies, which goes well beyond the present validation on potential flows.
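For reference, the Sherman-Morrison-Woodbury identity invoked here is the standard low-rank update formula (written generically, not in the paper's notation):

    (A + UCV)^{-1} \;=\; A^{-1} - A^{-1} U \left(C^{-1} + V A^{-1} U\right)^{-1} V A^{-1},

which is what allows the immersed-interface boundary correction (a low-rank term) to be layered on top of a fast unbounded lattice Green's function solve without refactoring the full operator.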
Verification of a magnetic island in gyro-kinetics by comparison with analytic theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zarzoso, D., E-mail: david.zarzoso-fernandez@polytechnique.org; Casson, F. J.; Poli, E.
A rotating magnetic island is imposed in the gyrokinetic code GKW, when finite differences are used for the radial direction, in order to develop the predictions of analytic tearing mode theory and understand its limitations. The implementation is verified against analytics in sheared slab geometry with three numerical tests that are suggested as benchmark cases for every code that imposes a magnetic island. The convergence requirements to properly resolve physics around the island separatrix are investigated. In the slab geometry, at low magnetic shear, binormal flows inside the island can drive Kelvin-Helmholtz instabilities which prevent the formation of the steady state for which the analytic theory is formulated.
Structural weights analysis of advanced aerospace vehicles using finite element analysis
NASA Technical Reports Server (NTRS)
Bush, Lance B.; Lentz, Christopher A.; Rehder, John J.; Naftel, J. Chris; Cerro, Jeffrey A.
1989-01-01
A conceptual/preliminary level structural design system has been developed for structural integrity analysis and weight estimation of advanced space transportation vehicles. The system includes a three-dimensional interactive geometry modeler, a finite element pre- and post-processor, a finite element analyzer, and a structural sizing program. Inputs to the system include the geometry, surface temperature, material constants, construction methods, and aerodynamic and inertial loads. The results are a sized vehicle structure capable of withstanding the static loads incurred during assembly, transportation, operations, and missions, and a corresponding structural weight. An analysis of the Space Shuttle external tank is included in this paper as a validation and benchmark case of the system.
ERIC Educational Resources Information Center
Santos-Trigo, Manuel
2004-01-01
A dynamic program for geometry called Cabri Geometry II is used to examine properties of figures like triangles and make connections with other mathematical ideas like ellipse. The technology tip includes directions for creating such a problem with technology and suggestions for exploring it.
Interactive visual optimization and analysis for RFID benchmarking.
Wu, Yingcai; Chung, Ka-Kei; Qu, Huamin; Yuan, Xiaoru; Cheung, S C
2009-01-01
Radio frequency identification (RFID) is a powerful automatic remote identification technique that has wide applications. To facilitate RFID deployment, an RFID benchmarking instrument called aGate has been invented to identify the strengths and weaknesses of different RFID technologies in various environments. However, the data acquired by aGate are usually complex time varying multidimensional 3D volumetric data, which are extremely challenging for engineers to analyze. In this paper, we introduce a set of visualization techniques, namely, parallel coordinate plots, orientation plots, a visual history mechanism, and a 3D spatial viewer, to help RFID engineers analyze benchmark data visually and intuitively. With the techniques, we further introduce two workflow procedures (a visual optimization procedure for finding the optimum reader antenna configuration and a visual analysis procedure for comparing the performance and identifying the flaws of RFID devices) for the RFID benchmarking, with focus on the performance analysis of the aGate system. The usefulness and usability of the system are demonstrated in the user evaluation.
The Suite for Embedded Applications and Kernels
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-05-10
Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed SEAK, a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions to these bottlenecks, and (b) to facilitate rigorous, objective, end-user evaluation of their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user blackbox evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.
NASA Astrophysics Data System (ADS)
Bird, M. B.; Butler, S. L.; Hawkes, C. D.; Kotzer, T.
2014-12-01
The use of numerical simulations to model physical processes occurring within subvolumes of rock samples that have been characterized using advanced 3D imaging techniques is becoming increasingly common. Not only do these simulations allow for the determination of macroscopic properties like hydraulic permeability and electrical formation factor, but they also allow the user to visualize processes taking place at the pore scale and allow multiple different processes to be simulated on the same geometry. Most efforts to date have used specialized research software for the purpose of simulations. In this contribution, we outline the steps taken to use the commercial software Avizo to transform a 3D synchrotron X-ray-derived tomographic image of a rock core sample to an STL (STereoLithography) file which can be imported into the commercial multiphysics modeling package COMSOL. We demonstrate the use of COMSOL to perform fluid and electrical current flow simulations through the pore spaces. The permeability and electrical formation factor of the sample are calculated and compared with laboratory-derived values and benchmark calculations. Although the simulation domains that we were able to model on a desktop computer were significantly smaller than representative elementary volumes, we were able to establish Kozeny-Carman and Archie's law trends on which laboratory measurements and previous benchmark solutions fall. The rock core samples include a Fontainebleau sandstone used for benchmarking and a marly dolostone sampled from a well in the Weyburn oil field of southeastern Saskatchewan, Canada. Such carbonates are known to have complicated pore structures compared with sandstones, yet we are able to calculate reasonable macroscopic properties. We discuss the computing resources required.
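For orientation, the two trends mentioned are commonly written, in generic form rather than as the specific fits used in this work, as the Kozeny-Carman relation and Archie's law:

    k \;\propto\; \frac{\phi^{3}}{S^{2}\,(1-\phi)^{2}}, \qquad F \;=\; \phi^{-m},

where k is permeability, \phi porosity, S the specific surface area per unit volume, F the electrical formation factor, and m the cementation exponent.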
Reducing calls missed by the hospital telephone exchange from 26% to less than 10%.
Bhartia, Saru; Bahlvi, Zorba; Sharma, Irina
2016-01-01
A hospital's telephone exchange is the first point of contact for patients and their attendants to take appointments, to collect healthcare related information and to connect to the hospital in case of emergencies. At Sitaram Bhartia Institute of Science and Research the doctors, patients, and attendants often complained about the inefficiency of the hospital exchange. In February 2012, a doctor raised her concern about calls not being picked up at the exchange with the senior management, and a QI project was initiated to tackle the problem. Baseline data showed that about 26% of incoming calls to the hospital during 8am to 8pm were not being picked up. On the basis of the baseline data, call audits, staff interviews, and observations, the project team identified the defects. These defects were categorized under four headings: manpower, equipment, processes, and environment. The team proposed several change ideas. Some of these change ideas were implemented immediately. Three proposed change ideas were tested through individual PDSA cycles. The percentage of missed calls dropped from 26% to 18.1% after the first cycle and then to 9.6% and 6.5% after the subsequent cycles, which involved testing two additional change ideas. These changes were implemented, and a benchmark of no more than 10% missed calls was set. For nearly three years we have held the gains and have met the benchmark of missing no more than 10% of calls coming to the hospital exchange between 8am and 8pm. The contributing factors to the success have been the involvement of frontline workers, an expert and engaged head of department, and senior leadership support.
NASA Astrophysics Data System (ADS)
Schneider, E. A.; Deinert, M. R.; Cady, K. B.
2006-10-01
The balance of isotopes in a nuclear reactor core is key to understanding the overall performance of a given fuel cycle. This balance is in turn most strongly affected by the time and energy-dependent neutron flux. While many large and involved computer packages exist for determining this spectrum, a simplified approach amenable to rapid computation is missing from the literature. We present such a model, which accepts as inputs the fuel element/moderator geometry and composition, reactor geometry, fuel residence time and target burnup and we compare it to OECD/NEA benchmarks for homogeneous MOX and UOX LWR cores. Collision probability approximations to the neutron transport equation are used to decouple the spatial and energy variables. The lethargy dependent neutron flux, governed by coupled integral equations for the fuel and moderator/coolant regions is treated by multigroup thermalization methods, and the transport of neutrons through space is modeled by fuel to moderator transport and escape probabilities. Reactivity control is achieved through use of a burnable poison or adjustable control medium. The model calculates the buildup of 24 actinides, as well as fission products, along with the lethargy dependent neutron flux and the results of several simulations are compared with benchmarked standards.
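The actinide buildup referred to here is governed, in generic depletion (Bateman-type) form rather than the paper's own notation, by coupled equations of the type

    \frac{dN_i}{dt} \;=\; \sum_{j \neq i} \left(\lambda_{j \to i} + \phi\, \sigma_{j \to i}\right) N_j \;-\; \left(\lambda_i + \phi\, \sigma_{a,i}\right) N_i,

where each nuclide density N_i is fed by decay and transmutation from its parents and depleted by its own decay and absorption, with the flux \phi supplied by the multigroup thermalization step described above.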
Benchmarking Commercial Conformer Ensemble Generators.
Friedrich, Nils-Ole; de Bruyn Kops, Christina; Flachsenberg, Florian; Sommer, Kai; Rarey, Matthias; Kirchmair, Johannes
2017-11-27
We assess and compare the performance of eight commercial conformer ensemble generators (ConfGen, ConfGenX, cxcalc, iCon, MOE LowModeMD, MOE Stochastic, MOE Conformation Import, and OMEGA) and one leading free algorithm, the distance geometry algorithm implemented in RDKit. The comparative study is based on a new version of the Platinum Diverse Dataset, a high-quality benchmarking dataset of 2859 protein-bound ligand conformations extracted from the PDB. Differences in the performance of commercial algorithms are much smaller than those observed for free algorithms in our previous study (J. Chem. Inf. 2017, 57, 529-539). For commercial algorithms, the median minimum root-mean-square deviations measured between protein-bound ligand conformations and ensembles of a maximum of 250 conformers are between 0.46 and 0.61 Å. Commercial conformer ensemble generators are characterized by their high robustness, with at least 99% of all input molecules successfully processed and few or even no substantial geometrical errors detectable in their output conformations. The RDKit distance geometry algorithm (with minimization enabled) appears to be a good free alternative since its performance is comparable to that of the midranked commercial algorithms. Based on a statistical analysis, we elaborate on which algorithms to use and how to parametrize them for best performance in different application scenarios.
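As a minimal illustration of the free alternative mentioned above, the sketch below (Python, assuming RDKit is installed; the SMILES string and conformer count are placeholders, and the "reference" pose is simply another embedding rather than a protein-bound conformation from the Platinum Diverse Dataset) generates an ensemble with RDKit's distance geometry code, minimizes it, and reports the minimum RMSD to the reference:

    from rdkit import Chem
    from rdkit.Chem import AllChem, rdMolAlign

    smiles = "CC(=O)Nc1ccc(O)cc1"              # placeholder molecule
    probe = Chem.AddHs(Chem.MolFromSmiles(smiles))
    ref = Chem.AddHs(Chem.MolFromSmiles(smiles))

    params = AllChem.ETKDGv3()
    params.randomSeed = 42
    AllChem.EmbedMolecule(ref, params)          # stand-in for a protein-bound reference pose
    AllChem.EmbedMultipleConfs(probe, numConfs=250, params=params)
    AllChem.MMFFOptimizeMoleculeConfs(probe)    # "minimization enabled", as in the study

    # minimum RMSD between the reference pose and any member of the ensemble
    best = min(rdMolAlign.GetBestRMS(probe, ref, prbId=cid)
               for cid in range(probe.GetNumConformers()))
    print(f"minimum RMSD to reference: {best:.2f} A")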
NASA Astrophysics Data System (ADS)
Dimitriadis, Panayiotis; Tegos, Aristoteles; Oikonomou, Athanasios; Pagana, Vassiliki; Koukouvinos, Antonios; Mamassis, Nikos; Koutsoyiannis, Demetris; Efstratiadis, Andreas
2016-03-01
One-dimensional and quasi-two-dimensional hydraulic freeware models (HEC-RAS, LISFLOOD-FP and FLO-2d) are widely used for flood inundation mapping. These models are tested on a benchmark test with a mixed rectangular-triangular channel cross section. Using a Monte-Carlo approach, we employ extended sensitivity analysis by simultaneously varying the input discharge, longitudinal and lateral gradients and roughness coefficients, as well as the grid cell size. Based on statistical analysis of three output variables of interest, i.e. water depths at the inflow and outflow locations and total flood volume, we investigate the uncertainty enclosed in different model configurations and flow conditions, without the influence of errors and other assumptions on topography, channel geometry and boundary conditions. Moreover, we estimate the uncertainty associated to each input variable and we compare it to the overall one. The outcomes of the benchmark analysis are further highlighted by applying the three models to real-world flood propagation problems, in the context of two challenging case studies in Greece.
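A minimal sketch of the kind of Monte-Carlo input variation described is given below (Python/NumPy; run_hydraulic_model is a hypothetical stand-in for a HEC-RAS, LISFLOOD-FP or FLO-2d run, and all parameter ranges are illustrative only, not those used in the study):

    import numpy as np

    rng = np.random.default_rng(1)
    n_runs = 1000

    # illustrative uniform ranges for the varied inputs
    Q      = rng.uniform(50.0, 500.0, n_runs)      # inflow discharge [m^3/s]
    S_long = rng.uniform(1e-4, 1e-2, n_runs)       # longitudinal gradient [-]
    S_lat  = rng.uniform(1e-4, 1e-2, n_runs)       # lateral gradient [-]
    n_man  = rng.uniform(0.02, 0.10, n_runs)       # Manning roughness [s/m^(1/3)]
    dx     = rng.choice([5.0, 10.0, 20.0], n_runs) # grid cell size [m]

    def run_hydraulic_model(q, sl, st, n, cell):
        """Hypothetical placeholder returning (h_in, h_out, flood_volume)."""
        h = (n * q / (cell * np.sqrt(sl))) ** 0.6  # crude normal-depth-like scaling
        return h, 0.8 * h, q * 3600.0 * st

    outputs = np.array([run_hydraulic_model(*p) for p in zip(Q, S_long, S_lat, n_man, dx)])
    for name, col in zip(("h_in", "h_out", "volume"), outputs.T):
        print(f"{name}: mean = {col.mean():.2f}, cv = {col.std() / col.mean():.2f}")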
Verification of a neutronic code for transient analysis in reactors with Hex-z geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez-Pintor, S.; Verdu, G.; Ginestar, D.
Due to the geometry of the fuel bundles, to simulate reactors such as VVER reactors it is necessary to develop methods that can deal with hexagonal prisms as basic elements of the spatial discretization. The main features of a code based on a high order finite element method for the spatial discretization of the neutron diffusion equation and an implicit difference method for the time discretization of this equation are presented, and the performance of the code is tested by solving the first exercise of the AER transient benchmark. The obtained results are compared with the reference results of the benchmark and with the results provided by the PARCS code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lell, R. M.; Schaefer, R. W.; McKnight, R. D.
Over a period of 30 years more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited to form the basis for criticality safety benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was {sup 235}U or {sup 239}Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. The term 'benchmark' in a ZPR program connotes a particularly simple loading aimed at gaining basic reactor physics insight, as opposed to studying a reactor design. In fact, the ZPR-6/7 Benchmark Assembly (Reference 1) had a very simple core unit cell assembled from plates of depleted uranium, sodium, iron oxide, U3O8, and plutonium. The ZPR-6/7 core cell-average composition is typical of the interior region of liquid-metal fast breeder reactors (LMFBRs) of the era. It was one part of the Demonstration Reactor Benchmark Program, which provided integral experiments characterizing the important features of demonstration-size LMFBRs. As a benchmark, ZPR-6/7 was devoid of many 'real' reactor features, such as simulated control rods and multiple enrichment zones, in its reference form. Those kinds of features were investigated experimentally in variants of the reference ZPR-6/7 or in other critical assemblies in the Demonstration Reactor Benchmark Program.
Radiation Detection Computational Benchmark Scenarios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.
2013-09-24
Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operational performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL's ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for compilation. This is a report describing the details of the selected benchmarks and results from various transport codes.
ERIC Educational Resources Information Center
Martin, John
2010-01-01
The cycloid has been called the Helen of Geometry, not only because of its beautiful properties but also because of the quarrels it provoked between famous mathematicians of the 17th century. This article surveys the history of the cycloid and its importance in the development of the calculus.
Muver, a computational framework for accurately calling accumulated mutations.
Burkholder, Adam B; Lujan, Scott A; Lavender, Christopher A; Grimm, Sara A; Kunkel, Thomas A; Fargo, David C
2018-05-09
Identification of mutations from next-generation sequencing data typically requires a balance between sensitivity and accuracy. This is particularly true of DNA insertions and deletions (indels), that can impart significant phenotypic consequences on cells but are harder to call than substitution mutations from whole genome mutation accumulation experiments. To overcome these difficulties, we present muver, a computational framework that integrates established bioinformatics tools with novel analytical methods to generate mutation calls with the extremely low false positive rates and high sensitivity required for accurate mutation rate determination and comparison. Muver uses statistical comparison of ancestral and descendant allelic frequencies to identify variant loci and assigns genotypes with models that include per-sample assessments of sequencing errors by mutation type and repeat context. Muver identifies maximally parsimonious mutation pathways that connect these genotypes, differentiating potential allelic conversion events and delineating ambiguities in mutation location, type, and size. Benchmarking with a human gold standard father-son pair demonstrates muver's sensitivity and low false positive rates. In DNA mismatch repair (MMR) deficient Saccharomyces cerevisiae, muver detects multi-base deletions in homopolymers longer than the replicative polymerase footprint at rates greater than predicted for sequential single-base deletions, implying a novel multi-repeat-unit slippage mechanism. Benchmarking results demonstrate the high accuracy and sensitivity achieved with muver, particularly for indels, relative to available tools. Applied to an MMR-deficient Saccharomyces cerevisiae system, muver mutation calls facilitate mechanistic insights into DNA replication fidelity.
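As a purely illustrative sketch of the kind of ancestral-versus-descendant allelic-frequency comparison described (Python/SciPy; this is a generic two-sample test on read counts, not muver's actual statistical model, and the counts are invented):

    from scipy.stats import fisher_exact

    # hypothetical read counts at one locus: (reads supporting variant, reads supporting reference)
    ancestral  = (1, 99)     # variant essentially absent in the ancestor
    descendant = (35, 65)    # variant at ~35% frequency in the descendant

    table = [list(ancestral), list(descendant)]
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
    if p_value < 1e-6:
        print("flag locus as a candidate accumulated mutation")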
Accurate reconstruction of 3D cardiac geometry from coarsely-sliced MRI.
Ringenberg, Jordan; Deo, Makarand; Devabhaktuni, Vijay; Berenfeld, Omer; Snyder, Brett; Boyers, Pamela; Gold, Jeffrey
2014-02-01
We present a comprehensive validation analysis to assess the geometric impact of using coarsely-sliced short-axis images to reconstruct patient-specific cardiac geometry. The methods utilize high-resolution diffusion tensor MRI (DTMRI) datasets as reference geometries from which synthesized coarsely-sliced datasets simulating in vivo MRI were produced. 3D models are reconstructed from the coarse data using variational implicit surfaces through a commonly used modeling tool, CardioViz3D. The resulting geometries were then compared to the reference DTMRI models from which they were derived to analyze how well the synthesized geometries approximate the reference anatomy. Averaged over seven hearts, 95% spatial overlap, less than 3% volume variability, and a normal-to-surface distance of 0.32 mm were observed between the synthesized myocardial geometries reconstructed from 8 mm sliced images and the reference data. The results provide strong supportive evidence to validate the hypothesis that coarsely-sliced MRI may be used to accurately reconstruct geometric ventricular models. Furthermore, the use of DTMRI for validation of in vivo MRI presents a novel benchmark procedure for studies which aim to substantiate their modeling and simulation methods using coarsely-sliced cardiac data. In addition, the paper outlines a suggested original procedure for deriving image-based ventricular models using the CardioViz3D software. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
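For readers who want to reproduce this kind of overlap metric, a minimal sketch follows (Python/NumPy; the arrays are assumed to be binary voxel masks of the synthesized and reference myocardial segmentations, and the Dice coefficient is used here as a plausible stand-in for the paper's "spatial overlap", not necessarily the authors' exact definition):

    import numpy as np

    def dice_and_volume_diff(mask_a: np.ndarray, mask_b: np.ndarray, voxel_volume: float = 1.0):
        """Return (Dice coefficient, relative volume difference) for two binary masks."""
        a = mask_a.astype(bool)
        b = mask_b.astype(bool)
        intersection = np.logical_and(a, b).sum()
        dice = 2.0 * intersection / (a.sum() + b.sum())
        vol_a, vol_b = a.sum() * voxel_volume, b.sum() * voxel_volume
        rel_vol_diff = abs(vol_a - vol_b) / vol_b
        return dice, rel_vol_diff

    # toy example: two slightly offset spheres standing in for synthesized vs. reference geometry
    z, y, x = np.ogrid[-32:32, -32:32, -32:32]
    ref = (x**2 + y**2 + z**2) < 20**2
    syn = ((x - 2)**2 + y**2 + z**2) < 20**2
    print(dice_and_volume_diff(syn, ref))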
The Use of Rubrics in Benchmarking and Assessing Employability Skills
ERIC Educational Resources Information Center
Riebe, Linda; Jackson, Denise
2014-01-01
Calls for employability skill development in undergraduates now extend across many culturally similar developed economies. Government initiatives, industry professional accreditation criteria, and the development of academic teaching and learning standards increasingly drive the employability agenda, further cementing the need for skill…
CFD-Based Design of Turbopump Inlet Duct for Reduced Dynamic Loads
NASA Technical Reports Server (NTRS)
Rothermel, Jeffry; Dorney, Suzanne M.; Dorney, Daniel J.
2003-01-01
Numerical simulations have been completed for a variety of designs for a 90 deg elbow duct. The objective is to identify a design that minimizes the dynamic load entering a LOX turbopump located at the elbow exit. Designs simulated to date indicate that simpler duct geometries result in lower losses. Benchmark simulations have verified that the compressible flow codes used in this study are applicable to these incompressible flow simulations.
CFD-based Design of LOX Pump Inlet Duct for Reduced Dynamic Loads
NASA Technical Reports Server (NTRS)
Rothermel, Jeffry; Dorney, Daniel J.; Dorney, Suzanne M.
2003-01-01
Numerical simulations have been completed for a variety of designs for a 90 deg elbow duct. The objective is to identify a design that minimizes the dynamic load entering a LOX turbopump located at the elbow exit. Designs simulated to date indicate that simpler duct geometries result in lower losses. Benchmark simulations have verified that the compressible flow code used in this study is applicable to these incompressible flow simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES) such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
Warped conformal field theory as lower spin gravity
NASA Astrophysics Data System (ADS)
Hofman, Diego M.; Rollier, Blaise
2015-08-01
Two dimensional Warped Conformal Field Theories (WCFTs) may represent the simplest examples of field theories without Lorentz invariance that can be described holographically. As such they constitute a natural window into holography in non-AdS space-times, including the near horizon geometry of generic extremal black holes. It is shown in this paper that WCFTs possess a type of boost symmetry. Using this insight, we discuss how to couple these theories to background geometry. This geometry is not Riemannian. We call it Warped Geometry and it turns out to be a variant of a Newton-Cartan structure with additional scaling symmetries. With this formalism the equivalent of Weyl invariance in these theories is presented and we write two explicit examples of WCFTs. These are free fermionic theories. Lastly we present a systematic description of the holographic duals of WCFTs. It is argued that the minimal setup is not Einstein gravity but an SL(2, R) × U(1) Chern-Simons Theory, which we call Lower Spin Gravity. This point of view makes manifest the definition of boundary for these non-AdS geometries. This case represents the first step towards understanding a fully invariant formalism for W_N field theories and their holographic duals.
A symmetry measure for damage detection with mode shapes
NASA Astrophysics Data System (ADS)
Chen, Justin G.; Büyüköztürk, Oral
2017-11-01
This paper introduces a feature for detecting damage or changes in structures, the continuous symmetry measure, which can quantify the amount of a particular rotational, mirror, or translational symmetry in a mode shape of a structure. Many structures in the built environment have geometries that are either symmetric or almost symmetric, however damage typically occurs in a local manner causing asymmetric changes in the structure's geometry or material properties, and alters its mode shapes. The continuous symmetry measure can quantify these changes in symmetry as a novel indicator of damage for data-based structural health monitoring approaches. This paper describes the concept as a basis for detecting changes in mode shapes and detecting structural damage. Application of the method is demonstrated in various structures with different symmetrical properties: a pipe cross-section with a finite element model and experimental study, the NASA 8-bay truss model, and the simulated IASC-ASCE structural health monitoring benchmark structure. The applicability and limitations of the feature in applying it to structures of varying geometries is discussed.
NASA Astrophysics Data System (ADS)
Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing; Walker, Paul D.
2017-02-01
This paper proposes an uncertain modelling and computational method to analyze dynamic responses of rigid-flexible multibody systems (or mechanisms) with random geometry and material properties. Firstly, the deterministic model for the rigid-flexible multibody system is built with the absolute nodal coordinate formulation (ANCF), in which the flexible parts are modeled by using ANCF elements, while the rigid parts are described by ANCF reference nodes (ANCF-RNs). Secondly, uncertainty in the geometry of the rigid parts is expressed as uniform random variables, while uncertainty in the material properties of the flexible parts is modeled as a continuous random field, which is further discretized to Gaussian random variables using a series expansion method. Finally, a non-intrusive numerical method is developed to solve the dynamic equations of systems involving both types of random variables, which systematically integrates the deterministic generalized-α solver with Latin Hypercube sampling (LHS) and Polynomial Chaos (PC) expansion. The benchmark slider-crank mechanism is used as a numerical example to demonstrate the characteristics of the proposed method.
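A minimal sketch of the Latin Hypercube sampling step follows (Python/SciPy; the three uncertain geometry/material parameters, their bounds, and the dynamics solve are all hypothetical placeholders rather than the paper's actual model):

    import numpy as np
    from scipy.stats import qmc

    # three illustrative uncertain inputs: link length [m], link width [m], mean Young's modulus [Pa]
    l_bounds = [0.29, 0.009, 1.9e11]
    u_bounds = [0.31, 0.011, 2.1e11]

    sampler = qmc.LatinHypercube(d=3, seed=42)
    unit_samples = sampler.random(n=200)                 # 200 stratified samples in [0, 1)^3
    samples = qmc.scale(unit_samples, l_bounds, u_bounds)

    def solve_multibody_dynamics(length, width, youngs_modulus):
        """Hypothetical stand-in for the deterministic generalized-alpha solve."""
        return length * width * youngs_modulus * 1e-9    # placeholder response quantity

    responses = np.array([solve_multibody_dynamics(*row) for row in samples])
    print(f"response mean = {responses.mean():.3f}, std = {responses.std():.3f}")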
NASA Technical Reports Server (NTRS)
Baumeister, Joseph F.
1994-01-01
A non-flowing, electrically heated test rig was developed to verify computer codes that calculate radiant energy propagation from nozzle geometries that represent aircraft propulsion nozzle systems. Since there are a variety of analysis tools used to evaluate thermal radiation propagation from partially enclosed nozzle surfaces, an experimental benchmark test case was developed for code comparison. This paper briefly describes the nozzle test rig and the developed analytical nozzle geometry used to compare the experimental and predicted thermal radiation results. A major objective of this effort was to make available the experimental results and the analytical model in a format to facilitate conversion to existing computer code formats. For code validation purposes this nozzle geometry represents one validation case for one set of analysis conditions. Since each computer code has advantages and disadvantages based on scope, requirements, and desired accuracy, the usefulness of this single nozzle baseline validation case can be limited for some code comparisons.
Particle swarm optimization with recombination and dynamic linkage discovery.
Chen, Ying-Ping; Peng, Wen-Chih; Jian, Ming-Chung
2007-12-01
In this paper, we try to improve the performance of the particle swarm optimizer by incorporating the linkage concept, which is an essential mechanism in genetic algorithms, and design a new linkage identification technique called dynamic linkage discovery to address the linkage problem in real-parameter optimization problems. Dynamic linkage discovery is a costless and effective linkage recognition technique that adapts the linkage configuration by employing only the selection operator without extra judging criteria irrelevant to the objective function. Moreover, a recombination operator that utilizes the discovered linkage configuration to promote the cooperation of particle swarm optimizer and dynamic linkage discovery is accordingly developed. By integrating the particle swarm optimizer, dynamic linkage discovery, and recombination operator, we propose a new hybridization of optimization methodologies called particle swarm optimization with recombination and dynamic linkage discovery (PSO-RDL). In order to study the capability of PSO-RDL, numerical experiments were conducted on a set of benchmark functions as well as on an important real-world application. The benchmark functions used in this paper were proposed in the 2005 Institute of Electrical and Electronics Engineers Congress on Evolutionary Computation. The experimental results on the benchmark functions indicate that PSO-RDL can provide a level of performance comparable to that given by other advanced optimization techniques. In addition to the benchmark, PSO-RDL was also used to solve the economic dispatch (ED) problem for power systems, which is a real-world problem and highly constrained. The results indicate that PSO-RDL can successfully solve the ED problem for the three-unit power system and obtain the currently known best solution for the 40-unit system.
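For reference, a bare-bones particle swarm update on a classic benchmark function is sketched below (Python/NumPy; this is plain PSO without the recombination operator or dynamic linkage discovery that distinguish PSO-RDL, and all coefficients are conventional illustrative choices):

    import numpy as np

    def sphere(x):                      # classic benchmark objective
        return np.sum(x * x, axis=-1)

    rng = np.random.default_rng(0)
    n_particles, dim, iters = 30, 10, 200
    w, c1, c2 = 0.72, 1.49, 1.49        # inertia and acceleration coefficients

    pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = sphere(pbest)
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = sphere(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    print(f"best value found: {pbest_val.min():.3e}")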
Mental Rotation, Pictured Rotation, and Tandem Rotation in Depth
1997-01-01
field. Such an explanation by natural geometry conflates visual comparison with physical measurement. This application of geometry is called natural in ... the theory of vision parasitic on geometry: it is unclear what could be meant by a 'mental operation of rotation', except by reference to physical ... operation, a mental analogue of the physical operation of rotation in space. Since then the story of mental rotation has become far more complicated
ERIC Educational Resources Information Center
Casa, Tutita M.; Firmender, Janine M.; Gavin, M. Katherine; Carroll, Susan R.
2017-01-01
This research responds to the call by early childhood educators advocating for more challenging mathematics curriculum at the primary level. The kindergarten Project M[superscript 2] units focus on challenging geometry and measurement concepts by positioning students as practicing mathematicians. The research reported herein highlights the…
NASA Astrophysics Data System (ADS)
La Tessa, Chiara; Mancusi, Davide; Rinaldi, Adele; di Fino, Luca; Zaconte, Veronica; Larosa, Marianna; Narici, Livio; Gustafsson, Katarina; Sihver, Lembit
ALTEA-Space is the principal in-space experiment of an international and multidisciplinary project called ALTEA (Anomalous Long Term Effects on Astronauts). The measurements were performed on the International Space Station between August 2006 and July 2007 and aimed at characterising the space radiation environment inside the station. The analysis of the collected data provided the abundances of elements with charge 5 ≤ Z ≤ 26 and energy above 100 MeV/nucleon. The same results have been obtained by simulating the experiment with the three-dimensional Monte Carlo code PHITS (Particle and Heavy Ion Transport System). The simulation reproduces accurately the composition of the space radiation environment as well as the geometry of the experimental apparatus; moreover, the presence of several materials, e.g. the spacecraft hull and the shielding, that surround the device has been taken into account. An estimate of the abundances has also been calculated with the help of experimental fragmentation cross sections taken from the literature and predictions of the deterministic codes GNAC, SihverCC and Tripathi97. The comparison between the experimental and simulated data has two important aspects: it validates the codes, giving possible hints on how to benchmark them, and it helps to interpret the measurements and therefore to better understand the results.
An Integrated Development Environment for Adiabatic Quantum Programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; McCaskey, Alex; Bennink, Ryan S
2014-01-01
Adiabatic quantum computing is a promising route to the computational power afforded by quantum information processing. The recent availability of adiabatic hardware raises the question of how well quantum programs perform. Benchmarking behavior is challenging since the multiple steps to synthesize an adiabatic quantum program are highly tunable. We present an adiabatic quantum programming environment called JADE that provides control over all the steps taken during program development. JADE captures the workflow needed to rigorously benchmark performance while also allowing a variety of problem types, programming techniques, and processor configurations. We have also integrated JADE with a quantum simulation engine that enables program profiling using numerical calculation. The computational engine supports plug-ins for simulation methodologies tailored to various metrics and computing resources. We present the design, integration, and deployment of JADE and discuss its use for benchmarking adiabatic quantum programs.
Benchmarking Memory Performance with the Data Cube Operator
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Shabanov, Leonid V.
2004-01-01
Data movement across a computer memory hierarchy and across computational grids is known to be a limiting factor for applications processing large data sets. We use the Data Cube Operator on an Arithmetic Data Set, called ADC, to benchmark the capabilities of computers and of computational grids to handle large distributed data sets. We present a prototype implementation of a parallel algorithm for computation of the operator. The algorithm follows a known approach for computing views from the smallest parent. The ADC stresses all levels of grid memory and storage by producing some of the 2^d views of an Arithmetic Data Set of d-tuples described by a small number of integers. We control the data intensity of the ADC by selecting the tuple parameters, the sizes of the views, and the number of realized views. Benchmarking results of memory performance of a number of computer architectures and of a small computational grid are presented.
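A compact illustration of what the 2^d views of a d-attribute data set look like is sketched below (Python; the tuples and attribute names are invented, and this in-memory group-by is only a sketch of the cube operator, not the ADC benchmark's parallel or distributed implementation):

    from itertools import combinations
    from collections import defaultdict

    # toy d-tuples: (attribute values..., measure)
    attributes = ("a", "b", "c")                      # d = 3  ->  2^3 = 8 views
    rows = [
        (1, 2, 7, 10.0),
        (1, 3, 7, 5.0),
        (2, 2, 8, 1.0),
    ]

    def group_by(view, data):
        """Aggregate the measure over the subset of attributes in `view`."""
        out = defaultdict(float)
        for row in data:
            key = tuple(row[attributes.index(a)] for a in view)
            out[key] += row[-1]
        return dict(out)

    for size in range(len(attributes) + 1):
        for view in combinations(attributes, size):   # all 2^d attribute subsets
            print(view, group_by(view, rows))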
Energy design for protein-protein interactions
Ravikant, D. V. S.; Elber, Ron
2011-01-01
Proteins bind to other proteins efficiently and specifically to carry out many cell functions such as signaling, activation, transport, enzymatic reactions, and more. To determine the geometry and strength of binding of a protein pair, an energy function is required. An algorithm to design an optimal energy function, based on empirical data of protein complexes, is proposed and applied. Emphasis is placed on negative design, in which incorrect geometries are presented to the algorithm, which learns to avoid them. For the docking problem the search for plausible geometries can be performed exhaustively. The possible geometries of the complex are generated on a grid with the help of a fast Fourier transform algorithm. A novel formulation of negative design makes it possible to investigate iteratively hundreds of millions of negative examples while monotonically improving the quality of the potential. Experimental structures for 640 protein complexes are used to generate positive and negative examples for learning parameters. The algorithm designed in this work finds the correct binding structure as the lowest energy minimum in 318 of the 640 examples. Further benchmarks on independent sets confirm the significant capacity of the scoring function to recognize correct modes of interactions. PMID:21842951
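A minimal sketch of the FFT-based exhaustive translational scan mentioned above follows (Python/NumPy; the receptor and ligand grids are random placeholders, and the correlation score is generic grid overlap rather than the authors' learned energy function):

    import numpy as np

    rng = np.random.default_rng(3)
    receptor = rng.random((32, 32, 32))          # placeholder receptor grid
    ligand = np.zeros_like(receptor)
    ligand[:8, :8, :8] = rng.random((8, 8, 8))   # placeholder ligand grid

    # correlation over all translations in one pass of forward/inverse FFTs
    scores = np.fft.ifftn(np.fft.fftn(receptor) * np.conj(np.fft.fftn(ligand))).real
    best = np.unravel_index(np.argmax(scores), scores.shape)
    print(f"best translation (voxels): {best}, score = {scores[best]:.3f}")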
Benchmark study for charge deposition by high energy electrons in thick slabs
NASA Technical Reports Server (NTRS)
Jun, I.
2002-01-01
The charge deposition profiles created when high-energy (1, 10, and 100 MeV) electrons impinge on a thick slab of elemental aluminum, copper, and tungsten are presented in this paper. The charge deposition profiles were computed using existing representative Monte Carlo codes: TIGER3.0 (the 1D module of ITS3.0) and MCNP version 4B. The results showed that TIGER3.0 and MCNP4B agree very well (within 20% of each other) in the majority of the problem geometry. The TIGER results were considered to be accurate based on previous studies. Thus, it was demonstrated that MCNP, with its powerful geometry capability and flexible source and tally options, could be used in calculations of electron charging in high-energy electron-rich space radiation environments.
A Flow Solver for Three-Dimensional DRAGON Grids
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Zheng, Yao
2002-01-01
The DRAGONFLOW code has been developed to solve the three-dimensional Navier-Stokes equations over a complex geometry whose flow domain is discretized with the DRAGON grid, a combination of a Chimera grid and a collection of unstructured grids. In the DRAGONFLOW suite, both OVERFLOW and USM3D are presented in the form of module libraries, and a master module controls the invoking of these individual modules. This report includes essential aspects, programming structures, benchmark tests, and numerical simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, W.
2012-07-01
Recent assessment results indicate that the coarse-mesh finite-difference method (FDM) gives consistently smaller percent differences in channel powers than the fine-mesh FDM when compared to the reference MCNP solution for CANDU-type reactors. However, there is an impression that the fine-mesh FDM should, in theory, always give more accurate results than the coarse-mesh FDM. To answer the question of whether the better performance of the coarse-mesh FDM for CANDU-type reactors was just a coincidence (cancellation of errors) or was caused by the use of heavy water or the use of lattice-homogenized cross sections for the cluster fuel geometry in the diffusion calculation, three benchmark problems were set up with three different fuel lattices: CANDU, HWR and PWR. These benchmark problems were then used to analyze the root cause of the better performance of the coarse-mesh FDM for CANDU-type reactors. The analyses confirm that the better performance of the coarse-mesh FDM for CANDU-type reactors is mainly caused by the use of lattice-homogenized cross sections for the sub-meshes of the cluster fuel geometry in the diffusion calculation. Based on the analyses, it is recommended to use the 2 x 2 coarse-mesh FDM to analyze CANDU-type reactors when lattice-homogenized cross sections are used in the core analysis.
Monte Carlo chord length sampling for d-dimensional Markov binary mixtures
NASA Astrophysics Data System (ADS)
Larmier, Coline; Lam, Adam; Brantley, Patrick; Malvagi, Fausto; Palmer, Todd; Zoia, Andrea
2018-01-01
The Chord Length Sampling (CLS) algorithm is a powerful Monte Carlo method that models the effects of stochastic media on particle transport by generating on-the-fly the material interfaces seen by the random walkers during their trajectories. This annealed disorder approach, which formally consists of solving the approximate Levermore-Pomraning equations for linear particle transport, enables a considerable speed-up with respect to transport in quenched disorder, where ensemble-averaging of the Boltzmann equation with respect to all possible realizations is needed. However, CLS intrinsically neglects the correlations induced by the spatial disorder, so that the accuracy of the solutions obtained by using this algorithm must be carefully verified with respect to reference solutions based on quenched disorder realizations. When the disorder is described by Markov mixing statistics, such comparisons have been attempted so far only for one-dimensional geometries, of the rod or slab type. In this work we extend these results to Markov media in two-dimensional (extruded) and three-dimensional geometries, by revisiting the classical set of benchmark configurations originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]. In particular, we examine the discrepancies between CLS and reference solutions for scalar particle flux and transmission/reflection coefficients as a function of the material properties of the benchmark specifications and of the system dimensionality.
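As a rough illustration of the on-the-fly interface sampling that CLS performs, consider the toy sketch below (Python; a 1D, monoenergetic, purely absorbing rod with two Markov-mixed materials, which is a drastic simplification of the benchmark transport problems, and all parameter values are invented):

    import numpy as np

    rng = np.random.default_rng(7)
    sigma_t = {0: 1.0, 1: 0.1}       # total cross sections of the two materials [1/cm]
    mean_chord = {0: 0.5, 1: 1.5}    # mean chord lengths of the Markov mixing [cm]
    slab_length = 10.0
    n_histories = 100_000

    transmitted = 0
    for _ in range(n_histories):
        x, mat = 0.0, rng.integers(2)                         # sample the entering material
        while True:
            d_collision = rng.exponential(1.0 / sigma_t[mat])
            d_interface = rng.exponential(mean_chord[mat])    # CLS: sample the next interface on the fly
            x += min(d_collision, d_interface)
            if x >= slab_length:
                transmitted += 1
                break
            if d_collision <= d_interface:
                break                                         # absorbed (no scattering in this toy model)
            mat = 1 - mat                                     # crossed an interface: switch material
    print(f"transmission probability ~ {transmitted / n_histories:.4f}")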
Monte Carlo chord length sampling for d-dimensional Markov binary mixtures
Larmier, Coline; Lam, Adam; Brantley, Patrick; ...
2017-09-27
The Chord Length Sampling (CLS) algorithm is a powerful Monte Carlo method that models the effects of stochastic media on particle transport by generating on-the-fly the material interfaces seen by the random walkers during their trajectories. This annealed disorder approach, which formally consists of solving the approximate Levermore–Pomraning equations for linear particle transport, enables a considerable speed-up with respect to transport in quenched disorder, where ensemble-averaging of the Boltzmann equation with respect to all possible realizations is needed. However, CLS intrinsically neglects the correlations induced by the spatial disorder, so that the accuracy of the solutions obtained by using this algorithm must be carefully verified with respect to reference solutions based on quenched disorder realizations. When the disorder is described by Markov mixing statistics, such comparisons have been attempted so far only for one-dimensional geometries, of the rod or slab type. In this work we extend these results to Markov media in two-dimensional (extruded) and three-dimensional geometries, by revisiting the classical set of benchmark configurations originally proposed by Adams, Larsen and Pomraning and extended by Brantley. In particular, we examine the discrepancies between CLS and reference solutions for scalar particle flux and transmission/reflection coefficients as a function of the material properties of the benchmark specifications and of the system dimensionality.
Monte Carlo chord length sampling for d-dimensional Markov binary mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larmier, Coline; Lam, Adam; Brantley, Patrick
The Chord Length Sampling (CLS) algorithm is a powerful Monte Carlo method that models the effects of stochastic media on particle transport by generating on-the-fly the material interfaces seen by the random walkers during their trajectories. This annealed disorder approach, which formally consists of solving the approximate Levermore–Pomraning equations for linear particle transport, enables a considerable speed-up with respect to transport in quenched disorder, where ensemble-averaging of the Boltzmann equation with respect to all possible realizations is needed. However, CLS intrinsically neglects the correlations induced by the spatial disorder, so that the accuracy of the solutions obtained by using this algorithm must be carefully verified with respect to reference solutions based on quenched disorder realizations. When the disorder is described by Markov mixing statistics, such comparisons have been attempted so far only for one-dimensional geometries, of the rod or slab type. In this work we extend these results to Markov media in two-dimensional (extruded) and three-dimensional geometries, by revisiting the classical set of benchmark configurations originally proposed by Adams, Larsen and Pomraning and extended by Brantley. In particular, we examine the discrepancies between CLS and reference solutions for scalar particle flux and transmission/reflection coefficients as a function of the material properties of the benchmark specifications and of the system dimensionality.
Pan Air Geometry Management System (PAGMS): A data-base management system for PAN AIR geometry data
NASA Technical Reports Server (NTRS)
Hall, J. F.
1981-01-01
A data-base management system called PAGMS was developed to facilitate the data transfer in applications computer programs that create, modify, plot or otherwise manipulate PAN AIR type geometry data in preparation for input to the PAN AIR system of computer programs. PAGMS is composed of a series of FORTRAN callable subroutines which can be accessed directly from applications programs. Currently only a NOS version of PAGMS has been developed.
Arithmetic Data Cube as a Data Intensive Benchmark
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Shabano, Leonid
2003-01-01
Data movement across computational grids and across the memory hierarchy of individual grid machines is known to be a limiting factor for applications involving large data sets. In this paper we introduce the Data Cube Operator on an Arithmetic Data Set, which we call the Arithmetic Data Cube (ADC). We propose to use the ADC to benchmark grid capabilities for handling large distributed data sets. The ADC stresses all levels of grid memory by producing the 2^d views of an Arithmetic Data Set of d-tuples described by a small number of parameters. We control the data intensity of the ADC by controlling the sizes of the views through the choice of the tuple parameters.
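A minimal sketch of what the 2^d views of a set of d-tuples are: one group-by per subset of the d attributes. The tiny synthetic data set and the count aggregate below are invented for illustration and are unrelated to the ADC's arithmetic tuple generator.

```python
from itertools import combinations
from collections import Counter

# Tiny synthetic data set of d-tuples (d = 3 attributes per tuple).
d = 3
tuples = [(1, 2, 7), (1, 3, 7), (2, 2, 7), (1, 2, 5)]

# One view per subset of the d attributes: 2^d views in total.
views = {}
for k in range(d + 1):
    for attrs in combinations(range(d), k):
        projected = [tuple(t[i] for i in attrs) for t in tuples]
        views[attrs] = Counter(projected)          # group-by with a count aggregate

print(len(views), "views for d =", d)              # 8 views for d = 3
for attrs, counts in sorted(views.items()):
    print("group-by on attributes", attrs, "->", dict(counts))
```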
A Monte-Carlo Benchmark of TRIPOLI-4® and MCNP on ITER neutronics
NASA Astrophysics Data System (ADS)
Blanchet, David; Pénéliau, Yannick; Eschbach, Romain; Fontaine, Bruno; Cantone, Bruno; Ferlet, Marc; Gauthier, Eric; Guillon, Christophe; Letellier, Laurent; Proust, Maxime; Mota, Fernando; Palermo, Iole; Rios, Luis; Guern, Frédéric Le; Kocan, Martin; Reichle, Roger
2017-09-01
Radiation protection and shielding studies are often based on the extensive use of 3D Monte-Carlo neutron and photon transport simulations. The ITER Organization hence recommends the use of the MCNP-5 code (version 1.60), in association with the FENDL-2.1 neutron cross section data library, specifically dedicated to fusion applications. The MCNP reference model of the ITER tokamak, the 'C-lite', is being continuously developed and improved. This article proposes to develop an alternative model, equivalent to the 'C-lite', but for the Monte-Carlo code TRIPOLI-4®. A benchmark study is defined to test this new model. Since one of the most critical areas for ITER neutronics analysis concerns the assessment of radiation levels and Shutdown Dose Rates (SDDR) behind the Equatorial Port Plugs (EPP), the benchmark is conducted to compare the neutron flux through the EPP. This problem is quite challenging with regard to the complex geometry and considering the important neutron flux attenuation, ranging from 10^14 down to 10^8 n·cm^-2·s^-1. Such a code-to-code comparison provides independent validation of the Monte-Carlo simulations, improving the confidence in neutronic results.
ForceGen 3D structure and conformer generation: from small lead-like molecules to macrocyclic drugs
NASA Astrophysics Data System (ADS)
Cleves, Ann E.; Jain, Ajay N.
2017-05-01
We introduce the ForceGen method for 3D structure generation and conformer elaboration of drug-like small molecules. ForceGen is novel, avoiding use of distance geometry, molecular templates, or simulation-oriented stochastic sampling. The method is primarily driven by the molecular force field, implemented using an extension of MMFF94s and a partial charge estimator based on electronegativity-equalization. The force field is coupled to algorithms for direct sampling of realistic physical movements made by small molecules. Results are presented on a standard benchmark from the Cambridge Crystallographic Database of 480 drug-like small molecules, including full structure generation from SMILES strings. Reproduction of protein-bound crystallographic ligand poses is demonstrated on four carefully curated data sets: the ConfGen Set (667 ligands), the PINC cross-docking benchmark (1062 ligands), a large set of macrocyclic ligands (182 total with typical ring sizes of 12-23 atoms), and a commonly used benchmark for evaluating macrocycle conformer generation (30 ligands total). Results compare favorably to alternative methods, and performance on macrocyclic compounds approaches that observed on non-macrocycles while yielding a roughly 100-fold speed improvement over alternative MD-based methods with comparable performance.
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Hopkins, Deborah; Datuin, Marvin; Warchol, Mark; Warchol, Lyudmila; Forsyth, David S.; Buynak, Charlie; Lindgren, Eric A.
2017-02-01
For model benchmark studies, the accuracy of the model is typically evaluated based on the change in response relative to a selected reference signal. The use of a side drilled hole (SDH) in a plate was investigated as a reference signal for angled beam shear wave inspection of fastener sites in aircraft structures. Systematic studies were performed with varying SDH depth and size, and varying the ultrasonic probe frequency, focal depth, and probe height. Increased error was observed with the simulation of angled shear wave beams in the near-field. Even more significantly, asymmetry in real probes and the inherent sensitivity of near-field signals to subtle test conditions were found to pose a greater challenge to achieving model agreement. To achieve quality model benchmark results for this problem, it is critical to carefully align the probe with the part geometry, to verify symmetry in the probe response, and ideally to avoid using reference signals from the near-field response. Suggested reference signals for angled beam shear wave inspections include the 'through hole' corner specular reflection signal and the 'full skip' signal off of the far wall from the side drilled hole.
ERIC Educational Resources Information Center
Fielker, David
2007-01-01
Geoff Giles died suddenly in 2005. He was a highly original thinker in the field of geometry teaching. As early as 1964, when teaching at Strathallen School in Perth, he was writing in "MT27" about constructing tessellations by modifying the sides of triangles and (irregular) quadrilaterals to produce what he called "trisides" and "quadrisides".…
Learning Geometry by Designing Persian Mosaics
ERIC Educational Resources Information Center
Karssenberg, Goossen
2014-01-01
To encourage students to do geometry, the art of Islamic geometric ornamentation was chosen as the central theme of a lesson strand which was developed using the newly presented didactical tool called "Learning by Acting". The Dutch students who took these lessons in 2010 to 2013 were challenged to act as if they themselves were Persian…
ERIC Educational Resources Information Center
Weldeana, Hailu Nigus; Sbhatu, Desta Berhe
2017-01-01
Background: This article reports contributions of an assessment tool called Portfolio of Evidence (PE) in learning college geometry. Material and methods: Two classes of second-year students from one Ethiopian teacher education college, assigned into Treatment and Comparison classes, were participated. The assessment tools used in the Treatment…
On coupling fluid plasma and kinetic neutral physics models
Joseph, I.; Rensink, M. E.; Stotler, D. P.; ...
2017-03-01
The coupled fluid plasma and kinetic neutral physics equations are analyzed through theory and simulation of benchmark cases. It is shown that coupling methods that do not treat the coupling rates implicitly are restricted to short time steps for stability. Fast charge exchange, ionization and recombination coupling rates exist, even after constraining the solution by requiring that the neutrals are at equilibrium. For explicit coupling, the present implementation of Monte Carlo correlated sampling techniques does not allow for complete convergence in slab geometry. For the benchmark case, residuals decay with particle number and increase with grid size, indicating that they scale in a manner that is similar to the theoretical prediction for nonlinear bias error. Progress is reported on implementation of a fully implicit Jacobian-free Newton–Krylov coupling scheme. The present block Jacobi preconditioning method is still sensitive to time step and methods that better precondition the coupled system are under investigation.
ERIC Educational Resources Information Center
Marazza, Lawrence L.
This book explores the necessity for building strong relationships among administrators, teachers, parents, and the community by applying what the book calls the five essentials of organizational excellence. The five essentials are planning strategically; benchmarking for excellence; leading collaboratively; engaging the community; and governing…
Benchmarking DoD Use of Additive Manufacturing and Quantifying Costs
2017-03-01
...developing a cost model. The US Army Logistics Innovation Agency published a study called "Additive Manufacturing Cost - Benefit Analysis". This...to over fifteen thousand dollars on GSA Advantage. Desktop printers do not require extensive support equipment.
Locating the Mentor: An Autoethnographic Reflection
ERIC Educational Resources Information Center
Chaddock, Noelle
2017-01-01
Many mentoring conversations--especially those that pertain to junior faculty of color--cite concern about the socio-racial location of the mentor. This essay, an autoethnographic reflection by an academic of color, is a call to consider the characteristics of mentoring that have moved faculty successfully through their institutional benchmarks.…
NASA Astrophysics Data System (ADS)
Braitenberg, Carla; Sampietro, Daniele; Pivetta, Tommaso; Zuliani, David; Barbagallo, Alfio; Fabris, Paolo; Rossi, Lorenzo; Fabbri, Julius; Mansi, Ahmed Hamdi
2016-04-01
Underground caves bear a natural hazard due to their possible evolution into a sinkhole. Mapping of all existing caves could be useful for general civil uses, as natural deposits, or for tourism and sports. Natural caves exist globally and are typical in karst areas. We investigate the resolution power of modern gravity campaigns to systematically detect all void caves of a minimum size in a given area. Both aerogravity and terrestrial acquisitions are considered. Positioning of the gravity stations is fastest with GNSS methods, the performance of which is investigated. The estimates are based on a benchmark cave whose geometry is known precisely through a laser-scan survey. The cave is the Grotta Gigante cave in NE Italy in the classic karst. The gravity acquisition is discussed, where heights have been acquired with dual-frequency geodetic GNSS receivers and a Total Station. Height acquisitions with non-geodetic low-cost receivers are shown to be useful, although the error on the gravity field is larger. The cave produces a signal of -1.5 × 10^-5 m/s^2, with a clear elliptic geometry. We analyze the feasibility of airborne gravity acquisitions for the purpose of systematically mapping void caves. It is found that observations from fixed-wing aircraft cannot resolve the caves, but observations from slower and low-flying helicopters or drones do. In order to detect the presence of caves the size of the benchmark cave, systematic terrestrial acquisitions require a density of three stations per square 500 m × 500 m tile. The question has a large impact on civil and environmental purposes, since it will allow planning of urban development at a safe distance from subsurface caves. The survey shows that a systematic coverage of the karst would have the benefit of recovering the positions of all of the greater existing void caves.
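For orientation, the order of magnitude of such a signal can be checked with the textbook point-mass (buried spherical void) approximation; the radius, depth, and density contrast below are assumed round numbers for illustration, not the actual Grotta Gigante parameters:

\[
\Delta g \;\simeq\; -\frac{4}{3}\pi G \,\Delta\rho\, \frac{R^{3}}{z^{2}}
\;\approx\; -\frac{4}{3}\pi \,(6.67\times 10^{-11})\,(2.5\times 10^{3})\,\frac{50^{3}}{100^{2}}
\;\approx\; -0.9\times 10^{-5}\ \mathrm{m/s^{2}},
\]

for a void of radius R ≈ 50 m with density contrast Δρ ≈ 2.5 × 10^3 kg/m^3 at depth z ≈ 100 m directly below the station, which is the same order as the measured -1.5 × 10^-5 m/s^2 signal.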
CAPRI: A Geometric Foundation for Computational Analysis and Design
NASA Technical Reports Server (NTRS)
Haimes, Robert
2006-01-01
CAPRI is a software building tool-kit that rests on two ideas: (1) a simplified, object-oriented, hierarchical view of a solid part integrating both geometry and topology definitions, and (2) programming access to this part or assembly and any attached data. A complete definition of the geometry and application programming interface can be found in the document CAPRI: Computational Analysis PRogramming Interface appended to this report. In summary, the interface is subdivided into the following functional components: 1. Utility routines -- These routines include the initialization of CAPRI, loading CAD parts and querying the operational status as well as closing the system down. 2. Geometry data-base queries -- This group of functions allows all top-level applications to figure out and get detailed information on any geometric component in the Volume definition. 3. Point queries -- These calls allow grid generators, or solvers doing node adaptation, to snap points directly onto geometric entities. 4. Calculated or geometrically derived queries -- These entry points calculate data from the geometry to aid in grid generation. 5. Boundary data routines -- This part of CAPRI allows general data to be attached to Boundaries so that the boundary conditions can be specified and stored within CAPRI's data-base. 6. Tag based routines -- This part of the API allows the specification of properties associated with either the Volume (material properties) or Boundary (surface properties) entities. 7. Geometry based interpolation routines -- This part of the API facilitates multi-disciplinary coupling and allows zooming through Boundary Attachments. 8. Geometric creation and manipulation -- These calls facilitate constructing simple solid entities and perform the Boolean solid operations. Geometry constructed in this manner has the advantage that, if the data are kept consistent with the CAD package, a new design can be incorporated directly and is manufacturable. 9. Master Model access -- This addition to the API allows for the querying of the parameters and dimensions of the model. The feature tree is also exposed so it is easy to see where the parameters are applied. Calls exist to allow for the modification of the parameters and the suppression/unsuppression of nodes in the tree. Part regeneration is performed by a single API call and a new part becomes available within CAPRI (if the regeneration was successful). This is described in a separate document. Components 1-7 are considered the CAPRI base level reader.
Experimental and analytical studies of a model helicopter rotor in hover
NASA Technical Reports Server (NTRS)
Caradonna, F. X.; Tung, C.
1981-01-01
A benchmark test to aid the development of various rotor performance codes was conducted. Simultaneous blade pressure measurements and tip vortex surveys were made for a wide range of tip Mach numbers including the transonic flow regime. The measured tip vortex strength and geometry permit effective blade loading predictions when used as input to a prescribed wake lifting surface code. It is also shown that with proper inflow and boundary layer modeling, the supercritical flow regime can be accurately predicted.
Anytime query-tuned kernel machine classifiers via Cholesky factorization
NASA Technical Reports Server (NTRS)
DeCoste, D.
2002-01-01
We recently demonstrated 2 to 64-fold query-time speedups of Support Vector Machine and Kernel Fisher classifiers via a new computational geometry method for anytime output bounds (DeCoste,2002). This new paper refines our approach in two key ways. First, we introduce a simple linear algebra formulation based on Cholesky factorization, yielding simpler equations and lower computational overhead. Second, this new formulation suggests new methods for achieving additional speedups, including tuning on query samples. We demonstrate effectiveness on benchmark datasets.
Schaffter, Thomas; Marbach, Daniel; Floreano, Dario
2011-08-15
Over the last decade, numerous methods have been developed for inference of regulatory networks from gene expression data. However, accurate and systematic evaluation of these methods is hampered by the difficulty of constructing adequate benchmarks and the lack of tools for a differentiated analysis of network predictions on such benchmarks. Here, we describe a novel and comprehensive method for in silico benchmark generation and performance profiling of network inference methods available to the community as an open-source software called GeneNetWeaver (GNW). In addition to the generation of detailed dynamical models of gene regulatory networks to be used as benchmarks, GNW provides a network motif analysis that reveals systematic prediction errors, thereby indicating potential ways of improving inference methods. The accuracy of network inference methods is evaluated using standard metrics such as precision-recall and receiver operating characteristic curves. We show how GNW can be used to assess the performance and identify the strengths and weaknesses of six inference methods. Furthermore, we used GNW to provide the international Dialogue for Reverse Engineering Assessments and Methods (DREAM) competition with three network inference challenges (DREAM3, DREAM4 and DREAM5). GNW is available at http://gnw.sourceforge.net along with its Java source code, user manual and supporting data. Supplementary data are available at Bioinformatics online. dario.floreano@epfl.ch.
The Distributive Property in Grade 3?
ERIC Educational Resources Information Center
Benson, Christine C.; Wall, Jennifer J.; Malm, Cheryl
2013-01-01
The Common Core State Standards for Mathematics (CCSSM) call for an in depth, integrated look at elementary school mathematical concepts. Some topics have been realigned to support an integration of topics leading to conceptual understanding. For example, the third-grade standards call for relating the concept of area (geometry) to multiplication…
Johnson, T K; Vessella, R L
1989-07-01
Dosimetry calculations of monoclonal antibodies (MABs) are made difficult because the focus of radioactivity is targeted for a nonstandard volume in a nonstandard geometry, precluding straightforward application of the MIRD formalism. The MABDOS software addresses this shortcoming by interactive placement of a spherical perturbation into the Standard Man geometry for each tumor focus. S tables are calculated by a Monte Carlo simulation of photon transport for each organ system (including tumor) that localizes activity. Performance benchmarks are reported that measure the time required to simulate 60,000 photons for each penetrating radiation in the spectrum of 99mTc and 131I using the kidney as source organ. Results indicate that calculation times are probably prohibitive on current microcomputer platforms. Mini and supercomputers offer a realistic platform for MABDOS patient dosimetry estimates.
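The dose calculation that MABDOS extends with tumor-specific S values has, in outline, the standard MIRD form below; this is textbook MIRD formalism stated only for orientation, not a formula taken from the MABDOS paper itself:

\[
\bar{D}(r_T) \;=\; \sum_{r_S} \tilde{A}(r_S)\, S(r_T \leftarrow r_S),
\qquad
S(r_T \leftarrow r_S) \;=\; \frac{1}{m_{r_T}} \sum_i \Delta_i\,\phi_i(r_T \leftarrow r_S),
\]

where \(\tilde{A}(r_S)\) is the cumulated activity in source region \(r_S\), \(\Delta_i\) the mean energy emitted per decay for radiation type \(i\), \(\phi_i\) the absorbed fraction in the target, and \(m_{r_T}\) the target mass; MABDOS recomputes the S values by Monte Carlo photon transport once a spherical tumor focus is inserted into the Standard Man geometry.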
Data Reduction Procedures for Laser Velocimeter Measurements in Turbomachinery Rotors
NASA Technical Reports Server (NTRS)
Lepicovsky, Jan
1994-01-01
Blade-to-blade velocity distributions based on laser velocimeter data acquired in compressor or fan rotors are increasingly used as benchmark data for the verification and calibration of turbomachinery computational fluid dynamics (CFD) codes. Using laser Doppler velocimeter (LDV) data for this purpose, however, must be done cautiously. Aside from the still not fully resolved issue of the seed particle response in complex flowfields, there is an important inherent difference between CFD predictions and LDV blade-to-blade velocity distributions. CFD codes calculate velocity fields for an idealized rotor passage. LDV data, on the other hand, stem from the actual geometry of all blade channels in a rotor. The geometry often varies from channel to channel as a result of manufacturing tolerances, assembly tolerances, and incurred operational damage or changes in the rotor individual blades.
A Study of Neutron Leakage in Finite Objects
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2015-01-01
A computationally efficient 3DHZETRN code capable of simulating High charge (Z) and Energy (HZE) and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation was recently developed for simple shielded objects. Monte Carlo (MC) benchmarks were used to verify the 3DHZETRN methodology in slab and spherical geometry, and it was shown that 3DHZETRN agrees with MC codes to the degree that various MC codes agree among themselves. One limitation in the verification process is that all of the codes (3DHZETRN and three MC codes) utilize different nuclear models/databases. In the present report, the new algorithm, with well-defined convergence criteria, is used to quantify the neutron leakage from simple geometries to provide means of verifying 3D effects and to provide guidance for further code development.
NASA Astrophysics Data System (ADS)
Zhao, H.; Fu, C.; Yu, D.; Wang, Z.; Hu, T.; Ruan, M.
2018-03-01
The design and optimization of the Electromagnetic Calorimeter (ECAL) are crucial for the Circular Electron Positron Collider (CEPC) project, a proposed future Higgs/Z factory. Following the reference design of the International Large Detector (ILD), a set of silicon-tungsten sampling ECAL geometries are implemented into the Geant4 simulation, whose performance is then scanned using Arbor algorithm. The photon energy response at different ECAL longitudinal structures is analyzed, and the separation performance between nearby photon showers with different ECAL transverse cell sizes is investigated and parametrized. The overall performance is characterized by a set of physics benchmarks, including νν H events where Higgs boson decays into a pair of photons (EM objects) or gluons (jets) and Z→τ+τ- events. Based on these results, we propose an optimized ECAL geometry for the CEPC project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lefrancois, A.; L'Eplattenier, P.; Burger, M.
2006-02-13
Metallic tube compressions in Z-current geometry were performed at the Cyclope facility of the Gramat Research Center in order to study the behavior of metals under large strain at high strain rate. 3D configurations of cylinder compressions have been calculated here to benchmark the new beta version of the electromagnetism package coupled with the dynamics in LS-DYNA and compared with the Cyclope experiments. The electromagnetism module is being developed in the general-purpose explicit and implicit finite element program LS-DYNA® in order to perform coupled mechanical/thermal/electromagnetism simulations. The Maxwell equations are solved using a Finite Element Method (FEM) for the solid conductors coupled with a Boundary Element Method (BEM) for the surrounding air (or vacuum). More details can be read in the references.
The benchmark aeroelastic models program: Description and highlights of initial results
NASA Technical Reports Server (NTRS)
Bennett, Robert M.; Eckstrom, Clinton V.; Rivera, Jose A., Jr.; Dansberry, Bryan E.; Farmer, Moses G.; Durham, Michael H.
1991-01-01
An experimental effort in aeroelasticity called the Benchmark Models Program was implemented. The primary purpose of this program is to provide the necessary data to evaluate computational fluid dynamics codes for aeroelastic analysis. It also focuses on increasing the understanding of the physics of unsteady flows and providing data for empirical design. An overview is given of this program and some results obtained in the initial tests are highlighted. The tests that were completed include measurement of unsteady pressures during flutter of a rigid wing with a NACA 0012 airfoil section and dynamic response measurements of a flexible rectangular wing with a thick circular arc airfoil undergoing shock boundary layer oscillations.
South Africa's Increased Matriculation Passes: What Skunks behind the Rose?
ERIC Educational Resources Information Center
Monyooe, Lebusa; Tjatji, Martin; Mosese, Eulenda
2014-01-01
This article argues that the exponential increases in the Grade 12 (Matriculation) passes post 1994 do not necessarily translate to quality because of the low performance norms and standards set for passing Grade 12. It further calls for a serious reflection and interrogation of existing policies on performance, benchmarks, teacher education…
ERIC Educational Resources Information Center
Bills, Andrew M.; Giles, David; Rogers, Bev
2016-01-01
Dominant discourses on professional development for teachers internationally are increasingly geared to the priority of ensuring individual teachers are meeting prescribed standards-based performance benchmarks which we call "performativities" in this paper. While this intent is invariably played out in individualised performance…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-02
... is a data report which compiles and evaluates potential datasets and recommends which datasets are... add additional data points to datasets incorporated in the original SEDAR benchmark assessment and run... Conference Call Using updated datasets adopted during the Data Webinar, participants will employ assessment...
Managing Change to a Quality Philosophy: A Partnership Perspective.
ERIC Educational Resources Information Center
Snyder, Karolyn J.; Acker-Hocevar, Michele
Within the past 5 years there has been an international movement to adapt the principles and practices of Total Quality Management work environments to school-restructuring agendas. This paper reports on the development of a model called the Educational Quality System, a benchmark assessment tool for identifying the essential elements of quality…
Sixteen Trends...Their Profound Impact on Our Future
ERIC Educational Resources Information Center
Marx, Gary
2011-01-01
Seismic Shifts. Future Forces. Call them whatever you'd like. The Sixteen Trends revealed in this benchmark book will have a profound impact on our future. Noted futurist, educator, communicator, executive and leadership counsel, author, and international speaker Gary Marx makes the case for those trends and speculates on their implications for…
IRIS, Gender, and Student Achievement at University of Genova
ERIC Educational Resources Information Center
Bonfa, Antonella; Freddano, Michela
2012-01-01
The article analyses the gender effects on student achievement at University of Genova and it is a part of the research performed by the University of Genova called "Benchmarks interfaculty students: Development of a gender perspective to find strategies to understand what leads students to success in their studies", financed by the…
SPOC Benchmark Case: SNRE Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishal Patel; Michael Eades; Claude Russel Joyner II
The Small Nuclear Rocket Engine (SNRE) was modeled in the Center for Space Nuclear Research's (CSNR) Space Propulsion Optimization Code (SPOC). SPOC aims to create nuclear thermal propulsion (NTP) geometries quickly to perform parametric studies on design spaces of historic and new NTP designs. The SNRE geometry was modeled in SPOC and a critical core with a reasonable amount of criticality margin was found. The fuel, tie-tubes, reflector, and control drum masses were predicted rather well. These are all very important for neutronics calculations so the active reactor geometries created with SPOC can continue to be trusted. Thermal calculations of the average and hot fuel channels agreed very well. The specific impulse calculations used historically and in SPOC disagree so mass flow rates and impulses differed. Modeling peripheral and power balance components that do not affect nuclear characteristics of the core is not a feature of SPOC and as such, these components should continue to be designed using other tools. A full paper detailing the available SNRE data and comparisons with SPOC outputs will be submitted as a follow-up to this abstract.
ERIC Educational Resources Information Center
Boakes, Norma J.
2009-01-01
Within the study of geometry in the middle school curriculum is the natural development of students' spatial visualization, the ability to visualize two- and three-dimensional objects. The national mathematics standards call specifically for the development of such skills through hands-on experiences. A commonly accepted method is through the…
Indoor Modelling Benchmark for 3D Geometry Extraction
NASA Astrophysics Data System (ADS)
Thomson, C.; Boehm, J.
2014-06-01
A combination of faster, cheaper and more accurate hardware, more sophisticated software, and greater industry acceptance have all laid the foundations for an increased desire for accurate 3D parametric models of buildings. Pointclouds are the data source of choice currently with static terrestrial laser scanning the predominant tool for large, dense volume measurement. The current importance of pointclouds as the primary source of real world representation is endorsed by CAD software vendor acquisitions of pointcloud engines in 2011. Both the capture and modelling of indoor environments require great effort in time by the operator (and therefore cost). Automation is seen as a way to aid this by reducing the workload of the user and some commercial packages have appeared that provide automation to some degree. In the data capture phase, advances in indoor mobile mapping systems are speeding up the process, albeit currently with a reduction in accuracy. As a result this paper presents freely accessible pointcloud datasets of two typical areas of a building each captured with two different capture methods and each with an accurate wholly manually created model. These datasets are provided as a benchmark for the research community to gauge the performance and improvements of various techniques for indoor geometry extraction. With this in mind, non-proprietary, interoperable formats are provided such as E57 for the scans and IFC for the reference model. The datasets can be found at: http://indoor-bench.github.io/indoor-bench.
Use of Apollo 17 Epoch Neutron Spectrum as a Benchmark in Testing LEND Collimated Sensor
NASA Technical Reports Server (NTRS)
Chin, Gordon; Sagdeev, R.; Milikh, G.
2011-01-01
The Apollo 17 neutron experiment LPNE provided a unique set of data on the production of neutrons in the lunar soil bombarded by Galactic Cosmic Rays (GCR). It serves as valuable "ground-truth" in the age of orbital remote sensing. We used the neutron data attributed to the Apollo 17 epoch as a benchmark for testing LEND's collimated sensor, as defined by the geometry of the collimator and the efficiency of the 3He counters. The latter is determined by the size of the gas counter and the pressure inside it. The intensity and energy spectrum of neutrons escaping the lunar surface depend on the incident flux of Galactic Cosmic Rays (GCR), whose variability is associated with the Solar Cycle and its peculiarities. We first obtain the share of neutrons entering through the field of view of the collimator as a fraction of the total neutron flux, using the angular distribution of neutrons exiting the Moon described by our Monte Carlo code. We next compute the count rate of the 3He sensor using the neutron energy spectrum from McKinney et al. [JGR, 2006] and considering the geometry and gas pressure of the LEND sensor. Finally, the neutron count rate obtained for the Apollo 17 epoch, characterized by intermediate solar activity, was adjusted to the LRO epoch, characterized by low solar activity. This was done by taking into account the solar modulation potential, which affects the GCR flux and in turn changes the neutron albedo flux.
Extensions to the integral line-beam method for gamma-ray skyshine analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J.K.; Faw, R.E.
1995-08-01
A computationally simple method for estimating gamma-ray skyshine dose rates has been developed on the basis of the line-beam response function. Both Monte Carlo and point-kernel calculations that account for both annihilation and bremsstrahlung were used in the generation of line-beam response functions (LBRF) for gamma-ray energies between 10 and 100 MeV. The LBRF is approximated by a three-parameter formula. By combining results with those obtained in an earlier study for gamma energies below 10 MeV, LBRF values are readily and accurately evaluated for source energies between 0.02 and 100 MeV, for source-to-detector distances between 1 and 3000 m, and for beam angles as great as 180 degrees. Tables of the parameters for the approximate LBRF are presented. The new response functions are then applied to three simple skyshine geometries: an open silo geometry, an infinite wall, and a rectangular four-wall building. Results are compared to those of previous calculations and to benchmark measurements. A new approach is introduced to account for overhead shielding of the skyshine source and compared to the simplistic exponential-attenuation method used in earlier studies. The effect of the air-ground interface, usually neglected in gamma skyshine studies, is also examined and an empirical correction factor is introduced. Finally, a revised code based on the improved LBRF approximations and the treatment of the overhead shielding is presented, and results are shown for several benchmark problems.
Accelerating navigation in the VecGeom geometry modeller
NASA Astrophysics Data System (ADS)
Wenzel, Sandro; Zhang, Yang; for the VecGeom Developers
2017-10-01
The VecGeom geometry library is a relatively recent effort aiming to provide a modern and high performance geometry service for particle detector simulation in hierarchical detector geometries common to HEP experiments. One of its principal targets is the efficient use of vector SIMD hardware instructions to accelerate geometry calculations for single track as well as multi-track queries. Previously, excellent performance improvements compared to Geant4/ROOT could be reported for elementary geometry algorithms at the level of single shape queries. In this contribution, we will focus on the higher level navigation algorithms in VecGeom, which are the most important components as seen from the simulation engines. We will first report on our R&D effort and developments to implement SIMD enhanced data structures to speed up the well-known “voxelised” navigation algorithms, ubiquitously used for particle tracing in complex detector modules consisting of many daughter parts. Second, we will discuss complementary new approaches to improve navigation algorithms in HEP. These ideas are based on a systematic exploitation of static properties of the detector layout as well as automatic code generation and specialisation of the C++ navigator classes. Such specialisations reduce the overhead of generic- or virtual function based algorithms and enhance the effectiveness of the SIMD vector units. These novel approaches go well beyond the existing solutions available in Geant4 or TGeo/ROOT, achieve a significantly superior performance, and might be of interest for a wide range of simulation backends (GeantV, Geant4). We exemplify this with concrete benchmarks for the CMS and ALICE detectors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malkov, Victor N.; Rogers, David W.O.
The coupling of MRI and radiation treatment systems for the application of magnetic resonance guided radiation therapy necessitates a reliable magnetic-field-capable Monte Carlo (MC) code. In addition to the influence of the magnetic field on dose distributions, the question of proper calibration has arisen due to the several percent variation of ion chamber and solid state detector responses in magnetic fields when compared to the 0 T case (Reynolds et al., Med. Phys., 2013). In the absence of a magnetic field, EGSnrc has been shown to pass the Fano cavity test (a rigorous benchmarking tool of MC codes) at the 0.1 % level (Kawrakow, Med. Phys., 2000), and similar results should be required of magnetic-field-capable MC algorithms. To properly test such developing MC codes, the Fano cavity theorem has been adapted to function in a magnetic field (Bouchard et al., PMB, 2015). In this work, the Fano cavity test is applied in slab and ion-chamber-like geometries to test the transport options of an implemented magnetic field algorithm in EGSnrc. Results show that the deviation of the MC dose from the expected Fano cavity theory value is highly sensitive to the choice of geometry, and the ion chamber geometry appears to pass the test more easily than larger slab geometries. As magnetic field MC codes begin to be used for dose simulations and correction factor calculations, care must be taken to apply the most rigorous Fano test geometries to ensure the reliability of such algorithms.
An Experimental Study of Characteristic Combustion-Driven Flow for CFD Validation
NASA Technical Reports Server (NTRS)
Santoro, Robert J.
1997-01-01
A series of uni-element rocket injector studies were completed to provide benchmark quality data needed to validate computational fluid dynamic models. A shear coaxial injector geometry was selected as the primary injector for study using gaseous hydrogen/oxygen and gaseous hydrogen/liquid oxygen propellants. Emphasis was placed on the use of nonintrusive diagnostic techniques to characterize the flowfields inside an optically-accessible rocket chamber. Measurements of the velocity and species fields were obtained using laser velocimetry and Raman spectroscopy, respectively. Qualitative flame shape information was also obtained using laser-induced fluorescence excited from OH radicals and laser light scattering studies of aluminum oxide particle seeded combusting flows. The gaseous hydrogen/liquid oxygen propellant studies for the shear coaxial injector focused on breakup mechanisms associated with the liquid oxygen jet under subcritical pressure conditions. Laser sheet illumination techniques were used to visualize the core region of the jet and a Phase Doppler Particle Analyzer was utilized for drop velocity, size and size distribution characterization. The results of these studies indicated that the shear coaxial geometry configuration was a relatively poor injector in terms of mixing. The oxygen core was observed to extend well downstream of the injector and a significant fraction of the mixing occurred in the near nozzle region where measurements were not possible to obtain. Detailed velocity and species measurements were obtained to allow CFD model validation and this set of benchmark data represents the most comprehensive data set available to date. As an extension of the investigation, a series of gas/gas injector studies were conducted in support of the X-33 Reusable Launch Vehicle program. A Gas/Gas Injector Technology team was formed consisting of the Marshall Space Flight Center, the NASA Lewis Research Center, Rocketdyne and Penn State. Injector geometries studied under this task included shear and swirl coaxial configurations as well as an impinging jet injector.
Balzani, Daniel; Deparis, Simone; Fausten, Simon; Forti, Davide; Heinlein, Alexander; Klawonn, Axel; Quarteroni, Alfio; Rheinbach, Oliver; Schröder, Joerg
2016-10-01
The accurate prediction of transmural stresses in arterial walls requires on the one hand robust and efficient numerical schemes for the solution of boundary value problems including fluid-structure interactions and on the other hand the use of a material model for the vessel wall that is able to capture the relevant features of the material behavior. One of the main contributions of this paper is the application of a highly nonlinear, polyconvex anisotropic structural model for the solid in the context of fluid-structure interaction, together with a suitable discretization. Additionally, the influence of viscoelasticity is investigated. The fluid-structure interaction problem is solved using a monolithic approach; that is, the nonlinear system is solved (after time and space discretizations) as a whole without splitting among its components. The linearized block systems are solved iteratively using parallel domain decomposition preconditioners. A simple - but nonsymmetric - curved geometry is proposed that is demonstrated to be suitable as a benchmark testbed for fluid-structure interaction simulations in biomechanics where nonlinear structural models are used. Based on the curved benchmark geometry, the influence of different material models, spatial discretizations, and meshes of varying refinement is investigated. It turns out that often-used standard displacement elements with linear shape functions are not sufficient to provide good approximations of the arterial wall stresses, whereas for standard displacement elements or F-bar formulations with quadratic shape functions, suitable results are obtained. For the time discretization, a second-order backward differentiation formula scheme is used. It is shown that the curved geometry enables the analysis of non-rotationally symmetric distributions of the mechanical fields. For instance, the maximal shear stresses at the fluid-structure interface are found to be higher in the inner curve, which corresponds to clinical observations indicating a high plaque nucleation probability at such locations. Copyright © 2015 John Wiley & Sons, Ltd.
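The time discretization mentioned above is the standard second-order backward differentiation formula; for a uniform step Δt applied to u' = f(t, u) it reads (textbook form, stated here for reference):

\[
u^{n+1} \;=\; \tfrac{4}{3}\,u^{n} \;-\; \tfrac{1}{3}\,u^{n-1} \;+\; \tfrac{2}{3}\,\Delta t\, f\!\left(t^{n+1},\,u^{n+1}\right).
\]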
Guturu, Parthasarathy; Dantu, Ram
2008-06-01
Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by the researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) to the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of the maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.
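The unified reduction described above maps each problem to maximum clique finding; as a point of reference (and emphatically not the IEA-PTS algorithm itself), a minimal randomized greedy baseline for the maximum clique problem looks like the sketch below, with the toy graph being an invented example.

```python
import random

def greedy_clique(adj, restarts=200, seed=0):
    """Randomized greedy baseline for maximum clique: repeatedly grow a clique
    by adding vertices adjacent to every current member, keeping the best found."""
    rng = random.Random(seed)
    vertices = list(adj)
    best = []
    for _ in range(restarts):
        order = vertices[:]
        rng.shuffle(order)
        clique = []
        for v in order:
            if all(v in adj[u] for u in clique):   # v is adjacent to every clique member
                clique.append(v)
        if len(clique) > len(best):
            best = clique
    return best

# Toy undirected graph as an adjacency-set dictionary (invented example).
adj = {
    0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4},
    4: {3, 5}, 5: {4},
}
print("clique found:", sorted(greedy_clique(adj)))   # e.g. [0, 1, 2, 3]
```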
RELAP5-3D Results for Phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2012-06-01
The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the Relap5-3D model developed for Exercise 2.
RELAP5-3D results for phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strydom, G.; Epiney, A. S.
2012-07-01
The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the Relap5-3D model developed for Exercise 2. (authors)
Large eddy simulation of the FDA benchmark nozzle for a Reynolds number of 6500.
Janiga, Gábor
2014-04-01
This work investigates the flow in a benchmark nozzle model of an idealized medical device proposed by the FDA using computational fluid dynamics (CFD). It has been shown in particular that proper modeling of the transitional flow features is challenging, leading to large discrepancies and inaccurate predictions from the different research groups using Reynolds-averaged Navier-Stokes (RANS) modeling. In spite of the relatively simple, axisymmetric computational geometry, the resulting turbulent flow is fairly complex and non-axisymmetric, in particular due to the sudden expansion. The resulting flow cannot be well predicted with simple modeling approaches. Due to the varying diameters and flow velocities encountered in the nozzle, different typical flow regions and regimes can be distinguished, from laminar to transitional and to weakly turbulent. The purpose of the present work is to re-examine the FDA-CFD benchmark nozzle model at a Reynolds number of 6500 using large eddy simulation (LES). The LES results are compared with published experimental data obtained by Particle Image Velocimetry (PIV), and excellent agreement can be observed for the temporally averaged flow velocities. Different flow regimes are characterized by computing the temporal energy spectra at different locations along the main axis. Copyright © 2014 Elsevier Ltd. All rights reserved.
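The temporal energy spectra used above to distinguish flow regimes can be estimated from a pointwise velocity time series with a standard Welch periodogram; the sampling rate and the synthetic signal below are placeholders for illustration only, not data from the FDA nozzle simulations.

```python
import numpy as np
from scipy.signal import welch

fs = 10_000.0                       # sampling frequency in Hz (assumed value)
t = np.arange(0, 2.0, 1.0 / fs)

# Placeholder for a velocity time series extracted at one monitoring point;
# here a synthetic signal with a dominant 200 Hz component plus noise.
u = (1.0 + 0.1 * np.sin(2 * np.pi * 200.0 * t)
     + 0.05 * np.random.default_rng(1).standard_normal(t.size))

# Temporal energy spectrum of the velocity fluctuations (mean removed).
f, E = welch(u - u.mean(), fs=fs, nperseg=4096)
print("frequency of spectral peak: %.1f Hz" % f[np.argmax(E)])
```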
Natto, S A; Lewis, D G; Ryde, S J
1998-01-01
The Monte Carlo computer code MCNP (version 4A) has been used to develop a personal computer-based model of the Swansea in vivo neutron activation analysis (IVNAA) system. The model included specification of the neutron source (252Cf), collimators, reflectors and shielding. The MCNP model was 'benchmarked' against fast neutron and thermal neutron fluence data obtained experimentally from the IVNAA system. The Swansea system allows two irradiation geometries using 'short' and 'long' collimators, which provide alternative dose rates for IVNAA. The data presented here relate to the short collimator, although results of similar accuracy were obtained using the long collimator. The fast neutron fluence was measured in air at a series of depths inside the collimator. The measurements agreed with the MCNP simulation within the statistical uncertainty (5-10%) of the calculations. The thermal neutron fluence was measured and calculated inside the cuboidal water phantom. The depth of maximum thermal fluence was 3.2 cm (measured) and 3.0 cm (calculated). The width of the 50% thermal fluence level across the phantom at its mid-depth was found to be the same by both MCNP and experiment. This benchmarking exercise has given us a high degree of confidence in MCNP as a tool for the design of IVNAA systems.
Experimental benchmarking of a Monte Carlo dose simulation code for pediatric CT
NASA Astrophysics Data System (ADS)
Li, Xiang; Samei, Ehsan; Yoshizumi, Terry; Colsher, James G.; Jones, Robert P.; Frush, Donald P.
2007-03-01
In recent years, there has been a desire to reduce CT radiation dose to children because of their susceptibility and prolonged risk for cancer induction. Concerns arise, however, as to the impact of dose reduction on image quality and thus potentially on diagnostic accuracy. To study the dose and image quality relationship, we are developing a simulation code to calculate organ dose in pediatric CT patients. To benchmark this code, a cylindrical phantom was built to represent a pediatric torso, which allows measurements of dose distributions from its center to its periphery. Dose distributions for axial CT scans were measured on a 64-slice multidetector CT (MDCT) scanner (GE Healthcare, Chalfont St. Giles, UK). The same measurements were simulated using a Monte Carlo code (PENELOPE, Universitat de Barcelona) with the applicable CT geometry including bowtie filter. The deviations between simulated and measured dose values were generally within 5%. To our knowledge, this work is one of the first attempts to compare measured radial dose distributions on a cylindrical phantom with Monte Carlo simulated results. It provides a simple and effective method for benchmarking organ dose simulation codes and demonstrates the potential of Monte Carlo simulation for investigating the relationship between dose and image quality for pediatric CT patients.
Machine learning spatial geometry from entanglement features
NASA Astrophysics Data System (ADS)
You, Yi-Zhuang; Yang, Zhao; Qi, Xiao-Liang
2018-02-01
Motivated by the close relations of the renormalization group with both the holography duality and deep learning, we propose that the holographic geometry can emerge from deep learning the entanglement feature of a quantum many-body state. We develop a concrete algorithm, called entanglement feature learning (EFL), based on the random tensor network (RTN) model for tensor network holography. We show that each RTN can be mapped to a Boltzmann machine, trained by the entanglement entropies over all subregions of a given quantum many-body state. The goal is to construct the optimal RTN that best reproduces the entanglement feature. The RTN geometry can then be interpreted as the emergent holographic geometry. We demonstrate the EFL algorithm on a 1D free fermion system and observe the emergence of the hyperbolic geometry (AdS3 spatial geometry) as we tune the fermion system towards the gapless critical point (CFT2 point).
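The training data mentioned above, entanglement entropies of subregions of a 1D free fermion state, can be computed exactly from the two-point correlation matrix. The chain length and the choice of contiguous subregions below are illustrative assumptions, and this sketch is only the data-generation step, not the EFL training loop itself.

```python
import numpy as np

def ground_state_correlations(L):
    """Correlation matrix C_ij = <c_i^dagger c_j> for the half-filled
    tight-binding chain with open boundaries."""
    H = -(np.eye(L, k=1) + np.eye(L, k=-1))          # nearest-neighbor hopping Hamiltonian
    _, U = np.linalg.eigh(H)
    occ = U[:, : L // 2]                             # fill the lowest L/2 single-particle modes
    return occ @ occ.conj().T

def entanglement_entropy(C, region):
    """Von Neumann entropy of a subregion from the restricted correlation matrix."""
    nu = np.linalg.eigvalsh(C[np.ix_(region, region)])
    nu = np.clip(nu, 1e-12, 1 - 1e-12)               # avoid log(0)
    return float(-np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu)))

L = 32
C = ground_state_correlations(L)
# Entanglement "feature" over contiguous subregions [0, l) of increasing size.
for l in (2, 4, 8, 16):
    print("S([0,%2d)) = %.4f" % (l, entanglement_entropy(C, list(range(l)))))
```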
Conversion of Component-Based Point Definition to VSP Model and Higher Order Meshing
NASA Technical Reports Server (NTRS)
Ordaz, Irian
2011-01-01
Vehicle Sketch Pad (VSP) has become a powerful conceptual and parametric geometry tool with numerous export capabilities for third-party analysis codes as well as robust surface meshing capabilities for computational fluid dynamics (CFD) analysis. However, a capability gap currently exists for reconstructing a fully parametric VSP model of a geometry generated by third-party software. A computer code called GEO2VSP has been developed to close this gap and to allow the integration of VSP into a closed-loop geometry design process with other third-party design tools. Furthermore, the automated CFD surface meshing capabilities of VSP are demonstrated for component-based point definition geometries in a conceptual analysis and design framework.
Curutchet, Carles; Cupellini, Lorenzo; Kongsted, Jacob; Corni, Stefano; Frediani, Luca; Steindal, Arnfinn Hykkerud; Guido, Ciro A; Scalmani, Giovanni; Mennucci, Benedetta
2018-03-13
Mixed multiscale quantum/molecular mechanics (QM/MM) models are widely used to explore the structure, reactivity, and electronic properties of complex chemical systems. Whereas such models typically include electrostatics and potentially polarization in so-called electrostatic and polarizable embedding approaches, respectively, nonelectrostatic dispersion and repulsion interactions are instead commonly described through classical potentials despite their quantum mechanical origin. Here we present an extension of the Tkatchenko-Scheffler semiempirical van der Waals (vdW TS ) scheme aimed at describing dispersion and repulsion interactions between quantum and classical regions within a QM/MM polarizable embedding framework. Starting from the vdW TS expression, we define a dispersion and a repulsion term, both of them density-dependent and consistently based on a Lennard-Jones-like potential. We explore transferable atom type-based parametrization strategies for the MM parameters, based on either vdW TS calculations performed on isolated fragments or on a direct estimation of the parameters from atomic polarizabilities taken from a polarizable force field. We investigate the performance of the implementation by computing self-consistent interaction energies for the S22 benchmark set, designed to represent typical noncovalent interactions in biological systems, in both equilibrium and out-of-equilibrium geometries. Overall, our results suggest that the present implementation is a promising strategy to include dispersion and repulsion in multiscale QM/MM models incorporating their explicit dependence on the electronic density.
NASA Astrophysics Data System (ADS)
Cerpa, Nestor; Hassani, Riad; Gerbault, Muriel
2014-05-01
A large variety of geodynamical problems can be viewed as a solid/fluid interaction problem coupling two bodies with different physics. In particular, the lithosphere/asthenosphere mechanical interaction in subduction zones belongs to this kind of problem, where the solid lithosphere is embedded in the asthenospheric viscous fluid. In many fields (industry, civil engineering, etc.), in which deformations of solid and fluid are "small", numerical modelers consider the exact discretization of both domains and fit the shape of the interface between the two domains as well as possible, solving the discretized physics problems by the Finite Element Method (FEM). However, in the context of subduction, the lithosphere undergoes large deformation and can evolve into a complex geometry, leading to significant deformation of the surrounding asthenosphere. To alleviate the precise meshing of complex geometries, numerical modelers have developed non-matching interface methods called Fictitious Domain Methods (FDM). The main idea of these methods is to extend the initial problem to a bigger (and simpler) domain. In our version of FDM, we determine the forces at the immersed solid boundary required to minimize (in the least-squares sense) the difference between fluid and solid velocities at this interface. This method is first-order accurate, and its stability depends on the ratio between the fluid background mesh size and the interface discretization. We present the formulation and provide benchmarks and examples showing the potential of the method: 1) a comparison with an analytical solution of a viscous flow around a rigid body; 2) an experiment of a rigid sphere sinking in a viscous fluid (in two- and three-dimensional cases); 3) a comparison with an analog subduction experiment. A companion presentation describes the geodynamical application of this method to Andean subduction dynamics, studying cyclic slab folding on the 660 km discontinuity and its relationship with flat subduction.
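The least-squares step described above can be sketched schematically: given the fluid velocity response to interface forces, choose the forces that minimize the fluid-solid velocity mismatch at the immersed boundary. The coupling matrix G below is an assumed, purely illustrative stand-in for the operator the fluid solver would provide; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_interface, n_force = 40, 30
G = rng.standard_normal((n_interface, n_force))   # assumed fluid response per unit force
u_solid = rng.standard_normal(n_interface)        # solid velocity at interface points
u_fluid0 = rng.standard_normal(n_interface)       # fluid velocity there without forces

# minimize || (u_fluid0 + G f) - u_solid ||^2 over the interface forces f
f, *_ = np.linalg.lstsq(G, u_solid - u_fluid0, rcond=None)
mismatch = np.linalg.norm(u_fluid0 + G @ f - u_solid)
print(f.shape, mismatch)
```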
Light-duty vehicle greenhouse gas (GHG) and fuel economy (FE) standards for MYs 2012 -2025 are requiring vehicle powertrains to become much more efficient. The EPA is using a full vehicle simulation model, called the Advanced Light-duty Powertrain and Hybrid Analysis (ALPHA), to ...
A Psychometric Analysis of Teacher-Made Benchmark Assessments in English Language Arts
ERIC Educational Resources Information Center
Milligan, Andrea
2017-01-01
The implementation of the Common Core State Standards (CCSS) has placed increased accountability for outcomes on both students and teachers. To address the current youth literacy crisis in the United States, the CCSS call for students to read increasingly complex informational and literary texts. Since teachers are held accountable for students'…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lell, R. M.; McKnight, R. D.; Tsiboulia, A.
2010-09-30
Over a period of 30 years, more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited for nuclear data validation and to form the basis for criticality safety benchmarks. A number of the Argonne ZPR/ZPPR critical assemblies have been evaluated as ICSBEP and IRPhEP benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was {sup 235}U or {sup 239}Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. ZPR-3 Assembly 11 (ZPR-3/11) was designed as a fast reactor physics benchmark experiment with an average core {sup 235}U enrichment of approximately 12 at.% and a depleted uranium reflector. Approximately 79.7% of the total fissions in this assembly occur above 100 keV, approximately 20.3% occur below 100 keV, and essentially none below 0.625 eV - thus the classification as a 'fast' assembly. This assembly is Fast Reactor Benchmark No. 8 in the Cross Section Evaluation Working Group (CSEWG) Benchmark Specifications and has historically been used as a data validation benchmark assembly. Loading of ZPR-3 Assembly 11 began in early January 1958, and the Assembly 11 program ended in late January 1958. The core consisted of highly enriched uranium (HEU) plates and depleted uranium plates loaded into stainless steel drawers, which were inserted into the central square stainless steel tubes of a 31 x 31 matrix on a split table machine. The core unit cell consisted of two columns of 0.125 in.-wide (3.175 mm) HEU plates, six columns of 0.125 in.-wide (3.175 mm) depleted uranium plates and one column of 1.0 in.-wide (25.4 mm) depleted uranium plates. The length of each column was 10 in. (254.0 mm) in each half of the core. The axial blanket consisted of 12 in. (304.8 mm) of depleted uranium behind the core. The thickness of the depleted uranium radial blanket was approximately 14 in. (355.6 mm), and the length of the radial blanket in each half of the matrix was 22 in. (558.8 mm). The assembly geometry approximated a right circular cylinder as closely as the square matrix tubes allowed. According to the logbook and loading records for ZPR-3/11, the reference critical configuration was loading 10, which was critical on January 21, 1958. Subsequent loadings were very similar but less clean for criticality because there were modifications made to accommodate reactor physics measurements other than criticality. Accordingly, ZPR-3/11 loading 10 was selected as the only configuration for this benchmark.
As documented below, it was determined to be acceptable as a criticality safety benchmark experiment. A very accurate transformation to a simplified model is needed to make any ZPR assembly a practical criticality-safety benchmark. There is simply too much geometric detail in an exact (as-built) model of a ZPR assembly, even a clean core such as ZPR-3/11 loading 10. The transformation must reduce the detail to a practical level without masking any of the important features of the critical experiment. And it must do this without increasing the total uncertainty far beyond that of the original experiment. Such a transformation is described in Section 3. It was obtained using a pair of continuous-energy Monte Carlo calculations. First, the critical configuration was modeled in full detail - every plate, drawer, matrix tube, and air gap was modeled explicitly. Then the regionwise compositions and volumes from the detailed as-built model were used to construct a homogeneous, two-dimensional (RZ) model of ZPR-3/11 that conserved the mass of each nuclide and volume of each region. The simple cylindrical model is the criticality-safety benchmark model. The difference in the calculated k{sub eff} values between the as-built three-dimensional model and the homogeneous two-dimensional benchmark model was used to adjust the measured excess reactivity of ZPR-3/11 loading 10 to obtain the k{sub eff} for the benchmark model.
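The homogenization step described above (collapsing plate-level detail into region-averaged compositions while conserving nuclide mass and region volume) amounts to a volume-weighted average of number densities. A minimal sketch with made-up compositions, not the ZPR-3/11 specification:

```python
def homogenize(regions):
    """Collapse several as-built regions into one homogeneous region,
    conserving total volume and the atom count of each nuclide.

    `regions` is a list of (volume_cm3, {nuclide: number_density}) pairs.
    Returns (total_volume, homogenized number densities).
    """
    total_volume = sum(v for v, _ in regions)
    atoms = {}
    for volume, densities in regions:
        for nuclide, n in densities.items():
            # atoms conserved: N_hom = sum_r N_r * V_r / V_total
            atoms[nuclide] = atoms.get(nuclide, 0.0) + n * volume
    return total_volume, {k: a / total_volume for k, a in atoms.items()}

# illustrative plate/drawer cell (numbers are placeholders only)
cell = [
    (10.0, {"U235": 4.5e-2, "U238": 2.6e-3}),   # HEU plate
    (30.0, {"U235": 1.0e-4, "U238": 4.8e-2}),   # depleted uranium plates
    (8.0,  {"Fe56": 7.8e-2}),                   # drawer / matrix steel (schematic)
    (2.0,  {}),                                 # air gap (neglected)
]
print(homogenize(cell))
```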
Realistic simulations of a cyclotron spiral inflector within a particle-in-cell framework
NASA Astrophysics Data System (ADS)
Winklehner, Daniel; Adelmann, Andreas; Gsell, Achim; Kaman, Tulin; Campo, Daniela
2017-12-01
We present an upgrade to the particle-in-cell ion beam simulation code opal that enables us to run highly realistic simulations of the spiral inflector system of a compact cyclotron. This upgrade includes a new geometry class and field solver that can handle the complicated boundary conditions posed by the electrode system in the central region of the cyclotron both in terms of particle termination, and calculation of self-fields. Results are benchmarked against the analytical solution of a coasting beam. As a practical example, the spiral inflector and the first revolution in a 1 MeV /amu test cyclotron, located at Best Cyclotron Systems, Inc., are modeled and compared to the simulation results. We find that opal can now handle arbitrary boundary geometries with relative ease. Simulated injection efficiencies and beam shape compare well with measured efficiencies and a preliminary measurement of the beam distribution after injection.
Simplified DFT methods for consistent structures and energies of large systems
NASA Astrophysics Data System (ADS)
Caldeweyher, Eike; Gerit Brandenburg, Jan
2018-05-01
Kohn–Sham density functional theory (DFT) is routinely used for the fast electronic structure computation of large systems and will most likely continue to be the method of choice for the generation of reliable geometries in the foreseeable future. Here, we present a hierarchy of simplified DFT methods designed for consistent structures and non-covalent interactions of large systems with particular focus on molecular crystals. The covered methods are a minimal basis set Hartree–Fock (HF-3c), a small basis set screened exchange hybrid functional (HSE-3c), and a generalized gradient approximated functional evaluated in a medium-sized basis set (B97-3c), all augmented with semi-classical correction potentials. We give an overview of the methods' design, a comprehensive evaluation on established benchmark sets for geometries and lattice energies of molecular crystals, and highlight some realistic applications on large organic crystals with several hundreds of atoms in the primitive unit cell.
Upgrades for the CMS simulation
Lange, D. J.; Hildreth, M.; Ivantchenko, V. N.; ...
2015-05-22
Over the past several years, the CMS experiment has made significant changes to its detector simulation application. The geometry has been generalized to include modifications being made to the CMS detector for 2015 operations, as well as model improvements to the simulation geometry of the current CMS detector and the implementation of a number of approved and possible future detector configurations. These include both completely new tracker and calorimetry systems. We have completed the transition to Geant4 version 10, and we have made significant progress in reducing the CPU resources required to run our Geant4 simulation. These have been achieved through both technical improvements and numerical techniques. Substantial speed improvements have been achieved without changing the physics validation benchmarks that the experiment uses to validate our simulation application for use in production. As a result, we will discuss the methods that we implemented and the corresponding demonstrated performance improvements deployed for our 2015 simulation application.
Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks
NASA Astrophysics Data System (ADS)
Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.
2015-12-01
A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical, given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real world topography can be compared to recent real world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive values of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES to correctly match early stages of the 2012-2013 Tolbachik flow, Kamchatka, Russia, to 80%. We can also evaluate model performance given uncertain input parameters using a Monte Carlo setup. This illuminates sensitivity to model uncertainty.
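In confusion-matrix terms, the two posterior metrics named above are the positive and negative predictive values evaluated over grid cells. A minimal sketch of how they can be computed from simulated and observed inundation maps (the paper's full Bayesian formulation may weight cells differently):

```python
import numpy as np

def predictive_values(simulated, observed):
    """P(A|B): probability a cell is truly inundated given the model says so.
    P(~A|~B): probability a cell is truly dry given the model says dry.
    Both inputs are boolean arrays over the same grid cells."""
    sim = np.asarray(simulated, dtype=bool)
    obs = np.asarray(observed, dtype=bool)
    tp = np.sum(sim & obs)
    fp = np.sum(sim & ~obs)
    tn = np.sum(~sim & ~obs)
    fn = np.sum(~sim & obs)
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return ppv, npv

# toy 1D footprints: model inundates cells 0-3, the real flow inundates 0-2 and 4
sim = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
obs = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)
print(predictive_values(sim, obs))   # (0.75, 0.75)
```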
NASA Technical Reports Server (NTRS)
Davis, G. J.
1994-01-01
One area of research of the Information Sciences Division at NASA Ames Research Center is devoted to the analysis and enhancement of processors and advanced computer architectures, specifically in support of automation and robotic systems. To compare systems' abilities to efficiently process Lisp and Ada, scientists at Ames Research Center have developed a suite of non-parallel benchmarks called ELAPSE. The benchmark suite was designed to test a single computer's efficiency as well as alternate machine comparisons on Lisp, and/or Ada languages. ELAPSE tests the efficiency with which a machine can execute the various routines in each environment. The sample routines are based on numeric and symbolic manipulations and include two-dimensional fast Fourier transformations, Cholesky decomposition and substitution, Gaussian elimination, high-level data processing, and symbol-list references. Also included is a routine based on a Bayesian classification program sorting data into optimized groups. The ELAPSE benchmarks are available for any computer with a validated Ada compiler and/or Common Lisp system. Of the 18 routines that comprise ELAPSE, provided within this package are 14 developed or translated at Ames. The others are readily available through literature. The benchmark that requires the most memory is CHOLESKY.ADA. Under VAX/VMS, CHOLESKY.ADA requires 760K of main memory. ELAPSE is available on either two 5.25 inch 360K MS-DOS format diskettes (standard distribution) or a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The ELAPSE benchmarks were written in 1990. VAX and VMS are trademarks of Digital Equipment Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
One-dimensional photonic crystal optical limiter.
Soon, Boon Yi; Haus, Joseph; Scalora, Michael; Sibilia, Concita
2003-08-25
We explore a new passive optical limiter design that exploits transverse modulation instability in a one-dimensional photonic crystal (PC) made of χ(3) materials. The performance of PC optical limiters strongly depends on the choice of materials and geometry, and it improves as the duration of the incident pulse is extended. PC optical limiter performance is compared with that of a device made from homogeneous material. We identify three criteria for benchmarking the PC optical limiter. We also include a discussion of the advantages and disadvantages of PC optical limiters for real world applications.
Renormalization group contraction of tensor networks in three dimensions
NASA Astrophysics Data System (ADS)
García-Sáez, Artur; Latorre, José I.
2013-02-01
We present a new strategy for contracting tensor networks in arbitrary geometries. This method is designed to follow as strictly as possible the renormalization group philosophy, by first contracting tensors in an exact way and, then, performing a controlled truncation of the resulting tensor. We benchmark this approximation procedure in two dimensions against an exact contraction. We then apply the same idea to a three-dimensional quantum system. The underlying rationale for emphasizing the exact coarse-graining renormalization group step prior to truncation is related to monogamy of entanglement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patra, Anirban; Tome, Carlos
This Milestone report shows good progress in interfacing VPSC with the FE codes ABAQUS and MOOSE, to perform component-level simulations of irradiation-induced deformation in zirconium alloys. In this preliminary application, we have performed an irradiation growth simulation in the quarter geometry of a cladding tube. We have benchmarked VPSC-ABAQUS and VPSC-MOOSE predictions with VPSC-SA predictions to verify the accuracy of the VPSC-FE interface. Predictions from the FE simulations are in general agreement with VPSC-SA simulations and also with experimental trends.
Benchmarking and Self-Assessment in the Wine Industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galitsky, Christina; Radspieler, Anthony; Worrell, Ernst
2005-12-01
Not all industrial facilities have the staff or the opportunity to perform a detailed audit of their operations. The lack of knowledge of energy efficiency opportunities provides an important barrier to improving efficiency. Benchmarking programs in the U.S. and abroad have been shown to improve knowledge of the energy performance of industrial facilities and buildings and to fuel energy management practices. Benchmarking provides a fair way to compare the energy intensity of plants, while accounting for structural differences (e.g., the mix of products produced, climate conditions) between different facilities. In California, the winemaking industry is not only one of the economic pillars of the economy; it is also a large energy consumer, with a considerable potential for energy-efficiency improvement. Lawrence Berkeley National Laboratory and Fetzer Vineyards developed the first benchmarking tool for the California wine industry, called "BEST (Benchmarking and Energy and water Savings Tool) Winery". BEST Winery enables a winery to compare its energy efficiency to a best practice reference winery. Besides overall performance, the tool enables the user to evaluate the impact of implementing efficiency measures. The tool facilitates strategic planning of efficiency measures, based on the estimated impact of the measures, their costs and savings. The tool will raise awareness of current energy intensities and offer an efficient way to evaluate the impact of future efficiency measures.
Parametrization and Benchmark of Long-Range Corrected DFTB2 for Organic Molecules
Vuong, Van Quan; Akkarapattiakal Kuriappan, Jissy; Kubillus, Maximilian; ...
2017-12-12
In this paper, we present the parametrization and benchmark of long-range corrected second-order density functional tight binding (DFTB), LC-DFTB2, for organic and biological molecules. The LC-DFTB2 model not only improves fundamental orbital energy gaps but also ameliorates the DFT self-interaction error and overpolarization problem, and further improves charge-transfer excited states significantly. Electronic parameters for the construction of the DFTB2 Hamiltonian as well as repulsive potentials were optimized for molecules containing C, H, N, and O chemical elements. We use a semiautomatic parametrization scheme based on a genetic algorithm. With the new parameters, LC-DFTB2 describes geometries and vibrational frequencies of organic molecules similarly well as third-order DFTB3/3OB, the de facto standard parametrization based on a GGA functional. Finally, LC-DFTB2 performs well also for atomization and reaction energies, however, slightly less satisfactorily than DFTB3/3OB.
Quantum computing applied to calculations of molecular energies: CH2 benchmark.
Veis, Libor; Pittner, Jiří
2010-11-21
Quantum computers are appealing for their ability to solve some tasks much faster than their classical counterparts. It was shown in [Aspuru-Guzik et al., Science 309, 1704 (2005)] that they, if available, would be able to perform full configuration interaction (FCI) energy calculations with polynomial scaling. This is in contrast to conventional computers, where FCI scales exponentially. We have developed a code for simulation of quantum computers and implemented our version of the quantum FCI algorithm. We provide a detailed description of this algorithm and the results of the assessment of its performance on the four lowest lying electronic states of the CH2 molecule. This molecule was chosen as a benchmark, since its two lowest lying ¹A₁ states exhibit a multireference character at the equilibrium geometry. It has been shown that with a suitably chosen initial state of the quantum register, one is able to achieve the probability amplification regime of the iterative phase estimation algorithm even in this case.
Proposal of an innovative benchmark for comparison of the performance of contactless digitizers
NASA Astrophysics Data System (ADS)
Iuliano, Luca; Minetola, Paolo; Salmi, Alessandro
2010-10-01
Thanks to the improving performance of 3D optical scanners in terms of accuracy and repeatability, reverse engineering applications have extended from CAD model design or reconstruction to quality control. Today, contactless digitizing devices constitute a good alternative to coordinate measuring machines (CMMs) for the inspection of certain parts. The German guideline VDI/VDE 2634 is the only reference to evaluate whether 3D optical measuring systems comply with the declared or required performance specifications. Nevertheless, it is difficult to compare the performance of different scanners using such a guideline. An adequate novel benchmark is proposed in this paper: focusing on the inspection of production tools (moulds), the innovative test piece was designed using common geometries and free-form surfaces. The reference part is intended to be employed for the evaluation of the performance of several contactless digitizing devices in computer-aided inspection, considering dimensional and geometrical tolerances as well as other quantitative and qualitative criteria.
Verification of ARES transport code system with TAKEDA benchmarks
NASA Astrophysics Data System (ADS)
Zhang, Liang; Zhang, Bin; Zhang, Penghe; Chen, Mengteng; Zhao, Jingchang; Zhang, Shun; Chen, Yixue
2015-10-01
Neutron transport modeling and simulation are central to many areas of nuclear technology, including reactor core analysis, radiation shielding and radiation detection. In this paper, the series of TAKEDA benchmarks is modeled to verify the criticality calculation capability of ARES, a discrete ordinates neutral particle transport code system. The SALOME platform is coupled with ARES to provide geometry modeling and mesh generation functions. The Koch-Baker-Alcouffe parallel sweep algorithm is applied to accelerate the traditional transport calculation process. The results show that the eigenvalues calculated by ARES are in excellent agreement with the reference values presented in NEACRP-L-330, with a difference less than 30 pcm except for the first case of model 3. Additionally, ARES provides accurate flux distributions compared to reference values, with a deviation less than 2% for region-averaged fluxes in all cases. All of these results confirm the feasibility of the ARES-SALOME coupling and demonstrate that ARES performs well in criticality calculations.
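For readers unfamiliar with the units used above, the acceptance criteria can be checked with two one-line conversions. The numbers below are illustrative placeholders, not the TAKEDA reference values:

```python
def pcm_difference(k_calc, k_ref):
    """Eigenvalue discrepancy in pcm (1 pcm = 1e-5 in k-effective).
    A common convention is (k_calc - k_ref) * 1e5; some authors
    normalise by k_ref instead."""
    return (k_calc - k_ref) * 1.0e5

def flux_deviation_percent(calc, ref):
    """Relative deviation (%) of a region-averaged flux from its reference."""
    return 100.0 * (calc - ref) / ref

print(pcm_difference(0.97750, 0.97722))        # 28 pcm -> within the 30 pcm criterion
print(flux_deviation_percent(1.523e14, 1.538e14))  # about -1 % -> within 2 %
```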
Designing Geometry 2.0 learning environments: a preliminary study with primary school students
NASA Astrophysics Data System (ADS)
Joglar Prieto, Nuria; María Sordo Juanena, José; Star, Jon R.
2014-04-01
The information and communication technologies of Web 2.0 are arriving in our schools, allowing the design and implementation of new learning environments with great educational potential. This article proposes a pedagogical model based on a new geometry technology-integrated learning environment, called Geometry 2.0, which was tested with 39 sixth grade students from a public school in Madrid (Spain). The main goals of the study presented here were to describe an optimal role for the mathematics teacher within Geometry 2.0, and to analyse how dynamic mathematics and communication might affect young students' learning of basic figural concepts in a real setting. The analyses offered in this article illustrate how our Geometry 2.0 model facilitates deeply mathematical tasks which encourage students' exploration, cooperation and communication, improving their learning while fostering geometrical meanings.
NASA Technical Reports Server (NTRS)
Benyo, Theresa L.
2002-01-01
Integration of a supersonic inlet simulation with a computer aided design (CAD) system is demonstrated. The integration is performed using the Project Integration Architecture (PIA). PIA provides a common environment for wrapping many types of applications. Accessing geometry data from CAD files is accomplished by incorporating appropriate function calls from the Computational Analysis Programming Interface (CAPRI). CAPRI is a CAD vendor neutral programming interface that aids in acquiring geometry data directly from CAD files. The benefits of wrapping a supersonic inlet simulation into PIA using CAPRI are: direct access of geometry data, accurate capture of geometry data, automatic conversion of data units, CAD vendor neutral operation, and on-line interactive history capture. This paper describes the PIA and the CAPRI wrapper and details the supersonic inlet simulation demonstration.
The solid angle (geometry factor) for a spherical surface source and an arbitrary detector aperture
Favorite, Jeffrey A.
2016-01-13
It is proven that the solid angle (or geometry factor, also called the geometrical efficiency) for a spherically symmetric outward-directed surface source with an arbitrary radius and polar angle distribution and an arbitrary detector aperture is equal to the solid angle for an isotropic point source located at the center of the spherical surface source and the same detector aperture.
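The stated equivalence can be spot-checked numerically. The sketch below assumes a coaxial circular disk aperture, a source sphere that lies entirely on the source side of the aperture plane, and one admissible choice of the "arbitrary" polar-angle distribution (isotropic over the outward hemisphere at each surface point); it is a quick consistency check, not the paper's proof.

```python
import numpy as np

rng = np.random.default_rng(0)

def point_source_fraction(d, a):
    """Geometry factor Omega/4pi for an isotropic point source at the sphere
    centre and a coaxial disk aperture of radius a in the plane z = d."""
    return 0.5 * (1.0 - d / np.hypot(d, a))

def surface_source_fraction(R, d, a, n=1_000_000):
    """Monte Carlo geometry factor for an outward-directed spherical surface
    source of radius R (< d), emission isotropic over the outward hemisphere."""
    u = rng.normal(size=(n, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)       # uniform points on sphere
    pos = R * u
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)        # isotropic directions
    v[np.sum(v * u, axis=1) < 0.0] *= -1.0               # flip into outward hemisphere
    forward = v[:, 2] > 1e-12                            # only these can reach z = d
    t = (d - pos[forward, 2]) / v[forward, 2]
    hit = pos[forward, :2] + t[:, None] * v[forward, :2]
    inside = np.hypot(hit[:, 0], hit[:, 1]) <= a
    return inside.sum() / n

d, a, R = 10.0, 3.0, 2.0
print(point_source_fraction(d, a))      # analytic point-source value
print(surface_source_fraction(R, d, a)) # should agree within Monte Carlo noise
```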
NASA Astrophysics Data System (ADS)
Ochiai, T.; Nacher, J. C.
2011-09-01
Recently, the application of geometry and conformal mappings to artificial materials (metamaterials) has attracted attention in various research communities. These materials, characterized by a unique man-made structure, have unusual optical properties, which materials found in nature do not exhibit. By applying geometry and conformal mapping theory to metamaterial science, it may be possible to realize the so-called "Harry Potter cloaking device". Although such a device is still in the science fiction realm, several works have shown that by using such metamaterials it may be possible to control the direction of the electromagnetic field at will. We could then hide an object inside a cloaking device. Here, we explain how to design an invisibility device using differential geometry and conformal mappings.
DIATOM (Data Initialization and Modification) Library Version 7.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, David A.; Schmitt, Robert G.; Hensinger, David M.
DIATOM is a library that provides numerical simulation software with a computational geometry front end that can be used to build up complex problem geometries from collections of simpler shapes. The library provides a parser which allows for application-independent geometry descriptions to be embedded in simulation software input decks. Descriptions take the form of collections of primitive shapes and/or CAD input files and material properties that can be used to describe complex spatial and temporal distributions of numerical quantities (often called “database variables” or “fields”) to help define starting conditions for numerical simulations. The capability is designed to be general purpose, robust and computationally efficient. By using a combination of computational geometry and recursive divide-and-conquer approximation techniques, a wide range of primitive shapes are supported to arbitrary degrees of fidelity, controllable through user input and limited only by machine resources. Through the use of call-back functions, numerical simulation software can request the value of a field at any time or location in the problem domain. Typically, this is used only for defining initial conditions, but the capability is not limited to just that use. The most recent version of DIATOM provides the ability to import the solution field from one numerical solution as input for another.
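The pattern described above (primitive shapes composed into a geometry description, with a call-back that the host code uses to query field values at a point) can be illustrated with a toy front end. All class and method names below are hypothetical; this is not DIATOM's actual API.

```python
from dataclasses import dataclass
import math

@dataclass
class Sphere:
    center: tuple
    radius: float
    fields: dict
    def contains(self, p):
        return math.dist(p, self.center) <= self.radius

@dataclass
class Box:
    lo: tuple
    hi: tuple
    fields: dict
    def contains(self, p):
        return all(l <= x <= h for x, l, h in zip(p, self.lo, self.hi))

class GeometryPackage:
    def __init__(self, shapes, background):
        self.shapes = shapes          # later shapes override earlier ones
        self.background = background
    def field_at(self, name, point, time=0.0):
        """Call-back used by the host simulation to initialise a field value."""
        value = self.background.get(name)
        for shape in self.shapes:
            if shape.contains(point):
                value = shape.fields.get(name, value)
        return value

geom = GeometryPackage(
    [Box((-1, -1, -1), (1, 1, 1), {"density": 2.7}),
     Sphere((0, 0, 0), 0.5, {"density": 8.9})],
    background={"density": 1.2e-3},
)
print(geom.field_at("density", (0.1, 0.0, 0.0)))  # 8.9: sphere overrides box
print(geom.field_at("density", (0.9, 0.9, 0.9)))  # 2.7: inside the box only
print(geom.field_at("density", (5.0, 0.0, 0.0)))  # 1.2e-3: background
```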
Danielson, Michelle E.; Beck, Thomas J.; Karlamangla, Arun S.; Greendale, Gail A.; Atkinson, Elizabeth J.; Lian, Yinjuan; Khaled, Alia S.; Keaveny, Tony M.; Kopperdahl, David; Ruppert, Kristine; Greenspan, Susan; Vuga, Marike; Cauley, Jane A.
2013-01-01
Purpose Simple 2-dimensional (2D) analyses of bone strength can be done with dual energy x-ray absorptiometry (DXA) data and applied to large data sets. We compared 2D analyses to 3-dimensional (3D) finite element analyses (FEA) based on quantitative computed tomography (QCT) data. Methods 213 women participating in the Study of Women’s Health across the Nation (SWAN) received hip DXA and QCT scans. DXA BMD and femoral neck diameter and axis length were used to estimate geometry for composite bending (BSI) and compressive strength (CSI) indices. These and comparable indices computed by Hip Structure Analysis (HSA) on the same DXA data were compared to indices using QCT geometry. Simple 2D engineering simulations of a fall impacting on the greater trochanter were generated using HSA and QCT femoral neck geometry; these estimates were benchmarked to a 3D FEA of fall impact. Results DXA-derived CSI and BSI computed from BMD and by HSA correlated well with each other (R= 0.92 and 0.70) and with QCT-derived indices (R= 0.83–0.85 and 0.65–0.72). The 2D strength estimate using HSA geometry correlated well with that from QCT (R=0.76) and with the 3D FEA estimate (R=0.56). Conclusions Femoral neck geometry computed by HSA from DXA data corresponds well enough to that from QCT for an analysis of load stress in the larger SWAN data set. Geometry derived from BMD data performed nearly as well. Proximal femur breaking strength estimated from 2D DXA data is not as well correlated with that derived by a 3D FEA using QCT data. PMID:22810918
ERIC Educational Resources Information Center
Million, Laura; Dickman, Anneliese; Henken, Rob
2010-01-01
While the Milwaukee region's economic base is rooted in its manufacturing history, many believe that the region's future prosperity will be tied to its ability to successfully transition its economy into one that is based on knowledge and innovation. Indeed, fostering innovation has become the call to action for business and political leaders…
ERIC Educational Resources Information Center
Bitran, Stella; Morissette, Sandra B.; Spiegel, David A.; Barlow, David H.
2008-01-01
This report presents results of a treatment for panic disorder with moderate to severe agoraphobia (PDA-MS) called sensation-focused intensive treatment (SFIT). SFIT is an 8-day intensive treatment that combines features of cognitive-behavioral treatment for panic disorder, such as interoceptive exposure and cognitive restructuring with ungraded…
[Five years of ROM in substance abuse treatment centres in the Netherlands].
Oudejans, S C C; Schippers, G M; Spits, M E; Stollenga, M; van den Brink, W
2012-01-01
Three substance abuse treatment centres set up a benchmarking project for routine outcome management (ROM) of structured cognitive behavioral treatments for outpatients with a substance use disorder. To present the results of five years of benchmarking. All patients were included at intake and the follow-up assessment was performed by a call-center nine months later. Twice a year aggregated data were fed back to management and treatment teams. Since 2005, clinical outcome data, including substance abuse data, have been collected for more than half of all 15,786 treated patients. At follow-up, nine months after intake, 23% were abstinent, 28% reported moderate substance use and 49% reported excessive substance use. The Dutch centres for the treatment of substance abuse were successful in setting up ROM projects to monitor and compare the development and the effects of outpatient addiction treatments. The clinical results are acceptable and correspond to the results of the American project called MATCH. It is not yet clear whether the biannual feedback of aggregated outcomes to management and treatment teams has contributed to the creation of learning organisations, but it has provided transparency and has made it possible for teams to learn from the outcomes.
Design of Tailored Non-Crimp Fabrics Based on Stitching Geometry
NASA Astrophysics Data System (ADS)
Krieger, Helga; Gries, Thomas; Stapleton, Scott E.
2018-02-01
Automation of the preforming process brings up two opposing requirements for the engineering fabric used. On the one hand, the fabric requires a sufficient drapeability, or low shear stiffness, for forming into double-curved geometries; but on the other hand, the fabric requires a high form stability, or high shear stiffness, for automated handling. To meet both requirements, tailored non-crimp fabrics (TNCFs) are proposed. While the stitching has little structural influence on the final part, it virtually dictates the TNCF's local capability to shear and drape over a mold during preforming. The shear stiffness of TNCFs is designed by defining the local stitching geometry. NCFs with a chain stitch have a comparatively high shear stiffness, and NCFs with a stitch angle close to the symmetry stitch angle have a very low shear stiffness. A method to design the component-specific local stitching parameters of TNCFs is discussed. For validation of the method, NCFs with designed tailored stitching parameters were manufactured and compared to benchmark NCFs with uniform stitching parameters. In drape experiments over an elongated hemisphere, the designed TNCFs showed both high overall form stability and good drapeability in the locally required zones.
The Development of the Ducted Fan Noise Propagation and Radiation Code CDUCT-LaRC
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Farassat, F.; Pope, D. Stuart; Vatsa, Veer
2003-01-01
The development of the ducted fan noise propagation and radiation code CDUCT-LaRC at NASA Langley Research Center is described. This code calculates the propagation and radiation of given acoustic modes ahead of the fan face or aft of the exhaust guide vanes in the inlet or exhaust ducts, respectively. This paper gives a description of the modules comprising CDUCT-LaRC. The grid generation module provides automatic creation of numerical grids for complex (non-axisymmetric) geometries that include single or multiple pylons. Files for performing automatic inviscid mean flow calculations are also generated within this module. The duct propagation is based on the parabolic approximation theory of R. P. Dougherty. This theory allows the handling of complex internal geometries and the ability to study the effect of non-uniform (i.e. circumferentially and axially segmented) liners. Finally, the duct radiation module is based on the Ffowcs Williams-Hawkings (FW-H) equation with a penetrable data surface. Refraction of sound through the shear layer between the external flow and bypass duct flow is included. Results for benchmark annular ducts, as well as other geometries with pylons, are presented and compared with available analytical data.
The infinite medium Green's function for neutron transport in plane geometry 40 years later
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.D.
1993-01-01
In 1953, the first of what was supposed to be two volumes on neutron transport theory was published. The monograph, entitled "Introduction to the Theory of Neutron Diffusion" by Case et al., appeared as a Los Alamos National Laboratory report and was to be followed by a second volume, which never appeared as intended because of the death of Placzek. Instead, Case and Zweifel collaborated on the now classic work entitled Linear Transport Theory, in which the underlying mathematical theory of linear transport was presented. The initial monograph, however, represented the coming of age of neutron transport theory, which had its roots in radiative transfer and kinetic theory. In addition, it provided the first benchmark results along with the mathematical development for several fundamental neutron transport problems. In particular, one-dimensional infinite medium Green's functions for the monoenergetic transport equation in plane and spherical geometries were considered, complete with numerical results to be used as standards to guide code development for applications. Unfortunately, because of the limited computational resources of the day, some numerical results were incorrect. Also, only conventional mathematics and numerical methods were used because the transport theorists of the day were just becoming acquainted with more modern mathematical approaches. In this paper, the Green's function solution is revisited in light of modern numerical benchmarking methods with an emphasis on evaluation rather than theoretical results. The primary motivation for considering the Green's function at this time is its emerging use in solving finite and heterogeneous media transport problems.
Global height datum unification: a new approach in gravity potential space
NASA Astrophysics Data System (ADS)
Ardalan, A. A.; Safari, A.
2005-12-01
The problem of “global height datum unification” is solved in the gravity potential space based on: (1) high-resolution local gravity field modeling, (2) geocentric coordinates of the reference benchmark, and (3) a known value of the geoid’s potential. The high-resolution local gravity field model is derived based on a solution of the fixed-free two-boundary-value problem of the Earth’s gravity field using (a) potential difference values (from precise leveling), (b) modulus of the gravity vector (from gravimetry), (c) astronomical longitude and latitude (from geodetic astronomy and/or a combination of Global Navigation Satellite System (GNSS) observations with total station measurements), and (d) satellite altimetry. Knowing the height of the reference benchmark in the national height system and its geocentric GNSS coordinates, and using the derived high-resolution local gravity field model, the gravity potential value of the zero point of the height system is computed. The difference between the derived gravity potential value of the zero point of the height system and the geoid’s potential value is computed. This potential difference gives the offset of the zero point of the height system from the geoid in the “potential space”, which is transferred into “geometry space” using the transformation formula derived in this paper. The method was applied to the computation of the offset of the zero point of the Iranian height datum from the geoid, with potential value W0 = 62636855.8 m²/s². According to the geometry space computations, the height datum of Iran is 0.09 m below the geoid.
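As a rough illustration of the potential-space-to-geometry-space transfer, a first-order Bruns-type relation (offset ≈ potential difference divided by gravity) reproduces the order of magnitude quoted above. The potential difference and gravity value below are assumed for illustration only; the paper derives its own transformation formula, which may differ.

```python
# back-of-the-envelope check, assumed numbers only
W0_geoid = 62636855.8          # m^2/s^2, adopted geoid potential (from the abstract)
delta_W  = 0.88                # m^2/s^2, hypothetical datum-minus-geoid potential difference
gamma    = 9.806               # m/s^2, nominal normal gravity

offset_m = delta_W / gamma     # Bruns-type first-order conversion
print(f"datum zero point offset: {offset_m:.2f} m")   # ~0.09 m below the geoid
```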
Marshall, Margaret A.
2014-11-04
In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) in an effort to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. Additionally, various material reactivity worths, the surface material worth coefficient, the delayed neutron fraction, the prompt neutron decay constant, relative fission density, and relative neutron importance were all measured. The critical assembly, material reactivity worths, the surface material worth coefficient, and the delayed neutron fraction were all evaluated as benchmark experiment measurements. The reactor physics measurements are the focus of this paper; although for clarity the critical assembly benchmark specifications are briefly discussed.
Talaminos-Barroso, Alejandro; Estudillo-Valderrama, Miguel A; Roa, Laura M; Reina-Tosina, Javier; Ortega-Ruiz, Francisco
2016-06-01
M2M (Machine-to-Machine) communications represent one of the main pillars of the new Internet of Things (IoT) paradigm, and are opening up new opportunities for the eHealth business. Nevertheless, the large number of M2M protocols currently available hinders the selection of a suitable solution that satisfies the requirements that eHealth applications can demand. The objectives were, in the first place, to develop a tool that provides a benchmarking analysis in order to objectively select among the most relevant M2M protocols for eHealth solutions, and in the second place, to validate the tool with a particular use case: respiratory rehabilitation. A software tool, called the Distributed Computing Framework (DFC), has been designed and developed to execute the benchmarking tests and to facilitate deployment in environments with a large number of machines, independently of the protocol and performance metrics selected. The DDS, MQTT, CoAP, JMS, AMQP and XMPP protocols were evaluated considering different specific performance metrics, including CPU usage, memory usage, bandwidth consumption, latency and jitter. The results obtained allowed validation of a use case: respiratory rehabilitation of chronic obstructive pulmonary disease (COPD) patients in two scenarios with different types of requirements: home-based and ambulatory. The results of the benchmark comparison can guide eHealth developers in the choice of M2M technologies. In this regard, the framework presented is a simple and powerful tool for the deployment of benchmark tests under specific environments and conditions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
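Two of the metrics listed above, latency and jitter, are simple to derive from per-message timestamps. A minimal sketch with made-up timestamps, using one common convention (jitter as the standard deviation of latency; other definitions exist):

```python
import statistics

def latency_stats(send_times, recv_times):
    """Mean latency and jitter from matched send/receive timestamps (seconds)."""
    latencies = [r - s for s, r in zip(send_times, recv_times)]
    return {
        "mean_latency_ms": 1e3 * statistics.mean(latencies),
        "jitter_ms": 1e3 * statistics.stdev(latencies),
    }

# illustrative timestamps for five messages on one M2M link
send = [0.000, 0.100, 0.200, 0.300, 0.400]
recv = [0.021, 0.118, 0.225, 0.319, 0.424]
print(latency_stats(send, recv))
```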
FDA Benchmark Medical Device Flow Models for CFD Validation.
Malinauskas, Richard A; Hariharan, Prasanna; Day, Steven W; Herbertson, Luke H; Buesen, Martin; Steinseifer, Ulrich; Aycock, Kenneth I; Good, Bryan C; Deutsch, Steven; Manning, Keefe B; Craven, Brent A
Computational fluid dynamics (CFD) is increasingly being used to develop blood-contacting medical devices. However, the lack of standardized methods for validating CFD simulations and blood damage predictions limits its use in the safety evaluation of devices. Through a U.S. Food and Drug Administration (FDA) initiative, two benchmark models of typical device flow geometries (nozzle and centrifugal blood pump) were tested in multiple laboratories to provide experimental velocities, pressures, and hemolysis data to support CFD validation. In addition, computational simulations were performed by more than 20 independent groups to assess current CFD techniques. The primary goal of this article is to summarize the FDA initiative and to report recent findings from the benchmark blood pump model study. Discrepancies between CFD predicted velocities and those measured using particle image velocimetry most often occurred in regions of flow separation (e.g., downstream of the nozzle throat, and in the pump exit diffuser). For the six pump test conditions, 57% of the CFD predictions of pressure head were within one standard deviation of the mean measured values. Notably, only 37% of all CFD submissions contained hemolysis predictions. This project aided in the development of an FDA Guidance Document on factors to consider when reporting computational studies in medical device regulatory submissions. There is an accompanying podcast available for this article. Please visit the journal's Web site (www.asaiojournal.com) to listen.
Adsorption structures and energetics of molecules on metal surfaces: Bridging experiment and theory
NASA Astrophysics Data System (ADS)
Maurer, Reinhard J.; Ruiz, Victor G.; Camarillo-Cisneros, Javier; Liu, Wei; Ferri, Nicola; Reuter, Karsten; Tkatchenko, Alexandre
2016-05-01
Adsorption geometry and stability of organic molecules on surfaces are key parameters that determine the observable properties and functions of hybrid inorganic/organic systems (HIOSs). Despite many recent advances in precise experimental characterization and improvements in first-principles electronic structure methods, reliable databases of structures and energetics for large adsorbed molecules are largely absent. In this review, we present such a database for a range of molecules adsorbed on metal single-crystal surfaces. The systems we analyze include noble-gas atoms, conjugated aromatic molecules, carbon nanostructures, and heteroaromatic compounds adsorbed on five different metal surfaces. The overall objective is to establish a diverse benchmark dataset that enables an assessment of current and future electronic structure methods, and motivates further experimental studies that provide ever more reliable data. Specifically, the benchmark structures and energetics from experiment are here compared with the recently developed van der Waals (vdW) inclusive density-functional theory (DFT) method, DFT + vdWsurf. In comparison to 23 adsorption heights and 17 adsorption energies from experiment, we find a mean average deviation of 0.06 Å and 0.16 eV, respectively. This confirms the DFT + vdWsurf method as an accurate and efficient approach to treat HIOSs. A detailed discussion identifies remaining challenges to be addressed in future development of electronic structure methods, for which the benchmark database presented here may serve as an important reference.
NASA Astrophysics Data System (ADS)
Hu, Qiang
2017-09-01
We develop an approach of the Grad-Shafranov (GS) reconstruction for toroidal structures in space plasmas, based on in situ spacecraft measurements. The underlying theory is the GS equation that describes two-dimensional magnetohydrostatic equilibrium, as widely applied in fusion plasmas. The geometry is such that the arbitrary cross-section of the torus has rotational symmetry about the rotation axis, Z, with a major radius, r0. The magnetic field configuration is thus determined by a scalar flux function, Ψ, and a functional F that is a single-variable function of Ψ. The algorithm is implemented through a two-step approach: i) a trial-and-error process by minimizing the residue of the functional F(Ψ) to determine an optimal Z-axis orientation, and ii) for the chosen Z, a χ2 minimization process resulting in a range of r0. Benchmark studies of known analytic solutions to the toroidal GS equation with noise additions are presented to illustrate the two-step procedure and to demonstrate the performance of the numerical GS solver, separately. For the cases presented, the errors in Z and r0 are 9° and 22%, respectively, and the relative percent error in the numerical GS solutions is smaller than 10%. We also make public the computer codes for these implementations and benchmark studies.
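The first step of the two-step procedure above rests on a residue of the functional F(Ψ): for a good trial axis orientation, F collapses onto a single-valued curve of Ψ along the spacecraft path, so the two branches of the crossing agree. The sketch below is a schematic version of that diagnostic on synthetic data, not the paper's exact residue definition or the published GS solver.

```python
import numpy as np

def residue_of_F(psi, F):
    """Normalized RMS mismatch between the two branches (before and after the
    Psi extremum) when F is viewed as a function of Psi.  Smaller is better."""
    i_ext = int(np.argmax(np.abs(psi - psi[0])))      # turning point of Psi
    psi1, F1 = psi[: i_ext + 1], F[: i_ext + 1]
    psi2, F2 = psi[i_ext:], F[i_ext:]
    o1, o2 = np.argsort(psi1), np.argsort(psi2)
    F2_on_1 = np.interp(psi1[o1], psi2[o2], F2[o2])   # branch 2 sampled at branch 1's Psi
    return np.sqrt(np.mean((F1[o1] - F2_on_1) ** 2)) / (F.max() - F.min())

# toy crossing: Psi rises then falls, and F is a clean function of Psi
s = np.linspace(0, np.pi, 101)
psi = np.sin(s)
F = 1.0 + 0.5 * psi**2
print(residue_of_F(psi, F))                                   # ~0: good orientation
noisyF = F + 0.05 * np.random.default_rng(1).standard_normal(F.size)
print(residue_of_F(psi, noisyF))                              # larger residue
```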
Numerical algebraic geometry: a new perspective on gauge and string theories
NASA Astrophysics Data System (ADS)
Mehta, Dhagash; He, Yang-Hui; Hauensteine, Jonathan D.
2012-07-01
There is a rich interplay between algebraic geometry and string and gauge theories which has been recently aided immensely by advances in computational algebra. However, symbolic (Gröbner) methods are severely limited by algorithmic issues such as exponential space complexity and being highly sequential. In this paper, we introduce a novel paradigm of numerical algebraic geometry which in a plethora of situations overcomes these shortcomings. The so-called `embarrassing parallelizability' allows us to solve many problems and extract physical information which elude symbolic methods. We describe the method and then use it to solve various problems arising from physics which could not be otherwise solved.
NASA Astrophysics Data System (ADS)
He, Yang-Hui; Matti, Cyril; Sun, Chuang
2014-10-01
The so-called Scattering Equations which govern the kinematics of the scattering of massless particles in arbitrary dimensions have recently been cast into a system of homogeneous polynomials. We study these as affine and projective geometries which we call Scattering Varieties by analyzing such properties as Hilbert series, Euler characteristic and singularities. Interestingly, we find structures such as affine Calabi-Yau threefolds as well as singular K3 and Fano varieties.
Fingerprinting sea-level variations in response to continental ice loss: a benchmark exercise
NASA Astrophysics Data System (ADS)
Barletta, Valentina R.; Spada, Giorgio; Riva, Riccardo E. M.; James, Thomas S.; Simon, Karen M.; van der Wal, Wouter; Martinec, Zdenek; Klemann, Volker; Olsson, Per-Anders; Hagedoorn, Jan; Stocchi, Paolo; Vermeersen, Bert
2013-04-01
Understanding the response of the Earth to the waxing and waning of ice sheets is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements to the projections of future sea level trends in response to climate change. All the processes accompanying Glacial Isostatic Adjustment (GIA) can be described by solving the so-called Sea Level Equation (SLE), an integral equation that accounts for the interactions between the ice sheets, the solid Earth, and the oceans. Modern approaches to the SLE are based on various techniques that range from purely analytical formulations to fully numerical methods. Here we present the results of a benchmark exercise of independently developed codes designed to solve the SLE. The study involves predictions of current sea level changes due to present-day ice mass loss. In spite of the differences in the methods employed, the comparison shows that a significant number of GIA modellers can reproduce their sea-level computations within 2% for well-defined, large-scale present-day ice mass changes. Smaller and more detailed loads need further, dedicated benchmarking and high-resolution computation. This study shows how the details of the implementation and the input specifications are an important, and often underappreciated, aspect. Hence, this represents a step toward assessing the reliability of sea level projections obtained with benchmarked SLE codes.
BEST Winery Guidebook: Benchmarking and Energy and Water SavingsTool for the Wine Industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galitsky, Christina; Worrell, Ernst; Radspieler, Anthony
2005-10-15
Not all industrial facilities have the staff or the opportunity to perform a detailed audit of their operations. The lack of knowledge of energy efficiency opportunities provides an important barrier to improving efficiency. Benchmarking has been demonstrated to help energy users understand energy use and the potential for energy efficiency improvement, reducing the information barrier. In California, the wine making industry is not only one of the economic pillars of the economy; it is also a large energy consumer, with a considerable potential for energy-efficiency improvement. Lawrence Berkeley National Laboratory and Fetzer Vineyards developed an integrated benchmarking and self-assessment tool for the California wine industry called "BEST" (Benchmarking and Energy and water Savings Tool) Winery. BEST Winery enables a winery to compare its energy efficiency to a best practice winery, accounting for differences in product mix and other characteristics of the winery. The tool enables the user to evaluate the impact of implementing energy and water efficiency measures. The tool facilitates strategic planning of efficiency measures, based on the estimated impact of the measures, their costs and savings. BEST Winery is available as a software tool in an Excel environment. This report serves as background material, documenting assumptions and information on the included energy and water efficiency measures. It also serves as a user guide for the software package.
NASA Astrophysics Data System (ADS)
Karovicova, I.; White, T. R.; Nordlander, T.; Lind, K.; Casagrande, L.; Ireland, M. J.; Huber, D.; Creevey, O.; Mourard, D.; Schaefer, G. H.; Gilmore, G.; Chiavassa, A.; Wittkowski, M.; Jofré, P.; Heiter, U.; Thévenin, F.; Asplund, M.
2018-03-01
Large stellar surveys of the Milky Way require validation with reference to a set of `benchmark' stars whose fundamental properties are well determined. For metal-poor benchmark stars, disagreement between spectroscopic and interferometric effective temperatures has called the reliability of the temperature scale into question. We present new interferometric measurements of three metal-poor benchmark stars, HD 140283, HD 122563, and HD 103095, from which we determine their effective temperatures. The angular sizes of all the stars were determined from observations with the PAVO beam combiner at visible wavelengths at the CHARA array, with additional observations of HD 103095 made with the VEGA instrument, also at the CHARA array. Together with photometrically derived bolometric fluxes, the angular diameters give a direct measurement of the effective temperature. For HD 140283, we find θLD = 0.324 ± 0.005 mas, Teff = 5787 ± 48 K; for HD 122563, θLD = 0.926 ± 0.011 mas, Teff = 4636 ± 37 K; and for HD 103095, θLD = 0.595 ± 0.007 mas, Teff = 5140 ± 49 K. Our temperatures for HD 140283 and HD 103095 are hotter than the previous interferometric measurements by 253 and 322 K, respectively. We find good agreement between our temperatures and recent spectroscopic and photometric estimates. We conclude some previous interferometric measurements have been affected by systematic uncertainties larger than their quoted errors.
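For reference, the direct temperature determination described above follows from the Stefan–Boltzmann law; schematically (our notation, with θ_LD in radians and F_bol the bolometric flux received at Earth):

\[
F_{\mathrm{bol}} \;=\; \sigma T_{\mathrm{eff}}^{4}\left(\frac{\theta_{\mathrm{LD}}}{2}\right)^{2}
\qquad\Longrightarrow\qquad
T_{\mathrm{eff}} \;=\; \left(\frac{4\,F_{\mathrm{bol}}}{\sigma\,\theta_{\mathrm{LD}}^{2}}\right)^{1/4},
\]

so the uncertainty in T_eff follows directly from the uncertainties in θ_LD and F_bol.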
Systematic comparison of variant calling pipelines using gold standard personal exome variants
Hwang, Sohyun; Kim, Eiru; Lee, Insuk; Marcotte, Edward M.
2015-01-01
The success of clinical genomics using next generation sequencing (NGS) requires the accurate and consistent identification of personal genome variants. Assorted variant calling methods have been developed, which show low concordance between their calls. Hence, a systematic comparison of the variant callers could give important guidance to NGS-based clinical genomics. Recently, a set of high-confidence variant calls for one individual (NA12878) has been published by the Genome in a Bottle (GIAB) consortium, enabling performance benchmarking of different variant calling pipelines. Based on the gold standard reference variant calls from GIAB, we compared the performance of thirteen variant calling pipelines, testing combinations of three read aligners—BWA-MEM, Bowtie2, and Novoalign—and four variant callers—Genome Analysis Tool Kit HaplotypeCaller (GATK-HC), Samtools mpileup, Freebayes and Ion Proton Variant Caller (TVC)—on twelve data sets for the NA12878 genome sequenced by different platforms including Illumina2000, Illumina2500, and Ion Proton, with various exome capture systems and exome coverage. We observed different biases toward specific types of SNP genotyping errors by the different variant callers. The results of our study provide useful guidelines for reliable variant identification from deep sequencing of personal genomes. PMID:26639839
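As a minimal sketch of the kind of scoring such a comparison performs, the snippet below computes precision, recall and F1 of a hypothetical call set against a gold standard by simple set overlap; it is not any of the benchmarked pipelines, and real evaluations additionally require variant normalization, confident-region filtering and genotype matching.

```python
# Minimal sketch: scoring a variant call set against a gold standard
# (e.g., GIAB NA12878).  Variants are hypothetical (chrom, pos, ref, alt)
# tuples; real comparisons also need normalization, confident-region
# filtering and genotype matching, none of which is shown here.

def score_calls(test_calls, truth_calls):
    test, truth = set(test_calls), set(truth_calls)
    tp = len(test & truth)              # true positives
    fp = len(test - truth)              # false positives
    fn = len(truth - test)              # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example with made-up variants
truth = [("chr1", 1000, "A", "G"), ("chr1", 2000, "C", "T"), ("chr2", 500, "G", "A")]
calls = [("chr1", 1000, "A", "G"), ("chr2", 500, "G", "C")]
print(score_calls(calls, truth))
```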
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fast, Ivan; Aksyutina, Yuliya; Tietze-Jaensch, Holger
2013-07-01
Burn-up calculations facilitate a determination of the composition and nuclear inventory of spent nuclear fuel, if the operational history is known. In case this information is not available, the total nuclear inventory can be determined by means of destructive or, even on an industrial scale, non-destructive measurement methods. For non-destructive measurements, however, only a few easy-to-measure, so-called key nuclides are determined via their characteristic gamma lines or neutron emission. From these measured activities the fuel burn-up and cooling time are derived to facilitate the numerical inventory determination of spent fuel elements. Most regulatory bodies require an independent assessment of nuclear waste properties and their documentation. A prominent part of this assessment is a consistency check of the inventory declaration. The waste packages often contain wastes from different types of spent fuels with different histories, and information about the secondary reactor parameters may not be available. In this case the so-called characteristic fuel burn-up and cooling time are determined. These values are obtained from correlations involving key nuclides with a certain bandwidth, thus with upper and lower limits. The bandwidth is strongly dependent on secondary reactor parameters such as initial enrichment, temperature and density of the fuel and moderator, hence the reactor type, fuel element geometry and plant operation history. The purpose of our investigation is to look into the scaling and correlation limitations, to define and verify the range of validity, and to scrutinize the dependencies and propagation of uncertainties that affect the waste inventory declarations and their independent verification. This is accomplished by numerical assessment and simulation of waste production using the well-accepted codes SCALE 6.0 and 6.1 to simulate the cooling time and burn-up of a spent fuel element. The simulations are benchmarked against spent fuel from the real reactor Obrigheim in Germany, for which sufficiently precise experimental reference data are available. (authors)
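As an illustration of the key-nuclide idea (not the SCALE-based procedure used by the authors), the sketch below estimates a cooling time from the decay of a measured Cs-134/Cs-137 activity ratio; the end-of-irradiation ratio is a hypothetical input that in practice would come from burn-up correlations with upper and lower bounds.

```python
import math

# Sketch: estimating cooling time from the decay of a key-nuclide activity
# ratio, here Cs-134/Cs-137 (half-lives roughly 2.065 y and 30.05 y).
# The end-of-irradiation ratio R0 depends on burn-up and secondary reactor
# parameters and is assumed here; real verification uses code-generated
# correlations with upper and lower limits.

T_HALF_CS134 = 2.065    # years (approximate)
T_HALF_CS137 = 30.05    # years (approximate)

def cooling_time(ratio_measured, ratio_at_discharge):
    """Solve R(t) = R0 * exp(-(lam134 - lam137) * t) for t (years)."""
    lam134 = math.log(2) / T_HALF_CS134
    lam137 = math.log(2) / T_HALF_CS137
    return math.log(ratio_at_discharge / ratio_measured) / (lam134 - lam137)

print(cooling_time(ratio_measured=0.05, ratio_at_discharge=0.5))  # ~7.4 years
```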
Time-Dependent Simulations of Turbopump Flows
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Kwak, Dochan; Chan, William; Williams, Robert
2002-01-01
Unsteady flow simulations of the RLV (Reusable Launch Vehicle) 2nd Generation baseline turbopump for one and a half impeller rotations have been completed using a 34.3 million grid point model. MLP (Multi-Level Parallelism) shared memory parallelism has been implemented in INS3D and benchmarked. Code optimization for cache-based platforms will be completed by the end of September 2001. Moving boundary capability is obtained by using the DCF module. Scripting capability from CAD (computer aided design) geometry to solution has been developed. Data compression is applied to reduce data size in post processing. Fluid/structure coupling has been initiated.
GAPD: a GPU-accelerated atom-based polychromatic diffraction simulation code.
E, J C; Wang, L; Chen, S; Zhang, Y Y; Luo, S N
2018-03-01
GAPD, a graphics-processing-unit (GPU)-accelerated atom-based polychromatic diffraction simulation code for direct, kinematics-based, simulations of X-ray/electron diffraction of large-scale atomic systems with mono-/polychromatic beams and arbitrary plane detector geometries, is presented. This code implements GPU parallel computation via both real- and reciprocal-space decompositions. With GAPD, direct simulations are performed of the reciprocal lattice node of ultralarge systems (∼5 billion atoms) and diffraction patterns of single-crystal and polycrystalline configurations with mono- and polychromatic X-ray beams (including synchrotron undulator sources), and validation, benchmark and application cases are presented.
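A minimal CPU-side sketch of the kinematic sum that such a code evaluates is given below; GAPD's actual GPU real- and reciprocal-space decompositions, tabulated scattering factors and polychromatic handling are not reproduced, and the positions and factors are toy values.

```python
import numpy as np

# Kinematic (single-scattering) diffraction: the scattered amplitude at
# momentum transfer q is F(q) = sum_j f_j * exp(i q . r_j) and the intensity
# is |F(q)|^2.  GAPD evaluates such sums on GPUs for billions of atoms and
# polychromatic beams; this is a direct CPU sketch with toy inputs.

rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 50.0, size=(1000, 3))   # Angstrom, toy "crystal"
f_atom = np.ones(len(positions))                      # unit scattering factors

def intensity(q):
    phases = positions @ q                            # q . r_j for all atoms
    amplitude = np.sum(f_atom * np.exp(1j * phases))
    return np.abs(amplitude) ** 2

q = np.array([0.5, 0.0, 0.0])                         # 1/Angstrom, example point
print(intensity(q))
```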
Time Dependent Simulation of Turbopump Flows
NASA Technical Reports Server (NTRS)
Kiris, Cetin C.; Kwak, Dochan; Chan, William; Williams, Robert
2001-01-01
The objective of this viewgraph presentation is to enhance incompressible flow simulation capability for developing aerospace vehicle components, especially unsteady flow phenomena associated with high-speed turbopumps. Unsteady Space Shuttle Main Engine (SSME) rig-1 simulations covering 1 1/2 rotations have been completed for the 34.3 million grid point model. The moving boundary capability is obtained by using the DCF module. MLP shared memory parallelism has been implemented and benchmarked in INS3D. The scripting capability from CAD geometry to solution is developed. Data compression is applied to reduce data size in post processing and fluid/structure coupling is initiated.
Projection methods for line radiative transfer in spherical media.
NASA Astrophysics Data System (ADS)
Anusha, L. S.; Nagendra, K. N.
An efficient numerical method called the Preconditioned Bi-Conjugate Gradient (Pre-BiCG) method is presented for the solution of the radiative transfer equation in spherical geometry. A variant of this method called the Stabilized Preconditioned Bi-Conjugate Gradient (Pre-BiCG-STAB) method is also presented. These methods are based on projections onto subspaces of the n-dimensional Euclidean space $\mathbb{R}^n$ called Krylov subspaces. The methods are shown to be faster in terms of convergence rate compared to contemporary iterative methods such as Jacobi, Gauss-Seidel and Successive Over Relaxation (SOR).
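For illustration, the snippet below runs SciPy's BiCGSTAB Krylov solver with a simple Jacobi preconditioner on a toy sparse system; it demonstrates the projection-onto-Krylov-subspaces idea only and does not reproduce the authors' radiative transfer operator or preconditioner.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab, LinearOperator

# Toy preconditioned BiCGSTAB solve: a tridiagonal system stands in for the
# discretized transfer operator, and a Jacobi (diagonal) preconditioner
# stands in for the problem-specific preconditioner of Pre-BiCG/Pre-BiCG-STAB.

n = 500
main = 4.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)   # Jacobi preconditioner

x, info = bicgstab(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))   # info == 0 means convergence
```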
NASA Astrophysics Data System (ADS)
Matveev, Vladimir S.; Miranda, Eva; Rubtsov, Vladimir; Przybylska, Maria; Tabachnikov, Sergei
2017-05-01
This Special Issue of the Journal of Geometry and Physics gathers several contributions to FDIS 2015, the 3rd Conference on Finite Dimensional Integrable Systems in Geometry and Mathematical Physics, which took place in Będlewo from July 12 to 17, 2015. It also contains other contributions by specialists in the field of integrable systems and related subjects. This is the second special issue, and it corresponds to the third installment of a series of workshops called FDIS, which take place every other year.
First principles electron-correlated calculations of optical absorption in magnesium clusters
NASA Astrophysics Data System (ADS)
Shinde, Ravindra; Shukla, Alok
2017-11-01
In this paper, we report large-scale configuration interaction (CI) calculations of the linear optical absorption spectra of various isomers of magnesium clusters Mgn (n = 2-5), corresponding to valence transitions. Geometry optimization of several low-lying isomers of each cluster was carried out using the coupled-cluster singles doubles (CCSD) approach, and these geometries were subsequently employed to perform ground and excited state calculations using either the full-CI (FCI) or the multi-reference singles-doubles configuration interaction (MRSDCI) approach, within the frozen-core approximation. Our calculated photoabsorption spectrum of the magnesium dimer (Mg2) is in excellent agreement with experiments both for peak positions and intensities. Owing to the sufficiently inclusive treatment of electron-correlation effects, these results can serve as benchmarks against which future experiments, as well as calculations performed using other theoretical approaches, can be tested. Supplementary material in the form of one pdf file is available from the Journal web page at https://doi.org/10.1140/epjd/e2017-80356-6.
NASA Astrophysics Data System (ADS)
Hakim, Ammar; Shi, Eric; Juno, James; Bernard, Tess; Hammett, Greg
2017-10-01
For weakly collisional (or collisionless) plasmas, kinetic effects are required to capture the physics of micro-turbulence. We have implemented solvers for kinetic and gyrokinetic equations in the computational plasma physics framework Gkeyll. We use a version of the discontinuous Galerkin scheme that conserves energy exactly. Plasma sheaths are modeled with novel boundary conditions. Positivity of distribution functions is maintained via a reconstruction method, allowing robust simulations that continue to conserve energy even with positivity limiters. We have performed a large number of benchmarks, verifying the accuracy and robustness of our code. We demonstrate the application of our algorithm to two classes of problems: (a) Vlasov-Maxwell simulations of turbulence in a magnetized plasma, applicable to space plasmas; (b) gyrokinetic simulations of turbulence in open-field-line geometries, applicable to laboratory plasmas. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.
Nonlinear modeling of forced magnetic reconnection in slab geometry with NIMROD
NASA Astrophysics Data System (ADS)
Beidler, M. T.; Callen, J. D.; Hegna, C. C.; Sovinec, C. R.
2017-05-01
The nonlinear, extended-magnetohydrodynamic (MHD) code NIMROD is benchmarked with the theory of time-dependent forced magnetic reconnection induced by small resonant fields in slab geometry in the context of visco-resistive MHD modeling. Linear computations agree with time-asymptotic, linear theory of flow screening of externally applied fields. The inclusion of flow in nonlinear computations can result in mode penetration due to the balance between electromagnetic and viscous forces in the time-asymptotic state, which produces bifurcations from a high-slip state to a low-slip state as the external field is slowly increased. We reproduce mode penetration and unlocking transitions by employing time-dependent externally applied magnetic fields. Mode penetration and unlocking exhibit hysteresis and occur at different magnitudes of applied field. We also establish how nonlinearly determined flow screening of the resonant field is affected by the square of the magnitude of the externally applied field. These results emphasize that the inclusion of nonlinear physics is essential for accurate prediction of the reconnected field in a flowing plasma.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilkey, Lindsay
This milestone presents a demonstration of the High-to-Low (Hi2Lo) process in the VVI focus area. Validation and additional calculations with the commercial computational fluid dynamics code STAR-CCM+ were performed using a 5x5 fuel assembly with non-mixing geometry and spacer grids. This geometry was based on the benchmark experiment provided by Westinghouse. Results from the simulations were compared to existing experimental data and to the subchannel thermal-hydraulics code COBRA-TF (CTF). An uncertainty quantification (UQ) process was developed for the STAR-CCM+ model and results of the STAR UQ were communicated to CTF. Results from STAR-CCM+ simulations were used as experimental design points in CTF to calibrate the mixing parameter β and compared to results obtained using experimental data points. This demonstrated that CTF's β parameter can be calibrated to match existing experimental data more closely. The Hi2Lo process for the STAR-CCM+/CTF code coupling was documented in this milestone, which is closely linked to the L3:VVI.H2LP15.01 milestone report.
2013-01-01
Background The objective of screening programs is to discover life-threatening diseases in as many patients as early as possible and to increase the chance of survival. To be able to compare aspects of health care quality, methods are needed for benchmarking that allow comparisons on various health care levels (regional, national, and international). Objectives Applications and extensions of algorithms can be used to link the information on disease phases with relative survival rates and to consolidate them in composite measures. The application of the developed SAS macros will give results for benchmarking of health care quality. Data examples for breast cancer care are given. Methods A reference scale (expected, E) must be defined at a time point at which all benchmark objects (observed, O) are measured. All indices are defined as O/E, whereby the extended standardized screening-index (eSSI), the standardized case-mix-index (SCI), the work-up-index (SWI), and the treatment-index (STI) address different health care aspects. The composite measures called overall-performance evaluation (OPE) and relative overall performance indices (ROPI) link the individual indices differently for cross-sectional or longitudinal analyses. Results The algorithms allow time-point- and time-interval-associated comparisons of the benchmark objects in the indices eSSI, SCI, SWI, STI, OPE, and ROPI. Comparisons between countries, states and districts are possible. As an example, comparisons between two countries are made. The success of early detection and screening programs as well as clinical health care quality for breast cancer can be demonstrated while taking the population's background mortality into account. Conclusions If external quality assurance programs and benchmark objects are based on population-based and corresponding demographic data, information on disease phase and relative survival rates can be combined into indices which offer approaches for comparative analyses between benchmark objects. Conclusions on screening programs and health care quality are possible. The macros can be transferred to other diseases if a disease-specific phase scale of prognostic value (e.g. stage) exists. PMID:23316692
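The indices share a generic observed/expected pattern; a minimal sketch with hypothetical counts is shown below (the actual reference scales, index definitions and composite weightings are those of the paper's SAS macros and are not reproduced here).

```python
# Generic observed/expected (O/E) benchmarking index, the pattern shared by
# eSSI, SCI, SWI and STI above.  The expected values would come from a
# reference scale defined at a common time point; the numbers here are
# hypothetical.

def oe_index(observed, expected):
    return observed / expected

benchmark_objects = {
    "district_A": {"observed": 118, "expected": 100.0},
    "district_B": {"observed": 86,  "expected": 100.0},
}

for name, counts in benchmark_objects.items():
    print(name, round(oe_index(counts["observed"], counts["expected"]), 2))
```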
Evaluating the Quantitative Capabilities of Metagenomic Analysis Software.
Kerepesi, Csaba; Grolmusz, Vince
2016-05-01
DNA sequencing technologies are applied widely and frequently today to describe metagenomes, i.e., microbial communities in environmental or clinical samples, without the need for culturing them. These technologies usually return short (100-300 base-pairs long) DNA reads, and these reads are processed by metagenomic analysis software that assigns phylogenetic composition information to the dataset. Here we evaluate three metagenomic analysis software tools (AmphoraNet, a webserver implementation of AMPHORA2; MG-RAST; and MEGAN5) for their capabilities of assigning quantitative phylogenetic information to the data, describing the frequency of appearance of microorganisms of the same taxa in the sample. The difficulty of the task arises from the fact that longer genomes produce more reads from the same organism than shorter genomes, and some software assigns higher frequencies to species with longer genomes than to those with shorter ones. This phenomenon is called the "genome length bias." Dozens of complex artificial metagenome benchmarks can be found in the literature. Because of the complexity of those benchmarks, it is usually difficult to judge the resistance of a metagenomic software tool to this "genome length bias." Therefore, we have made a simple benchmark for evaluating the "taxon counting" in a metagenomic sample: we took the same number of copies of three full bacterial genomes of different lengths, broke them up randomly into short reads of 150 bp average length, and mixed the reads, creating our simple benchmark. Because of its simplicity, the benchmark is not supposed to serve as a mock metagenome, but if a software tool fails on this simple task, it will surely fail on most real metagenomes. We applied the three tools to the benchmark. The ideal quantitative solution would assign the same proportion to the three bacterial taxa. We found that AMPHORA2/AmphoraNet gave the most accurate results and the other two tools were under-performers: they assigned each short read quite reliably to its respective taxon, producing the typical genome length bias. The benchmark dataset is available at http://pitgroup.org/static/3RandomGenome-100kavg150bps.fna.
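A minimal sketch of this benchmark-construction idea, with random strings standing in for the three real bacterial genomes, is shown below.

```python
import random

# Sketch of the benchmark construction described above: take the same number
# of copies of several genomes of different lengths, fragment each into short
# reads of ~150 bp average length, and pool the reads.  Random strings stand
# in for the real bacterial genomes.

random.seed(0)
genomes = {
    "short_genome":  "".join(random.choice("ACGT") for _ in range(20_000)),
    "medium_genome": "".join(random.choice("ACGT") for _ in range(50_000)),
    "long_genome":   "".join(random.choice("ACGT") for _ in range(100_000)),
}

def fragment(seq, mean_len=150, n_copies=3):
    reads = []
    for _ in range(n_copies):
        pos = 0
        while pos < len(seq):
            length = max(50, int(random.gauss(mean_len, 30)))
            reads.append(seq[pos:pos + length])
            pos += length
    return reads

pooled = [r for g in genomes.values() for r in fragment(g)]
random.shuffle(pooled)
print(len(pooled), "reads pooled")   # longer genomes contribute more reads
```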
Rand, Hugh; Shumway, Martin; Trees, Eija K.; Simmons, Mustafa; Agarwala, Richa; Davis, Steven; Tillman, Glenn E.; Defibaugh-Chavez, Stephanie; Carleton, Heather A.; Klimke, William A.; Katz, Lee S.
2017-01-01
Background As next generation sequence technology has advanced, there have been parallel advances in genome-scale analysis programs for determining evolutionary relationships as proxies for epidemiological relationship in public health. Most new programs skip traditional steps of ortholog determination and multi-gene alignment, instead identifying variants across a set of genomes, then summarizing results in a matrix of single-nucleotide polymorphisms or alleles for standard phylogenetic analysis. However, public health authorities need to document the performance of these methods with appropriate and comprehensive datasets so they can be validated for specific purposes, e.g., outbreak surveillance. Here we propose a set of benchmark datasets to be used for comparison and validation of phylogenomic pipelines. Methods We identified four well-documented foodborne pathogen events in which the epidemiology was concordant with routine phylogenomic analyses (reference-based SNP and wgMLST approaches). These are ideal benchmark datasets, as the trees, WGS data, and epidemiological data for each are all in agreement. We have placed these sequence data, sample metadata, and “known” phylogenetic trees in publicly-accessible databases and developed a standard descriptive spreadsheet format describing each dataset. To facilitate easy downloading of these benchmarks, we developed an automated script that uses the standard descriptive spreadsheet format. Results Our “outbreak” benchmark datasets represent the four major foodborne bacterial pathogens (Listeria monocytogenes, Salmonella enterica, Escherichia coli, and Campylobacter jejuni) and one simulated dataset where the “known tree” can be accurately called the “true tree”. The downloading script and associated table files are available on GitHub: https://github.com/WGS-standards-and-analysis/datasets. Discussion These five benchmark datasets will help standardize comparison of current and future phylogenomic pipelines, and facilitate important cross-institutional collaborations. Our work is part of a global effort to provide collaborative infrastructure for sequence data and analytic tools—we welcome additional benchmark datasets in our recommended format, and, if relevant, we will add these on our GitHub site. Together, these datasets, dataset format, and the underlying GitHub infrastructure present a recommended path for worldwide standardization of phylogenomic pipelines. PMID:29372115
An analysis of numerical convergence in discrete velocity gas dynamics for internal flows
NASA Astrophysics Data System (ADS)
Sekaran, Aarthi; Varghese, Philip; Goldstein, David
2018-07-01
The Discrete Velocity Method (DVM) for solving the Boltzmann equation has significant advantages in the modeling of non-equilibrium and near-equilibrium flows as compared to other methods, in terms of reduced statistical noise, faster solutions and the ability to handle transient flows. However, the DVM performance for rarefied flow in complex, small-scale geometries, in microelectromechanical (MEMS) devices for instance, has yet to be studied in detail. The present study focuses on the performance of the DVM for locally large Knudsen number flows of argon around sharp corners and other sources of discontinuities in the distribution function. Our analysis details the nature of the solution for some benchmark cases and introduces the concept of solution convergence for the transport terms in the discrete velocity Boltzmann equation. The limiting effects of the velocity space discretization are also investigated and the constraints on obtaining a robust, consistent solution are derived. We propose techniques to maintain solution convergence and demonstrate the implementation of a specific strategy and its effect on the fidelity of the solution for some benchmark cases.
Revisiting Turbulence Model Validation for High-Mach Number Axisymmetric Compression Corner Flows
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Rumsey, Christopher L.; Huang, George P.
2015-01-01
Two axisymmetric shock-wave/boundary-layer interaction (SWBLI) cases are used to benchmark one- and two-equation Reynolds-averaged Navier-Stokes (RANS) turbulence models. This validation exercise was executed in the philosophy of the NASA Turbulence Modeling Resource and the AIAA Turbulence Model Benchmarking Working Group. Both SWBLI cases are from the experiments of Kussoy and Horstman for axisymmetric compression corner geometries with SWBLI-inducing flares of 20 and 30 degrees, respectively. The freestream Mach number was approximately 7. The RANS closures examined are the Spalart-Allmaras one-equation model and the Menter family of k-omega two-equation models, including the Baseline and Shear Stress Transport formulations. The Wind-US and CFL3D RANS solvers are employed to simulate the SWBLI cases. Comparisons of RANS solutions to experimental data are made for a boundary layer survey plane just upstream of the SWBLI region. In the SWBLI region, comparisons of surface pressure and heat transfer are made. The effects of inflow modeling strategy, grid resolution, grid orthogonality, turbulent Prandtl number, and code-to-code variations are also addressed.
NASA Astrophysics Data System (ADS)
Pescarini, Massimo; Orsi, Roberto; Frisoni, Manuela
2016-03-01
The PCA-Replica 12/13 (H2O/Fe) neutron shielding benchmark experiment was analysed using the TORT-3.2 3D SN code. PCA-Replica reproduces a PWR ex-core radial geometry with alternate layers of water and steel including a pressure vessel simulator. Three broad-group coupled neutron/photon working cross section libraries in FIDO-ANISN format with the same energy group structure (47 n + 20 γ) and based on different nuclear data were alternatively used: the ENEA BUGJEFF311.BOLIB (JEFF-3.1.1) and UGENDF70.BOLIB (ENDF/B-VII.0) libraries and the ORNL BUGLE-B7 (ENDF/B-VII.0) library. Dosimeter cross sections derived from the IAEA IRDF-2002 dosimetry file were employed. The calculated reaction rates for the Rh-103(n,n')Rh-103m, In-115(n,n')In-115m and S-32(n,p)P-32 threshold activation dosimeters and the calculated neutron spectra are compared with the corresponding experimental results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassan, Yassin; Anand, Nk
2016-03-30
A 1/16th scaled VHTR experimental model was constructed and a preliminary test was performed in this study. To produce benchmark data for CFD validation in the future, the facility was first run at partial operation with five pipes being heated. PIV was performed to extract the velocity vector field for three adjacent naturally convective jets at statistically steady state. A small recirculation zone was found between the pipes, and the jets entered the merging zone at 3 cm from the pipe outlet but diverged as the flow approached the top of the test geometry. Turbulence analysis shows the turbulence intensity peaked at 41-45% as the jets mixed. A sensitivity analysis confirmed that 1000 frames were sufficient to measure the statistically steady state. The results were then validated by extracting the flow rate from the PIV jet velocity profile and comparing it with an analytic flow rate and an ultrasonic flowmeter; all flow rates lie within the uncertainty of the other two methods for Tests 1 and 2. This test facility can be used for further analysis of naturally convective mixing, and eventually to produce benchmark data for CFD validation of the VHTR during a PCC or DCC accident scenario. Next, a PTV study of 3000 images (1500 image pairs) was used to quantify the velocity field in the upper plenum. A sensitivity analysis confirmed that 1500 frames were sufficient to precisely estimate the flow. Subsequently, three Y-lines (3, 9, and 15 cm) from the pipe output were extracted to consider the output differences between 50 and 1500 frames. The average velocity field and the standard deviation error that accrued in the three different tests were calculated to assess repeatability. The error varied from 1 to 14%, depending on Y-elevation, and decreased as the flow moved farther from the output pipe. In addition, turbulent intensity was calculated and found to be high near the output. Reynolds stresses and turbulent intensity were used to validate the data by comparing them with benchmark data; the experimental data gave the same pattern as the benchmark data. A turbulent single buoyant jet study was performed for the case of LOFC in the upper plenum of the scaled VHTR. Time-averaged profiles show that 3,000 frames of images were sufficient for the study up to second-order statistics. Self-similarity is an important feature of jets, since the behavior of jets is independent of Reynolds number and solely a function of geometry. Self-similarity profiles were well observed in the axial velocity and velocity magnitude profiles regardless of z/D, whereas the radial velocity did not show any similarity pattern. The normal components of the Reynolds stresses have self-similarity within the expected range. The study shows that large vortices were observed close to the dome wall, indicating that the geometry of the VHTR has a significant impact on its safety and performance. Near the dome surface, large vortices were shown to inhibit the flow, resulting in reduced axial jet velocity. The vortices that develop subsequently reduce the Reynolds stresses and their impact on the integrity of the VHTR upper plenum surface. Multiple-jet configurations, including two, three and five jets, were also investigated.
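For illustration, the snippet below computes the kind of ensemble statistics extracted from such PIV/PTV data (mean velocity, turbulence intensity, a Reynolds shear stress component) on synthetic vector fields; it is not the facility's actual processing chain, and all numbers are toy values.

```python
import numpy as np

# Ensemble statistics of the kind extracted from repeated PIV frames:
# mean velocity, turbulence intensity and the <u'v'> Reynolds shear stress.
# Synthetic random fields stand in for measured instantaneous vector maps.

rng = np.random.default_rng(1)
n_frames, ny, nx = 1500, 64, 64
u = 0.2 + 0.05 * rng.standard_normal((n_frames, ny, nx))   # axial velocity, m/s
v = 0.02 * rng.standard_normal((n_frames, ny, nx))          # radial velocity, m/s

u_mean = u.mean(axis=0)
v_mean = v.mean(axis=0)
u_fluct = u - u_mean
v_fluct = v - v_mean

turb_intensity = np.sqrt(0.5 * (u_fluct**2 + v_fluct**2).mean(axis=0)) / np.abs(u_mean)
reynolds_uv = (u_fluct * v_fluct).mean(axis=0)               # <u'v'>

print(turb_intensity.mean(), reynolds_uv.mean())
```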
NASA Astrophysics Data System (ADS)
Vitório, Paulo Cezar; Leonel, Edson Denner
2017-12-01
The structural design must ensure suitable working conditions by attending to safety and economic criteria. However, the optimal solution is not easily available, because these conditions depend on the bodies' dimensions, material strength and structural system configuration. In this regard, topology optimization aims at achieving the optimal structural geometry, i.e. the shape that leads to the minimum requirement of material, respecting constraints related to the stress state at each material point. The present study applies an evolutionary approach for determining the optimal geometry of 2D structures using the coupling of the boundary element method (BEM) and the level set method (LSM). The proposed algorithm consists of mechanical modelling, a topology optimization approach and structural reconstruction. The mechanical model is composed of singular and hyper-singular BEM algebraic equations. The topology optimization is performed through the LSM. Internal and external geometries are evolved by the LS function evaluated at its zero level. The reconstruction process concerns the remeshing: because the structural boundary moves at each iteration, the body's geometry changes and, consequently, a new mesh has to be defined. The proposed algorithm, which is based on the direct coupling of such approaches, introduces internal cavities automatically during the optimization process, according to the intensity of the Von Mises stress. The developed optimization model was applied to two benchmarks available in the literature. Good agreement was observed among the results, which demonstrates its efficiency and accuracy.
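The boundary evolution in LSM-based schemes of this kind is commonly written as the Hamilton–Jacobi equation below (a generic form; the paper's specific normal velocity, driven by the Von Mises stress criterion, is not reproduced here):

\[
\frac{\partial \phi}{\partial t} + V_n\,\lvert\nabla\phi\rvert = 0,
\qquad
\Gamma(t) = \{\, \mathbf{x} : \phi(\mathbf{x},t) = 0 \,\},
\]

where φ is the level set function, Γ(t) the structural boundary tracked as its zero level, and V_n the normal velocity derived from the chosen stress-based criterion.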
Pulling PreK into a K-12 Orbit: The Evolution of PreK in the Age of Standards
ERIC Educational Resources Information Center
Graue, M. Elizabeth; Ryan, Sharon; Nocera, Amato; Northey, Kaitlin; Wilinski, Bethany
2017-01-01
We might call this decade the era of early childhood. In the US, federal and state governments invest in the creation of public pre-kindergarten (preK) programs and create standards that articulate goals for practice and benchmarks that can be used to evaluate success. How have these trends provided a context for the evolution of preK curriculum?…
Group Counseling Optimization: A Novel Approach
NASA Astrophysics Data System (ADS)
Eita, M. A.; Fahmy, M. M.
A new population-based search algorithm, which we call Group Counseling Optimizer (GCO), is presented. It mimics the group counseling behavior of humans in solving their problems. The algorithm is tested using seven known benchmark functions: Sphere, Rosenbrock, Griewank, Rastrigin, Ackley, Weierstrass, and Schwefel functions. A comparison is made with the recently published comprehensive learning particle swarm optimizer (CLPSO). The results demonstrate the efficiency and robustness of the proposed algorithm.
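For reference, the standard forms of three of the listed test functions are sketched below; each has a global minimum of zero, and the exact bounds and shifts used in the paper's experimental setup are not reproduced here.

```python
import numpy as np

# Standard forms of three benchmark functions commonly used to test
# population-based optimizers (global minimum 0 for each: Sphere and
# Rastrigin at the origin, Rosenbrock at the all-ones vector).

def sphere(x):
    return np.sum(x**2)

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

def rastrigin(x):
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

x = np.zeros(10)
print(sphere(x), rosenbrock(np.ones(10)), rastrigin(x))   # all 0.0
```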
2015-04-29
in which we applied these adaptation patterns to an adaptive news web server intended to tolerate extremely heavy, unexpected loads. To address...collection of existing models used as benchmarks for OO-based refactoring and an existing web-based repository called REMODD to provide users with model...invariant properties. Specifically, we developed Avida-MDE (based on the Avida digital evolution platform) to support the automatic generation of software
Investigating emergency room service quality using lean manufacturing.
Abdelhadi, Abdelhakim
2015-01-01
The purpose of this paper is to investigate a lean manufacturing metric called Takt time as a benchmark evaluation measure to evaluate a public hospital's service quality. Lean manufacturing is an established managerial philosophy with a proven track record in industry. A lean metric called Takt time is applied as a measure to compare the relative efficiency between two emergency departments (EDs) belonging to the same public hospital. Outcomes guide managers to improve patient services and increase hospital performance. The patient treatment lead time within the hospital's two EDs (one department serves male and the other female patients) is the study's focus, and the Takt time metric is used to find the relative efficiency of the service. Findings show that the lean manufacturing metric Takt time can be used as an effective way to measure service efficiency by analyzing relative efficiency and identifying bottlenecks in different departments providing the same services. The paper presents a new procedure to compare relative efficiency between two EDs. It can be applied to any healthcare facility.
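A minimal sketch of the Takt-time comparison with hypothetical numbers is shown below; the paper's actual demand figures and lead times are not reproduced.

```python
# Takt time = available working time / number of patients demanded in that
# time.  Comparing it with the observed average treatment lead time indicates
# whether a department keeps pace with demand.  All numbers are hypothetical.

def takt_time(available_minutes, patients):
    return available_minutes / patients

ed_male   = {"available_minutes": 24 * 60, "patients": 180, "lead_time_min": 9.5}
ed_female = {"available_minutes": 24 * 60, "patients": 150, "lead_time_min": 8.0}

for name, ed in (("male ED", ed_male), ("female ED", ed_female)):
    t = takt_time(ed["available_minutes"], ed["patients"])
    print(f"{name}: takt {t:.1f} min/patient, lead time {ed['lead_time_min']} min")
```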
Geometry optimization for micro-pressure sensor considering dynamic interference
NASA Astrophysics Data System (ADS)
Yu, Zhongliang; Zhao, Yulong; Li, Lili; Tian, Bian; Li, Cun
2014-09-01
Presented is the geometry optimization of a piezoresistive absolute micro-pressure sensor. A figure of merit called the performance factor (PF) is defined as a quantitative index to describe the comprehensive performance of a sensor, including sensitivity, resonant frequency, and acceleration interference. Three geometries are proposed by introducing islands and sensitive beams into the typical flat diaphragm. The stress distributions of the sensitive elements are analyzed by the finite element method. Multivariate fittings based on ANSYS simulation results are performed to establish the equations for surface stress, deflection, and resonant frequency. Optimization in MATLAB is carried out to determine the dimensions of the geometries. Convex corner undercutting is evaluated. The PF of each of the three geometries with the determined dimensions is calculated and compared. Silicon bulk micromachining is utilized to fabricate prototypes of the sensors. The outputs of the sensors under both static and dynamic conditions are tested. Experimental results demonstrate the rationality of the defined performance factor and reveal that the geometry with quad islands presents the highest PF of 210.947 Hz^1/4. The favorable overall performance makes the sensor more suitable for altimetry.
Geant4 Computing Performance Benchmarking and Monitoring
Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; ...
2015-12-23
Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.
S66: A Well-balanced Database of Benchmark Interaction Energies Relevant to Biomolecular Structures
2011-01-01
With numerous new quantum chemistry methods being developed in recent years and the promise of even more new methods to be developed in the near future, it is clearly critical that highly accurate, well-balanced, reference data for many different atomic and molecular properties be available for the parametrization and validation of these methods. One area of research that is of particular importance in many areas of chemistry, biology, and material science is the study of noncovalent interactions. Because these interactions are often strongly influenced by correlation effects, it is necessary to use computationally expensive high-order wave function methods to describe them accurately. Here, we present a large new database of interaction energies calculated using an accurate CCSD(T)/CBS scheme. Data are presented for 66 molecular complexes, at their reference equilibrium geometries and at 8 points systematically exploring their dissociation curves; in total, the database contains 594 points: 66 at equilibrium geometries, and 528 in dissociation curves. The data set is designed to cover the most common types of noncovalent interactions in biomolecules, while keeping a balanced representation of dispersion and electrostatic contributions. The data set is therefore well suited for testing and development of methods applicable to bioorganic systems. In addition to the benchmark CCSD(T) results, we also provide decompositions of the interaction energies by means of DFT-SAPT calculations. The data set was used to test several correlated QM methods, including those parametrized specifically for noncovalent interactions. Among these, the SCS-MI-CCSD method outperforms all other tested methods, with a root-mean-square error of 0.08 kcal/mol for the S66 data set. PMID:21836824
Software Geometry in Simulations
NASA Astrophysics Data System (ADS)
Alion, Tyler; Viren, Brett; Junk, Tom
2015-04-01
The Long Baseline Neutrino Experiment (LBNE) involves many detectors. The experiment's near detector (ND) facility may ultimately involve several detectors. The far detector (FD) will be significantly larger than any other Liquid Argon (LAr) detector yet constructed; many prototype detectors are being constructed and studied to motivate a plethora of proposed FD designs. Whether it be a constructed prototype or a proposed ND/FD design, every design must be simulated and analyzed. This presents a considerable challenge to LBNE software experts; each detector geometry must be described to the simulation software in an efficient way which allows multiple authors to easily collaborate. Furthermore, different geometry versions must be tracked throughout their use. We present a framework called General Geometry Description (GGD), written and developed by LBNE software collaborators for managing software to generate geometries. Though GGD is flexible enough to be used by any experiment working with detectors, we present its first use in generating Geometry Description Markup Language (GDML) files to interface with LArSoft, a framework of detector simulations, event reconstruction, and data analyses written for all LAr technology users at Fermilab. Brett is the author of the framework discussed here, the General Geometry Description (GGD).
NASA Astrophysics Data System (ADS)
Bonfiglio, D.; Chacón, L.; Cappello, S.
2010-08-01
With the increasing impact of scientific discovery via advanced computation, there is presently a strong emphasis on ensuring the mathematical correctness of computational simulation tools. Such endeavor, termed verification, is now at the center of most serious code development efforts. In this study, we address a cross-benchmark nonlinear verification study between two three-dimensional magnetohydrodynamics (3D MHD) codes for fluid modeling of fusion plasmas, SPECYL [S. Cappello and D. Biskamp, Nucl. Fusion 36, 571 (1996)] and PIXIE3D [L. Chacón, Phys. Plasmas 15, 056103 (2008)], in their common limit of application: the simple viscoresistive cylindrical approximation. SPECYL is a serial code in cylindrical geometry that features a spectral formulation in space and a semi-implicit temporal advance, and has been used extensively to date for reversed-field pinch studies. PIXIE3D is a massively parallel code in arbitrary curvilinear geometry that features a conservative, solenoidal finite-volume discretization in space, and a fully implicit temporal advance. The present study is, in our view, a first mandatory step in assessing the potential of any numerical 3D MHD code for fluid modeling of fusion plasmas. Excellent agreement is demonstrated over a wide range of parameters for several fusion-relevant cases in both two- and three-dimensional geometries.
NASA Astrophysics Data System (ADS)
Endelt, B.
2017-09-01
Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually over time. Thus, an in-process feedback control scheme might not be necessary to stabilize the process, and an alternative approach is to apply an iterative learning algorithm which can learn from previously produced parts, i.e. a self-learning system which gradually reduces the error based on historical process information. What is proposed in the paper is a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-squares error between the current flange geometry and a reference geometry using a non-linear least-squares algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet'08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
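A generic sketch of such a batch-wise iterative learning update is shown below; the sensitivity matrix, learning gain and toy "plant" are hypothetical stand-ins and do not represent the paper's controller or its deep-drawing simulations.

```python
import numpy as np

# Generic iterative-learning idea: after each produced part, correct the
# process parameters in proportion to the least-squares flange-edge error.
# The sensitivity matrix J (flange-edge response to the parameters) and the
# learning gain are assumed; in practice J would come from FE simulations or
# identification runs.

rng = np.random.default_rng(2)
n_points, n_params = 40, 3
reference = np.linspace(48.0, 52.0, n_points)            # target flange edge, mm
J = rng.uniform(0.1, 0.5, size=(n_points, n_params))      # assumed sensitivities

params = np.zeros(n_params)                                # e.g. blank-holder forces
gain = 0.5
for batch in range(20):
    # Toy "plant": the measured edge deviates according to the parameter error
    measured = reference - J @ (np.array([1.0, 0.5, -0.8]) - params)
    error = reference - measured
    params = params + gain * np.linalg.lstsq(J, error, rcond=None)[0]

print(np.round(params, 3))   # converges toward the toy plant offset [1.0, 0.5, -0.8]
```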
Revel8or: Model Driven Capacity Planning Tool Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Liming; Liu, Yan; Bui, Ngoc B.
2007-05-31
Designing complex multi-tier applications that must meet strict performance requirements is a challenging software engineering problem. Ideally, the application architect could derive accurate performance predictions early in the project life-cycle, leveraging initial application design-level models and a description of the target software and hardware platforms. To this end, we have developed a capacity planning tool suite for component-based applications, called Revel8tor. The tool adheres to the model driven development paradigm and supports benchmarking and performance prediction for J2EE, .Net and Web services platforms. The suite is composed of three different tools: MDAPerf, MDABench and DSLBench. MDAPerf allows annotation of design diagrams and derives performance analysis models. MDABench allows a customized benchmark application to be modeled in the UML 2.0 Testing Profile and automatically generates a deployable application, with measurement automatically conducted. DSLBench allows the same benchmark modeling and generation to be conducted using a simple performance engineering Domain Specific Language (DSL) in Microsoft Visual Studio. DSLBench integrates with Visual Studio and reuses its load testing infrastructure. Together, the tool suite can assist capacity planning across platforms in an automated fashion.
Clark, Neil R.; Szymkiewicz, Maciej; Wang, Zichen; Monteiro, Caroline D.; Jones, Matthew R.; Ma’ayan, Avi
2016-01-01
Gene set analysis of differential expression, which identifies collectively differentially expressed gene sets, has become an important tool for biology. The power of this approach lies in its reduction of the dimensionality of the statistical problem and its incorporation of biological interpretation by construction. Many approaches to gene set analysis have been proposed, but benchmarking their performance in the setting of real biological data is difficult due to the lack of a gold standard. In a previously published work we proposed a geometrical approach to differential expression which performed highly in benchmarking tests and compared well to the most popular methods of differential gene expression. As reported, this approach has a natural extension to gene set analysis which we call Principal Angle Enrichment Analysis (PAEA). PAEA employs dimensionality reduction and a multivariate approach for gene set enrichment analysis. However, the performance of this method has not been assessed nor its implementation as a web-based tool. Here we describe new benchmarking protocols for gene set analysis methods and find that PAEA performs highly. The PAEA method is implemented as a user-friendly web-based tool, which contains 70 gene set libraries and is freely available to the community. PMID:26848405
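For illustration, the geometric quantity at the core of the principal-angle idea can be computed with SciPy as below; random matrices stand in for real expression data, and the tool's actual enrichment statistic and null model are not reproduced.

```python
import numpy as np
from scipy.linalg import subspace_angles

# Principal angles between the subspace spanned by a differential-expression
# signature and the subspace spanned by vectors representing one gene set.
# Random data stand in for real expression matrices; this is only the
# geometric ingredient, not the PAEA enrichment statistic itself.

rng = np.random.default_rng(3)
n_genes = 200
signature = rng.standard_normal((n_genes, 1))        # DE direction (one column)
gene_set = rng.standard_normal((n_genes, 5))          # vectors for one gene set

angles = subspace_angles(signature, gene_set)         # radians
print(np.degrees(angles))
```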
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, J; Dossa, D; Gokhale, M
Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063, Storage Intensive Supercomputing, during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation to a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40GB Fusion-io parallel NAND Flash disk array. The Fusion system specs are as follows: SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with an NVIDIA graphics card (see Chapter 5 for the full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that spent more than 50% of its time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order of magnitude speedups over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.
2015/2016 Quality Risk Management Benchmarking Survey.
Waldron, Kelly; Ramnarine, Emma; Hartman, Jeffrey
2017-01-01
This paper investigates the concept of quality risk management (QRM) maturity as it applies to the pharmaceutical and biopharmaceutical industries, using the results and analysis from a QRM benchmarking survey conducted in 2015 and 2016. QRM maturity can be defined as the effectiveness and efficiency of a quality risk management program, moving beyond "check-the-box" compliance with guidelines such as ICH Q9 Quality Risk Management, to explore the value QRM brings to business and quality operations. While significant progress has been made towards full adoption of QRM principles and practices across industry, the benefits of QRM have not yet been fully realized. The results of the QRM Benchmarking Survey indicate that the pharmaceutical and biopharmaceutical industries are approximately halfway along the journey towards full QRM maturity. LAY ABSTRACT: The management of risks associated with medicinal product quality and patient safety are an important focus for the pharmaceutical and biopharmaceutical industries. These risks are identified, analyzed, and controlled through a defined process called quality risk management (QRM), which seeks to protect the patient from potential quality-related risks. This paper summarizes the outcomes of a comprehensive survey of industry practitioners performed in 2015 and 2016 that aimed to benchmark the level of maturity with regard to the application of QRM. The survey results and subsequent analysis revealed that the pharmaceutical and biopharmaceutical industries have made significant progress in the management of quality risks over the last ten years, and they are roughly halfway towards reaching full maturity of QRM. © PDA, Inc. 2017.
Kang, Guangliang; Du, Li; Zhang, Hong
2016-06-22
The growing complexity of biological experiment design based on high-throughput RNA sequencing (RNA-seq) is calling for more accommodative statistical tools. We focus on differential expression (DE) analysis using RNA-seq data in the presence of multiple treatment conditions. We propose a novel method, multiDE, for facilitating DE analysis using RNA-seq read count data with multiple treatment conditions. The read count is assumed to follow a log-linear model incorporating two factors (i.e., condition and gene), where an interaction term is used to quantify the association between gene and condition. The number of degrees of freedom is reduced to one through a first-order decomposition of the interaction, leading to a dramatic power improvement in testing DE genes when the number of conditions is greater than two. In our simulation settings, multiDE outperformed the benchmark methods (i.e. edgeR and DESeq2) even when the underlying model was severely misspecified, and the power gain increased with the number of conditions. In the application to two real datasets, multiDE identified more biologically meaningful DE genes than the benchmark methods. An R package implementing multiDE is available publicly at http://homepage.fudan.edu.cn/zhangh/softwares/multiDE . When the number of conditions is two, multiDE performs comparably with the benchmark methods. When the number of conditions is greater than two, multiDE outperforms the benchmark methods.
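Schematically, and in our own notation rather than the paper's, the described model can be written as

\[
\log \mu_{gc} \;=\; \mu + \alpha_g + \beta_c + \gamma_{gc},
\qquad
\gamma_{gc} \;\approx\; u_g\, v_c,
\]

where g indexes genes, c indexes conditions, and μ_{gc} is the expected read count; the first-order (rank-one) decomposition of the interaction γ reduces the DE test for gene g to a single degree of freedom (e.g. a null hypothesis of u_g = 0) regardless of the number of conditions.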
NASA Technical Reports Server (NTRS)
Haimes, Robert; Follen, Gregory J.
1998-01-01
CAPRI is a CAD-vendor neutral application programming interface designed for the construction of analysis and design systems. By allowing access to the geometry from within all modules (grid generators, solvers and post-processors) such tasks as meshing on the actual surfaces, node enrichment by solvers and defining which mesh faces are boundaries (for the solver and visualization system) become simpler. The overall reliance on file 'standards' is minimized. This 'Geometry Centric' approach makes multi-physics (multi-disciplinary) analysis codes much easier to build. By using the shared (coupled) surface as the foundation, CAPRI provides a single call to interpolate grid-node based data from the surface discretization in one volume to another. Finally, design systems are possible where the results can be brought back into the CAD system (and therefore manufactured) because all geometry construction and modification are performed using the CAD system's geometry kernel.
The unassigned distance geometry problem
Duxbury, P. M.; Granlund, L.; Gujarathi, S. R.; ...
2015-11-19
Studies of distance geometry problems (DGP) have focused on cases where the vertices at the ends of all or most of the given distances are known or assigned, which we call assigned distance geometry problems (aDGPs). In this contribution we consider the unassigned distance geometry problem (uDGP), where the vertices associated with a given distance are unknown, so the graph structure has to be discovered. uDGPs arise when attempting to find the atomic structure of molecules and nanoparticles using X-ray or neutron diffraction data from non-crystalline materials. Rigidity theory provides a useful foundation for both aDGPs and uDGPs, though it is restricted to generic realizations of graphs, and key results are summarized. Conditions for unique realization are discussed for aDGP and uDGP cases, build-up algorithms for both cases are described, and experimental results for uDGP are presented.
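A minimal sketch of the assigned-case "build-up" idea mentioned above, placing one new vertex at a time from its distances to already-placed vertices, is given below. The function names and the least-squares placement are our own illustration; in the unassigned case the assignment of distances to vertex pairs must additionally be searched (e.g., by Liga-type algorithms), which is not shown.

```python
# Sketch of one build-up placement step for the *assigned* DGP: given points already
# placed and their distances to a new vertex, locate the new vertex by nonlinear
# least squares (trilateration).
import numpy as np
from scipy.optimize import least_squares

def place_vertex(placed_coords, dists, guess=None):
    """placed_coords: (k, 3) known points; dists: (k,) distances to the new vertex."""
    placed_coords = np.asarray(placed_coords, float)
    dists = np.asarray(dists, float)
    if guess is None:
        guess = placed_coords.mean(axis=0) + 1e-3 * np.random.randn(3)
    residual = lambda x: np.linalg.norm(placed_coords - x, axis=1) - dists
    return least_squares(residual, guess).x

# Example: recover the apex of a unit tetrahedron from distances to its base triangle.
base = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, np.sqrt(3) / 2, 0.0]])
apex = np.array([0.5, np.sqrt(3) / 6, np.sqrt(2 / 3)])
d = np.linalg.norm(base - apex, axis=1)
print(place_vertex(base, d))   # ~apex (or its mirror image below the base plane)
```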
Analogies between Kirchhoff plates and functionally graded Saint-Venant beams under torsion
NASA Astrophysics Data System (ADS)
Barretta, Raffaele; Luciano, Raimondo
2015-05-01
Exact solutions of elastic Kirchhoff plates are available only for special geometries, loadings and kinematic boundary constraints. An effective solution procedure, based on an analogy between functionally graded orthotropic Saint-Venant beams under torsion and inhomogeneous isotropic Kirchhoff plates, with no kinematic boundary constraints, is proposed. The result extends the one contributed in Barretta (Acta Mech 224(12):2955-2964, 2013) for the special case of homogeneous Saint-Venant beams under torsion. Closed-form solutions for displacement, bending-twisting moment and curvature fields of an elliptic plate, corresponding to a functionally graded orthotropic beam, are evaluated. A new benchmark for computational mechanics is thus provided.
Application of Chimera Grid Scheme to Combustor Flowfields at all Speeds
NASA Technical Reports Server (NTRS)
Yungster, Shaye; Chen, Kuo-Huey
1997-01-01
A CFD method for solving combustor flowfields at all speeds on complex configurations is presented. The approach is based on the ALLSPD-3D code which uses the compressible formulation of the flow equations including real gas effects, nonequilibrium chemistry and spray combustion. To facilitate the analysis of complex geometries, the chimera grid method is utilized. To the best of our knowledge, this is the first application of the chimera scheme to reacting flows. In order to evaluate the effectiveness of this numerical approach, several benchmark calculations of subsonic flows are presented. These include steady and unsteady flows, and bluff-body stabilized spray and premixed combustion flames.
Small-amplitude acoustics in bulk granular media
NASA Astrophysics Data System (ADS)
Henann, David L.; Valenza, John J., II; Johnson, David L.; Kamrin, Ken
2013-10-01
We propose and validate a three-dimensional continuum modeling approach that predicts small-amplitude acoustic behavior of dense-packed granular media. The model is obtained through a joint experimental and finite-element study focused on the benchmark example of a vibrated container of grains. Using a three-parameter linear viscoelastic constitutive relation, our continuum model is shown to quantitatively predict the effective mass spectra in this geometry, even as geometric parameters for the environment are varied. Further, the model's predictions for the surface displacement field are validated mode-by-mode against experiment. A primary observation is the importance of the boundary condition between grains and the quasirigid walls.
NASA Astrophysics Data System (ADS)
Lodwick, Camille J.
This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescent (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001). Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate the lead content of a human leg by up to 4%. The patellar bone structure in which the fluorescent photons originate was found to vary dramatically with measurement angle. The relative contribution of lead signal from the patella declined from 65% to 27% when rotated 30°. However, rotation of the source-detector about the patella from 0 to 45° demonstrated no significant effect on the net K XRF response at the knee.
A comprehensive assessment of somatic mutation detection in cancer using whole-genome sequencing
Alioto, Tyler S.; Buchhalter, Ivo; Derdak, Sophia; Hutter, Barbara; Eldridge, Matthew D.; Hovig, Eivind; Heisler, Lawrence E.; Beck, Timothy A.; Simpson, Jared T.; Tonon, Laurie; Sertier, Anne-Sophie; Patch, Ann-Marie; Jäger, Natalie; Ginsbach, Philip; Drews, Ruben; Paramasivam, Nagarajan; Kabbe, Rolf; Chotewutmontri, Sasithorn; Diessl, Nicolle; Previti, Christopher; Schmidt, Sabine; Brors, Benedikt; Feuerbach, Lars; Heinold, Michael; Gröbner, Susanne; Korshunov, Andrey; Tarpey, Patrick S.; Butler, Adam P.; Hinton, Jonathan; Jones, David; Menzies, Andrew; Raine, Keiran; Shepherd, Rebecca; Stebbings, Lucy; Teague, Jon W.; Ribeca, Paolo; Giner, Francesc Castro; Beltran, Sergi; Raineri, Emanuele; Dabad, Marc; Heath, Simon C.; Gut, Marta; Denroche, Robert E.; Harding, Nicholas J.; Yamaguchi, Takafumi N.; Fujimoto, Akihiro; Nakagawa, Hidewaki; Quesada, Víctor; Valdés-Mas, Rafael; Nakken, Sigve; Vodák, Daniel; Bower, Lawrence; Lynch, Andrew G.; Anderson, Charlotte L.; Waddell, Nicola; Pearson, John V.; Grimmond, Sean M.; Peto, Myron; Spellman, Paul; He, Minghui; Kandoth, Cyriac; Lee, Semin; Zhang, John; Létourneau, Louis; Ma, Singer; Seth, Sahil; Torrents, David; Xi, Liu; Wheeler, David A.; López-Otín, Carlos; Campo, Elías; Campbell, Peter J.; Boutros, Paul C.; Puente, Xose S.; Gerhard, Daniela S.; Pfister, Stefan M.; McPherson, John D.; Hudson, Thomas J.; Schlesner, Matthias; Lichter, Peter; Eils, Roland; Jones, David T. W.; Gut, Ivo G.
2015-01-01
As whole-genome sequencing for cancer genome analysis becomes a clinical tool, a full understanding of the variables affecting sequencing analysis output is required. Here using tumour-normal sample pairs from two different types of cancer, chronic lymphocytic leukaemia and medulloblastoma, we conduct a benchmarking exercise within the context of the International Cancer Genome Consortium. We compare sequencing methods, analysis pipelines and validation methods. We show that using PCR-free methods and increasing sequencing depth to ∼100 × shows benefits, as long as the tumour:control coverage ratio remains balanced. We observe widely varying mutation call rates and low concordance among analysis pipelines, reflecting the artefact-prone nature of the raw data and lack of standards for dealing with the artefacts. However, we show that, using the benchmark mutation set we have created, many issues are in fact easy to remedy and have an immediate positive impact on mutation detection accuracy. PMID:26647970
Liu, Bin; Wu, Hao; Zhang, Deyuan; Wang, Xiaolong; Chou, Kuo-Chen
2017-02-21
To expedite the pace in conducting genome/proteome analysis, we have developed a Python package called Pse-Analysis. The powerful package can automatically complete the following five procedures: (1) sample feature extraction, (2) optimal parameter selection, (3) model training, (4) cross validation, and (5) evaluating prediction quality. All the work a user needs to do is to input a benchmark dataset along with the query biological sequences concerned. Based on the benchmark dataset, Pse-Analysis will automatically construct an ideal predictor, followed by yielding the predicted results for the submitted query samples. All the aforementioned tedious jobs can be automatically done by the computer. Moreover, the multiprocessing technique was adopted to enhance computational speed by about sixfold. The Pse-Analysis Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/Pse-Analysis/, and can be directly run on Windows, Linux, and Unix.
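The five-step pipeline described above can be sketched generically as follows. This uses scikit-learn as a stand-in and does not reproduce the actual Pse-Analysis API; the feature extraction, function names, and parameter grid are illustrative assumptions only.

```python
# Generic sketch of the workflow: (1) feature extraction, (2) parameter selection,
# (3) model training, (4) cross validation, (5) evaluation of prediction quality.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def kmer_features(seq, k=2, alphabet="ACGT"):
    """(1) Toy feature extraction: normalized 2-mer counts of a DNA sequence."""
    kmers = [a + b for a in alphabet for b in alphabet]
    counts = [sum(seq[i:i + k] == km for i in range(len(seq) - k + 1)) for km in kmers]
    total = max(sum(counts), 1)
    return [c / total for c in counts]

def build_predictor(benchmark_seqs, labels):
    X = [kmer_features(s) for s in benchmark_seqs]
    search = GridSearchCV(SVC(), {"C": [1, 10], "gamma": ["scale", 0.1]}, cv=5)  # (2) + (4)
    search.fit(X, labels)                                                        # (3)
    print("cross-validated accuracy:", search.best_score_)                      # (5)
    return search.best_estimator_

# Usage: model = build_predictor(train_seqs, train_labels)
#        model.predict([kmer_features(q) for q in query_seqs])
```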
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele
2014-11-11
It has been widely accepted that software virtualization has a large negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SRIOV). Our solution spanned modifications to the Linux hypervisor as well as to the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).
Physical properties of the benchmark models program supercritical wing
NASA Technical Reports Server (NTRS)
Dansberry, Bryan E.; Durham, Michael H.; Bennett, Robert M.; Turnock, David L.; Silva, Walter A.; Rivera, Jose A., Jr.
1993-01-01
The goal of the Benchmark Models Program is to provide data useful in the development and evaluation of aeroelastic computational fluid dynamics (CFD) codes. To that end, a series of three similar wing models are being flutter tested in the Langley Transonic Dynamics Tunnel. These models are designed to simultaneously acquire model response data and unsteady surface pressure data during wing flutter conditions. The supercritical wing is the second model of this series. It is a rigid semispan model with a rectangular planform and a NASA SC(2)-0414 supercritical airfoil shape. The supercritical wing model was flutter tested on a flexible mount, called the Pitch and Plunge Apparatus, that provides a well-defined, two-degree-of-freedom dynamic system. The supercritical wing model and associated flutter test apparatus is described and experimentally determined wind-off structural dynamic characteristics of the combined rigid model and flexible mount system are included.
Development and verification of NRC`s single-rod fuel performance codes FRAPCON-3 AND FRAPTRAN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beyer, C.E.; Cunningham, M.E.; Lanning, D.D.
1998-03-01
The FRAPCON and FRAP-T code series, developed in the 1970s and early 1980s, are used by the US Nuclear Regulatory Commission (NRC) to predict fuel performance during steady-state and transient power conditions, respectively. Both code series are now being updated by Pacific Northwest National Laboratory to improve their predictive capabilities at high burnup levels. The newest versions of the codes are called FRAPCON-3 and FRAPTRAN. The updates to fuel property and behavior models are focusing on providing best estimate predictions under steady-state and fast transient power conditions up to extended fuel burnups (> 55 GWd/MTU). Both codes will be assessed against a data base independent of the data base used for code benchmarking, and an estimate of code predictive uncertainties will be made based on comparisons to the benchmark and independent data bases.
Geometry of spin coherent states
NASA Astrophysics Data System (ADS)
Chryssomalakos, C.; Guzmán-González, E.; Serrano-Ensástiga, E.
2018-04-01
Spin states of maximal projection along some direction in space are called (spin) coherent, and are, in many respects, the ‘most classical’ available. For any spin s, the spin coherent states form a 2-sphere in the projective Hilbert space \
Stochastic geometry in disordered systems, applications to quantum Hall transitions
NASA Astrophysics Data System (ADS)
Gruzberg, Ilya
2012-02-01
A spectacular success in the study of random fractal clusters and their boundaries in statistical mechanics systems at or near criticality using Schramm-Loewner Evolutions (SLE) naturally calls for extensions in various directions. Can this success be repeated for disordered and/or non-equilibrium systems? Naively, when one thinks about disordered systems and their average correlation functions, one of the very basic assumptions of SLE, the so-called domain Markov property, is lost. Also, in some lattice models of Anderson transitions (the network models) there are no natural clusters to consider. Nevertheless, in this talk I will argue that one can apply the so-called conformal restriction, a notion of stochastic conformal geometry closely related to SLE, to study the integer quantum Hall transition and its variants. I will focus on the Chalker-Coddington network model and will demonstrate that its average transport properties can be mapped to a classical problem where the basic objects are geometric shapes (loosely speaking, the current paths) that obey an important restriction property. At the transition point this allows one to use the theory of conformal restriction to derive exact expressions for point contact conductances in the presence of various non-trivial boundary conditions.
Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Yoganathan, Ajit P
2013-02-01
Numerical models of the mitral valve have been used to elucidate mitral valve function and mechanics. These models have evolved from simple two-dimensional approximations to complex three-dimensional fully coupled fluid structure interaction models. However, to date these models lack direct one-to-one experimental validation. As computational solvers vary considerably, experimental benchmark data are critically important to ensure model accuracy. In this study, a novel left heart simulator was designed specifically for the validation of numerical mitral valve models. Several distinct experimental techniques were collectively performed to resolve mitral valve geometry and hemodynamics. In particular, micro-computed tomography was used to obtain accurate and high-resolution (39 μm voxel) native valvular anatomy, which included the mitral leaflets, chordae tendinae, and papillary muscles. Three-dimensional echocardiography was used to obtain systolic leaflet geometry. Stereoscopic digital particle image velocimetry provided all three components of fluid velocity through the mitral valve, resolved every 25 ms in the cardiac cycle. A strong central filling jet (V ~ 0.6 m/s) was observed during peak systole with minimal out-of-plane velocities. In addition, physiologic hemodynamic boundary conditions were defined and all data were synchronously acquired through a central trigger. Finally, the simulator is a precisely controlled environment, in which flow conditions and geometry can be systematically prescribed and resultant valvular function and hemodynamics assessed. Thus, this work represents the first comprehensive database of high fidelity experimental data, critical for extensive validation of mitral valve fluid structure interaction simulations.
GRILLIX: a 3D turbulence code based on the flux-coordinate independent approach
NASA Astrophysics Data System (ADS)
Stegmeir, Andreas; Coster, David; Ross, Alexander; Maj, Omar; Lackner, Karl; Poli, Emanuele
2018-03-01
The GRILLIX code is presented with which plasma turbulence/transport in various geometries can be simulated in 3D. The distinguishing feature of the code is that it is based on the flux-coordinate independent approach (FCI) (Hariri and Ottaviani 2013 Comput. Phys. Commun. 184 2419; Stegmeir et al 2016 Comput. Phys. Commun. 198 139). Cylindrical or Cartesian grids are used on which perpendicular operators are discretised via standard finite difference methods and parallel operators via a field line tracing and interpolation procedure (field line map). This offers a very high flexibility with respect to geometry, especially a separatrix with X-point(s) or a magnetic axis can be treated easily in contrast to approaches which are based on field aligned coordinates and suffer from coordinate singularities. Aiming finally for simulation of edge and scrape-off layer (SOL) turbulence, an isothermal electrostatic drift-reduced Braginskii model (Zeiler et al 1997 Phys. Plasmas 4 2134) has been implemented in GRILLIX. We present the numerical approach, which is based on a toroidally staggered formulation of the FCI, we show verification of the code with the method of manufactured solutions and show a benchmark based on a TORPEX blob experiment, previously performed by several edge/SOL codes (Riva et al 2016 Plasma Phys. Control. Fusion 58 044005). Examples for slab, circular, limiter and diverted geometry are presented. Finally, the results show that the FCI approach in general and GRILLIX in particular are viable approaches in order to tackle simulation of edge/SOL turbulence in diverted geometry.
Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Yoganathan, Ajit P.
2012-01-01
Numerical models of the mitral valve have been used to elucidate mitral valve function and mechanics. These models have evolved from simple two-dimensional approximations to complex three-dimensional fully coupled fluid structure interaction models. However, to date these models lack direct one-to-one experimental validation. As computational solvers vary considerably, experimental benchmark data are critically important to ensure model accuracy. In this study, a novel left heart simulator was designed specifically for the validation of numerical mitral valve models. Several distinct experimental techniques were collectively performed to resolve mitral valve geometry and hemodynamics. In particular, micro-computed tomography was used to obtain accurate and high-resolution (39 µm voxel) native valvular anatomy, which included the mitral leaflets, chordae tendinae, and papillary muscles. Three-dimensional echocardiography was used to obtain systolic leaflet geometry for direct comparison of resultant leaflet kinematics. Stereoscopic digital particle image velocimetry provided all three components of fluid velocity through the mitral valve, resolved every 25 ms in the cardiac cycle. A strong central filling jet was observed during peak systole, with minimal out-of-plane velocities (V ~ 0.6 m/s). In addition, physiologic hemodynamic boundary conditions were defined and all data were synchronously acquired through a central trigger. Finally, the simulator is a precisely controlled environment, in which flow conditions and geometry can be systematically prescribed and resultant valvular function and hemodynamics assessed. Thus, these data represent the first comprehensive database of high fidelity experimental data, critical for extensive validation of mitral valve fluid structure interaction simulations. PMID:22965640
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Forrest M; Randerson, James T; Thornton, Peter E
2009-12-01
The need to capture important climate feedbacks in general circulation models (GCMs) has resulted in efforts to include atmospheric chemistry and land and ocean biogeochemistry into the next generation of production climate models, called Earth System Models (ESMs). While many terrestrial and ocean carbon models have been coupled to GCMs, recent work has shown that such models can yield a wide range of results (Friedlingstein et al., 2006). This work suggests that a more rigorous set of global offline and partially coupled experiments, along with detailed analyses of processes and comparisons with measurements, is needed. The Carbon-Land Model Intercomparison Project (C-LAMP) was designed to meet this need by providing a simulation protocol and model performance metrics based upon comparisons against best-available satellite- and ground-based measurements (Hoffman et al., 2007). Recently, a similar effort in Europe, called the International Land Model Benchmark (ILAMB) Project, was begun to assess the performance of European land surface models. These two projects will now serve as prototypes for a proposed international land-biosphere model benchmarking activity for those models participating in the IPCC Fifth Assessment Report (AR5). Initially used for model validation for terrestrial biogeochemistry models in the NCAR Community Land Model (CLM), C-LAMP incorporates a simulation protocol for both offline and partially coupled simulations using a prescribed historical trajectory of atmospheric CO2 concentrations. Models are confronted with data through comparisons against AmeriFlux site measurements, MODIS satellite observations, NOAA Globalview flask records, TRANSCOM inversions, and Free Air CO2 Enrichment (FACE) site measurements. Both sets of experiments have been performed using two different terrestrial biogeochemistry modules coupled to the CLM version 3 in the Community Climate System Model version 3 (CCSM3): the CASA model of Fung et al. and the carbon-nitrogen (CN) model of Thornton. Comparisons of the CLM3 offline results against observational datasets have been performed and are described in Randerson et al. (2009). CLM version 4 has been evaluated using C-LAMP, showing improvement in many of the metrics. Efforts are now underway to initiate a Nitrogen-Land Model Intercomparison Project (N-LAMP) to better constrain the effects of the nitrogen cycle in biosphere models. Presented will be new results from C-LAMP for CLM4, initial N-LAMP developments, and the proposed land-biosphere model benchmarking activity.
NASA Astrophysics Data System (ADS)
Cai, Lei; Yuan, Wei; Zhang, Zhou; He, Lin; Chou, Kuo-Chen
2016-11-01
Four popular somatic single nucleotide variant (SNV) calling methods (Varscan, SomaticSniper, Strelka and MuTect2) were carefully evaluated on real whole exome sequencing (WES, depth of ~50X) and ultra-deep targeted sequencing (UDT-Seq, depth of ~370X) data. The four tools returned poor consensus on candidates (only 20% of calls were shared by multiple callers). For both WES and UDT-Seq, MuTect2 and Strelka obtained the largest proportion of COSMIC entries as well as the lowest rate of dbSNP presence and high-alternative-alleles-in-control calls, demonstrating their superior sensitivity and accuracy. Combining different callers does increase the reliability of candidates, but narrows the list down to a very limited range of tumor read depth and variant allele frequency. Calling SNVs on the UDT-Seq data, which were of much higher read depth, discovered additional true-positive variations, despite an even larger growth in false-positive predictions. Our findings not only provide a valuable benchmark for state-of-the-art SNV calling methods, but also shed light on the path towards more accurate SNV identification in the future.
Multi-Disciplinary, Multi-Fidelity Discrete Data Transfer Using Degenerate Geometry Forms
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2016-01-01
In a typical multi-fidelity design process, different levels of geometric abstraction are used for different analysis methods, and transitioning from one phase of design to the next often requires a complete re-creation of the geometry. To maintain consistency between lower-order and higher-order analysis results, Vehicle Sketch Pad (OpenVSP) recently introduced the ability to generate and export several degenerate forms of the geometry, representing the type of abstraction required to perform low- to medium-order analysis for a range of aeronautical disciplines. In this research, the functionality of these degenerate models was extended, so that in addition to serving as repositories for the geometric information that is required as input to an analysis, the degenerate models can also store the results of that analysis mapped back onto the geometric nodes. At the same time, the results are also mapped indirectly onto the nodes of lower-order degenerate models using a process called aggregation, and onto higher-order models using a process called disaggregation. The mapped analysis results are available for use by any subsequent analysis in an integrated design and analysis process. A simple multi-fidelity analysis process for a single-aisle subsonic transport aircraft is used as an example case to demonstrate the value of the approach.
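As a rough illustration of the aggregation step described above (the actual OpenVSP degenerate-model data structures and naming are not reproduced here), a minimal sketch might roll panel-level analysis results up onto the nodes of a lower-order degenerate model by weighted averaging; the weights and the panel-to-node mapping below are assumptions for illustration only.

```python
# Toy aggregation of per-panel results from a higher-order degenerate model onto the
# nodes of a lower-order model, using area-weighted averaging.
def aggregate(panel_values, panel_areas, panel_to_node):
    """Return {node: area-weighted average of the panel values mapped to that node}."""
    sums, weights = {}, {}
    for value, area, node in zip(panel_values, panel_areas, panel_to_node):
        sums[node] = sums.get(node, 0.0) + area * value
        weights[node] = weights.get(node, 0.0) + area
    return {node: sums[node] / weights[node] for node in sums}

# Example: four surface panels feeding two stick-model nodes
print(aggregate([1.2, 1.0, 0.8, 0.9],
                [2.0, 1.0, 1.5, 1.5],
                ["wing_inboard", "wing_inboard", "wing_outboard", "wing_outboard"]))
```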
Nowell, Lisa H.; Ludtke, Amy S.; Mueller, David K.; Scott, Jonathon C.
2012-01-01
Beach water and sediment samples were collected along the Gulf of Mexico coast to assess differences in contaminant concentrations before and after landfall of Macondo-1 well oil released into the Gulf of Mexico from the sinking of the British Petroleum Corporation's Deepwater Horizon drilling platform. Samples were collected at 70 coastal sites between May 7 and July 7, 2010, to document baseline, or "pre-landfall" conditions. A subset of 48 sites was resampled during October 4 to 14, 2010, after oil had made landfall on the Gulf of Mexico coast, called the "post-landfall" sampling period, to determine if actionable concentrations of oil were present along shorelines. Few organic contaminants were detected in water; their detection frequencies generally were low and similar in pre-landfall and post-landfall samples. Only one organic contaminant--toluene--had significantly higher concentrations in post-landfall than pre-landfall water samples. No water samples exceeded any human-health benchmarks, and only one post-landfall water sample exceeded an aquatic-life benchmark--the toxic-unit benchmark for polycyclic aromatic hydrocarbons (PAH) mixtures. In sediment, concentrations of 3 parent PAHs and 17 alkylated PAH groups were significantly higher in post-landfall samples than pre-landfall samples. One pre-landfall sample from Texas exceeded the sediment toxic-unit benchmark for PAH mixtures; this site was not sampled during the post-landfall period. Empirical upper screening-value benchmarks for PAHs in sediment were exceeded at 37 percent of post-landfall samples and 22 percent of pre-landfall samples, but there was no significant difference in the proportion of samples exceeding benchmarks between paired pre-landfall and post-landfall samples. Seven sites had the largest concentration differences between post-landfall and pre-landfall samples for 15 alkylated PAHs. Five of these seven sites, located in Louisiana, Mississippi, and Alabama, had diagnostic geochemical evidence of Macondo-1 oil in post-landfall sediments and tarballs. For trace and major elements in water, analytical reporting levels for several elements were high and variable. No human-health benchmarks were exceeded, although these were available for only two elements. Aquatic-life benchmarks for trace elements were exceeded in 47 percent of water samples overall. The elements responsible for the most exceedances in post-landfall samples were boron, copper, and manganese. Benchmark exceedances in water could be substantially underestimated because some samples had reporting levels higher than the applicable benchmarks (such as cobalt, copper, lead and zinc) and some elements (such as boron and vanadium) were analyzed in samples from only one sampling period. For trace elements in whole sediment, empirical upper screening-value benchmarks were exceeded in 57 percent of post-landfall samples and 40 percent of pre-landfall samples, but there was no significant difference in the proportion of samples exceeding benchmarks between paired pre-landfall and post-landfall samples. Benchmark exceedance frequencies could be conservatively high because they are based on measurements of total trace-element concentrations in sediment. In the less than 63-micrometer sediment fraction, one or more trace or major elements were anthropogenically enriched relative to national baseline values for U.S. streams for all sediment samples except one. 
Sixteen percent of sediment samples exceeded upper screening-value benchmarks for, and were enriched in, one or more of the following elements: barium, vanadium, aluminum, manganese, arsenic, chromium, and cobalt. These samples were evenly divided between the sampling periods. Aquatic-life benchmarks were frequently exceeded along the Gulf of Mexico coast by trace elements in both water and sediment and by PAHs in sediment. For the most part, however, significant differences between pre-landfall and post-landfall samples were limited to concentrations of PAHs in sediment. At five sites along the coast, the higher post-landfall concentrations of PAHs were associated with diagnostic geochemical evidence of Deepwater Horizon Macondo-1 oil.
Azzini, Elena; Maiani, Giuseppe; Turrini, Aida; Intorre, Federica; Lo Feudo, Gabriella; Capone, Roberto; Bottalico, Francesco; El Bilali, Hamid; Polito, Angela
2018-08-01
The aim of this paper is to provide a methodological approach to evaluate the nutritional sustainability of typical agro-food products, representing Mediterranean eating habits and included in the Mediterranean food pyramid. For each group of foods, suitable and easily measurable indicators were identified. Two macro-indicators were used to assess the nutritional sustainability of each product. The first macro-indicator, called 'business distinctiveness', takes into account the application of different regulations and standards regarding quality, safety and traceability as well as the origin of raw materials. The second macro-indicator, called 'nutritional quality', assesses product nutritional quality taking into account the contents of key compounds including micronutrients and bioactive phytochemicals. For each indicator a 0-10 scoring system was set up, with scores from 0 (unsustainable) to 10 (very sustainable), with 5 as a sustainability benchmark value. The benchmark value is the value from which a product can be considered sustainable. A simple formula was developed to produce a sustainability index. The proposed sustainability index could be considered a useful tool to describe both the qualitative and quantitative value of micronutrients and bioactive phytochemicals present in foodstuffs. This methodological approach can also be applied beyond the Mediterranean, to food products in other world regions. © 2018 Society of Chemical Industry.
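A hedged sketch of the 0-10 scoring scheme described above follows. The paper's exact aggregation formula is not given in this abstract, so averaging the indicator scores within each macro-indicator and then across the two macro-indicators is an assumption made only for illustration, as are the example scores.

```python
# Toy sustainability index: each indicator scored 0 (unsustainable) to 10 (very
# sustainable), with 5 as the benchmark value above which a product counts as sustainable.
BENCHMARK = 5.0

def macro_score(indicator_scores):
    """Average of the individual 0-10 indicator scores within one macro-indicator."""
    return sum(indicator_scores) / len(indicator_scores)

def sustainability_index(business_distinctiveness, nutritional_quality):
    """Assumed aggregation: simple mean of the two macro-indicator scores."""
    return 0.5 * (macro_score(business_distinctiveness) + macro_score(nutritional_quality))

# Example: a product scoring well on traceability/origin and on micronutrient content
idx = sustainability_index([7, 6, 8], [6, 7])
print(idx, "sustainable" if idx >= BENCHMARK else "below benchmark")
```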
Efficacy of print advertising for a bipolar disorder study.
Roy, Sumeet; Patel, Shilpa; Sheehan, Kathy Harnett; Janavs, Juris; Sheehan, David
2008-01-01
Using the quality and quantity of responses, this study evaluated the efficacy of advertising in print media for a bipolar disorder clinical trial. We analyzed spending on print advertising, the corresponding quantity of calls generated, and the cost per patient yield at each of the standard study milestones. Quality of calls was based on the attrition rates at each of the study stages. Five hundred and twenty-six calls were received over a 28-month period (February 2004 to June 2006). The largest number of calls was received early in the week (Monday, 25.6%; Tuesday, 23.5%; Wednesday, 18.8%), corresponding with increased advertisements toward the end of the previous week. Of the total calls received, 17.14% were eligible for a screening visit, 10.28% were randomized, 9.14% were evaluable, and 7% completed the 8-week study. Sixty percent of the subjects who attended the screening visit were randomized and 68.5% of the randomized subjects completed the study. The yield from phone calls responding to print advertising for a bipolar disorder study was 7%, costing $1,330 per evaluable subject and $1,725 per completed subject. These figures may be useful benchmarks in recruitment planning and budgeting for participants in clinical trials on bipolar disorder.
Delineating parameter unidentifiabilities in complex models
NASA Astrophysics Data System (ADS)
Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis
2017-03-01
Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call 'multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
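For contrast with the regional 'multiscale sloppiness' measure introduced above, the conventional local diagnostic it generalizes can be sketched as follows: assemble the Fisher information matrix from finite-difference output sensitivities and inspect its eigenvalue spectrum, where near-zero eigenvalues flag locally sloppy (nearly unidentifiable) parameter combinations. The model, function names, and parameter values below are illustrative assumptions, not the authors' algorithm.

```python
# Local Fisher-information diagnostic from output sensitivities J^T J.
import numpy as np

def local_fim(model, theta, t_grid, eps=1e-6):
    """Finite-difference sensitivity matrix S[i, j] = d model(t_i; theta) / d theta_j."""
    y0 = model(t_grid, theta)
    S = np.empty((y0.size, theta.size))
    for j in range(theta.size):
        dtheta = theta.copy()
        dtheta[j] += eps
        S[:, j] = (model(t_grid, dtheta) - y0) / eps
    return S.T @ S

# Toy model y = a*exp(-b t) + c*exp(-d t): the two terms become nearly interchangeable
# when b ~ d, which shows up as a tiny eigenvalue of the Fisher information matrix.
model = lambda t, p: p[0] * np.exp(-p[1] * t) + p[2] * np.exp(-p[3] * t)
theta = np.array([1.0, 0.50, 1.0, 0.51])
eigvals = np.linalg.eigvalsh(local_fim(model, theta, np.linspace(0.0, 10.0, 50)))
print(eigvals / eigvals.max())   # spectrum spanning many orders of magnitude => sloppiness
```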
Mu, John C.; Tootoonchi Afshar, Pegah; Mohiyuddin, Marghoob; Chen, Xi; Li, Jian; Bani Asadi, Narges; Gerstein, Mark B.; Wong, Wing H.; Lam, Hugo Y. K.
2015-01-01
A high-confidence, comprehensive human variant set is critical in assessing accuracy of sequencing algorithms, which are crucial in precision medicine based on high-throughput sequencing. Although recent works have attempted to provide such a resource, they still do not encompass all major types of variants including structural variants (SVs). Thus, we leveraged the massive high-quality Sanger sequences from the HuRef genome to construct by far the most comprehensive gold set of a single individual, which was cross validated with deep Illumina sequencing, population datasets, and well-established algorithms. It was a necessary effort to completely reanalyze the HuRef genome as its previously published variants were mostly reported five years ago, suffering from compatibility, organization, and accuracy issues that prevent their direct use in benchmarking. Our extensive analysis and validation resulted in a gold set with high specificity and sensitivity. In contrast to the current gold sets of the NA12878 or HS1011 genomes, our gold set is the first that includes small variants, deletion SVs and insertion SVs up to a hundred thousand base-pairs. We demonstrate the utility of our HuRef gold set to benchmark several published SV detection tools. PMID:26412485
Brandenburg, Marcus; Hahn, Gerd J
2018-06-01
Process industries typically involve complex manufacturing operations and thus require adequate decision support for aggregate production planning (APP). The need for powerful and efficient approaches to solve complex APP problems persists. Problem-specific solution approaches are advantageous compared to standardized approaches that are designed to provide basic decision support for a broad range of planning problems but are inadequate to optimize under consideration of specific settings. This in turn calls for methods to compare different approaches regarding their computational performance and solution quality. In this paper, we present a benchmarking problem for APP in the chemical process industry. The presented problem focuses on (i) sustainable operations planning involving multiple alternative production modes/routings with specific production-related carbon emissions and the social dimension of varying operating rates, and (ii) integrated campaign planning with production mix/volume on the operational level. The mutual trade-offs between economic, environmental and social factors can be considered as externalized factors (production-related carbon emissions and overtime working hours) as well as internalized ones (resulting costs). We provide data for all problem parameters in addition to a detailed verbal problem statement. We refer to Hahn and Brandenburg [1] for a first numerical analysis based on this benchmarking problem and for future research perspectives arising from it.
Summary of ORSphere critical and reactor physics measurements
NASA Astrophysics Data System (ADS)
Marshall, Margaret A.; Bess, John D.
2017-09-01
In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. This critical configuration has been evaluated. Preliminary results were presented at ND2013. Since then, the evaluation was finalized and judged to be an acceptable benchmark experiment for the International Criticality Safety Benchmark Experiment Project (ICSBEP). Additionally, reactor physics measurements were performed to determine surface button worths, central void worth, delayed neutron fraction, prompt neutron decay constant, fission density and neutron importance. These measurements have been evaluated and found to be acceptable experiments and are discussed in full detail in the International Handbook of Evaluated Reactor Physics Benchmark Experiments. The purpose of this paper is to summarize all the evaluated critical and reactor physics measurements evaluations.
NASA Astrophysics Data System (ADS)
Yang, Xudong; Sun, Lingyu; Zhang, Cheng; Li, Lijun; Dai, Zongmiao; Xiong, Zhenkai
2018-03-01
The application of polymer composites as a substitute for metal is an effective approach to reducing vehicle weight. However, the final performance of composite structures is determined not only by the material types, structural designs and manufacturing process, but also by their mutual constraints. Hence, an integrated "material-structure-process-performance" method is proposed for the conceptual and detail design of composite components. The material selection is based on the principles of composite mechanics, such as the rule of mixtures for laminates. The design of component geometry, dimensions and stacking sequence is determined by parametric modeling and size optimization. The selection of process parameters is based on multi-physical field simulation. The stiffness and modal constraint conditions were obtained from the numerical analysis of the metal benchmark under typical load conditions. The optimal design was found by multi-disciplinary optimization. Finally, the proposed method was validated by an application case of an automotive hatchback using carbon fiber reinforced polymer. Compared with the metal benchmark, the weight of the composite one is reduced by 38.8%; simultaneously, its torsion and bending stiffness increase by 3.75% and 33.23%, respectively, and the first natural frequency also increases by 44.78%.
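For the laminate-level material screening mentioned above, a minimal sketch of the rule-of-mixtures estimate might look as follows; the fibre and matrix moduli and the fibre volume fraction are generic textbook-style placeholders, not values used in the paper.

```python
# Rule-of-mixtures estimates for a unidirectional CFRP ply.
def rule_of_mixtures(E_f, E_m, V_f):
    E1 = V_f * E_f + (1.0 - V_f) * E_m           # longitudinal modulus (Voigt bound)
    E2 = 1.0 / (V_f / E_f + (1.0 - V_f) / E_m)   # transverse modulus (Reuss bound)
    return E1, E2

E1, E2 = rule_of_mixtures(E_f=230e9, E_m=3.5e9, V_f=0.6)   # moduli in Pa
print(f"E1 = {E1 / 1e9:.0f} GPa, E2 = {E2 / 1e9:.1f} GPa")
```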
NASA Astrophysics Data System (ADS)
Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Franzen, P.; Fantz, U.; Minea, T.
2014-02-01
Decreasing the co-extracted electron current while simultaneously keeping the negative ion (NI) current sufficiently high is a crucial issue in the development of the plasma source system for the ITER Neutral Beam Injector. To support finding the best extraction conditions, the 3D Particle-in-Cell Monte Carlo Collision electrostatic code ONIX (Orsay Negative Ion eXtraction) has been developed. Close collaboration with experiments and other numerical models allows performing realistic simulations with relevant input parameters: plasma properties, geometry of the extraction aperture, full 3D magnetic field map, etc. For the first time, ONIX has been benchmarked with the commercial positive ion tracing code KOBRA3D. A very good agreement in terms of the meniscus position and depth has been found. Simulations of NI extraction with different e/NI ratios in the bulk plasma show the high relevance of direct extraction of surface-produced NI in order to obtain extracted NI currents comparable to the experimental results from the BATMAN testbed.
Development of the 3DHZETRN code for space radiation protection
NASA Astrophysics Data System (ADS)
Wilson, John; Badavi, Francis; Slaba, Tony; Reddell, Brandon; Bahadori, Amir; Singleterry, Robert
Space radiation protection requires computationally efficient shield assessment methods that have been verified and validated. The HZETRN code is the engineering design code used for low Earth orbit dosimetric analysis and astronaut record keeping with end-to-end validation to twenty percent in Space Shuttle and International Space Station operations. HZETRN treated diffusive leakage only at the distal surface limiting its application to systems with a large radius of curvature. A revision of HZETRN that included forward and backward diffusion allowed neutron leakage to be evaluated at both the near and distal surfaces. That revision provided a deterministic code of high computational efficiency that was in substantial agreement with Monte Carlo (MC) codes in flat plates (at least to the degree that MC codes agree among themselves). In the present paper, the 3DHZETRN formalism capable of evaluation in general geometry is described. Benchmarking will help quantify uncertainty with MC codes (Geant4, FLUKA, MCNP6, and PHITS) in simple shapes such as spheres within spherical shells and boxes. Connection of the 3DHZETRN to general geometry will be discussed.
NASA Astrophysics Data System (ADS)
Brown, Alexander; Eviston, Connor
2017-02-01
Multiple FEM models of complex eddy current coil geometries were created and validated to calculate the change in impedance due to the presence of a notch. Realistic simulations of eddy current inspections are required for model-assisted probability of detection (MAPOD) studies, inversion algorithms, experimental verification, and tailored probe design for NDE applications. An FEM solver was chosen to model complex real-world situations, including varying probe dimensions and orientations along with complex probe geometries. This will also enable the creation of a probe model library database with variable parameters. Verification and validation were performed using other commercially available eddy current modeling software as well as experimentally collected benchmark data. Data analysis and comparison showed that the created models were able to correctly model the probe and conductor interactions and accurately calculate the change in impedance of several experimental scenarios with acceptable error. The promising results of the models enabled the start of an eddy current probe model library to give experimenters easy access to powerful parameter-based eddy current models for alternate project applications.
Bergmann, Ryan M.; Rowland, Kelly L.; Radnović, Nikola; ...
2017-05-01
In this companion paper to "Algorithmic Choices in WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs" (doi:10.1016/j.anucene.2014.10.039), the WARP Monte Carlo neutron transport framework for graphics processing units (GPUs) is benchmarked against production-level central processing unit (CPU) Monte Carlo neutron transport codes for both performance and accuracy. We compare neutron flux spectra, multiplication factors, runtimes, speedup factors, and costs of various GPU and CPU platforms running either WARP, Serpent 2.1.24, or MCNP 6.1. WARP compares well with the results of the production-level codes, and it is shown that on the newest hardware considered, GPU platforms running WARP are between 0.8 and 7.6 times as fast as CPU platforms running production codes. Also, the GPU platforms running WARP were between 15% and 50% as expensive to purchase and between 80% and 90% as expensive to operate as equivalent CPU platforms performing at an equal simulation rate.
High Fidelity BWR Fuel Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Su Jong
This report describes the Consortium for Advanced Simulation of Light Water Reactors (CASL) work conducted for completion of the Thermal Hydraulics Methods (THM) Level 3 milestone THM.CFD.P13.03: High Fidelity BWR Fuel Simulation. High fidelity computational fluid dynamics (CFD) simulation for a Boiling Water Reactor (BWR) was conducted to investigate the applicability and robustness of BWR closures. As a preliminary study, a CFD model with simplified Ferrule spacer grid geometry of the NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) benchmark was implemented. The performance of the multiphase segregated solver with baseline boiling closures was evaluated. Although the mean values of void fraction and exit quality of the CFD result for BFBT case 4101-61 agreed with the experimental data, the local void distribution was not predicted accurately. The mesh quality was one of the critical factors in obtaining a converged result. The stability and robustness of the simulation were mainly affected by the mesh quality and the combination of BWR closure models. In addition, CFD modeling of the fully detailed spacer grid geometry with mixing vanes is necessary for improving the accuracy of the CFD simulation.
Detailed 3D representations for object recognition and modeling.
Zia, M Zeeshan; Stark, Michael; Schiele, Bernt; Schindler, Konrad
2013-11-01
Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching.
Simulation for analysis and control of superplastic forming. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zacharia, T.; Aramayo, G.A.; Simunovic, S.
1996-08-01
A joint study was conducted by Oak Ridge National Laboratory (ORNL) and the Pacific Northwest Laboratory (PNL) for the U.S. Department of Energy-Lightweight Materials (DOE-LWM) Program. The purpose of the study was to assess and benchmark current modeling capabilities with respect to accuracy of predictions and simulation time. Two simulation platforms were considered in this study: the LS-DYNA3D code installed on ORNL's high-performance computers and the finite element code MARC used at PNL. Both ORNL and PNL performed superplastic forming (SPF) analysis on a standard butter-tray geometry, which was defined by PNL, to better understand the capabilities of the respective models. The specific geometry was selected and formed at PNL, and the experimental results, such as forming time and thickness at specific locations, were provided for comparisons with numerical predictions. Furthermore, comparisons between the ORNL simulation results, using elasto-plastic analysis, and PNL's results, using rigid-plastic flow analysis, were performed.
NASA Astrophysics Data System (ADS)
Kaneko, Masashi; Yasuhara, Hiroki; Miyashita, Sunao; Nakashima, Satoru
2017-11-01
The present study applies all-electron relativistic DFT calculations with the Douglas-Kroll-Hess (DKH) Hamiltonian to ten sets each of Ru and Os compounds. We perform a benchmark investigation of three density functionals (BP86, B3LYP and B2PLYP) using the segmented all-electron relativistically contracted (SARC) basis set against the experimental Mössbauer isomer shifts for the 99Ru and 189Os nuclides. Geometry optimizations at the BP86 level of theory locate the structures in local minima. We calculate the contact density from the wavefunction obtained by a single-point calculation. All functionals show a good linear correlation with the experimental isomer shifts for both 99Ru and 189Os. In particular, the B3LYP functional gives a stronger correlation than the BP86 and B2PLYP functionals. The comparison of the contact density between SARC and the well-tempered basis set (WTBS) indicated that numerical convergence of the contact density cannot be obtained, but that the reproducibility is less sensitive to the choice of basis set. We also estimate the values of ΔR/R, an important nuclear constant, for the 99Ru and 189Os nuclides by using the benchmark results. The sign of the calculated ΔR/R values is consistent with the predicted data for 99Ru and 189Os. We computationally obtain the ΔR/R values of 99Ru and 189Os (36.2 keV) as 2.35×10-4 and -0.20×10-4, respectively, at the B3LYP level with the SARC basis set.
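A minimal sketch of the calibration step implied above, regressing measured isomer shifts against computed contact densities so that the fitted slope is proportional to ΔR/R, might look as follows; the numerical values are placeholders, not data from the study, and the conversion from slope to ΔR/R via nuclear constants is only indicated in a comment.

```python
# Linear calibration: isomer shift (mm/s) versus DFT contact density (a.u.^-3).
import numpy as np

rho = np.array([15030.2, 15031.8, 15033.1, 15034.6])   # computed contact densities, placeholder
delta = np.array([-0.30, -0.05, 0.21, 0.48])            # measured isomer shifts, placeholder

slope, intercept = np.polyfit(rho, delta, 1)
print(f"calibration slope = {slope:.3f} mm/s per a.u.^-3")
# Delta R / R then follows from the slope divided by the nuclear calibration constant
# (transition energy, nuclear charge radius, fundamental constants), not reproduced here.
```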
Wang, L; Lovelock, M; Chui, C S
1999-12-01
To further validate the Monte Carlo dose-calculation method [Med. Phys. 25, 867-878 (1998)] developed at the Memorial Sloan-Kettering Cancer Center, we have performed experimental verification in various inhomogeneous phantoms. The phantom geometries included simple layered slabs, a simulated bone column, a simulated missing-tissue hemisphere, and an anthropomorphic head geometry (Alderson Rando Phantom). The densities of the inhomogeneity range from 0.14 to 1.86 g/cm3, simulating both clinically relevant lunglike and bonelike materials. The data are reported as central axis depth doses, dose profiles, dose values at points of interest, such as points at the interface of two different media and in the "nasopharynx" region of the Rando head. The dosimeters used in the measurement included dosimetry film, TLD chips, and rods. The measured data were compared to that of Monte Carlo calculations for the same geometrical configurations. In the case of the Rando head phantom, a CT scan of the phantom was used to define the calculation geometry and to locate the points of interest. The agreement between the calculation and measurement is generally within 2.5%. This work validates the accuracy of the Monte Carlo method. While Monte Carlo, at present, is still too slow for routine treatment planning, it can be used as a benchmark against which other dose calculation methods can be compared.
Einstein-Cartan calculus for exceptional geometry
NASA Astrophysics Data System (ADS)
Godazgar, Hadi; Godazgar, Mahdi; Nicolai, Hermann
2014-06-01
In this paper we establish and clarify the link between the recently found E7(7) generalised geometric structures, which are based on the SU(8) invariant reformulation of D = 11 supergravity proposed long ago, and newer results obtained in the framework of recent approaches to generalised geometry, where E7(7) duality is built in and manifest from the outset. In making this connection, the so-called generalised vielbein postulate plays a key role. We explicitly show how this postulate can be used to define an E7(7) valued affine connection and an associated covariant derivative, which yields a generalised curvature tensor for the E7(7) based exceptional geometry. The analysis of the generalised vielbein postulate also provides a natural explanation for the emergence of the embedding tensor from higher dimensions.
SU-D-BRD-03: A Gateway for GPU Computing in Cancer Radiotherapy Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, X; Folkerts, M; Shi, F
Purpose: The Graphics Processing Unit (GPU) has become increasingly important in radiotherapy. However, it is still difficult for general clinical researchers to access GPU codes developed by other researchers, and for developers to objectively benchmark their codes. Moreover, it is quite common to see repeated effort spent on developing low-quality GPU codes. The goal of this project is to establish an infrastructure for testing GPU codes, cross-comparing them, and facilitating code distribution in the radiotherapy community. Methods: We developed a system called Gateway for GPU Computing in Cancer Radiotherapy Research (GCR2). A number of GPU codes developed by our group and other developers can be accessed via a web interface. To use the services, researchers first upload their test data or use the standard data provided by our system. Then they can select the GPU device on which the code will be executed. Our system offers all mainstream GPU hardware for code benchmarking purposes. After the code run is complete, the system automatically summarizes and displays the computing results. We also released an SDK to allow developers to build their own algorithm implementations and submit their binary codes to the system. The submitted code is then systematically benchmarked using a variety of GPU hardware and representative data provided by our system. The developers can also compare their codes with others and generate benchmarking reports. Results: The developed system is fully functioning. Through a user-friendly web interface, researchers are able to test various GPU codes. Developers also benefit from this platform by comprehensively benchmarking their codes on various GPU platforms and representative clinical data sets. Conclusion: We have developed an open platform allowing clinical researchers and developers to access GPUs and GPU codes. This development will facilitate the utilization of GPUs in the radiation therapy field.
RERTR-12 Post-irradiation Examination Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rice, Francine; Williams, Walter; Robinson, Adam
2015-02-01
The following report contains the results and conclusions for the post irradiation examinations performed on RERTR-12 Insertion 2 experiment plates. These exams include eddy-current testing to measure oxide growth; neutron radiography for evaluating the condition of the fuel prior to sectioning and determination of fuel relocation and geometry changes; gamma scanning to provide relative measurements for burnup and indication of fuel- and fission-product relocation; profilometry to measure dimensional changes of the fuel plate; analytical chemistry to benchmark the physics burnup calculations; metallography to examine the microstructural changes in the fuel, interlayer and cladding; and microhardness testing to determine the material-property changes of the fuel and cladding.
Atomdroid: a computational chemistry tool for mobile platforms.
Feldt, Jonas; Mata, Ricardo A; Dieterich, Johannes M
2012-04-23
We present the implementation of a new molecular mechanics program designed for use in mobile platforms, the first specifically built for these devices. The software is designed to run on Android operating systems and is compatible with several modern tablet-PCs and smartphones available in the market. It includes molecular viewer/builder capabilities with integrated routines for geometry optimizations and Monte Carlo simulations. These functionalities allow it to work as a stand-alone tool. We discuss some particular development aspects, as well as the overall feasibility of using computational chemistry software packages in mobile platforms. Benchmark calculations show that through efficient implementation techniques even hand-held devices can be used to simulate midsized systems using force fields.
Low-frequency quadrupole impedance of undulators and wigglers
Blednykh, A.; Bassi, G.; Hidaka, Y.; ...
2016-10-25
An analytical expression for the low-frequency quadrupole impedance of undulators and wigglers is derived and benchmarked against beam-based impedance measurements performed at the 3 GeV NSLS-II storage ring. The adopted theoretical model, valid for an arbitrary number of electromagnetic layers with parallel geometry, makes it possible to calculate the quadrupole impedance for arbitrary values of the magnetic permeability μr. Here, in the comparison of the analytical results with the measurements for variable magnet gaps, two limiting cases of the permeability have been studied: the case of perfect magnets (μr → ∞) and the case in which the magnets are fully saturated (μr = 1).
Reconstruction of p̄p events in PANDA
NASA Astrophysics Data System (ADS)
Spataro, S.
2012-08-01
The PANDA experiment will study antiproton-proton and antiproton-nucleus collisions in the HESR complex of the FAIR facility, in a beam momentum range from 2 GeV/c up to 15 GeV/c. In preparation for the experiment, a software framework based on ROOT (PandaRoot) is being developed for the simulation, reconstruction and analysis of physics events, running also on a GRID infrastructure. Detailed geometry descriptions and different realistic reconstruction algorithms are implemented, and are currently used for the preparation of the Technical Design Reports. This contribution reports on the reconstruction capabilities of the PANDA spectrometer, focusing mainly on the performance of the tracking system and the results of the analysis of physics benchmark channels.
NASA Astrophysics Data System (ADS)
Klein, M.; Eifler, D.
2010-07-01
To analyse interactions between single steps of process chains, variations in material properties, especially the microstructure and the resulting mechanical properties, specimens with a tension-screw geometry were manufactured with five process chains. The different process chains as well as their parameters influence the near-surface condition and consequently the fatigue behaviour in a characteristic manner. The cyclic deformation behaviour of these specimens can be benchmarked equivalently with conventional strain measurements as well as with high-precision temperature and electrical resistance measurements. The development of the temperature values provides substantial information on cyclic-load-dependent changes in the microstructure.
Turbulent shear layers in confining channels
NASA Astrophysics Data System (ADS)
Benham, Graham P.; Castrejon-Pita, Alfonso A.; Hewitt, Ian J.; Please, Colin P.; Style, Rob W.; Bird, Paul A. D.
2018-06-01
We present a simple model for the development of shear layers between parallel flows in confining channels. Such flows are important across a wide range of topics from diffusers, nozzles and ducts to urban air flow and geophysical fluid dynamics. The model approximates the flow in the shear layer as a linear profile separating uniform-velocity streams. Both the channel geometry and wall drag affect the development of the flow. The model shows good agreement with both particle image velocimetry experiments and computational turbulence modelling. The simplicity and low computational cost of the model allows it to be used for benchmark predictions and design purposes, which we demonstrate by investigating optimal pressure recovery in diffusers with non-uniform inflow.
Noncommutative geometry and arithmetics
NASA Astrophysics Data System (ADS)
Almeida, P.
2009-09-01
We intend to illustrate how the methods of noncommutative geometry are currently used to tackle problems in class field theory. Noncommutative geometry enables one to think geometrically in situations in which the classical notion of space formed of points is no longer adequate, and thus a "noncommutative space" is needed; a full account of this approach is given in [3] by its main contributor, Alain Connes. Class field theory, i.e., number theory within the realm of Galois theory, is undoubtedly one of the main achievements in arithmetics, leading to an important algebraic machinery; for a modern overview, see [23]. The relationship between noncommutative geometry and number theory is one of the many themes treated in [22, 7-9, 11], a small part of which we will try to put in a more down-to-earth perspective, illustrating through an example what should be called an "application of physics to mathematics"; our only purpose is to introduce nonspecialists to this beautiful area.
Effects of Aortic Irregularities on the Blood Flow
NASA Astrophysics Data System (ADS)
Gutmark-Little, Iris; Prahl-Wittberg, Lisa; van Wyk, Stevin; Mihaescu, Mihai; Fuchs, Laszlo; Backeljauw, Philippe; Gutmark, Ephraim
2013-11-01
Cardiovascular defects characterized by geometrical anomalies of the aorta and their effect on the blood flow are investigated. The flow characteristics change with the aorta geometry and the rheological properties of the blood. Flow characteristics such as wall shear stress often play an important role in the development of vascular disease. In the present study, blood is considered to be non-Newtonian and is modeled using the Quemada model, an empirical model that is valid for different red blood cell loadings. Three patient-specific aortic geometries are studied using Large Eddy Simulations (LES). The three geometries represent malformations that are typical in patient populations having a genetic disorder called Turner syndrome. The results show a highly complex flow with regions of recirculation that are enhanced in two of the three aortas. Moreover, blood flow is diverted, due to the malformations, from the descending aorta to the three side branches of the arch. The geometry having an elongated transverse aorta has larger areas of strong oscillatory wall shear stress.
ERIC Educational Resources Information Center
Stephan, Michelle L.; McManus, George E.; Dickey, Ashley L.; Arb, Maxwell S.
2012-01-01
The process of developing definitions is underemphasized in most mathematics instruction. Investing time in constructing meaning is well worth the return in terms of the knowledge it imparts. In this article, the authors present a third approach to "defining," called "constructive." It involves modifying students' previous understanding of a term…
The Conic Sections in Taxicab Geometry: Some Investigations for High School Students.
ERIC Educational Resources Information Center
Prevost, Fernand J.
1998-01-01
Introduces the taxicab metric which is practical for many applications and helps students pursue interesting investigations while deepening their understanding of familiar topics. Uses the taxicab metric to explore the circle, ellipse, and hyperbola in the plane called TaxiLand. (ASK)
LOCAL ORTHOGONAL CUTTING METHOD FOR COMPUTING MEDIAL CURVES AND ITS BIOMEDICAL APPLICATIONS
Einstein, Daniel R.; Dyedov, Vladimir
2010-01-01
Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546
Tensionless Strings and Supersymmetric Sigma Models: Aspects of the Target Space Geometry
NASA Astrophysics Data System (ADS)
Bredthauer, Andreas
2007-01-01
In this thesis, two aspects of string theory are discussed, tensionless strings and supersymmetric sigma models. The equivalent to a massless particle in string theory is a tensionless string. Even almost 30 years after it was first mentioned, it is still quite poorly understood. We discuss how tensionless strings give rise to exact solutions to supergravity and solve closed tensionless string theory in the ten dimensional maximally supersymmetric plane wave background, a contraction of AdS(5)xS(5) where tensionless strings are of great interest due to their proposed relation to higher spin gauge theory via the AdS/CFT correspondence. For a sigma model, the amount of supersymmetry on its worldsheet restricts the geometry of the target space. For N=(2,2) supersymmetry, for example, the target space has to be bi-hermitian. Recently, with generalized complex geometry, a new mathematical framework was developed that is especially suited to discuss the target space geometry of sigma models in a Hamiltonian formulation. Bi-hermitian geometry is so-called generalized Kaehler geometry but the relation is involved. We discuss various amounts of supersymmetry in phase space and show that this relation can be established by considering the equivalence between the Hamilton and Lagrange formulation of the sigma model. In the study of generalized supersymmetric sigma models, we find objects that favor a geometrical interpretation beyond generalized complex geometry.
2016-05-01
A method of accelerating this computation is to use a customized hardware unit called a field-programmable gate array (FPGA), which allows the implementation of custom logic to accelerate computational workloads. The FPGA fabric contains dedicated resources in addition to the standard programmable logic.
Simulation of Etching in Chlorine Discharges Using an Integrated Feature Evolution-Plasma Model
NASA Technical Reports Server (NTRS)
Hwang, Helen H.; Bose, Deepak; Govindan, T. R.; Meyyappan, M.; Biegel, Bryan (Technical Monitor)
2002-01-01
To better utilize its vast collection of heterogeneous resources that are geographically distributed across the United States, NASA is constructing a computational grid called the Information Power Grid (IPG). This paper describes various tools and techniques that we are developing to measure and improve the performance of a broad class of NASA applications when run on the IPG. In particular, we are investigating the areas of grid benchmarking, grid monitoring, user-level application scheduling, and decentralized system-level scheduling.
Translation and Rotation of Transformation Media under Electromagnetic Pulse
Gao, Fei; Shi, Xihang; Lin, Xiao; Xu, Hongyi; Zhang, Baile
2016-01-01
It is well known that optical media create artificial geometry for light, and curved geometry acts as an effective optical medium. This correspondence originates from the form invariance of Maxwell’s equations, which recently has spawned a booming field called ‘transformation optics’. Here we investigate responses of three transformation media under electromagnetic pulses, and find that pulse radiation can induce unbalanced net force on transformation media, which will cause translation and rotation of transformation media although their final momentum can still be zero. Therefore, the transformation media do not necessarily stay the same after an electromagnetic wave passes through. PMID:27321246
Predicting Long-Range Traversability from Short-Range Stereo-Derived Geometry
NASA Technical Reports Server (NTRS)
Turmon, Michael; Tang, Benyang; Howard, Andrew; Brjaracharya, Max
2010-01-01
This program uses close-range 3D terrain analysis to produce training data sufficient to estimate, based only on its appearance in imagery, the traversability of terrain beyond 3D sensing range. This approach is called learning from stereo (LFS). In effect, the software transfers knowledge from middle distances, where 3D geometry provides training cues, into the far field, where only appearance is available. This is a viable approach because the same obstacle classes, and sometimes the same obstacles, are typically present in the mid-field and the far field. Learning thus extends the effective look-ahead distance of the sensors.
Positivity, Grassmannian geometry and simplex-like structures of scattering amplitudes
NASA Astrophysics Data System (ADS)
Rao, Junjie
2017-12-01
This article revisits and elaborates the significant role of the positive geometry of the momentum-twistor Grassmannian for planar N=4 SYM scattering amplitudes. First we establish the fundamentals of positive Grassmannian geometry for tree amplitudes, including the ubiquitous Plücker coordinates and the representation of reduced Grassmannian geometry. Then we formulate this subject, without making reference to on-shell diagrams and decorated permutations, around these four major facets: 1. Deriving the tree and 1-loop BCFW recursion relations solely from positivity, after introducing the simple building blocks called positive components for a positive matrix. 2. Applying Grassmannian geometry and Plücker coordinates to determine the signs of N2MHV homological identities, which interconnect various Yangian invariants. It reveals that most of the signs are in fact the secret incarnation of the simple 6-term NMHV identity. 3. Deriving the stacking positivity relation, which is powerful for parameterizing matrix representatives in terms of positive variables in the d log form. It will be used with the reduced Grassmannian geometry representation to produce the positive matrix of a given geometric configuration, which is an independent approach besides the combinatoric way involving a sequence of BCFW bridges. 4. Introducing an elegant and highly refined formalism of the BCFW recursion relation for tree amplitudes, which reveals its two-fold simplex-like structures. First, the BCFW contour in terms of (reduced) Grassmannian geometry representatives is delicately dissected into a triangle-shape sum, of which only a small fraction needs to be explicitly identified. Second, this fraction can be further dissected according to different growing modes with corresponding growing parameters. The growing modes possess the shapes of solid simplices of various dimensions, with which an infinite number of BCFW cells can be entirely captured by characteristic objects called fully-spanning cells. We find that for a given k, beyond n = 4k+1 there are no new fully-spanning cells, which signifies the essential termination of the recursive growth of BCFW cells. As n increases beyond the termination point, the BCFW contour simply replicates itself according to the simplex-like patterns, which enables us to master all BCFW cells once and for all without actually identifying most of them.
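As a concrete illustration of the Plücker coordinates and matrix positivity referred to above, the following sketch computes all maximal minors of a k x n matrix and checks whether they are all positive, the defining condition for a point of the positive Grassmannian; the example matrix is arbitrary and not drawn from the paper.

```python
import itertools
import numpy as np

def plucker_coordinates(M):
    """Return the k x k maximal minors of a k x n matrix M,
    indexed by column subsets in lexicographic order."""
    k, n = M.shape
    return [np.linalg.det(M[:, cols]) for cols in itertools.combinations(range(n), k)]

def is_positive(M, tol=1e-12):
    """A matrix represents a point of the positive Grassmannian if all
    its ordered maximal minors are strictly positive."""
    return all(p > tol for p in plucker_coordinates(M))

# Arbitrary 2 x 4 example (k = 2, n = 4), not taken from the paper.
M = np.array([[1.0, 1.0, 0.0, -1.0],
              [0.0, 1.0, 1.0,  1.0]])
print(plucker_coordinates(M))
print("positive:", is_positive(M))
```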
NASA Astrophysics Data System (ADS)
Fischer, P.; Jardani, A.; Lecoq, N.
2018-02-01
In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of the discrete network of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to simulate numerically the water flow in a model and on a deterministic optimization algorithm to invert a set of observed piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces piloted by a set of parameters (matrix transmissivity, and geometry and equivalent transmissivity of the conduits) that are considered as unknown. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties, until it converges to a global network geometry in a solution model able to reproduce the set of data. An uncertainty analysis of this result can be performed from the maps of posterior uncertainties on the network geometry or on the property values. This method has been successfully tested on three different theoretical and simplified study cases with hydraulic response data generated from hypothetical karstic models with increasing complexity of the network geometry and of the matrix heterogeneity.
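The deterministic optimization step can be pictured with a generic least-squares inversion loop: a forward model maps the unknown parameters to simulated heads, and an optimizer adjusts the parameters to reduce the misfit with the observed heads. The sketch below uses a trivial linear stand-in for the forward model rather than the coupled discrete-continuum flow simulator of the paper; all names and numbers are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Trivial linear stand-in for the forward flow model: it maps a parameter
# vector (e.g. log-transmissivities of matrix and conduit segments) to
# simulated heads at the observation wells.
rng = np.random.default_rng(0)
G = rng.normal(size=(12, 4))          # 12 observations, 4 parameters
true_params = np.array([1.0, -0.5, 2.0, 0.3])
observed_heads = G @ true_params + 0.01 * rng.normal(size=12)

def residuals(params):
    """Misfit between simulated and observed heads."""
    return G @ params - observed_heads

# Deterministic (gradient-based) optimization of the parameters.
result = least_squares(residuals, x0=np.zeros(4))
print("estimated parameters:", result.x)
```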
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egan, A; Laub, W
2014-06-15
Purpose: Several shortcomings of the current implementation of the analytic anisotropic algorithm (AAA) may lead to dose calculation errors in highly modulated treatments delivered to highly heterogeneous geometries. Here we introduce a set of dosimetric error predictors that can be applied to a clinical treatment plan and patient geometry in order to identify high-risk plans. Once a problematic plan is identified, the treatment can be recalculated with a more accurate algorithm in order to better assess its viability. Methods: Here we focus on three distinct sources of dosimetric error in the AAA algorithm. First, due to a combination of discrepancies in small-field beam modeling as well as volume averaging effects, dose calculated through small MLC apertures can be underestimated, while that behind small MLC blocks can be overestimated. Second, due to the rectilinear scaling of the Monte Carlo generated pencil beam kernel, energy is not properly transported through heterogeneities near, but not impeding, the central axis of the beamlet. And third, AAA overestimates dose in regions of very low density (< 0.2 g/cm3). We have developed an algorithm to detect the location and magnitude of each scenario within the patient geometry, namely the field-size index (FSI), the heterogeneous scatter index (HSI), and the low-density index (LDI), respectively. Results: The error indices successfully identify deviations between AAA and Monte Carlo dose distributions in simple phantom geometries. The algorithms are currently implemented in the MATLAB computing environment and are able to run on a typical RapidArc head-and-neck geometry in less than an hour. Conclusion: Because these error indices successfully identify each type of error in contrived cases, with sufficient benchmarking this method can be developed into a clinical tool that may help estimate AAA dose calculation errors and indicate when it might be advisable to use Monte Carlo calculations.
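Of the three predictors, the low-density index is the simplest to picture: it amounts to flagging how much of the irradiated volume falls below the 0.2 g/cm3 density threshold quoted above. A minimal sketch follows, written in Python rather than the authors' MATLAB; the array names, the rectangular beam mask, and the phantom are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def low_density_index(density, beam_mask, threshold=0.2):
    """Fraction of voxels inside the beam whose density falls below the
    threshold (g/cm^3) at which AAA is reported to overestimate dose.

    density   : 3-D array of mass densities from the CT (g/cm^3)
    beam_mask : boolean 3-D array marking voxels traversed by the beam
    """
    in_beam = density[beam_mask]
    return np.count_nonzero(in_beam < threshold) / in_beam.size

# Illustrative phantom: a 40x40x40 water cube with a low-density (lung-like) slab.
density = np.full((40, 40, 40), 1.0)
density[:, 15:25, :] = 0.15
beam_mask = np.zeros_like(density, dtype=bool)
beam_mask[18:22, :, 18:22] = True          # crude rectangular beam path
print(f"LDI = {low_density_index(density, beam_mask):.2f}")
```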
Pesch, Georg R; Du, Fei; Baune, Michael; Thöming, Jorg
2017-02-03
Insulator-based dielectrophoresis (iDEP) is a powerful particle analysis technique based on electric field scattering at material boundaries, which can be used, for example, for particle filtration or to achieve chromatographic separation. Typical devices consist of microchannels containing an array of posts, but large-scale application has also been successfully tested. The distribution and magnitude of the generated field gradients, and thus the possibility of trapping particles, depend not only on the applied field strength but also on the material combination of post and surrounding medium and on the boundary shape. In this study we simulate trajectories of single particles under the influence of positive DEP that flow past one single post due to an external fluid flow. We analyze the influence of key parameters (excitatory field strength, fluid flow velocity, particle size, distance from the post, post size, and cross-sectional geometry) on two benchmark criteria, i.e., a critical initial distance from the post so that trapping still occurs (at fixed particle size) and a critical minimum particle size necessary for trapping (at fixed initial distance). Our approach is fundamental and not aimed at finding an optimal geometry of insulating structures; rather, it aims to understand the underlying phenomena of particle trapping. A sensitivity analysis reveals that electric field strength and particle size have the same impact, as have fluid flow velocity and post dimension. Compared to these parameters, the geometry of the post's cross-section (i.e., rhomboidal or elliptical with varying width-to-height or aspect ratio) has a rather small influence but can be used to optimize the trapping efficiency at a specific distance. We hence found an ideal aspect ratio for trapping for each base geometry and initial distance to the tip which is independent of the other parameters. As a result we present design criteria which we believe to be a valuable addition to the existing literature. Copyright © 2016 Elsevier B.V. All rights reserved.
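For readers unfamiliar with the dielectrophoretic force that drives the trapping, the sketch below evaluates the standard time-averaged point-dipole expression F = 2*pi*eps_m*r^3*Re[K]*grad|E_rms|^2 for a spherical particle, where K is the Clausius-Mossotti factor. The permittivities, particle radius, and field-gradient value are placeholders chosen so that Re[K] > 0 (positive DEP), not parameters from this study.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def clausius_mossotti(eps_p, eps_m):
    """Clausius-Mossotti factor for real (lossless) permittivities."""
    return (eps_p - eps_m) / (eps_p + 2.0 * eps_m)

def dep_force(radius, eps_p_rel, eps_m_rel, grad_E2):
    """Time-averaged point-dipole DEP force (N) on a spherical particle.
    grad_E2 is the gradient of the squared rms field, in V^2/m^3."""
    eps_m = eps_m_rel * EPS0
    K = clausius_mossotti(eps_p_rel * EPS0, eps_m)
    return 2.0 * np.pi * eps_m * radius**3 * K * grad_E2

# Placeholder values: 1 um particle, assumed more polarizable than the medium.
print(dep_force(radius=1e-6, eps_p_rel=100.0, eps_m_rel=78.0, grad_E2=1e13))
```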
Validation of CFD/Heat Transfer Software for Turbine Blade Analysis
NASA Technical Reports Server (NTRS)
Kiefer, Walter D.
2004-01-01
I am an intern in the Turbine Branch of the Turbomachinery and Propulsion Systems Division. The division is primarily concerned with experimental and computational methods of calculating heat transfer effects of turbine blades during operation in jet engines and land-based power systems. These include modeling flow in internal cooling passages and film cooling, as well as calculating heat flux and peak temperatures to ensure safe and efficient operation. The branch is research-oriented, emphasizing the development of tools that may be used by gas turbine designers in industry. The branch has been developing a computational fluid dynamics (CFD) and heat transfer code called GlennHT to achieve the computational end of this analysis. The code was originally written in FORTRAN 77 and run on Silicon Graphics machines. However, the code has been rewritten and compiled in FORTRAN 90 to take advantage of more modern computer memory systems. In addition, the branch has made a switch in system architectures from SGIs to Linux PCs. The newly modified code therefore needs to be tested and validated. This is the primary goal of my internship. To validate the GlennHT code, it must be run using benchmark fluid mechanics and heat transfer test cases for which there are either analytical solutions or widely accepted experimental data. From the solutions generated by the code, comparisons can be made to the correct solutions to establish the accuracy of the code. To design and create these test cases, there are many steps and programs that must be used. Before a test case can be run, pre-processing steps must be accomplished. These include generating a grid to describe the geometry, using a software package called GridPro. Also, various files required by the GlennHT code must be created, including a boundary condition file, a file for multi-processor computing, and a file to describe problem and algorithm parameters. A good deal of this internship will be to become familiar with these programs and the structure of the GlennHT code. Additional information is included in the original extended abstract.
ERIC Educational Resources Information Center
Stuart, Stephen N.
2006-01-01
In this article, the author states that architects, musicians and other thoughtful people have, since the time of Pythagoras, been fascinated by various harmonious proportions. One, is the visual harmony attributed to Euclid, called "the golden section". He explores this concept in geometries of one, two and three dimensions. He added, that in…
Subsurface solute transport with one-, two-, and three-dimensional arbitrary shape sources
NASA Astrophysics Data System (ADS)
Chen, Kewei; Zhan, Hongbin; Zhou, Renjie
2016-07-01
Solutions with one-, two-, and three-dimensional arbitrary-shape source geometries are very helpful tools for investigating a variety of contaminant transport problems in geological media. This study proposes a general method to develop new solutions for solute transport in a saturated, homogeneous aquifer (confined or unconfined) with a constant, unilateral groundwater flow velocity. Several typical source geometries, such as arbitrary line sources, vertical and horizontal patch sources, and circular and volumetric sources, are considered. The sources can sit on the upper or lower aquifer boundary to simulate light non-aqueous-phase liquids (LNAPLs) or dense non-aqueous-phase liquids (DNAPLs), respectively, or can be located anywhere inside the aquifer. The new solutions were tested against previous benchmark solutions under special circumstances and were shown to be robust and accurate. Such solutions can also be used as a starting point for the inverse problem of source zone and source geometry identification in the future. The following findings can be obtained from analyzing the solutions. The source geometry, including shape and orientation, generally plays an important role in the concentration profile through the entire transport process. When comparing the inclined line sources with the horizontal line sources, the concentration contours expand considerably along the vertical direction and shrink considerably along the groundwater flow direction. A planar source sitting on the upper aquifer boundary (such as an LNAPL pool) leads to significantly different concentration profiles compared to a planar source positioned in a vertical plane perpendicular to the flow direction. For a volumetric source, its dimension along the groundwater flow direction becomes less important compared to its other two dimensions.
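The superposition idea behind such solutions can be sketched with the textbook instantaneous point-source solution of the 3-D advection-dispersion equation, numerically integrated along a line segment to represent an arbitrarily oriented line source. This is an illustrative stand-in, assuming an instantaneous release and a standard Gaussian-plume form, not the specific solutions derived in the paper; all parameter values are invented.

```python
import numpy as np

def point_source(x, y, z, t, x0, y0, z0, M, v, Dx, Dy, Dz, n=0.3):
    """Instantaneous point-source solution of the 3-D advection-dispersion
    equation in a uniform flow along x (textbook form; porosity n)."""
    denom = 8.0 * n * (np.pi * t) ** 1.5 * np.sqrt(Dx * Dy * Dz)
    arg = ((x - x0 - v * t) ** 2 / (4 * Dx * t)
           + (y - y0) ** 2 / (4 * Dy * t)
           + (z - z0) ** 2 / (4 * Dz * t))
    return M / denom * np.exp(-arg)

def line_source(x, y, z, t, p_start, p_end, M_total, npts=200, **kw):
    """Superpose point sources along an arbitrarily oriented line segment."""
    s = np.linspace(0.0, 1.0, npts)
    pts = np.outer(1 - s, p_start) + np.outer(s, p_end)
    m = M_total / npts
    return sum(point_source(x, y, z, t, *p, m, **kw) for p in pts)

# Inclined line source from (0,0,2) to (5,3,8), observed at one point after 50 days.
c = line_source(30.0, 1.0, 5.0, 50.0, np.array([0, 0, 2.0]), np.array([5, 3, 8.0]),
                M_total=100.0, v=0.5, Dx=1.0, Dy=0.1, Dz=0.01)
print(c)
```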
Three- and four-body nonadditivities in nucleic acid tetramers: a CCSD(T) study.
Pitonák, M; Neogrády, P; Hobza, P
2010-02-14
Three- and four-body nonadditivities in the uracil tetramer (in a DNA-like geometry) and the GC step (in the crystal geometry) were investigated at various levels of wave-function theory: HF, MP2, MP3, L-CCD, CCSD and CCSD(T). All of the calculations were performed using the 6-31G**(0.25,0.15) basis set, whereas the HF, MP2 and MP3 nonadditivities were, for the sake of comparison, also determined with the much larger aug-cc-pVDZ basis set. The HF and MP2 levels do not provide reliable values for many-body terms, making it necessary to go beyond the MP2 level. The benchmark CCSD(T) three- and four-body nonadditivities are reasonably well reproduced at the MP3 level, and almost quantitative agreement is obtained (fortuitously) either at the L-CCD level or as an average of the MP3 and CCSD results. Reliable values of many-body terms (especially their higher-order correlation contributions) are obtained already with the rather small 6-31G**(0.25,0.15) basis set. The four-body term is much smaller than the three-body terms, but it is definitely not negligible; e.g., in the case of the GC step it represents about 16% of all of the three- and four-body terms. While investigating the geometry dependence of many-body terms for the GG step at the MP3/6-31G**(0.25,0.15) level, we found that it is necessary to include at least three-body terms in the determination of optimal geometry parameters.
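The three-body nonadditivity discussed above follows from the standard many-body expansion of the interaction energy: the total trimer interaction energy minus the sum of the three pairwise interaction energies. The sketch below evaluates that difference from monomer, dimer and trimer total energies; the numbers are placeholders, not values from the paper, and counterpoise corrections are ignored.

```python
from itertools import combinations

def three_body_nonadditivity(E):
    """Three-body term of the many-body expansion for a trimer ABC.

    E maps frozensets of monomer labels to total energies, e.g.
    E[frozenset('A')], E[frozenset('AB')], E[frozenset('ABC')].
    All energies are assumed to be in the same units and basis."""
    monomers = ['A', 'B', 'C']
    e1 = {m: E[frozenset(m)] for m in monomers}
    # Pairwise (two-body) interaction energies.
    e2 = {pair: E[frozenset(pair)] - sum(e1[m] for m in pair)
          for pair in combinations(monomers, 2)}
    total_int = E[frozenset('ABC')] - sum(e1.values())
    return total_int - sum(e2.values())

# Placeholder energies (hartree), not values from the paper.
E = {frozenset('A'): -414.10, frozenset('B'): -414.11, frozenset('C'): -414.12,
     frozenset('AB'): -828.225, frozenset('AC'): -828.235, frozenset('BC'): -828.245,
     frozenset('ABC'): -1242.372}
print(f"three-body term: {three_body_nonadditivity(E):.4f} hartree")
```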
NASA Astrophysics Data System (ADS)
Spotts, Nathan
As modern trends in commercial aircraft design move toward high-bypass-ratio fan systems of increasing diameter with shorter, nonaxisymmetric nacelle geometries, inlet distortion is becoming common in all operating regimes. The distortion may induce aerodynamic instabilities within the fan system, leading to catastrophic damage to fan blades, should the surge margin be exceeded. Even in the absence of system instability, the heterogeneity of the flow affects aerodynamic performance significantly. Therefore, an understanding of fan-distortion interaction is critical to aircraft engine system design. This thesis research elucidates the complex fluid dynamics and fan-distortion interaction by means of computational fluid dynamics (CFD) modeling of a complete engine fan system; including rotor, stator, spinner, nacelle and nozzle; under conditions typical of those encountered by commercial aircraft. The CFD simulations, based on a Reynolds-averaged Navier-Stokes (RANS) approach, were unsteady, three-dimensional, and of a full-annulus geometry. A thorough, systematic validation has been performed for configurations from a single passage of a rotor to a full-annulus system by comparing the predicted flow characteristics and aerodynamic performance to those found in literature. The original contributions of this research include the integration of a complete engine fan system, based on the NASA rotor 67 transonic stage and representative of the propulsion systems in commercial aircraft, and a benchmark case for unsteady RANS simulations of distorted flow in such a geometry under realistic operating conditions. This study is unique in that the complex flow dynamics, resulting from fan-distortion interaction, were illustrated in a practical geometry under realistic operating conditions. For example, the compressive stage is shown to influence upstream static pressure distributions and thus suppress separation of flow on the nacelle. Knowledge of such flow physics is valuable for engine system design.
A linguistic geometry for space applications
NASA Technical Reports Server (NTRS)
Stilman, Boris
1994-01-01
We develop a formal theory, the so-called Linguistic Geometry, in order to discover the inner properties of human expert heuristics, which were successful in a certain class of complex control systems, and to apply them to different systems. This research relies on the formalization of search heuristics of highly skilled human experts, which allow for the decomposition of a complex system into a hierarchy of subsystems, and thus solve intractable problems by reducing the search. The hierarchy of subsystems is represented as a hierarchy of formal attribute languages. This paper includes a formal survey of the Linguistic Geometry and a new example of a solution of an optimization problem for space robotic vehicles. This example includes the actual generation of the hierarchy of languages and some details of trajectory generation, and demonstrates the drastic reduction of search in comparison with conventional search algorithms.
Poincaré's philosophy of geometry, or does geometric conventionalism deserve its name?
NASA Astrophysics Data System (ADS)
Zahar, E. G.
Two main aims are pursued in this paper. The first is to show that, in mathematical geometry, Poincaré was a conventionalist who rejected all forms of synthetic a priori geometric intuition. He moreover followed a unified heuristic based on the study of certain groups of Möbius transformations. This method was informed by his work on the theory of Fuchsian functions; it yielded two models of hyperbolic geometry: the disk model and the Poincaré half-plane, which are connected by a Möbius transformation. From these group-theoretic considerations Poincaré derived an expression for the Riemannian distance. I secondly defend the thesis that, in physical geometry, Poincaré was a structural realist whose so-called conventionalism was epistemological, not ontological. Here he started directly from a Riemannian metric together with an associated universal field. He adopted a realist attitude towards both the field and that geometry which is most coherently integrated into some highly unified and empirically confirmed hypothesis. More generally, he looked upon the degree of unity of any system as an index of its verisimilitude. I finally show that, by Einstein's own admission, GTR is compatible with Poincaré's epistemological theses.
David, Ortiz P; Sierra-Sosa, Daniel; Zapirain, Begoña García
2017-01-06
Pressure ulcers have become a subject of study in recent years due to the high costs of treatment and the decreased quality of life of patients. These chronic wounds are related to the global increase in life expectancy, with geriatric and physically disabled patients being the groups principally affected by this condition. Diagnosis and treatment of these injuries by medical personnel usually take weeks or even months. Using non-invasive techniques, such as image processing, it is possible to analyse the ulcers and aid in their diagnosis. This paper proposes a novel technique for image segmentation based on contrast changes, using synthetic frequencies obtained from the grayscale value available in each pixel of the image. These synthetic frequencies are calculated using the model of energy density of an electric field to describe a relation between a constant density and the image amplitude at a pixel. A toroidal geometry is used to decompose the image into different contrast levels by varying the synthetic frequencies. The decomposed image is then binarized by applying Otsu's threshold, yielding the contours that describe the contrast variations. Morphological operations are used to obtain the desired segment of the image. The proposed technique is evaluated on a database of 51 pressure ulcer images provided by the Centre IGURCO. The segmentation of these pressure ulcer images can aid in their diagnosis and treatment. To provide evidence of the technique's performance, digital image correlation was used as a measure, comparing the segments obtained with the methodology to the real segments. The proposed technique is compared with two benchmarked algorithms. The technique achieves an average correlation of 0.89 with a variation of ±0.1 and a computation time of 9.04 seconds. The methodology presents better segmentation results than the benchmarked algorithms, using less computation time and without the need for an initial condition.
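The synthetic-frequency decomposition is specific to this work, but the final steps it describes, Otsu binarization followed by morphological clean-up, can be sketched with standard image-processing tools. The function below is an illustrative stand-in using scikit-image; the structuring-element sizes and the synthetic test image are assumptions, not the published pipeline.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_closing, binary_opening, disk
from skimage.measure import label

def segment_contrast_map(gray):
    """Binarize a grayscale contrast map with Otsu's threshold and clean the
    result with morphological opening/closing, keeping the largest region."""
    mask = gray > threshold_otsu(gray)
    mask = binary_opening(mask, disk(3))      # remove small speckles
    mask = binary_closing(mask, disk(5))      # fill small holes
    labels = label(mask)
    if labels.max() == 0:
        return mask
    sizes = np.bincount(labels.ravel())[1:]   # ignore background label 0
    return labels == (np.argmax(sizes) + 1)

# Illustrative use on a synthetic image with a bright elliptical "wound".
yy, xx = np.mgrid[0:200, 0:200]
gray = np.exp(-(((xx - 100) / 40.0) ** 2 + ((yy - 90) / 25.0) ** 2))
gray += 0.1 * np.random.default_rng(1).normal(size=gray.shape)
print(segment_contrast_map(gray).sum(), "pixels in the segmented region")
```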
Correlation consistent basis sets for lanthanides: The atoms La–Lu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Qing; Peterson, Kirk A., E-mail: kipeters@wsu.edu
Using the 3rd-order Douglas-Kroll-Hess (DKH3) Hamiltonian, all-electron correlation consistent basis sets of double-, triple-, and quadruple-zeta quality have been developed for the lanthanide elements La through Lu. Basis sets designed for the recovery of valence correlation (defined here as 4f5s5p5d6s), cc-pVnZ-DK3, and outer-core correlation (valence + 4s4p4d), cc-pwCVnZ-DK3, are reported (n = D, T, and Q). Systematic convergence of both Hartree-Fock and correlation energies towards their respective complete basis set (CBS) limits is observed. Benchmark calculations of the first three ionization potentials (IPs) of La through Lu are reported at the DKH3 coupled cluster singles and doubles with perturbative triples, CCSD(T), level of theory, including effects of correlation down through the 4s electrons. Spin-orbit coupling is treated at the 2-component HF level. After extrapolation to the CBS limit, the average errors with respect to experiment were just 0.52, 1.14, and 4.24 kcal/mol for the 1st, 2nd, and 3rd IPs, respectively, compared to the average experimental uncertainties of 0.03, 1.78, and 2.65 kcal/mol, respectively. The new basis sets are also used in CCSD(T) benchmark calculations of the equilibrium geometries, atomization energies, and heats of formation for Gd2, GdF, and GdF3. Except for the equilibrium geometry and harmonic frequency of GdF, which are accurately known from experiment, all other calculated quantities represent significant improvements compared to the existing experimental quantities. With estimated uncertainties of about ±3 kcal/mol, the 0 K atomization energies (298 K heats of formation) are calculated to be (all in kcal/mol): 33.2 (160.1) for Gd2, 151.7 (-36.6) for GdF, and 447.1 (-295.2) for GdF3.
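The systematic convergence toward the CBS limit mentioned above is typically exploited with simple two-point extrapolation formulas. The sketch below assumes the common E_corr(n) = E_CBS + A*n^-3 form for the correlation energy (an assumption for illustration, not necessarily the scheme used in this work); the energies are placeholders.

```python
# Two-point CBS extrapolation of the correlation energy, assuming
# E_corr(n) = E_CBS + A * n**-3, where n is the cardinal number of the
# basis set (3 for triple-zeta, 4 for quadruple-zeta).
def cbs_two_point(e_n, e_m, n, m):
    """Extrapolate correlation energies e_n (cardinal n) and e_m (cardinal m)."""
    return (e_n * n**3 - e_m * m**3) / (n**3 - m**3)

e_tz, e_qz = -1.2345, -1.2567          # hypothetical correlation energies (hartree)
print(f"E_corr(CBS) ~ {cbs_two_point(e_tz, e_qz, 3, 4):.4f} hartree")
```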
Noncovalent interactions between cisplatin and graphene prototypes.
Cuevas-Flores, Ma Del Refugio; Garcia-Revilla, Marco Antonio; Bartolomei, Massimiliano
2018-01-15
Cisplatin (CP) has been widely used as an anticancer drug for more than 30 years despite severe side effects due to its low bioavailability and poor specificity. For this reason, it is paramount to study and design novel nanomaterials to be used as vectors capable of effectively delivering the drug to the biological target. The CP square-planar geometry, together with its low water solubility, suggests that it could be easily adsorbed on 2D graphene nanostructures through the interaction with the related highly conjugated π-electron system. In this work, pyrene has first been selected as the minimum approximation to the graphene plane, which makes it possible to properly study the noncovalent interactions determining the CP adsorption. In particular, electronic structure calculations at the MP2C and DFT-SAPT levels of theory have provided benchmark interaction energies for some limiting configurations of the CP-pyrene complex and have allowed us to assess the role of the different contributions to the total interaction: it has been found that the parallel configurations of the aggregate are mainly stabilized around the minimum region by dispersion, in a similar way as for complexes bonded through π-π interactions. The benchmark interaction energies have then been used to test corresponding estimates obtained within the less expensive DFT framework, in order to validate an optimal exchange-correlation functional that includes corrections to properly account for the dispersion contribution. Reliable DFT interaction energies have therefore been obtained for CP adsorbed on graphene prototypes of increasing size, ranging from coronene and ovalene up to C150H30. Finally, DFT geometry optimizations and frequency calculations have also allowed a reliable estimation of the adsorption enthalpy of CP on graphene, which is found to be particularly favorable (about -20 kcal/mol at 298 K and 1 bar), being twice that estimated for the corresponding benzene adsorption. © 2017 Wiley Periodicals, Inc.
Bauer, S M; Lane, J P; Stone, V I; Unnikrishnan, N
1998-01-01
The Rehabilitation Engineering Research Center on Technology Evaluation and Transfer is exploring how the end users of assistive technology devices define the ideal device. This work is called the Consumer Ideal Product program. In this work, end users identify and establish the importance of a broad range of product design features, along with the related product support and service provided by manufacturers and vendors. This paper describes a method for systematically transforming end-user-defined requirements into a form that is useful and accessible to product designers, manufacturers, and vendors. In particular, product requirements, importance weightings, and metrics are developed from the Consumer Ideal Product battery charger outcomes. Six battery chargers are benchmarked against these product requirements using the metrics developed. The results suggest improvements for each product's design, service, and support. Overall, the six chargers meet roughly 45-75% of the ideal product's requirements. Many of the suggested improvements are low-cost changes that, if adopted, could provide companies a competitive advantage in the marketplace.
Hierarchical Artificial Bee Colony Algorithm for RFID Network Planning Optimization
Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong
2014-01-01
This paper presents a novel optimization algorithm, namely hierarchical artificial bee colony optimization (HABC), to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species are aggregated from the subpopulations of the lower level. At the bottom level, each subpopulation employing the canonical ABC method searches its part of the dimensions for the optimum in parallel, and these parts are assembled into a complete solution for the upper level. At the same time, a comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most of the chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. HABC is then used to solve the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP in terms of optimization accuracy and computational robustness. PMID:24592200
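The canonical ABC search that each bottom-level subpopulation is described as running can be sketched as follows: in the employed-bee phase, each food source is perturbed along one randomly chosen dimension toward another source and the move is kept only if it improves the objective. This is generic ABC pseudocode rendered in Python, not the authors' HABC implementation; the sphere function and all constants are illustrative.

```python
import random

def abc_employed_phase(foods, objective, lower, upper):
    """One employed-bee phase of the canonical ABC algorithm (minimization).
    foods is a list of candidate solutions (lists of floats)."""
    for i, x in enumerate(foods):
        k = random.randrange(len(foods))
        while k == i:
            k = random.randrange(len(foods))
        j = random.randrange(len(x))                      # dimension to perturb
        phi = random.uniform(-1.0, 1.0)
        candidate = list(x)
        candidate[j] = min(max(x[j] + phi * (x[j] - foods[k][j]), lower), upper)
        if objective(candidate) < objective(x):           # greedy selection
            foods[i] = candidate

# Illustrative use on the sphere benchmark function.
random.seed(0)
sphere = lambda v: sum(c * c for c in v)
foods = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(10)]
for _ in range(200):
    abc_employed_phase(foods, sphere, -5.0, 5.0)
print(min(sphere(f) for f in foods))
```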
NASA Astrophysics Data System (ADS)
Chung, Kyung Tae; Lee, Jong Woo
1989-08-01
A connection which is both Einstein and semisymmetric is called an SE connection, and a generalized n-dimensional Riemannian manifold on which the differential geometric structure is imposed by g λμ through an SE connection is called an n-dimensional SE manifold and denoted by SEXn. This paper is a direct continuation of earlier work. In this paper, we derive the generalized fundamental equations for the hypersubmanifold of SEXn, including generalized Gauss formulas, generalized Weingarten equations, and generalized Gauss-Codazzi equations.
Exploring the potential energy landscape over a large parameter-space
NASA Astrophysics Data System (ADS)
He, Yang-Hui; Mehta, Dhagash; Niemerg, Matthew; Rummel, Markus; Valeanu, Alexandru
2013-07-01
Large polynomial systems with coefficient parameters are ubiquitous, and solving them constitutes an important class of problems. We demonstrate the computational power of two methods — a symbolic one called the Comprehensive Gröbner basis and a numerical one called coefficient-parameter polynomial continuation — applied to studying both potential energy landscapes and a variety of questions arising from geometry and phenomenology. Particular attention is paid to an example in flux compactification where important physical quantities such as the gravitino and moduli masses and the string coupling can be efficiently extracted.
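A minimal illustration of the symbolic route mentioned above: computing a Gröbner basis of the critical-point equations of a small toy potential with SymPy. This is an ordinary Gröbner basis, not the Comprehensive Gröbner basis used for parametric coefficients, and the potential is invented rather than a flux-compactification example.

```python
import sympy as sp

x, y = sp.symbols('x y')
# Toy "potential"; its critical points are the common zeros of the gradient.
V = (x**2 + y**2 - 1)**2 + (x - y)**2
system = [sp.diff(V, x), sp.diff(V, y)]

# A Groebner basis in lexicographic order puts the system in triangular form,
# from which the critical points can be obtained by back-substitution.
G = sp.groebner(system, x, y, order='lex')
print(G)
print(sp.solve(system, [x, y], dict=True))
```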
Validation of electronic structure methods for isomerization reactions of large organic molecules.
Luo, Sijie; Zhao, Yan; Truhlar, Donald G
2011-08-14
In this work the ISOL24 database of isomerization energies of large organic molecules presented by Huenerbein et al. [Phys. Chem. Chem. Phys., 2010, 12, 6940] is updated, resulting in the new benchmark database called ISOL24/11, and this database is used to test 50 electronic model chemistries. To accomplish the update, the very expensive and highly accurate CCSD(T)-F12a/aug-cc-pVDZ method is first exploited to investigate a six-reaction subset of the 24 reactions, and by comparison of various methods with this benchmark, MCQCISD-MPW is confirmed to be of high accuracy. The final ISOL24/11 database is composed of six reaction energies calculated by CCSD(T)-F12a/aug-cc-pVDZ and 18 calculated by MCQCISD-MPW. We then tested 40 single-component density functionals (both local and hybrid), eight doubly hybrid functionals, and two other methods against ISOL24/11. It is found that the SCS-MP3/CBS method, which was used as the benchmark for the original ISOL24, has an MUE of 1.68 kcal mol(-1), which is close to or larger than that of some of the best DFT methods tested. Using the new benchmark, we find ωB97X-D and MC3MPWB to be the best single-component and doubly hybrid functionals, respectively, with PBE0-D3 and MC3MPW performing almost as well. The best single-component density functionals without molecular mechanics dispersion-like terms are M08-SO, M08-HX, M05-2X, and M06-2X. The best single-component density functionals without Hartree-Fock exchange are M06-L-D3 when MM terms are included and M06-L when they are not.
Assessment of composite motif discovery methods.
Klepper, Kjetil; Sandve, Geir K; Abul, Osman; Johansen, Jostein; Drablos, Finn
2008-02-26
Computational discovery of regulatory elements is an important area of bioinformatics research, and more than a hundred motif discovery methods have been published. Traditionally, most of these methods have addressed the problem of single motif discovery - discovering binding motifs for individual transcription factors. In higher organisms, however, transcription factors usually act in combination with nearby bound factors to induce specific regulatory behaviours. Hence, recent focus has shifted from single motifs to the discovery of sets of motifs bound by multiple cooperating transcription factors, so-called composite motifs or cis-regulatory modules. Given the large number and diversity of methods available, independent assessment of methods becomes important. Although there have been several benchmark studies of single motif discovery, no similar studies have previously been conducted concerning composite motif discovery. We have developed a benchmarking framework for composite motif discovery and used it to evaluate the performance of eight published module discovery tools. Benchmark datasets were constructed based on real genomic sequences containing experimentally verified regulatory modules, and the module discovery programs were asked both to predict the locations of these modules and to specify the single motifs involved. To aid the programs in their search, we provided position weight matrices corresponding to the binding motifs of the transcription factors involved. In addition, selections of decoy matrices were mixed with the genuine matrices on one dataset to test the response of the programs to varying levels of noise. Although some of the methods tested tended to score somewhat better than others overall, there were still large variations between individual datasets and no single method performed consistently better than the rest in all situations. The variation in performance on individual datasets also shows that the new benchmark datasets represent a suitable variety of challenges to most methods for module discovery.
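Since the benchmark supplied position weight matrices to the programs, it may help to recall how a PWM is typically used to score candidate binding sites: each window of the sequence is scored by summing per-position log-odds values. The sketch below uses an invented four-position count matrix, not one of the matrices from the benchmark.

```python
import math

BASES = "ACGT"

def log_odds(counts, background=0.25, pseudo=0.5):
    """Convert a count matrix (list of {base: count} per position) into a
    log-odds position weight matrix."""
    pwm = []
    for col in counts:
        total = sum(col.values()) + 4 * pseudo
        pwm.append({b: math.log2((col.get(b, 0) + pseudo) / total / background)
                    for b in BASES})
    return pwm

def best_hit(pwm, seq):
    """Return (score, offset) of the best-scoring window of len(pwm) in seq."""
    w = len(pwm)
    scores = [(sum(pwm[i][seq[o + i]] for i in range(w)), o)
              for o in range(len(seq) - w + 1)]
    return max(scores)

# Invented 4-position motif and a short test sequence.
counts = [{'A': 8, 'C': 1, 'G': 1, 'T': 0}, {'A': 0, 'C': 9, 'G': 1, 'T': 0},
          {'A': 1, 'C': 0, 'G': 8, 'T': 1}, {'A': 0, 'C': 0, 'G': 1, 'T': 9}]
print(best_hit(log_odds(counts), "TTACGTGACGTACG"))
```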
SIMYAR: a cable-yarding simulation model.
R.J. McGaughey; R.H. Twito
1987-01-01
A skyline-logging simulation model designed to help planners evaluate potential yarding options and alternative harvest plans is presented. The model, called SIMYAR, uses information about the timber stand, yarding equipment, and unit geometry to estimate yarding cost and productivity for a particular operation. The costs of felling, bucking, loading, and hauling are...
ERIC Educational Resources Information Center
Canadas, Maria; Molina, Marta; Gallardo, Sandra; Martinez-Santaolalla, Manuel; Penas, Maria
2010-01-01
Making constructions with paper is called "origami" and is considered an art. The objective for many fans of origami is to design new figures never constructed before. From the point of view of mathematics education, origami is an interesting didactic activity. In this article, the authors propose to help High School students understand new…
Relational Information Management Data-Base System
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Erickson, W. J.; Gray, F. P.; Comfort, D. L.; Wahlstrom, S. O.; Von Limbach, G.
1985-01-01
A DBMS with several features particularly useful to scientists and engineers. RIM5 interfaces with any application program written in a language capable of calling FORTRAN routines. Applications include data management for Space Shuttle Columbia tiles, aircraft flight tests, high-pressure piping, atmospheric chemistry, census, university registration, CAD/CAM geometry, and civil-engineering dam construction.
Multi-Party, Whole-Body Interactions in Mathematical Activity
ERIC Educational Resources Information Center
Ma, Jasmine Y.
2017-01-01
This study interrogates the contributions of multi-party, whole-body interactions to students' collaboration and negotiation of mathematics ideas in a task setting called walking scale geometry, where bodies in interaction became complex resources for students' emerging goals in problem solving. Whole bodies took up overlapping roles representing…
ERIC Educational Resources Information Center
Cepic, Mojca
2008-01-01
Light beams in wavy unclear water, also called underwater rays, and caustic networks of light formed at the bottom of shallow water are two faces of a single phenomenon. Derivation of the caustic using only simple geometry, Snell's law and simple derivatives accounts for observations such as the existence of the caustic network on vertical walls,…
On the fundamental unsteady fluid dynamics of shock-induced flows through ducts
NASA Astrophysics Data System (ADS)
Mendoza, Nicole Renee
Unsteady shock wave propagation through ducts has many applications, ranging from blast wave shelter design to advanced high-speed propulsion systems. The research objective of this study was improved fundamental understanding of the transient flow structures during unsteady shock wave propagation through rectangular ducts with varying cross-sectional area. This research focused on the fluid dynamics of the unsteady shock-induced flow fields, with an emphasis placed on understanding and characterizing the mechanisms behind flow compression (wave structures), flow induction (via shock waves), and enhanced mixing (via shock-induced viscous shear layers). A theoretical and numerical (CFD) parametric study was performed, in which the effects of these parameters on the unsteady flow fields were examined: incident shock strength, area ratio, and viscous mode (inviscid, laminar, and turbulent). Two geometries were considered: the backward-facing step (BFS) geometry, which provided a benchmark and conceptual framework, and the splitter plate (SP) geometry, which was a canonical representation of the engine flow path. The theoretical analysis was inviscid, quasi-1D and quasi-steady, and the computational analysis was fully 2D, time-accurate, and viscous. The theory provided the wave patterns and primary wave strengths for the BFS geometry, and the simulations verified the wave patterns and quantified the effects of geometry and viscosity. It was shown that the theoretical wave patterns on the BFS geometry can be used to systematically analyze the transient, 2D, viscous flows on the SP geometry. This work also highlighted the importance and the role of oscillating shock and expansion waves in the development of these unsteady flows. The potential for both upstream and downstream flow induction was addressed. Positive upstream flow induction was not found in this study due to the persistent formation of an upstream-moving shock wave. Enhanced mixing was addressed by examining the evolution of the unsteady shear layer, its instability, and their effects on the flow field. The instability always appeared after the reflected shock interaction, and was exacerbated in the laminar cases and damped out in the turbulent cases. This research provided new understanding of the long-term evolution of these confined flows. Lastly, the turbulent work is one of the few turbulent studies of these flows.
Realistic sampling of amino acid geometries for a multipolar polarizable force field
Hughes, Timothy J.; Cardamone, Salvatore
2015-01-01
The Quantum Chemical Topological Force Field (QCTFF) uses the machine learning method kriging to map atomic multipole moments to the coordinates of all atoms in the molecular system. It is important that kriging operates on relevant and realistic training sets of molecular geometries. Therefore, we sampled single amino acid geometries directly from protein crystal structures stored in the Protein Databank (PDB). This sampling enhances the conformational realism (in terms of dihedral angles) of the training geometries. However, these geometries can be fraught with inaccurate bond lengths and valence angles due to artefacts of the refinement process of the X-ray diffraction patterns, combined with experimentally invisible hydrogen atoms. This is why we developed a hybrid PDB/nonstationary normal modes (NM) sampling approach called PDB/NM. This method is superior to standard NM sampling, which captures only geometries optimized from the stationary points of single amino acids in the gas phase. Indeed, PDB/NM combines the sampling of relevant dihedral angles with chemically correct local geometries. Geometries sampled using PDB/NM were used to build kriging models for alanine and lysine, and their prediction accuracy was compared to models built from geometries sampled from three other sampling approaches. Bond length variation, as opposed to variation in dihedral angles, puts pressure on prediction accuracy, potentially lowering it. Hence, the larger coverage of dihedral angles of the PDB/NM method does not deteriorate the predictive accuracy of kriging models, compared to the NM sampling around local energetic minima used so far in the development of QCTFF. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26235784
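Kriging is equivalent to Gaussian-process regression, so the mapping from sampled geometries to an atomic multipole moment can be illustrated with an off-the-shelf GP regressor. The sketch below is purely illustrative and is not the QCTFF code: the four "dihedral" features and the synthetic target standing in for a multipole moment are assumptions of this example.

```python
# Illustrative kriging (Gaussian-process regression) sketch, not the QCTFF code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X_train = rng.uniform(-np.pi, np.pi, size=(200, 4))   # stand-in for sampled dihedral angles
y_train = np.cos(X_train).sum(axis=1) + 0.01 * rng.standard_normal(200)  # stand-in for a multipole moment

kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=1.0),
                                   normalize_y=True)
kriging.fit(X_train, y_train)

X_new = rng.uniform(-np.pi, np.pi, size=(5, 4))
moment_pred, moment_std = kriging.predict(X_new, return_std=True)  # prediction and uncertainty
```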
NASA Technical Reports Server (NTRS)
Afjeh, Abdollah A.; Reed, John A.
2003-01-01
Mesh generation has long been recognized as a bottleneck in the CFD process. While much research on automating the volume mesh generation process has been relatively successful, these methods rely on an appropriate initial surface triangulation to work properly. Surface discretization has been one of the least automated steps in computational simulation due to its dependence on implicitly defined CAD surfaces and curves. Differences in CAD geometry engines manifest themselves in discrepancies in their interpretation of the same entities. This lack of "good" geometry causes significant problems for mesh generators, requiring users to "repair" the CAD geometry before mesh generation. The problem is exacerbated when CAD geometry is translated to other forms (e.g., IGES), which do not include important topological and construction information in addition to entity geometry. One technique to avoid these problems is to access the CAD geometry directly from the mesh generating software, rather than through files. By accessing the geometry model (not a discretized version) in its native environment, this approach avoids translation to a format which can deplete the model of topological information. Our approach to enable models developed in the Denali software environment to directly access CAD geometry and functions is through an Application Programming Interface (API) known as CAPRI. CAPRI provides a layer of indirection through which CAD-specific data may be accessed by an application program using CAD-system neutral C and FORTRAN language function calls. CAPRI supports a general set of CAD operations such as truth testing, geometry construction and entity queries.
Denoising DNA deep sequencing data—high-throughput sequencing errors and their correction
Laehnemann, David; Borkhardt, Arndt
2016-01-01
Characterizing the errors generated by common high-throughput sequencing platforms and telling true genetic variation from technical artefacts are two interdependent steps, essential to many analyses such as single nucleotide variant calling, haplotype inference, sequence assembly and evolutionary studies. Both random and systematic errors can show a specific occurrence profile for each of the six prominent sequencing platforms surveyed here: 454 pyrosequencing, Complete Genomics DNA nanoball sequencing, Illumina sequencing by synthesis, Ion Torrent semiconductor sequencing, Pacific Biosciences single-molecule real-time sequencing and Oxford Nanopore sequencing. There is a large variety of programs available for error removal in sequencing read data, which differ in the error models and statistical techniques they use, the features of the data they analyse, the parameters they determine from them and the data structures and algorithms they use. We highlight the assumptions they make and for which data types these hold, providing guidance on which tools to consider for benchmarking with regard to the data properties. While no benchmarking results are included here, such specific benchmarks would greatly inform tool choices and future software development. The development of stand-alone error correctors, as well as single nucleotide variant and haplotype callers, could also benefit from using more of the knowledge about error profiles and from (re)combining ideas from the existing approaches presented here. PMID:26026159
Summary of ORSphere Critical and Reactor Physics Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, Margaret A.; Bess, John D.
In the early 1970s Dr. John T. Mihalczo (team leader), J. J. Lynn, and J. R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. This critical configuration has been evaluated. Preliminary results were presented at ND2013. Since then, the evaluation was finalized and judged to be an acceptable benchmark experiment for the International Criticality Safety Benchmark Experiment Project (ICSBEP). Additionally, reactor physics measurements were performed to determine surface button worths, central void worth, delayed neutron fraction, prompt neutron decay constant, fission density and neutron importance. These measurements have been evaluated and found to be acceptable experiments and are discussed in full detail in the International Handbook of Evaluated Reactor Physics Benchmark Experiments. The purpose of this paper is to summarize all the critical and reactor physics measurement evaluations and, when possible, to compare them to GODIVA experiment results.
Sexton, John B; Helmreich, Robert L; Neilands, Torsten B; Rowan, Kathy; Vella, Keryn; Boyden, James; Roberts, Peter R; Thomas, Eric J
2006-04-03
There is widespread interest in measuring healthcare provider attitudes about issues relevant to patient safety (often called safety climate or safety culture). Here we report the psychometric properties, establish benchmarking data, and discuss emerging areas of research with the University of Texas Safety Attitudes Questionnaire. Six cross-sectional surveys of health care providers (n = 10,843) in 203 clinical areas (including critical care units, operating rooms, inpatient settings, and ambulatory clinics) in three countries (USA, UK, New Zealand). Multilevel factor analyses yielded results at the clinical area level and the respondent nested within clinical area level. We report scale reliability, floor/ceiling effects, item factor loadings, inter-factor correlations, and percentage of respondents who agree with each item and scale. A six factor model of provider attitudes fit to the data at both the clinical area and respondent nested within clinical area levels. The factors were: Teamwork Climate, Safety Climate, Perceptions of Management, Job Satisfaction, Working Conditions, and Stress Recognition. Scale reliability was 0.9. Provider attitudes varied greatly both within and among organizations. Results are presented to allow benchmarking among organizations and emerging research is discussed. The Safety Attitudes Questionnaire demonstrated good psychometric properties. Healthcare organizations can use the survey to measure caregiver attitudes about six patient safety-related domains, to compare themselves with other organizations, to prompt interventions to improve safety attitudes and to measure the effectiveness of these interventions.
Liseron-Monfils, Christophe; Lewis, Tim; Ashlock, Daniel; McNicholas, Paul D; Fauteux, François; Strömvik, Martina; Raizada, Manish N
2013-03-15
The discovery of genetic networks and cis-acting DNA motifs underlying their regulation is a major objective of transcriptome studies. The recent release of the maize genome (Zea mays L.) has facilitated in silico searches for regulatory motifs. Several algorithms exist to predict cis-acting elements, but none have been adapted for maize. A benchmark data set was used to evaluate the accuracy of three motif discovery programs: BioProspector, Weeder and MEME. Analysis showed that each motif discovery tool had limited accuracy and appeared to retrieve a distinct set of motifs. Therefore, using the benchmark, statistical filters were optimized to reduce the false discovery ratio, and then remaining motifs from all programs were combined to improve motif prediction. These principles were integrated into a user-friendly pipeline for motif discovery in maize called Promzea, available at http://www.promzea.org and on the Discovery Environment of the iPlant Collaborative website. Promzea was subsequently expanded to include rice and Arabidopsis. Within Promzea, a user enters cDNA sequences or gene IDs; corresponding upstream sequences are retrieved from the maize genome. Predicted motifs are filtered, combined and ranked. Promzea searches the chosen plant genome for genes containing each candidate motif, providing the user with the gene list and corresponding gene annotations. Promzea was validated in silico using a benchmark data set: the Promzea pipeline showed a 22% increase in nucleotide sensitivity compared to the best standalone tool, Weeder, with equivalent nucleotide specificity. Promzea was also validated by its ability to retrieve the experimentally defined binding sites of transcription factors that regulate the maize anthocyanin and phlobaphene biosynthetic pathways. Promzea predicted additional promoter motifs, and genome-wide motif searches by Promzea identified 127 non-anthocyanin/phlobaphene genes that each contained all five predicted promoter motifs in their promoters, perhaps uncovering a broader co-regulated gene network. Promzea was also tested against tissue-specific microarray data from maize. An online tool customized for promoter motif discovery in plants, called Promzea, has been generated. Promzea was validated in silico by its ability to retrieve benchmark motifs and experimentally defined motifs and was tested using tissue-specific microarray data. Promzea predicted broader networks of gene regulation associated with the historic anthocyanin and phlobaphene biosynthetic pathways. Promzea is a new bioinformatics tool for understanding transcriptional gene regulation in maize and has been expanded to include rice and Arabidopsis.
Evaluating interaction energies of weakly bonded systems using the Buckingham-Hirshfeld method
NASA Astrophysics Data System (ADS)
Krishtal, A.; Van Alsenoy, C.; Geerlings, P.
2014-05-01
We present the finalized Buckingham-Hirshfeld method (BHD-DFT) for the evaluation of interaction energies of non-bonded dimers with Density Functional Theory (DFT). In the method, dispersion energies are evaluated from static multipole polarizabilities, obtained on-the-fly from Coupled Perturbed Kohn-Sham calculations and partitioned into diatomic contributions using the iterative Hirshfeld partitioning method. The dispersion energy expression is distributed over four atoms and therefore has a more delocalized character compared to the standard pairwise expressions. Additionally, full multipolar polarizability tensors are used as opposed to effective polarizabilities, allowing the anisotropic character to be retained at no additional computational cost. A density dependent damping function for the BLYP, PBE, BP86, B3LYP, and PBE0 functionals has been implemented, containing two global parameters which were fitted to interaction energies and geometries of a selected number of dimers using a bi-variate RMS fit. The method is benchmarked against the S22 and S66 data sets for equilibrium geometries and the S22x5 and S66x8 data sets for interaction energies around the equilibrium geometry. Best results are achieved using the B3LYP functional with mean average deviation values of 0.30 and 0.24 kcal/mol for the S22 and S66 data sets, respectively. This situates the BHD-DFT method among the best performing dispersion inclusive DFT methods. The effect of counterpoise correction on DFT energies is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahu, Nityananda; Gadre, Shridhar R.; Bandyopadhyay, Pradipta
We report new global minimum candidate structures for the (H2O)25 cluster that are lower in energy than the ones reported previously and correspond to hydrogen bonded networks with 42 hydrogen bonds and an interior, fully coordinated water molecule. These were obtained as a result of a hierarchical approach based on initial Monte Carlo Temperature Basin Paving (MCTBP) sampling of the cluster's Potential Energy Surface (PES) with the Effective Fragment Potential (EFP), subsequent geometry optimization using the Molecular Tailoring fragmentation Approach (MTA) and final refinement at the second-order Møller-Plesset perturbation (MP2) level of theory. The MTA geometry optimizations used between 14 and 18 main fragments with maximum sizes between 11 and 14 water molecules and average size of 10 water molecules, whose energies and gradients were computed at the MP2 level. The MTA-MP2 optimized geometries were found to be quite close (within < 0.5 kcal/mol) to the ones obtained from the MP2 optimization of the whole cluster. The grafting of the MTA-MP2 energies yields electronic energies that are within < 5×10^-4 a.u. from the MP2 results for the whole cluster while preserving their energy order. The MTA-MP2 method was also found to reproduce the MP2 harmonic vibrational frequencies in both the HOH bending and the OH stretching regions.
NASA Astrophysics Data System (ADS)
Davoudinejad, A.; Ribo, M. M.; Pedersen, D. B.; Islam, A.; Tosello, G.
2018-08-01
Functional surfaces have proven their potential to solve many engineering problems, attracting great interest among the scientific community. Bio-inspired multi-hierarchical micro-structures grant the surfaces new properties, such as hydrophobicity, adhesion, unique optical properties and so on. The geometry and fabrication of these surfaces are still under research. In this study, the feasibility of using direct fabrication of microscale features by additive manufacturing (AM) processes was investigated. The investigation was carried out using a specifically designed vat photopolymerization AM machine-tool suitable for precision manufacturing at the micro dimensional scale, which had previously been developed, built and validated at the Technical University of Denmark. It was shown that it was possible to replicate a simplified surface inspired by the Tokay gecko; the geometry had previously been designed and replicated by a complex multi-step micromanufacturing method reported in the literature and was used as a benchmark. Ultimately, the smallest printed features were analyzed by conducting a sensitivity analysis to obtain the right parameters in terms of layer thickness and exposure time. Moreover, two more intricate designs were fabricated with the same parameters to assess the surfaces' functionality through their wettability. The surface with increased density and decreased feature size showed a water contact angle (CA) of 124° ± 0.10°, agreeing with the Cassie–Baxter model. These results indicate the possibility of using precision AM as a rapid, easy and reliable fabrication method for functional surfaces.
Three-dimensional curved grid finite-difference modelling for non-planar rupture dynamics
NASA Astrophysics Data System (ADS)
Zhang, Zhenguo; Zhang, Wei; Chen, Xiaofei
2014-11-01
In this study, we present a new method for simulating the 3-D dynamic rupture process occurring on a non-planar fault. The method is based on the curved-grid finite-difference method (CG-FDM) proposed by Zhang & Chen and Zhang et al. to simulate the propagation of seismic waves in media with arbitrary irregular surface topography. While keeping the advantages of conventional FDM, that is, computational efficiency and easy implementation, the CG-FDM is also flexible in modelling complex fault models by using general curvilinear grids, and thus is able to model the rupture dynamics of a fault with complex geometry, such as an oblique dipping fault, a non-planar fault, a fault with step-over, or fault branching, even when irregular topography exists. The accuracy and robustness of this new method have been validated by comparison with the previous results of Day et al. and with benchmarks for rupture dynamics simulations. Finally, two simulations of rupture dynamics with complex fault geometry, that is, a non-planar fault and a fault rupturing a free surface with topography, are presented. An interesting phenomenon was observed: topography can weaken the tendency for supershear transition to occur when rupture breaks out at a free surface. Undoubtedly, this new method provides an effective, or at least an alternative, tool to simulate the rupture dynamics of a complex non-planar fault, and can be applied to model the rupture dynamics of a real earthquake with complex geometry.
Evaluating interaction energies of weakly bonded systems using the Buckingham-Hirshfeld method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishtal, A.; Van Alsenoy, C.; Geerlings, P.
2014-05-14
We present the finalized Buckingham-Hirshfeld method (BHD-DFT) for the evaluation of interaction energies of non-bonded dimers with Density Functional Theory (DFT). In the method, dispersion energies are evaluated from static multipole polarizabilities, obtained on-the-fly from Coupled Perturbed Kohn-Sham calculations and partitioned into diatomic contributions using the iterative Hirshfeld partitioning method. The dispersion energy expression is distributed over four atoms and therefore has a more delocalized character compared to the standard pairwise expressions. Additionally, full multipolar polarizability tensors are used as opposed to effective polarizabilities, allowing the anisotropic character to be retained at no additional computational cost. A density dependent damping function for the BLYP, PBE, BP86, B3LYP, and PBE0 functionals has been implemented, containing two global parameters which were fitted to interaction energies and geometries of a selected number of dimers using a bi-variate RMS fit. The method is benchmarked against the S22 and S66 data sets for equilibrium geometries and the S22x5 and S66x8 data sets for interaction energies around the equilibrium geometry. Best results are achieved using the B3LYP functional with mean average deviation values of 0.30 and 0.24 kcal/mol for the S22 and S66 data sets, respectively. This situates the BHD-DFT method among the best performing dispersion inclusive DFT methods. The effect of counterpoise correction on DFT energies is discussed.
NASA Astrophysics Data System (ADS)
Zapiór, Maciej; Martínez-Gómez, David
2016-02-01
Based on the data collected by the Vacuum Tower Telescope located in the Teide Observatory in the Canary Islands, we analyzed the three-dimensional (3D) motion of so-called knots in a solar prominence of 2014 June 9. Trajectories of seven knots were reconstructed, giving information about the 3D geometry of the magnetic field. Helical motion was detected. From the equipartition principle, we estimated the lower limit of the magnetic field in the prominence to be ≈1-3 G, and from Ampère's law the lower limit of the electric current to be ≈1.2 × 10^9 A.
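A back-of-the-envelope version of the two estimates quoted above can be written down directly. The number density, knot speed, and radius used below are illustrative values chosen for this sketch, not numbers taken from the paper; they simply show how equipartition and Ampère's law yield field strengths of a few gauss and currents of order 10^9 A.

```python
# Order-of-magnitude sketch with assumed prominence parameters (not from the paper).
import math

mu0 = 4.0e-7 * math.pi        # vacuum permeability [T m / A]
m_p = 1.67e-27                # proton mass [kg]

n   = 1.0e17                  # assumed number density [m^-3] (~1e11 cm^-3)
rho = n * m_p                 # mass density [kg m^-3]
v   = 1.0e4                   # assumed knot speed [m/s] (~10 km/s)

# Equipartition of kinetic and magnetic energy density:
# rho*v^2/2 = B^2/(2*mu0)  ->  B = v*sqrt(mu0*rho)
B = v * math.sqrt(mu0 * rho)
print(f"B ~ {B * 1e4:.1f} G")          # about 1 G for these inputs

# Ampere's law around a current channel of radius r: I = 2*pi*r*B/mu0
r = 1.0e6                     # assumed radius of the helical structure [m]
I = 2.0 * math.pi * r * B / mu0
print(f"I ~ {I:.1e} A")                # of order 1e9 A for these inputs
```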
Detection of a Pool in Semi-Continuous Castings Made of Heat-Treatable Aluminum Alloys
NASA Astrophysics Data System (ADS)
Krushenko, G. G.; Nazarov, V. P.
2017-12-01
Various products (sheets, sections, etc.) manufactured by metal forming (rolled products, forged pieces, etc.) from semi-continuous castings are widely used in the aerospace industry. The so-called pool, a conical volume of liquid metal, exists at the top of the casting. Experience demonstrates that the geometry, the depth, and the shape of the pool substantially affect the structure formation in a casting and its quality. The application of a titanium nitride nanopowder modifier, introduced into the melt within the volume of a rod, allowed us to find the exact geometry of the pool.
A solid reactor core thermal model for nuclear thermal rockets
NASA Astrophysics Data System (ADS)
Rider, William J.; Cappiello, Michael W.; Liles, Dennis R.
1991-01-01
A Helium/Hydrogen Cooled Reactor Analysis (HERA) computer code has been developed. HERA has the ability to model arbitrary geometries in three dimensions, which allows the user to easily analyze reactor cores constructed of prismatic graphite elements. The code accounts for heat generation in the fuel, control rods, and other structures; conduction and radiation across gaps; convection to the coolant; and a variety of boundary conditions. The numerical solution scheme has been optimized for vector computers, making long transient analyses economical. Time integration is either explicit or implicit, which allows the model to accurately calculate both short- and long-term transients with an efficient use of computer time. Both the basic spatial and temporal integration schemes have been benchmarked against analytical solutions.
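As a minimal illustration of the explicit time-integration option mentioned above, the sketch below advances 1-D heat conduction with a constant volumetric source by forward Euler, honouring the explicit stability limit. The material properties and source strength are assumed values; HERA itself treats 3-D prismatic geometry, radiation across gaps, and convection, none of which is reproduced here.

```python
# Forward-Euler (explicit) 1-D conduction sketch with assumed properties.
import numpy as np

nx, L = 50, 0.1                      # cells, slab thickness [m]
k, rho, cp = 30.0, 1800.0, 710.0     # assumed graphite-like properties
q = 5.0e6                            # assumed volumetric heat source [W/m^3]
dx = L / nx
alpha = k / (rho * cp)               # thermal diffusivity [m^2/s]
dt = 0.4 * dx**2 / alpha             # below the explicit stability limit 0.5*dx^2/alpha

T = np.full(nx, 300.0)               # initial temperature [K]
for _ in range(2000):
    T_new = T.copy()
    # interior cells: conduction plus source
    T_new[1:-1] = T[1:-1] + dt * (alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
                                  + q / (rho * cp))
    T_new[0], T_new[-1] = 300.0, 300.0   # fixed-temperature boundaries
    T = T_new
```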
NASA Astrophysics Data System (ADS)
Court, Sébastien; Fournié, Michel
2015-05-01
The paper extends a stabilized fictitious domain finite element method initially developed for the Stokes problem to the incompressible Navier-Stokes equations coupled with a moving solid. This method presents the advantage of predicting an optimal approximation of the normal stress tensor at the interface. The dynamics of the solid is governed by Newton's laws, and the interface between the fluid and the structure is represented by a level-set which cuts the elements of the mesh. An algorithm is proposed to treat the time evolution of the geometry, and numerical results are presented on a classical benchmark of the motion of a disk falling in a channel.
TRAC-PF1/MOD1 support calculations for the MIST/OTIS program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujita, R.K.; Knight, T.D.
1984-01-01
We are using the Transient Reactor Analysis Code (TRAC), specifically version TRAC-PF1/MOD1, to perform analyses in support of the MultiLoop Integral-System Test (MIST) and the Once-Through Integral-System (OTIS) experiment program. We have analyzed Geradrohr Dampferzeuger Anlage (GERDA) Test 1605AA to benchmark the TRAC-PF1/MOD1 code against phenomena expected to occur in a raised-loop B&W plant during a small-break loss-of-coolant accident (SBLOCA). These results show that the code can calculate both single- and two-phase natural circulation, flow interruption, boiler-condenser-mode (BCM) heat transfer, and primary-system refill in a B&W-type geometry with low-elevation auxiliary feedwater. 19 figures, 7 tables.
NASA Astrophysics Data System (ADS)
Gros, P.; Bernard, D.
2017-02-01
We benchmark various available event generators in Geant4 and EGS5 in the light of ongoing projects for high angular-resolution pair-conversion telescopes at low energy. We compare the distributions of key kinematic variables extracted from the geometry of the three final-state particles. We validate and use as reference an exact generator using the full 5D differential cross-section of the conversion process. We focus in particular on the effect of the unmeasured recoiling nucleus on the angular resolution. We show that for high-resolution trackers, the choice of the generator affects the estimated resolution of the telescope. We also show that the currently available generators are unable to accurately describe a linearly polarised photon source.
Epidermal differential impedance sensor for conformal skin hydration monitoring.
Huang, Xian; Yeo, Woon-Hong; Liu, Yuhao; Rogers, John A
2012-12-01
We present the design and use of an ultrathin, stretchable sensor system capable of conformal lamination onto the skin, for precision measurement and spatial mapping of levels of hydration. This device, which we refer to as a class of 'epidermal electronics' due to its 'skin-like' construction and mode of intimate integration with the body, contains miniaturized arrays of impedance-measurement electrodes arranged in a differential configuration to compensate for common-mode disturbances. Experimental results obtained with different frequencies and sensor geometries demonstrate excellent precision and accuracy, as benchmarked against conventional, commercial devices. The reversible, non-invasive soft contact of this device with the skin makes its operation appealing for applications ranging from skin care, to athletic monitoring to health/wellness assessment.
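The common-mode rejection idea behind the differential electrode configuration can be shown with a toy calculation: a disturbance that appears identically on the sensing and reference channels cancels when their impedances are subtracted. All numbers below are invented for illustration and do not represent the device's design or calibration.

```python
# Toy illustration of differential impedance sensing with common-mode rejection.
import numpy as np

freq = np.logspace(3, 6, 50)                 # excitation frequencies [Hz]
drift = 200.0 * np.ones_like(freq)           # common-mode disturbance [ohm], e.g. motion/contact drift
z_ref   = 1.0e4 / np.sqrt(freq) + drift      # reference channel impedance (assumed model)
z_sense = 0.8e4 / np.sqrt(freq) + drift      # sensing channel over more hydrated skin (assumed)

z_diff = z_sense - z_ref                     # the drift term cancels in the difference
```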
Progress in Unsteady Turbopump Flow Simulations Using Overset Grid Systems
NASA Technical Reports Server (NTRS)
Kiris, Cetin C.; Chan, William; Kwak, Dochan
2002-01-01
This viewgraph presentation provides information on unsteady flow simulations for the Second Generation RLV (Reusable Launch Vehicle) baseline turbopump. Three impeller rotations were simulated by using a 34.3 million grid points model. MPI/OpenMP hybrid parallelism and MLP shared memory parallelism have been implemented and benchmarked in INS3D, an incompressible Navier-Stokes solver. For RLV turbopump simulations a speedup of more than 30 times has been obtained. Moving boundary capability is obtained by using the DCF module. Scripting capability from CAD geometry to solution is developed. Unsteady flow simulations for the advanced consortium impeller/diffuser using a 39 million grid points model are currently underway. 1.2 impeller rotations are completed. The fluid/structure coupling is initiated.
Water adsorption on a copper formate paddlewheel model of CuBTC: A comparative MP2 and DFT study
NASA Astrophysics Data System (ADS)
Toda, Jordi; Fischer, Michael; Jorge, Miguel; Gomes, José R. B.
2013-11-01
Simultaneous adsorption of two water molecules on open metal sites of the HKUST-1 metal-organic framework (MOF), modeled with a Cu2(HCOO)4 cluster, was studied by means of density functional theory (DFT) and second-order Moller-Plesset (MP2) approaches together with correlation consistent basis sets. Experimental geometries and MP2 energetic data extrapolated to the complete basis set limit were used as benchmarks for testing the accuracy of several different exchange-correlation functionals in the correct description of the water-MOF interaction. M06-L and some LC-DFT methods arise as the most appropriate in terms of the quality of geometrical data, energetic data and computational resources needed.
Li, Desheng
2014-01-01
This paper proposes a novel variant of the cooperative quantum-behaved particle swarm optimization (CQPSO) algorithm with two mechanisms to reduce the search space and avoid stagnation, called CQPSO-DVSA-LFD. One mechanism is called Dynamic Varying Search Area (DVSA), which limits the range of particles' activity to a reduced area. On the other hand, in order to escape local optima, Lévy flights are used to generate a stochastic disturbance in the movement of particles. To test the performance of CQPSO-DVSA-LFD, numerical experiments were conducted to compare the proposed algorithm with different variants of PSO. According to the experimental results, the proposed method performs better than other variants of PSO on both benchmark test functions and a combinatorial optimization problem, namely the job-shop scheduling problem.
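The Lévy-flight disturbance can be sketched with Mantegna's algorithm for generating heavy-tailed steps. The way the step is blended into the particle update below is a simplification assumed for this example, not the authors' exact CQPSO-DVSA-LFD scheme.

```python
# Heavy-tailed Levy step via Mantegna's algorithm; the update rule is a simplified example.
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Levy-distributed random step of the given dimension."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

# hypothetical use: nudge a stagnating particle relative to the swarm's best position
position = np.zeros(10)
swarm_best = np.ones(10)
position = position + 0.01 * levy_step(10) * (position - swarm_best)
```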
Parametric Deformation of Discrete Geometry for Aerodynamic Shape Design
NASA Technical Reports Server (NTRS)
Anderson, George R.; Aftosmis, Michael J.; Nemec, Marian
2012-01-01
We present a versatile discrete geometry manipulation platform for aerospace vehicle shape optimization. The platform is based on the geometry kernel of an open-source modeling tool called Blender and offers access to four parametric deformation techniques: lattice, cage-based, skeletal, and direct manipulation. Custom deformation methods are implemented as plugins, and the kernel is controlled through a scripting interface. Surface sensitivities are provided to support gradient-based optimization. The platform architecture allows the use of geometry pipelines, where multiple modelers are used in sequence, enabling manipulation difficult or impossible to achieve with a constructive modeler or deformer alone. We implement an intuitive custom deformation method in which a set of surface points serve as the design variables and user-specified constraints are intrinsically satisfied. We test our geometry platform on several design examples using an aerodynamic design framework based on Cartesian grids. We examine inverse airfoil design and shape matching and perform lift-constrained drag minimization on an airfoil with thickness constraints. A transport wing-fuselage integration problem demonstrates the approach in 3D. In a final example, our platform is pipelined with a constructive modeler to parabolically sweep a wingtip while applying a 1-G loading deformation across the wingspan. This work is an important first step towards the larger goal of leveraging the investment of the graphics industry to improve the state-of-the-art in aerospace geometry tools.
Spherical Harmonic Solutions to the 3D Kobayashi Benchmark Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, P.N.; Chang, B.; Hanebutte, U.R.
1999-12-29
Spherical harmonic solutions of order 5, 9 and 21 on spatial grids containing up to 3.3 million cells are presented for the Kobayashi benchmark suite. This suite of three problems with simple geometry of a pure absorber with a large void region was proposed by Professor Kobayashi at an OECD/NEA meeting in 1996. Each of the three problems contains a source, a void and a shield region. Problem 1 can best be described as a box-in-a-box problem, where a source region is surrounded by a square void region which itself is embedded in a square shield region. Problems 2 and 3 represent a shield with a void duct, Problem 2 having a straight duct and Problem 3 a dog-leg-shaped duct. A pure absorber and a 50% scattering case are considered for each of the three problems. The solutions have been obtained with Ardra, a scalable, parallel neutron transport code developed at Lawrence Livermore National Laboratory (LLNL). The Ardra code takes advantage of a two-level parallelization strategy, which combines message passing between processing nodes and thread-based parallelism amongst processors on each node. All calculations were performed on the IBM ASCI Blue-Pacific computer at LLNL.
2-D Circulation Control Airfoil Benchmark Experiments Intended for CFD Code Validation
NASA Technical Reports Server (NTRS)
Englar, Robert J.; Jones, Gregory S.; Allan, Brian G.; Lin, John C.
2009-01-01
A current NASA Research Announcement (NRA) project being conducted by Georgia Tech Research Institute (GTRI) personnel and NASA collaborators includes the development of Circulation Control (CC) blown airfoils to improve subsonic aircraft high-lift and cruise performance. The emphasis of this program is the development of CC active flow control concepts for high-lift augmentation, drag control, and cruise efficiency. Collaboration in this project includes work by NASA research engineers; CFD validation and flow physics experimental research are part of NASA's systematic approach to developing design and optimization tools for CC applications to fixed-wing aircraft. The design space for CESTOL-type aircraft is focusing on geometries that depend on advanced flow control technologies that include Circulation Control aerodynamics. The ability to consistently predict advanced aircraft performance requires improvements in design tools to include these advanced concepts. Validation of these tools will be based on experimental methods applied to complex flows that go beyond conventional aircraft modeling techniques. This paper focuses on recent/ongoing benchmark high-lift experiments and CFD efforts intended to provide 2-D CFD validation data sets related to NASA's Cruise Efficient Short Take Off and Landing (CESTOL) study. Both the experimental data and related CFD predictions are discussed.
NASA Astrophysics Data System (ADS)
Kazantsev, Daniil; Pickalov, Valery; Nagella, Srikanth; Pasca, Edoardo; Withers, Philip J.
2018-01-01
In the field of computerized tomographic imaging, many novel reconstruction techniques are routinely tested using simplistic numerical phantoms, e.g. the well-known Shepp-Logan phantom. These phantoms cannot sufficiently cover the broad spectrum of applications in CT imaging where, for instance, smooth or piecewise-smooth 3D objects are common. TomoPhantom provides quick access to an external library of modular analytical 2D/3D phantoms with temporal extensions. In TomoPhantom, quite complex phantoms can be built using additive combinations of geometrical objects, such as Gaussians, parabolas, cones, ellipses, rectangles and volumetric extensions of them. Newly designed phantoms are better suited for benchmarking and testing of different image processing techniques. Specifically, tomographic reconstruction algorithms which employ 2D and 3D scanning geometries can be rigorously analyzed using the software. TomoPhantom also provides the capability of obtaining analytical tomographic projections, which further extends the applicability of the software towards more realistic testing that is free from the "inverse crime". All core modules of the package are written in the C-OpenMP language and wrappers for Python and MATLAB are provided to enable easy access. Due to the C-based multi-threaded implementation, volumetric phantoms of high spatial resolution can be obtained with computational efficiency.
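The additive-combination principle is easy to illustrate in plain NumPy (this is not the TomoPhantom API): a 2-D phantom is simply the sum of analytically defined objects, here a Gaussian blob and a uniform ellipse.

```python
# Generic additive-phantom illustration in NumPy, not the TomoPhantom package.
import numpy as np

n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]        # pixel coordinate grid in [-1, 1]^2

# Gaussian blob centred at (0.3, -0.2)
gaussian = 0.7 * np.exp(-(((x - 0.3) / 0.15) ** 2 + ((y + 0.2) / 0.25) ** 2))

# uniform ellipse centred at (-0.4, 0.1)
ellipse = 0.5 * ((((x + 0.4) / 0.3) ** 2 + ((y - 0.1) / 0.2) ** 2) <= 1.0)

phantom = gaussian + ellipse                     # objects combine additively
```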
IPRT polarized radiative transfer model intercomparison project - Phase A
NASA Astrophysics Data System (ADS)
Emde, Claudia; Barlakas, Vasileios; Cornet, Céline; Evans, Frank; Korkin, Sergey; Ota, Yoshifumi; Labonnote, Laurent C.; Lyapustin, Alexei; Macke, Andreas; Mayer, Bernhard; Wendisch, Manfred
2015-10-01
The polarization state of electromagnetic radiation scattered by atmospheric particles such as aerosols, cloud droplets, or ice crystals contains much more information about the optical and microphysical properties than the total intensity alone. For this reason an increasing number of polarimetric observations are performed from space, from the ground and from aircraft. Polarized radiative transfer models are required to interpret and analyse these measurements and to develop retrieval algorithms exploiting polarimetric observations. In recent years a large number of new codes have been developed, mostly for specific applications. Benchmark results are available for specific cases, but not for more sophisticated scenarios including polarized surface reflection and multi-layer atmospheres. The International Polarized Radiative Transfer (IPRT) working group of the International Radiation Commission (IRC) has initiated a model intercomparison project in order to fill this gap. This paper presents the results of the first phase A of the IPRT project, which includes ten test cases, from simple setups with only one layer and Rayleigh scattering to rather sophisticated setups with a cloud embedded in a standard atmosphere above an ocean surface. All scenarios in the first phase A of the intercomparison project are for a one-dimensional plane-parallel model geometry. The commonly established benchmark results are available at the IPRT website.
NASA Astrophysics Data System (ADS)
Pilati, Sebastiano; Zintchenko, Ilia; Troyer, Matthias; Ancilotto, Francesco
2018-04-01
We benchmark the ground state energies and the density profiles of atomic repulsive Fermi gases in optical lattices (OLs) computed via density functional theory (DFT) against the results of diffusion Monte Carlo (DMC) simulations. The main focus is on half-filled one-dimensional OLs, for which the DMC simulations performed within the fixed-node approach provide unbiased results. This allows us to demonstrate that the local spin-density approximation (LSDA) to the exchange-correlation functional of DFT is very accurate in the weak and intermediate interaction regimes, and also to underline its limitations close to the strongly-interacting Tonks-Girardeau limit and in very deep OLs. We also consider a three-dimensional OL at quarter filling, showing also in this case the high accuracy of the LSDA in the moderate interaction regime. The one-dimensional data provided in this study may represent a useful benchmark for further developing DFT methods beyond the LSDA, and will hopefully motivate experimental studies to accurately measure the equation of state of Fermi gases in higher-dimensional geometries. Supplementary material in the form of one pdf file is available from the Journal web page at https://doi.org/10.1140/epjb/e2018-90021-1.
Integrated geometry and grid generation system for complex configurations
NASA Technical Reports Server (NTRS)
Akdag, Vedat; Wulf, Armin
1992-01-01
A grid generation system was developed that enables grid generation for complex configurations. The system called ICEM/CFD is described and its role in computational fluid dynamics (CFD) applications is presented. The capabilities of the system include full computer aided design (CAD), grid generation on the actual CAD geometry definition using robust surface projection algorithms, interfacing easily with known CAD packages through common file formats for geometry transfer, grid quality evaluation of the volume grid, coupling boundary condition set-up for block faces with grid topology generation, multi-block grid generation with or without point continuity and block to block interface requirement, and generating grid files directly compatible with known flow solvers. The interactive and integrated approach to the problem of computational grid generation not only substantially reduces manpower time but also increases the flexibility of later grid modifications and enhancements which is required in an environment where CFD is integrated into a product design cycle.
Application of adobe flash media to optimize jigsaw learning model on geometry material
NASA Astrophysics Data System (ADS)
Imam, P.; Imam, S.; Ikrar, P.
2018-05-01
This study aims to determine and describe the effectiveness of the application of Adobe Flash media for the jigsaw learning model on geometry material. In this study, the jigsaw learning model modified with Adobe Flash media is called the jigsaw-flash model. This research was conducted in Surakarta. The research method used is mixed-methods research with an exploratory sequential strategy. The results of this study indicate that students feel more comfortable and interested in studying geometry material taught with the jigsaw-flash model. In addition, students taught using the jigsaw-flash model are more active and motivated than students taught using ordinary jigsaw models. This shows that the use of the jigsaw-flash model can increase student participation and motivation. It can be concluded that Adobe Flash media can be used as a solution to reduce the level of student abstraction in learning mathematics.
CAD Services: an Industry Standard Interface for Mechanical CAD Interoperability
NASA Technical Reports Server (NTRS)
Claus, Russell; Weitzer, Ilan
2002-01-01
Most organizations seek to design and develop new products in increasingly shorter time periods. At the same time, increased performance demands require a team-based multidisciplinary design process that may span several organizations. One approach to meet these demands is to use 'Geometry Centric' design. In this approach, design engineers team their efforts through one united representation of the design that is usually captured in a CAD system. Standards-based interfaces are critical to provide uniform, simple, distributed services that enable the 'Geometry Centric' design approach. This paper describes an industry-wide effort, under the Object Management Group's (OMG) Manufacturing Domain Task Force, to define interfaces that enable the interoperability of CAD, Computer Aided Manufacturing (CAM), and Computer Aided Engineering (CAE) tools. This critical link to enable 'Geometry Centric' design is called CAD Services V1.0. This paper discusses the features of this standard and its proposed application.
Stages as models of scene geometry.
Nedović, Vladimir; Smeulders, Arnold W M; Redert, André; Geusebroek, Jan-Mark
2010-09-01
Reconstruction of 3D scene geometry is an important element for scene understanding, autonomous vehicle and robot navigation, image retrieval, and 3D television. We propose accounting for the inherent structure of the visual world when trying to solve the scene reconstruction problem. Consequently, we identify geometric scene categorization as the first step toward robust and efficient depth estimation from single images. We introduce 15 typical 3D scene geometries called stages, each with a unique depth profile, which roughly correspond to a large majority of broadcast video frames. Stage information serves as a first approximation of global depth, narrowing down the search space in depth estimation and object localization. We propose different sets of low-level features for depth estimation, and perform stage classification on two diverse data sets of television broadcasts. Classification results demonstrate that stages can often be efficiently learned from low-dimensional image representations.
Song, Jiangning; Tan, Hao; Wang, Mingjun; Webb, Geoffrey I.; Akutsu, Tatsuya
2012-01-01
Protein backbone torsion angles Phi and Psi are the two rotation angles around the Cα-N bond (Phi) and the Cα-C bond (Psi). Due to the planarity of the linked rigid peptide bonds, these two angles can essentially determine the backbone geometry of proteins. Accordingly, the accurate prediction of protein backbone torsion angles from sequence information can assist the prediction of protein structures. In this study, we develop a new approach called TANGLE (Torsion ANGLE predictor) to predict the protein backbone torsion angles from amino acid sequences. TANGLE uses a two-level support vector regression approach to perform real-value torsion angle prediction using a variety of features derived from amino acid sequences, including the evolutionary profiles in the form of position-specific scoring matrices, predicted secondary structure, solvent accessibility and natively disordered region as well as other global sequence features. When evaluated on a large benchmark dataset of 1,526 non-homologous proteins, the mean absolute errors (MAEs) of the Phi and Psi angle predictions are 27.8° and 44.6°, respectively, which are 1% and 3% lower than those obtained using one of the state-of-the-art prediction tools, ANGLOR. Moreover, the prediction of TANGLE is significantly better than that of a random predictor built on an amino acid-specific basis, with p-values < 1.46e-147 and 7.97e-150, respectively, by the Wilcoxon signed rank test. As a complementary approach to the current torsion angle prediction algorithms, TANGLE should prove useful in predicting protein structural properties and assisting protein fold recognition by applying the predicted torsion angles as useful restraints. TANGLE is freely accessible at http://sunflower.kuicr.kyoto-u.ac.jp/~sjn/TANGLE/. PMID:22319565
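A two-level support vector regression can be sketched with scikit-learn: a first SVR maps sequence-derived features to the angle, and a second SVR refines the estimate using the first-level prediction as an additional feature. The synthetic features and target below are assumptions of this sketch and do not reproduce TANGLE's feature set or training protocol.

```python
# Two-level SVR sketch on synthetic data; not TANGLE's actual features or code.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 20))            # stand-in for PSSM/secondary-structure features
phi = 50.0 * np.tanh(X[:, 0]) - 60.0 + 5.0 * rng.standard_normal(500)  # synthetic angle [deg]

level1 = SVR(kernel="rbf", C=10.0).fit(X, phi)
phi1 = level1.predict(X)

X2 = np.column_stack([X, phi1])               # augment features with the first-level output
level2 = SVR(kernel="rbf", C=10.0).fit(X2, phi)
phi_refined = level2.predict(X2)

mae = np.mean(np.abs(phi_refined - phi))      # mean absolute error, the metric quoted above
```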
DOE Office of Scientific and Technical Information (OSTI.GOV)
B.C. Lyons, S.C. Jardin, and J.J. Ramos
2012-06-28
A new code, the Neoclassical Ion-Electron Solver (NIES), has been written to solve for stationary, axisymmetric distribution functions (f ) in the conventional banana regime for both ions and elec trons using a set of drift-kinetic equations (DKEs) with linearized Fokker-Planck-Landau collision operators. Solvability conditions on the DKEs determine the relevant non-adiabatic pieces of f (called h ). We work in a 4D phase space in which Ψ defines a flux surface, θ is the poloidal angle, v is the total velocity referenced to the mean flow velocity, and λ is the dimensionless magnetic moment parameter. We expand h inmore » finite elements in both v and λ . The Rosenbluth potentials, φ and ψ, which define the integral part of the collision operator, are expanded in Legendre series in cos χ , where χ is the pitch angle, Fourier series in cos θ , and finite elements in v . At each ψ , we solve a block tridiagonal system for hi (independent of fe ), then solve another block tridiagonal system for he (dependent on fi ). We demonstrate that such a formulation can be accurately and efficiently solved. NIES is coupled to the MHD equilibrium code JSOLVER [J. DeLucia, et al., J. Comput. Phys. 37 , pp 183-204 (1980).] allowing us to work with realistic magnetic geometries. The bootstrap current is calculated as a simple moment of the distribution function. Results are benchmarked against the Sauter analytic formulas and can be used as a kinetic closure for an MHD code (e.g., M3D-C1 [S.C. Jardin, et al ., Computational Science & Discovery, 4 (2012).]).« less
NASA Astrophysics Data System (ADS)
Fensin, Michael Lorne
Monte Carlo-linked depletion methods have gained recent interest due to the ability to more accurately model complex 3-dimensional geometries and better track the evolution of temporal nuclide inventory by simulating the actual physical process utilizing continuous energy coefficients. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a high-fidelity, completely self-contained Monte-Carlo-linked depletion capability in a well established, widely accepted Monte Carlo radiation transport code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross section data permits in order to achieve a highly accurate temporal nuclide inventory solution. This work chronicles relevant nuclear history, surveys current methodologies of depletion theory, details the methodology applied in MCNPX and provides benchmark results for three independent OECD/NEA benchmarks. Relevant nuclear history, from the Oklo reactor two billion years ago to the current major United States nuclear fuel cycle development programs, is addressed in order to supply the motivation for the development of this technology. A survey of current reaction rate and temporal nuclide inventory techniques is then provided to offer justification for the depletion strategy applied within MCNPX. The MCNPX depletion strategy is then dissected and each code feature is detailed, chronicling the methodology development from the original linking of MONTEBURNS and MCNP to the most recent public release of the integrated capability (MCNPX 2.6.F). Calculation results of the OECD/NEA Phase IB benchmark, H. B. Robinson benchmark and OECD/NEA Phase IVB benchmark are then provided. The acceptable results of these calculations offer sufficient confidence in the predictive capability of the MCNPX depletion method. This capability sets up a significant foundation, in a well established and supported radiation transport code, for further development of a Monte Carlo-linked depletion methodology which is essential to the future development of advanced reactor technologies that exceed the limitations of current deterministic based methods.
Ray-tracing 3D dust radiative transfer with DART-Ray: code upgrade and public release
NASA Astrophysics Data System (ADS)
Natale, Giovanni; Popescu, Cristina C.; Tuffs, Richard J.; Clarke, Adam J.; Debattista, Victor P.; Fischera, Jörg; Pasetto, Stefano; Rushton, Mark; Thirlwall, Jordan J.
2017-11-01
We present an extensively updated version of the purely ray-tracing 3D dust radiation transfer code DART-Ray. The new version includes five major upgrades: 1) a series of optimizations for the ray-angular density and the scattered radiation source function; 2) the implementation of several data and task parallelizations using hybrid MPI+OpenMP schemes; 3) the inclusion of dust self-heating; 4) the ability to produce surface brightness maps for observers within the models in HEALPix format; 5) the possibility to set the expected numerical accuracy already at the start of the calculation. We tested the updated code with benchmark models where the dust self-heating is not negligible. Furthermore, we performed a study of the extent of the source influence volumes, using galaxy models, which are critical in determining the efficiency of the DART-Ray algorithm. The new code is publicly available, documented for both users and developers, and accompanied by several programmes to create input grids for different model geometries and to import the results of N-body and SPH simulations. These programmes can be easily adapted to different input geometries, and for different dust models or stellar emission libraries.
MHD Simulations of Plasma Dynamics with Non-Axisymmetric Boundaries
NASA Astrophysics Data System (ADS)
Hansen, Chris; Levesque, Jeffrey; Morgan, Kyle; Jarboe, Thomas
2015-11-01
The arbitrary geometry, 3D extended MHD code PSI-TET is applied to linear and non-linear simulations of MCF plasmas with non-axisymmetric boundaries. Progress and results from simulations on two experiments will be presented: 1) Detailed validation studies of the HIT-SI experiment with self-consistent modeling of plasma dynamics in the helicity injectors. Results will be compared to experimental data and NIMROD simulations that model the effect of the helicity injectors through boundary conditions on an axisymmetric domain. 2) Linear studies of HBT-EP with different wall configurations focusing on toroidal asymmetries in the adjustable conducting wall. HBT-EP studies the effect of active/passive stabilization with an adjustable ferritic wall. Results from linear verification and benchmark studies of ideal mode growth with and without toroidal asymmetries will be presented and compared to DCON predictions. Simulations of detailed experimental geometries are enabled by use of the PSI-TET code, which employs a high order finite element method on unstructured tetrahedral grids that are generated directly from CAD models. Further development of PSI-TET will also be presented including work to support resistive wall regions within extended MHD simulations. Work supported by DoE.
NASA Astrophysics Data System (ADS)
Large, Nicolas; Cao, Yang; Manjavacas, Alejandro; Nordlander, Peter
2015-03-01
Electron energy-loss spectroscopy (EELS) is a unique tool that has been extensively used to investigate the plasmonic response of metallic nanostructures since the early works of the 1950s. To be able to interpret and theoretically investigate EELS results, a myriad of different numerical techniques have been developed for EELS simulations (BEM, DDA, FEM, GDTD, Green dyadic functions). Although these techniques are able to predict and reproduce experimental results, they possess significant drawbacks and are often limited to highly symmetrical geometries, non-penetrating trajectories, small nanostructures, and free-standing nanostructures. We present here a novel approach for EELS calculations using the finite-difference time-domain (FDTD) method: EELS-FDTD. We benchmark our approach by direct comparison with results from the well-established boundary element method (BEM) and published experimental results. In particular, we compute EELS spectra for spherical nanoparticles, nanoparticle dimers, nanodisks supported by various substrates, and gold bowtie antennas on a silicon nitride substrate. Our EELS-FDTD implementation can be easily extended to more complex geometries and configurations and can be directly implemented within other numerical methods. Work funded by the Welch Foundation (C-1222, L-C-004), and the NSF (CNS-0821727, OCI-0959097).
Self-Consistent Optimization of Excited States within Density-Functional Tight-Binding.
Kowalczyk, Tim; Le, Khoa; Irle, Stephan
2016-01-12
We present an implementation of energies and gradients for the ΔDFTB method, an analogue of Δ-self-consistent-field density functional theory (ΔSCF) within density-functional tight-binding, for the lowest singlet excited state of closed-shell molecules. Benchmarks of ΔDFTB excitation energies, optimized geometries, Stokes shifts, and vibrational frequencies reveal that ΔDFTB provides a qualitatively correct description of changes in molecular geometries and vibrational frequencies due to excited-state relaxation. The accuracy of ΔDFTB Stokes shifts is comparable to that of ΔSCF-DFT, and ΔDFTB performs similarly to ΔSCF with the PBE functional for vertical excitation energies of larger chromophores where the need for efficient excited-state methods is most urgent. We provide some justification for the use of an excited-state reference density in the DFTB expansion of the electronic energy and demonstrate that ΔDFTB preserves many of the properties of its parent ΔSCF approach. This implementation fills an important gap in the extended framework of DFTB, where access to excited states has been limited to the time-dependent linear-response approach, and affords access to rapid exploration of a valuable class of excited-state potential energy surfaces.
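The Stokes shift follows from four ΔSCF-style total energies: vertical absorption at the ground-state geometry minus vertical emission at the relaxed excited-state geometry. The energies below are invented placeholders used only to make the bookkeeping explicit.

```python
# Worked illustration with invented total energies (hartree); not values from the paper.
HARTREE_TO_EV = 27.2114

E_gs_at_gs = -230.000      # ground state at the ground-state geometry
E_ex_at_gs = -229.850      # excited state at the ground-state geometry
E_ex_at_ex = -229.870      # excited state at its relaxed geometry
E_gs_at_ex = -229.990      # ground state at the excited-state geometry

absorption = (E_ex_at_gs - E_gs_at_gs) * HARTREE_TO_EV   # vertical absorption, ~4.08 eV here
emission   = (E_ex_at_ex - E_gs_at_ex) * HARTREE_TO_EV   # vertical emission, ~3.27 eV here
stokes_shift = absorption - emission                      # ~0.82 eV for these placeholders
```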
Directions of arrival estimation with planar antenna arrays in the presence of mutual coupling
NASA Astrophysics Data System (ADS)
Akkar, Salem; Harabi, Ferid; Gharsallah, Ali
2013-06-01
Directions of arrival (DoAs) estimation of multiple sources using an antenna array is a challenging topic in wireless communication. The DoAs estimation accuracy depends not only on the selected technique and algorithm, but also on the geometrical configuration of the antenna array used during the estimation. In this article the robustness of common planar antenna arrays against unaccounted-for mutual coupling is examined, and their DoAs estimation capabilities are compared and analysed through computer simulations using the well-known MUltiple SIgnal Classification (MUSIC) algorithm. Our analysis is based on an electromagnetic concept to calculate an approximation of the impedance matrices that define the mutual coupling matrix (MCM). Furthermore, a CRB analysis is presented and used as an asymptotic performance benchmark for the studied antenna arrays. The impact of the studied antenna array geometries on the MCM structure is also investigated. Simulation results show that the UCCA is more robust against unaccounted-for mutual coupling and performs better than both the UCA and URA geometries. The simulations also confirm that, although the UCCA achieves better performance under complicated scenarios, the URA shows better asymptotic (CRB) behaviour, which promises more accuracy in DoAs estimation.
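For reference, the MUSIC pseudospectrum with a known mutual coupling matrix folded into the steering vectors can be sketched for a simple uniform linear array; the planar geometries (UCA, URA, UCCA) and the electromagnetic coupling model of the article are not reproduced here, and the banded coupling matrix below is an assumption of this example.

```python
# MUSIC pseudospectrum sketch for a ULA with an assumed banded mutual-coupling matrix.
import numpy as np

M, d = 8, 0.5                                   # sensors, spacing in wavelengths
true_doas = np.deg2rad([-20.0, 15.0])
snapshots = 200
rng = np.random.default_rng(1)

def steer(theta):
    """Ideal ULA steering vector for angle theta (radians)."""
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

c = 0.3 * np.exp(1j * 0.5)                      # assumed nearest-neighbour coupling coefficient
MCM = np.eye(M, dtype=complex) + c * (np.eye(M, k=1) + np.eye(M, k=-1))

A = MCM @ np.column_stack([steer(t) for t in true_doas])
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
N = 0.1 * (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots)))
X = A @ S + N                                   # simulated array snapshots

R = X @ X.conj().T / snapshots                  # sample covariance
_, eigvec = np.linalg.eigh(R)                   # eigenvalues in ascending order
En = eigvec[:, :M - 2]                          # noise subspace (2 sources assumed known)

scan = np.deg2rad(np.linspace(-90.0, 90.0, 721))
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ (MCM @ steer(t))) ** 2
                     for t in scan])
# peaks of `spectrum` occur near the true DoAs when the coupling is accounted for
```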
CrossTalk: The Journal of Defense Software Engineering. Volume 20, Number 2
2007-02-01
article is to show that when an organization is already doing competent project management, the effort to benchmark that capability by using CMMI is ... process-improvement evolution. By Watts S. Humphrey, Dr. Michael D. Konrad, James W. Over, and William C. Peterson. The ImprovAbility Model: This model helps ...
Design and evaluation of Nemesis, a scalable, low-latency, message-passing communication subsystem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buntinas, D.; Mercier, G.; Gropp, W.
2005-12-02
This paper presents a new low-level communication subsystem called Nemesis. Nemesis has been designed and implemented to be scalable and efficient, both for intranode communication using shared memory and for internode communication using high-performance networks, and it is natively multimethod-enabled. Nemesis has been integrated into MPICH2 as a CH3 channel and delivers better performance than other dedicated communication channels in MPICH2. Furthermore, the resulting MPICH2 architecture outperforms other MPI implementations in point-to-point benchmarks.
Heliostat field cost reduction by `slope drive' optimization
NASA Astrophysics Data System (ADS)
Arbes, Florian; Weinrebe, Gerhard; Wöhrbach, Markus
2016-05-01
An algorithm to optimize power tower heliostat fields employing heliostats with so-called slope drives is presented. It is shown that a field using heliostats with the slope drive axes configuration has the same performance as a field with conventional azimuth-elevation tracking heliostats. Even though heliostats with the slope drive configuration have a limited tracking range, field groups of heliostats with different axes or different drives are not needed for different positions in the heliostat field. The impacts of selected parameters on a benchmark power plant (PS10 near Seville, Spain) are analyzed.
Investigations of Crossed Andreev Reflection in Hybrid Superconductor-Ferromagnet Structures
ERIC Educational Resources Information Center
Colci O'Hara, Madalina
2009-01-01
Cooper pair splitting is predicted to occur in hybrid devices where a superconductor is coupled to two ferromagnetic wires placed at a distance less than the superconducting coherence length. This thesis searches for signatures of this process, called crossed Andreev reflection (CAR), in three device geometries. The first devices studied are…
Addressing Misconceptions in Geometry through Written Error Analyses
ERIC Educational Resources Information Center
Kembitzky, Kimberle A.
2009-01-01
This study examined the improvement of students' comprehension of geometric concepts through analytical writing about their own misconceptions using a reflective tool called an ERNIe (acronym for ERror aNalysIs). The purpose of this study was to determine whether the ERNIe process could be used to correct geometric misconceptions, as well as how…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rybicki, E.F.; Luiskutty, C.T.; Sutrick, J.S.
This User's Manual contains information for four fracture/proppant models. TUPROP1 contains a Geertsma and de Klerk type fracture model. The section of the program utilizing the proppant fracture geometry data from the pseudo three-dimensional highly elongated fracture model is called TUPROPC. The analogous proppant section of the program that was modified to accept fracture shape data from SA3DFRAC is called TUPROPS. TUPROPS also includes fracture closure. Finally there is the penny fracture and its proppant model, PENNPROP. In the first three chapters, the proppant sections are based on the same theory for determining the proppant distribution but have modifications to support variable height fractures and modifications to accept fracture geometry from three different fracture models. Thus, information about each proppant model in the User's Manual builds on information supplied in the previous chapter. The exception to the development of combined treatment models is the penny fracture and its proppant model. In this case, a completely new proppant model was developed. A description of how to use the combined treatment model for the penny fracture is contained in Chapter 4. 2 refs.
NASA Technical Reports Server (NTRS)
Reznick, Steve
1988-01-01
Transonic Euler/Navier-Stokes computations are accomplished for wing-body flow fields using a computer program called Transonic Navier-Stokes (TNS). The wing-body grids are generated using a program called ZONER, which subdivides a coarse grid about a fighter-like aircraft configuration into smaller zones, which are tailored to local grid requirements. These zones can be either finely clustered for capture of viscous effects, or coarsely clustered for inviscid portions of the flow field. Different equation sets may be solved in the different zone types. This modular approach also affords the opportunity to modify a local region of the grid without recomputing the global grid. This capability speeds up the design optimization process when quick modifications to the geometry definition are desired. The solution algorithm embodied in TNS is implicit, and is capable of capturing pressure gradients associated with shocks. The algebraic turbulence model employed has proven adequate for viscous interactions with moderate separation. Results confirm that the TNS program can successfully be used to simulate transonic viscous flows about complicated 3-D geometries.
The Ramachandran Number: An Order Parameter for Protein Geometry
Mannige, Ranjan V.; Kundu, Joyjit; Whitelam, Stephen; ...
2016-08-04
Three-dimensional protein structures usually contain regions of local order, called secondary structure, such as α-helices and β-sheets. Secondary structure is characterized by the local rotational state of the protein backbone, quantified by two dihedral angles called φ and ψ. Particular types of secondary structure can generally be described by a single (diffuse) location on a two-dimensional plot drawn in the space of the angles φ and ψ, called a Ramachandran plot. By contrast, a recently-discovered nanomaterial made from peptoids, structural isomers of peptides, displays a secondary-structure motif corresponding to two regions on the Ramachandran plot [Mannige et al., Nature 526, 415 (2015)]. In order to describe such 'higher-order' secondary structure in a compact way we introduce here a means of describing regions on the Ramachandran plot in terms of a single Ramachandran number, R, which is a structurally meaningful combination of φ and ψ. We show that the potential applications of R are numerous: it can be used to describe the geometric content of protein structures, and can be used to draw diagrams that reveal, at a glance, the frequency of occurrence of regular secondary structures and disordered regions in large protein datasets. We propose that R might be used as an order parameter for protein geometry for a wide range of applications.
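As a rough illustration of how a single number can encode a (φ, ψ) pair, the snippet below uses the simplified linear form R = (φ + ψ + 360°)/720°; the published definition includes additional scaling and discretization details, so this should be read as an approximation rather than the authors' exact formula.

```python
def ramachandran_number(phi, psi):
    """Map backbone dihedrals (phi, psi), given in degrees within [-180, 180],
    to a single number R in [0, 1].

    Simplified linear form R = (phi + psi + 360) / 720; the published
    definition adds scaling/discretization terms, so this is illustrative.
    """
    return (phi + psi + 360.0) / 720.0

# Example: a canonical alpha-helical residue (phi ~ -60, psi ~ -45)
print(ramachandran_number(-60.0, -45.0))   # ~0.354
```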
von Eiff, Wilfried
2015-01-01
Hospitals worldwide are facing the same opportunities and threats: the demographics of an aging population; steady increases in chronic diseases and severe illnesses; and a steadily increasing demand for medical services with more intensive treatment for multi-morbid patients. Additionally, patients are becoming more demanding. They expect high quality medicine within a dignity-driven and painless healing environment. The severe financial pressures that these developments entail oblige care providers to pursue ever greater cost containment and to apply process reengineering, as well as continuous performance improvement measures, so as to achieve future financial sustainability. At the same time, regulators are calling for improved patient outcomes. Benchmarking and best practice management are proven performance improvement tools for enabling hospitals to achieve a higher level of clinical output quality, enhanced patient satisfaction, and care delivery capability, while simultaneously containing and reducing costs. This chapter aims to clarify what benchmarking is and what it is not. Furthermore, it argues that benchmarking is a powerful managerial tool for improving decision-making processes that can contribute to the above-mentioned improvement measures in health care delivery. The benchmarking approach described in this chapter is oriented toward the philosophy of an input-output model and is explained using practical international examples from different industries in various countries. Benchmarking is not a project with a defined start and end point, but a continuous initiative of comparing key performance indicators, process structures, and best practices from best-in-class companies inside and outside the industry. Benchmarking is an ongoing process of measuring and searching for best-in-class performance: measure yourself against yourself over time on key performance indicators; measure yourself against others; identify best practices; equal or exceed those best practices in your institution; and focus on simple and effective ways to implement solutions. Comparing only figures, such as average length of stay, costs of procedures, infection rates, or out-of-stock rates, can easily lead to wrong conclusions and to decision making with often-disastrous consequences. Just looking at figures and ratios is not a sufficient basis for detecting potential excellence. It is necessary to look beyond the numbers to understand how processes work and contribute to best-in-class results. Best practices from even quite different industries can enable hospitals to leapfrog results in patient orientation, clinical excellence, and cost-effectiveness. In contrast to common benchmarking approaches, it is pointed out that a comparison that does not "look behind the figures" (that is, one not grounded in familiarity with the process structure, process dynamics and drivers, process institutions and rules, and process-related incentive components) will be severely limited in the reliability and quality of its findings. To demonstrate the transferability of benchmarking results between different industries, practical examples from health care, the automotive industry, and hotel services have been selected. Additionally, it is shown that international comparisons between hospitals providing medical services in different health care systems have great potential for achieving leapfrog results in medical quality, organization of service provision, effective work structures, purchasing and logistics processes, and management.
NASA Astrophysics Data System (ADS)
Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey
2016-04-01
Tsunamis are huge waves with long wave periods and wave lengths that can cause great devastation and loss of life when they strike a coast. The interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite-difference method to solve the 2D depth-averaged linear and nonlinear forms of the shallow water equations (NSWE) in long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between the 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach. The experimental setup is a 1:400 scale model of Monai Valley, located on the west coast of Okushiri Island, Japan. The other benchmark problem was discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) Annual Meeting in Portland, USA. It is a field dataset recording the 2011 Japan tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by the Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in the Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT-Japan Joint Call and Istanbul Metropolitan Municipality are all acknowledged.
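To make the contrast between the model classes concrete, here is a minimal, hand-rolled sketch of a 1-D linear shallow-water step on a staggered grid; it is only a toy analogue of the 2-D depth-averaged NSWE solved by NAMI DANCE (and far simpler than the 3D-NS equations solved by FLOW 3D). Grid spacing, depth, and time step are placeholder inputs, and the usual CFL restriction dt ≤ dx/√(gh) is assumed.

```python
import numpy as np

def linear_swe_1d(eta0, depth, dx, dt, n_steps, g=9.81):
    """Leapfrog a 1-D *linear* shallow-water system (a toy analogue of the
    depth-averaged NSWE used for long-wave/tsunami problems).

    eta0  : initial free-surface elevation, numpy array of shape (nx,)
    depth : still-water depth (scalar, metres)
    """
    eta = eta0.copy()
    u = np.zeros(eta.size + 1)        # velocities live on a staggered grid
    for _ in range(n_steps):
        # momentum: du/dt = -g * d(eta)/dx
        u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])
        # continuity: d(eta)/dt = -h * du/dx
        eta -= depth * dt / dx * (u[1:] - u[:-1])
    return eta
```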
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelbrot, B.B.
1991-03-01
The following statements are obviously quite wrong: oil fields are circular; they are the same size and are distributed uniformly throughout the world; soil is of uniform porosity and permeability; after water has been pumped into a field it seeps through as an underground sphere. The preceding statements are so grossly incorrect that they do not even provide useful first approximations that one could improve upon by adding so-called corrective terms. For example, one gains little by starting with the notion of a uniform distribution of oil fields and then assuming it is perturbed by small Gaussian scatter. The flow of water in a porous medium often fingers out in a pattern so diffuse that a sphere is not a useful point of departure in describing it. In summary, even the simplest data underlying petroleum geology exhibit very gross irregularity and unevenness. Fractal geometry is the proper geometry of manageable irregularity, fragmentation, and unevenness. It is the only workable alternative between the excessive order of the Euclidean geometry and unmanageable disorder. The main features of fractal geometry will be described and several techniques will be pointed out that show promise for the petroleum geologist.
NASA Astrophysics Data System (ADS)
Asay-Davis, Xylar; Cornford, Stephen; Martin, Daniel; Gudmundsson, Hilmar; Holland, David; Holland, Denise
2015-04-01
The MISMIP and MISMIP3D marine ice sheet model intercomparison exercises have become popular benchmarks, and several modeling groups have used them to show how their models compare to both analytical results and other models. Similarly, the ISOMIP (Ice Shelf-Ocean Model Intercomparison Project) experiments have acted as a proving ground for ocean models with sub-ice-shelf cavities. As coupled ice sheet-ocean models become available, an updated set of benchmark experiments is needed. To this end, we propose sequel experiments, MISMIP+ and ISOMIP+, with an end goal of coupling the two in a third intercomparison exercise, MISOMIP (the Marine Ice Sheet-Ocean Model Intercomparison Project). Like MISMIP3D, the MISMIP+ experiments take place in an idealized, three-dimensional setting and compare full 3D (Stokes) and reduced, hydrostatic models. Unlike the earlier exercises, the primary focus will be the response of models to sub-shelf melting. The chosen configuration features an ice shelf that experiences substantial lateral shear and buttresses the upstream ice, and so is well suited to melting experiments. Differences between the steady states of each model are minor compared to the response to melt-rate perturbations, reflecting typical real-world applications where parameters are chosen so that the initial states of all models tend to match observations. The three ISOMIP+ experiments have been designed to make use of the same bedrock topography as MISMIP+ and of ice-shelf geometries from MISMIP+ results produced by the BISICLES ice-sheet model. The first two experiments use static ice-shelf geometries to simulate the evolution of ocean dynamics and resulting melt rates to a quasi-steady state when far-field forcing changes either from a cold to a warm state or from a warm to a cold state. The third experiment prescribes 200 years of dynamic ice-shelf geometry (with both retreating and advancing ice) based on a BISICLES simulation along with similar flips between warm and cold states in the far-field ocean forcing. The MISOMIP experiment combines the MISMIP+ experiments with the third ISOMIP+ experiment. Changes in far-field ocean forcing lead to a rapid (over ~1-2 years) increase in sub-ice-shelf melting, which is allowed to drive ice-shelf retreat for ~100 years. Then, the far-field forcing is switched to a cold state, leading to a rapid decrease in melting and a subsequent advance over ~100 years. To illustrate, we present results from BISICLES and POP2x experiments for each of the three intercomparison exercises.
Center for Extended Magnetohydrodynamics Modeling - Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, Scott
This project's funding supported approximately 74 percent of a Ph.D. graduate student, not including costs of travel and supplies. We had a highly successful research project including the development of a second-order implicit electromagnetic kinetic ion hybrid model [Cheng 2013, Sturdevant 2016], direct comparisons with the extended MHD NIMROD code and kinetic simulation [Schnack 2013], modeling of slab tearing modes using the fully kinetic ion hybrid model and, finally, modeling global tearing modes in cylindrical geometry using gyrokinetic simulation [Chen 2015, Chen 2016]. We developed an electromagnetic second-order implicit kinetic ion, fluid electron hybrid model [Cheng 2013]. As a first step, we assumed isothermal electrons, but have included drift-kinetic electrons in similar models [Chen 2011]. We used this simulation to study the nonlinear evolution of the tearing mode in slab geometry, including nonlinear evolution and saturation [Cheng 2013]. Later, we compared this model directly to extended MHD calculations using the NIMROD code [Schnack 2013]. In this study, we investigated the ion-temperature-gradient instability with an extended MHD code for the first time and obtained reasonable agreement with the kinetic calculation in terms of linear frequency, growth rate and mode structure. We then extended this model to include orbit averaging and sub-cycling of the ions and compared directly to gyrokinetic theory [Sturdevant 2016]. This work was highlighted in an Invited Talk at the International Conference on the Numerical Simulation of Plasmas in 2015. The orbit-averaging, sub-cycling multi-scale algorithm is amenable to hybrid architectures with GPUs or math co-processors. Additionally, our participation in the Center for Extended Magnetohydrodynamics motivated our research on developing the capability for gyrokinetic simulation to model a global tearing mode. We did this in cylindrical geometry, where the results could be benchmarked with existing eigenmode calculations. First, we developed a gyrokinetic code capable of simulating long wavelengths using a fluid electron model [Chen 2015]. We benchmarked this code with an eigenmode calculation. Besides having to rewrite the field solver due to the breakdown of the gyrokinetic ordering at long wavelengths, very high radial resolution was required. We developed a technique in which we used the solution from the eigenmode solver to specify radial boundary conditions, allowing for a very high radial resolution of the inner solution. Using this technique enabled us to use our direct algorithm with gyrokinetic ions and drift kinetic electrons [Chen 2016]. This work was highlighted in an Invited Talk at the American Physical Society - Division of Plasma Physics meeting in 2015.
Multiplex visibility graphs to investigate recurrent neural network dynamics
NASA Astrophysics Data System (ADS)
Bianchi, Filippo Maria; Livi, Lorenzo; Alippi, Cesare; Jenssen, Robert
2017-03-01
A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and is typically based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize the internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series generated by each neuron's activations with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Subsequently, the horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call data records show the effectiveness of the proposed methods.
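The horizontal visibility graph construction mentioned above is simple enough to state in a few lines. The sketch below builds the adjacency matrix for one scalar time series (one ESN neuron's activations would be one such series); it is a naive O(n²) illustration, not the authors' implementation.

```python
import numpy as np

def horizontal_visibility_graph(x):
    """Adjacency matrix of the horizontal visibility graph of series x:
    samples i and j (i < j) are linked iff every intermediate sample is
    strictly lower than both x[i] and x[j]; adjacent samples are always linked."""
    n = len(x)
    A = np.zeros((n, n), dtype=int)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                A[i, j] = A[j, i] = 1
    return A
```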
On the inverse problem of blade design for centrifugal pumps and fans
NASA Astrophysics Data System (ADS)
Kruyt, N. P.; Westra, R. W.
2014-06-01
The inverse problem of blade design for centrifugal pumps and fans has been studied. The solution to this problem provides the geometry of rotor blades that realize specified performance characteristics, together with the corresponding flow field. Here a three-dimensional solution method is described in which the so-called meridional geometry is fixed and the distribution of the azimuthal angle at the three-dimensional blade surface is determined for blades of infinitesimal thickness. The developed formulation is based on potential-flow theory. Besides the blade impermeability condition at the pressure and suction side of the blades, an additional boundary condition at the blade surface is required in order to fix the unknown blade geometry. For this purpose the mean-swirl distribution is employed. The iterative numerical method is based on a three-dimensional finite element method approach in which the flow equations are solved on the domain determined by the latest estimate of the blade geometry, with the mean-swirl distribution boundary condition at the blade surface being enforced. The blade impermeability boundary condition is then used to find an improved estimate of the blade geometry. The robustness of the method is increased by specific techniques, such as spanwise-coupled solution of the discretized impermeability condition and the use of under-relaxation in adjusting the estimates of the blade geometry. Various examples are shown that demonstrate the effectiveness and robustness of the method in finding a solution for the blade geometry of different types of centrifugal pumps and fans. The influence of the employed mean-swirl distribution on the performance characteristics is also investigated.
Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data
Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.
2015-01-01
We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
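The abstract notes that the distributed slip model is obtained from a simple analytical Bayesian solution. A generic version of such a closed-form update, for a linear forward model with Gaussian prior and noise, is sketched below; the matrices G, Cd, Cm and the prior mean are placeholders, and the actual algorithm additionally searches over fault geometry.

```python
import numpy as np

def gaussian_posterior(G, d, Cd, Cm, m0):
    """Closed-form posterior for d = G m + noise, with Gaussian prior
    m ~ N(m0, Cm) and noise ~ N(0, Cd).

    Returns the posterior mean and covariance of the slip model m."""
    Cd_inv = np.linalg.inv(Cd)
    Cm_inv = np.linalg.inv(Cm)
    post_cov = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
    post_mean = post_cov @ (G.T @ Cd_inv @ d + Cm_inv @ m0)
    return post_mean, post_cov
```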
The Use of Pro/Engineer CAD Software and Fishbowl Tool Kit in Ray-tracing Analysis
NASA Technical Reports Server (NTRS)
Nounu, Hatem N.; Kim, Myung-Hee Y.; Ponomarev, Artem L.; Cucinotta, Francis A.
2009-01-01
This document is designed as a manual for a user who wants to operate the Pro/ENGINEER (ProE) Wildfire 3.0 with the NASA Space Radiation Program's (SRP) custom-designed Toolkit, called 'Fishbowl', for the ray tracing of complex spacecraft geometries given by a ProE CAD model. The analysis of spacecraft geometry through ray tracing is a vital part in the calculation of health risks from space radiation. Space radiation poses severe risks of cancer, degenerative diseases and acute radiation sickness during long-term exploration missions, and shielding optimization is an important component in the application of radiation risk models. Ray tracing is a technique in which 3-dimensional (3D) vehicle geometry can be represented as the input for the space radiation transport code and subsequent risk calculations. In ray tracing, a certain number of rays (on the order of 1000) are used to calculate the equivalent thickness, say of aluminum, of the spacecraft geometry seen at a point of interest called the dose point. The rays originate at the dose point and terminate at a homogeneously distributed set of points lying on a sphere that circumscribes the spacecraft and that has its center at the dose point. The distance a ray traverses in each material is converted to aluminum or other user-selected equivalent thickness. Then all equivalent thicknesses are summed up for each ray. Since each ray points in a direction, the aluminum equivalent of each ray represents the shielding that the geometry provides to the dose point from that particular direction. This manual first lists contact information for help in installing ProE and Fishbowl, in addition to notes on platform support and system requirements. Second, the document shows the user how to use the software to ray trace a Pro/E-designed 3-D assembly, and it will serve later as a reference for troubleshooting. The user is assumed to have previous knowledge of ProE and CAD modeling.
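The ray-tracing procedure described above reduces to summing material-equivalent thicknesses along directions sampled uniformly on a sphere around the dose point. The sketch below illustrates that bookkeeping; trace_ray is a hypothetical callback standing in for the ProE/Fishbowl geometry engine, and the aluminum-equivalent conversion by areal density is a simplification of the user-selected conversion mentioned in the text.

```python
import numpy as np

def aluminum_equivalent(thickness_cm, density, al_density=2.70):
    """Convert a traversed thickness of some material to an aluminum-
    equivalent thickness by matching areal density (g/cm^2)."""
    return thickness_cm * density / al_density

def shielding_distribution(trace_ray, n_rays=1000, seed=0):
    """Aluminum-equivalent shielding seen from a dose point along rays whose
    directions are distributed uniformly on the unit sphere.

    trace_ray(direction) is a hypothetical callback assumed to return a list
    of (thickness_cm, density) segments intersected along that direction."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_rays, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # uniform on sphere
    return np.array([sum(aluminum_equivalent(t, rho) for t, rho in trace_ray(d))
                     for d in dirs])
```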
Photonic crystal geometry for organic solar cells.
Ko, Doo-Hyun; Tumbleston, John R; Zhang, Lei; Williams, Stuart; DeSimone, Joseph M; Lopez, Rene; Samulski, Edward T
2009-07-01
We report organic solar cells with a photonic crystal nanostructure embossed in the photoactive bulk heterojunction layer, a topography that exhibits a 3-fold enhancement of the absorption in specific regions of the solar spectrum in part through multiple excitation resonances. The photonic crystal geometry is fabricated using a materials-agnostic process called PRINT wherein highly ordered arrays of nanoscale features are readily made in a single processing step over wide areas (approximately 4 cm(2)) that is scalable. We show efficiency improvements of approximately 70% that result not only from greater absorption, but also from electrical enhancements. The methodology is generally applicable to organic solar cells and the experimental findings reported in our manuscript corroborate theoretical expectations.
Geometry of Optimal Paths around Focal Singular Surfaces in Differential Games
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melikyan, Arik; Bernhard, Pierre
2005-06-15
We investigate a special type of singularity in non-smooth solutions of first-order partial differential equations, with emphasis on Isaacs' equation. This type, called focal manifold, is characterized by the incoming trajectory fields on the two sides and a discontinuous gradient. We provide a complete set of constructive equations under various hypotheses on the singularity, culminating with the case where no a priori hypothesis on its geometry is known, and where the extremal trajectory fields need not be collinear. We show two examples of differential games exhibiting non-collinear fields of extremal trajectories on the focal manifold, one with a transversal approach and one with a tangential approach.
Li, Desheng
2014-01-01
This paper proposes a novel variant of cooperative quantum-behaved particle swarm optimization (CQPSO) algorithm with two mechanisms to reduce the search space and avoid the stagnation, called CQPSO-DVSA-LFD. One mechanism is called Dynamic Varying Search Area (DVSA), which takes charge of limiting the ranges of particles' activity into a reduced area. On the other hand, in order to escape the local optima, Lévy flights are used to generate the stochastic disturbance in the movement of particles. To test the performance of CQPSO-DVSA-LFD, numerical experiments are conducted to compare the proposed algorithm with different variants of PSO. According to the experimental results, the proposed method performs better than other variants of PSO on both benchmark test functions and the combinatorial optimization issue, that is, the job-shop scheduling problem. PMID:24851085
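For readers unfamiliar with the Lévy-flight disturbance mentioned above, the snippet below draws heavy-tailed step lengths with Mantegna's algorithm, a common way to generate such perturbations; the stability index and the scaling factor applied to particle positions are illustrative choices, not the parameters used in CQPSO-DVSA-LFD.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(size, beta=1.5, rng=None):
    """Draw Levy-flight step lengths via Mantegna's algorithm (stability
    index beta in (1, 2]); used as a stochastic disturbance that helps
    particles escape local optima."""
    rng = rng or np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

# e.g. perturb a particle position x:  x += 0.01 * levy_step(x.shape)
```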
Mechanisms of Pyroelectricity in Three- and Two-Dimensional Materials
NASA Astrophysics Data System (ADS)
Liu, Jian; Pantelides, Sokrates T.
2018-05-01
Pyroelectricity is a very promising phenomenon in three- and two-dimensional materials, but first-principles calculations have not so far been used to elucidate the underlying mechanisms. Here we report density-functional theory (DFT) calculations based on the Born-Szigeti theory of pyroelectricity, by combining fundamental thermodynamics and the modern theory of polarization. We find satisfactory agreement with experimental data in the case of bulk benchmark materials, showing that the so-called electron-phonon renormalization, whose contribution has been traditionally viewed as negligible, is important. We predict out-of-plane pyroelectricity in the recently synthesized Janus MoSSe monolayer and in-plane pyroelectricity in the group-IV monochalcogenide GeS monolayer. It is notable that the so-called secondary pyroelectricity is found to be dominant in GeS monolayer. The present work opens a theoretical route to study the pyroelectric effect using DFT and provides a valuable tool in the search for new candidates for pyroelectric applications.
Online Object Tracking, Learning and Parsing with And-Or Graphs.
Wu, Tianfu; Lu, Yang; Zhu, Song-Chun
2017-12-01
This paper presents a method, called AOGTracker, for simultaneously tracking, learning and parsing (TLP) of unknown objects in video sequences with a hierarchical and compositional And-Or graph (AOG) representation. The TLP method is formulated in the Bayesian framework with spatial and temporal dynamic programming (DP) algorithms inferring object bounding boxes on-the-fly. During online learning, the AOG is discriminatively learned using latent SVM [1] to account for appearance (e.g., lighting and partial occlusion) and structural (e.g., different poses and viewpoints) variations of a tracked object, as well as distractors (e.g., similar objects) in the background. Three key issues in online inference and learning are addressed: (i) maintaining purity of positive and negative examples collected online, (ii) controlling model complexity in latent structure learning, and (iii) identifying critical moments to re-learn the structure of the AOG based on its intrackability. The intrackability measures the uncertainty of an AOG based on its score maps in a frame. In experiments, our AOGTracker is tested on two popular tracking benchmarks with the same parameter setting: the TB-100/50/CVPR2013 benchmarks [3] and the VOT benchmarks [4]-VOT2013, 2014, 2015 and TIR2015 (thermal imagery tracking). In the former, our AOGTracker outperforms state-of-the-art tracking algorithms, including two trackers based on deep convolutional networks [5], [6]. In the latter, our AOGTracker outperforms all other trackers in VOT2013 and is comparable to the state-of-the-art methods in VOT2014, 2015 and TIR2015.
Sexton, John B; Helmreich, Robert L; Neilands, Torsten B; Rowan, Kathy; Vella, Keryn; Boyden, James; Roberts, Peter R; Thomas, Eric J
2006-01-01
Background There is widespread interest in measuring healthcare provider attitudes about issues relevant to patient safety (often called safety climate or safety culture). Here we report the psychometric properties, establish benchmarking data, and discuss emerging areas of research with the University of Texas Safety Attitudes Questionnaire. Methods Six cross-sectional surveys of health care providers (n = 10,843) in 203 clinical areas (including critical care units, operating rooms, inpatient settings, and ambulatory clinics) in three countries (USA, UK, New Zealand). Multilevel factor analyses yielded results at the clinical area level and the respondent nested within clinical area level. We report scale reliability, floor/ceiling effects, item factor loadings, inter-factor correlations, and percentage of respondents who agree with each item and scale. Results A six factor model of provider attitudes fit to the data at both the clinical area and respondent nested within clinical area levels. The factors were: Teamwork Climate, Safety Climate, Perceptions of Management, Job Satisfaction, Working Conditions, and Stress Recognition. Scale reliability was 0.9. Provider attitudes varied greatly both within and among organizations. Results are presented to allow benchmarking among organizations and emerging research is discussed. Conclusion The Safety Attitudes Questionnaire demonstrated good psychometric properties. Healthcare organizations can use the survey to measure caregiver attitudes about six patient safety-related domains, to compare themselves with other organizations, to prompt interventions to improve safety attitudes and to measure the effectiveness of these interventions. PMID:16584553
Wildenhain, Jan; Spitzer, Michaela; Dolma, Sonam; Jarvik, Nick; White, Rachel; Roy, Marcia; Griffiths, Emma; Bellows, David S.; Wright, Gerard D.; Tyers, Mike
2016-01-01
The network structure of biological systems suggests that effective therapeutic intervention may require combinations of agents that act synergistically. However, a dearth of systematic chemical combination datasets have limited the development of predictive algorithms for chemical synergism. Here, we report two large datasets of linked chemical-genetic and chemical-chemical interactions in the budding yeast Saccharomyces cerevisiae. We screened 5,518 unique compounds against 242 diverse yeast gene deletion strains to generate an extended chemical-genetic matrix (CGM) of 492,126 chemical-gene interaction measurements. This CGM dataset contained 1,434 genotype-specific inhibitors, termed cryptagens. We selected 128 structurally diverse cryptagens and tested all pairwise combinations to generate a benchmark dataset of 8,128 pairwise chemical-chemical interaction tests for synergy prediction, termed the cryptagen matrix (CM). An accompanying database resource called ChemGRID was developed to enable analysis, visualisation and downloads of all data. The CGM and CM datasets will facilitate the benchmarking of computational approaches for synergy prediction, as well as chemical structure-activity relationship models for anti-fungal drug discovery. PMID:27874849
Area-to-point regression kriging for pan-sharpening
NASA Astrophysics Data System (ADS)
Wang, Qunming; Shi, Wenzhong; Atkinson, Peter M.
2016-04-01
Pan-sharpening is a technique to combine the fine spatial resolution panchromatic (PAN) band with the coarse spatial resolution multispectral bands of the same satellite to create a fine spatial resolution multispectral image. In this paper, area-to-point regression kriging (ATPRK) is proposed for pan-sharpening. ATPRK considers the PAN band as the covariate. Moreover, ATPRK is extended with a local approach, called adaptive ATPRK (AATPRK), which fits a regression model using a local, non-stationary scheme such that the regression coefficients change across the image. The two geostatistical approaches, ATPRK and AATPRK, were compared to the 13 state-of-the-art pan-sharpening approaches summarized in Vivone et al. (2015) in experiments on three separate datasets. ATPRK and AATPRK produced more accurate pan-sharpened images than the 13 benchmark algorithms in all three experiments. Unlike the benchmark algorithms, the two geostatistical solutions precisely preserved the spectral properties of the original coarse data. Furthermore, ATPRK can be enhanced by a local scheme in AATRPK, in cases where the residuals from a global regression model are such that their spatial character varies locally.
High-performance electronic image stabilisation for shift and rotation correction
NASA Astrophysics Data System (ADS)
Parker, Steve C. J.; Hickman, D. L.; Wu, F.
2014-06-01
A novel low size, weight and power (SWaP) video stabiliser called HALO™ is presented that uses a SoC to combine the high processing bandwidth of an FPGA, with the signal processing flexibility of a CPU. An image based architecture is presented that can adapt the tiling of frames to cope with changing scene dynamics. A real-time implementation is then discussed that can generate several hundred optical flow vectors per video frame, to accurately calculate the unwanted rigid body translation and rotation of camera shake. The performance of the HALO™ stabiliser is comprehensively benchmarked against the respected Deshaker 3.0 off-line stabiliser plugin to VirtualDub. Eight different videos are used for benchmarking, simulating: battlefield, surveillance, security and low-level flight applications in both visible and IR wavebands. The results show that HALO™ rivals the performance of Deshaker within its operating envelope. Furthermore, HALO™ may be easily reconfigured to adapt to changing operating conditions or requirements; and can be used to host other video processing functionality like image distortion correction, fusion and contrast enhancement.
Jacobs, Stephen P; Parsons, Matthew; Rouse, Paul; Parsons, John; Gunderson-Reid, Michelle
2018-04-01
Service providers and funders need ways to work together to improve services. Identifying critical performance variables provides a mechanism by which funders can understand what they are purchasing without getting caught up in restrictive service specifications that limit the ability of service providers to meet the needs of clients. An implementation pathway and benchmarking programme called IN TOUCH provided contracted providers of home support and funders with a consistent methodology to follow when developing and implementing new restorative approaches to service delivery. Data from performance measurement were used to triangulate the personal and social worlds of the stakeholders, enabling them to develop a shared understanding of what is working and what is not. The initial implementation of IN TOUCH involved five District Health Boards. The recursive dialogue encouraged by the IN TOUCH programme supports better and more sustainable service development because performance management is anchored to agreed data that has meaning to all stakeholders.
Multi-Level Bitmap Indexes for Flash Memory Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Kesheng; Madduri, Kamesh; Canon, Shane
2010-07-23
Due to their low access latency, high read speed, and power-efficient operation, flash memory storage devices are rapidly emerging as an attractive alternative to traditional magnetic storage devices. However, tests show that the most efficient indexing methods are not able to take advantage of the flash memory storage devices. In this paper, we present a set of multi-level bitmap indexes that can effectively take advantage of flash storage devices. These indexing methods use coarsely binned indexes to answer queries approximately, and then use finely binned indexes to refine the answers. Our new methods read significantly lower volumes of data at the expense of an increased disk access count, thus taking full advantage of the improved read speed and low access latency of flash devices. To demonstrate the advantage of these new indexes, we measure their performance on a number of storage systems using a standard data warehousing benchmark called the Set Query Benchmark. We observe that multi-level strategies on flash drives are up to 3 times faster than traditional indexing strategies on magnetic disk drives.
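The coarse-then-fine query strategy can be illustrated with ordinary binned indexes. In the sketch below, rows in coarse bins that lie entirely inside the query range are accepted without touching the data, and only the edge bins are refined; a real multi-level bitmap index would perform that refinement with a second, finer level of compressed bitmaps rather than a per-row check, so this is a simplified stand-in.

```python
import numpy as np

def range_query(values, lo, hi, coarse_edges):
    """Answer lo <= value < hi using coarse bins first.

    Rows in coarse bins fully inside [lo, hi) are accepted outright; only
    rows in partially covered (edge) bins are checked individually."""
    bins = np.digitize(values, coarse_edges)
    hits = []
    for b in np.unique(bins):
        b_lo = coarse_edges[b - 1] if b > 0 else -np.inf
        b_hi = coarse_edges[b] if b < len(coarse_edges) else np.inf
        rows = np.nonzero(bins == b)[0]
        if lo <= b_lo and b_hi <= hi:
            hits.extend(rows)                         # fully covered bin
        elif b_hi > lo and b_lo < hi:                 # edge bin: refine
            hits.extend(r for r in rows if lo <= values[r] < hi)
    return sorted(hits)
```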
dynGENIE3: dynamical GENIE3 for the inference of gene networks from time series expression data.
Huynh-Thu, Vân Anh; Geurts, Pierre
2018-02-21
The elucidation of gene regulatory networks is one of the major challenges of systems biology. Measurements about genes that are exploited by network inference methods are typically available either in the form of steady-state expression vectors or time series expression data. In our previous work, we proposed the GENIE3 method that exploits variable importance scores derived from Random forests to identify the regulators of each target gene. This method provided state-of-the-art performance on several benchmark datasets, but it could however not specifically be applied to time series expression data. We propose here an adaptation of the GENIE3 method, called dynamical GENIE3 (dynGENIE3), for handling both time series and steady-state expression data. The proposed method is evaluated extensively on the artificial DREAM4 benchmarks and on three real time series expression datasets. Although dynGENIE3 does not systematically yield the best performance on each and every network, it is competitive with diverse methods from the literature, while preserving the main advantages of GENIE3 in terms of scalability.
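The core of GENIE3 — ranking candidate regulators of a target gene by Random-forest importance scores — can be sketched briefly. The function below handles the steady-state case for a single target (dynGENIE3 instead regresses the target's temporal derivative on expression at the preceding time point); it uses scikit-learn's RandomForestRegressor and is a simplified illustration rather than the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rank_regulators(expr, target_idx, n_trees=100, seed=0):
    """GENIE3-style regulator ranking for one target gene.

    expr : (n_samples, n_genes) expression matrix (steady-state case).
    Returns candidate regulators sorted by Random-forest importance."""
    y = expr[:, target_idx]
    X = np.delete(expr, target_idx, axis=1)
    regulators = [g for g in range(expr.shape[1]) if g != target_idx]
    rf = RandomForestRegressor(n_estimators=n_trees, random_state=seed).fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]
    return [(regulators[i], rf.feature_importances_[i]) for i in order]
```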
Extensive sequencing of seven human genomes to characterize benchmark reference materials
Zook, Justin M.; Catoe, David; McDaniel, Jennifer; Vang, Lindsay; Spies, Noah; Sidow, Arend; Weng, Ziming; Liu, Yuling; Mason, Christopher E.; Alexander, Noah; Henaff, Elizabeth; McIntyre, Alexa B.R.; Chandramohan, Dhruva; Chen, Feng; Jaeger, Erich; Moshrefi, Ali; Pham, Khoa; Stedman, William; Liang, Tiffany; Saghbini, Michael; Dzakula, Zeljko; Hastie, Alex; Cao, Han; Deikus, Gintaras; Schadt, Eric; Sebra, Robert; Bashir, Ali; Truty, Rebecca M.; Chang, Christopher C.; Gulbahce, Natali; Zhao, Keyan; Ghosh, Srinka; Hyland, Fiona; Fu, Yutao; Chaisson, Mark; Xiao, Chunlin; Trow, Jonathan; Sherry, Stephen T.; Zaranek, Alexander W.; Ball, Madeleine; Bobe, Jason; Estep, Preston; Church, George M.; Marks, Patrick; Kyriazopoulou-Panagiotopoulou, Sofia; Zheng, Grace X.Y.; Schnall-Levin, Michael; Ordonez, Heather S.; Mudivarti, Patrice A.; Giorda, Kristina; Sheng, Ying; Rypdal, Karoline Bjarnesdatter; Salit, Marc
2016-01-01
The Genome in a Bottle Consortium, hosted by the National Institute of Standards and Technology (NIST) is creating reference materials and data for human genome sequencing, as well as methods for genome comparison and benchmarking. Here, we describe a large, diverse set of sequencing data for seven human genomes; five are current or candidate NIST Reference Materials. The pilot genome, NA12878, has been released as NIST RM 8398. We also describe data from two Personal Genome Project trios, one of Ashkenazim Jewish ancestry and one of Chinese ancestry. The data come from 12 technologies: BioNano Genomics, Complete Genomics paired-end and LFR, Ion Proton exome, Oxford Nanopore, Pacific Biosciences, SOLiD, 10X Genomics GemCode WGS, and Illumina exome and WGS paired-end, mate-pair, and synthetic long reads. Cell lines, DNA, and data from these individuals are publicly available. Therefore, we expect these data to be useful for revealing novel information about the human genome and improving sequencing technologies, SNP, indel, and structural variant calling, and de novo assembly. PMID:27271295
DOE Office of Scientific and Technical Information (OSTI.GOV)
Divan, Deepak; Brumsickle, William; Eto, Joseph
2003-04-01
This report describes a new approach for collecting information on power quality and reliability and making it available in the public domain. Making this information readily available in a form that is meaningful to electricity consumers is necessary for enabling more informed private and public decisions regarding electricity reliability. The system dramatically reduces the cost (and expertise) needed for customers to obtain information on the most significant power quality events, called voltage sags and interruptions. The system also offers widespread access to information on power quality collected from multiple sites and the potential for capturing information on the impacts of power quality problems, together enabling a wide variety of analysis and benchmarking to improve system reliability. Six case studies demonstrate selected functionality and capabilities of the system, including: linking measured power quality events to process interruption and downtime; demonstrating the ability to correlate events recorded by multiple monitors to narrow and confirm the causes of power quality events; and benchmarking power quality and reliability on a firm and regional basis.
Parametrization of an Orbital-Based Linear-Scaling Quantum Force Field for Noncovalent Interactions
2015-01-01
We parametrize a linear-scaling quantum mechanical force field called mDC for the accurate reproduction of nonbonded interactions. We provide a new benchmark database of accurate ab initio interactions between sulfur-containing molecules. A variety of nonbond databases are used to compare the new mDC method with other semiempirical, molecular mechanical, ab initio, and combined semiempirical quantum mechanical/molecular mechanical methods. It is shown that the molecular mechanical force field reproduces the benchmark results significantly and consistently more accurately than the semiempirical models, and that our mDC model produces errors half as large as those of the molecular mechanical force field. The comparisons between the methods are extended to the docking of drug candidates to the Cyclin-Dependent Kinase 2 protein receptor. We correlate the protein–ligand binding energies with their experimental inhibition constants and find that mDC produces the best correlation. Condensed-phase simulation of mDC water is performed and shown to produce O–O radial distribution functions similar to TIP4P-EW. PMID:24803856
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, Margaret A.
In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) in an effort to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. Additionally, various material reactivity worths, the surface material worth coefficient, the delayed neutron fraction, the prompt neutron decay constant, relative fission density, and relative neutron importance were all measured. The critical assembly, material reactivity worths, the surface material worth coefficient, and the delayed neutron fraction were all evaluated as benchmark experiment measurements. The reactor physics measurements are the focus of this paper; although for clarity the critical assembly benchmark specifications are briefly discussed.
Measurement of nonlinear refractive index and ionization rates in air using a wavefront sensor.
Schwarz, Jens; Rambo, Patrick; Kimmel, Mark; Atherton, Briggs
2012-04-09
A wavefront sensor has been used to measure the Kerr nonlinear focal shift of a high-intensity ultrashort pulse beam in a focusing beam geometry while accounting for the effects of plasma defocusing. It is shown that plasma defocusing plays a major role in the nonlinear focusing dynamics and that measurements of Kerr nonlinearity and ionization are coupled. Furthermore, this coupled effect leads to a novel way to measure laser ionization rates in air under atmospheric conditions as well as the Kerr nonlinearity. The measured nonlinear index n₂ compares well with values found in the literature and the measured ionization rates could be successfully benchmarked to the model developed by Perelomov, Popov, and Terentev (PPT model) [Sov. Phys. JETP 50, 1393 (1966)].
NASA Astrophysics Data System (ADS)
Rasco, B. C.
2012-03-01
The Low-Energy Neutrino Spectroscopy (LENS) experiment will precisely measure the energy spectrum of low-energy solar neutrinos via charged-current neutrino reactions on indium. The LENS detector concept applies indium-loaded scintillator in an optically-segmented lattice geometry to achieve precise time and spatial resolution with unprecedented sensitivity for low-energy neutrino events. The LENS collaboration is currently developing prototypes that aim to demonstrate the performance and selectivity of the technology and to benchmark Monte Carlo simulations that will guide scaling to the full LENS instrument. Currently a 120 liter prototype, microLENS, is operating with pure scintillator (no indium loading) in the Kimballton Underground Research Facility (KURF). We will present results from initial measurements with microLENS and plans for a 400 liter prototype, miniLENS, using indium loaded scintillator that will be installed this summer.
Rational F-theory GUTs without exotics
NASA Astrophysics Data System (ADS)
Krippendorf, Sven; Peña, Damián Kaloni Mayorga; Oehlmann, Paul-Konstantin; Ruehle, Fabian
2014-07-01
We construct F-theory GUT models without exotic matter, leading to the MSSM matter spectrum with potential singlet extensions. The interplay of engineering explicit geometric setups, absence of four-dimensional anomalies, and realistic phenomenology of the couplings places severe constraints on the allowed local models in a given geometry. In constructions based on the spectral cover we find no model satisfying all these requirements. We then provide a survey of models with additional U(1) symmetries arising from rational sections of the elliptic fibration in toric constructions and obtain phenomenologically appealing models based on SU(5) tops. Furthermore we perform a bottom-up exploration beyond the toric section constructions discussed in the literature so far and identify benchmark models passing all our criteria, which can serve as a guideline for future geometric engineering.
NASA Technical Reports Server (NTRS)
Chen, C. P.
1990-01-01
An existing Computational Fluid Dynamics code for simulating complex turbulent flows inside a liquid rocket combustion chamber was validated and further developed. The Advanced Rocket Injector/Combustor Code (ARICC) is simplified and validated against benchmark flow situations for laminar and turbulent flows. The numerical method used in the ARICC Code is re-examined for incompressible flow calculations. For turbulent flows, both the subgrid and the two-equation k-epsilon turbulence models are studied. Cases tested include the idealized Burgers' equation in complex geometries and boundaries, a laminar pipe flow, a high Reynolds number turbulent flow, and a confined coaxial jet with recirculations. The accuracy of the algorithm is examined by comparing the numerical results with analytical solutions as well as experimental data for different grid sizes.
Molecular adsorption on metal surfaces with van der Waals density functionals
NASA Astrophysics Data System (ADS)
Li, Guo; Tamblyn, Isaac; Cooper, Valentino R.; Gao, Hong-Jun; Neaton, Jeffrey B.
2012-03-01
The adsorption of 1,4-benzenediamine (BDA) on Au(111) and azobenzene on Ag(111) is investigated using density functional theory (DFT) with the nonlocal van der Waals density functional (vdW-DF) and the semilocal Perdew-Burke-Ernzerhof functional. For BDA on Au(111), the inclusion of London dispersion interactions not only dramatically enhances the molecule-substrate binding, resulting in adsorption energies consistent with experimental results, but also significantly alters the BDA binding geometry. For azobenzene on Ag(111), vdW-DFs produce superior adsorption energies compared to those obtained with other dispersion-corrected DFT approaches. These results provide evidence for the applicability of the vdW-DF approach and serve as practical benchmarks for the investigation of molecules adsorbed on noble-metal surfaces.
A comparative study of computational solutions to flow over a backward-facing step
NASA Technical Reports Server (NTRS)
Mizukami, M.; Georgiadis, N. J.; Cannon, M. R.
1993-01-01
A comparative study was conducted for computational fluid dynamic solutions to flow over a backward-facing step. This flow is a benchmark problem, with a simple geometry, but involves complicated flow physics such as free shear layers, reattaching flow, recirculation, and high turbulence intensities. Three Reynolds-averaged Navier-Stokes flow solvers with k-epsilon turbulence models were used, each using a different solution algorithm: finite difference, finite element, and hybrid finite element - finite difference. Comparisons were made with existing experimental data. Results showed that velocity profiles and reattachment lengths were predicted reasonably well by all three methods, while the skin friction coefficients were more difficult to predict accurately. It was noted that, in general, selecting an appropriate solver for each problem to be considered is important.
Parallel 3D-TLM algorithm for simulation of the Earth-ionosphere cavity
NASA Astrophysics Data System (ADS)
Toledo-Redondo, Sergio; Salinas, Alfonso; Morente-Molinera, Juan Antonio; Méndez, Antonio; Fornieles, Jesús; Portí, Jorge; Morente, Juan Antonio
2013-03-01
A parallel 3D algorithm for solving time-domain electromagnetic problems with arbitrary geometries is presented. The technique employed is the Transmission Line Modeling (TLM) method implemented in Shared Memory (SM) environments. The benchmarking performed reveals that the maximum speedup depends on the memory size of the problem as well as multiple hardware factors, like the disposition of CPUs, cache, or memory. A maximum speedup of 15 has been measured for the largest problem. In certain circumstances of low memory requirements, superlinear speedup is achieved using our algorithm. The model is employed to model the Earth-ionosphere cavity, thus enabling a study of the natural electromagnetic phenomena that occur in it. The algorithm allows complete 3D simulations of the cavity with a resolution of 10 km, within a reasonable timescale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahu, Nityananda; Gadre, Shridhar R., E-mail: gadre@iitk.ac.in, E-mail: sotiris.xantheas@pnnl.gov; Rakshit, Avijit
2014-10-28
We report new global minimum candidate structures for the (H₂O)₂₅ cluster that are lower in energy than the ones reported previously and correspond to hydrogen bonded networks with 42 hydrogen bonds and an interior, fully coordinated water molecule. These were obtained as a result of a hierarchical approach based on initial Monte Carlo Temperature Basin Paving sampling of the cluster's Potential Energy Surface with the Effective Fragment Potential, subsequent geometry optimization using the Molecular Tailoring Approach with the fragments treated at the second order Møller-Plesset (MP2) perturbation theory level (MTA-MP2) and final refinement of the entire cluster at the MP2 level of theory. The MTA-MP2 optimized cluster geometries, constructed from the fragments, were found to be within <0.5 kcal/mol from the minimum geometries obtained from the MP2 optimization of the entire (H₂O)₂₅ cluster. In addition, the grafting of the MTA-MP2 energies yields electronic energies that are within <0.3 kcal/mol from the MP2 energies of the entire cluster while preserving their energy rank order. Finally, the MTA-MP2 approach was found to reproduce the MP2 harmonic vibrational frequencies, constructed from the fragments, quite accurately when compared to the MP2 ones of the entire cluster in both the HOH bending and the OH stretching regions of the spectra.
Modeling the evolution of a ramp-flat-ramp thrust system: A geological application of DynEarthSol2D
NASA Astrophysics Data System (ADS)
Feng, L.; Choi, E.; Bartholomew, M. J.
2013-12-01
DynEarthSol2D (available at http://bitbucket.org/tan2/dynearthsol2) is a robust, adaptive, two-dimensional finite element code that solves the momentum balance and the heat equation in Lagrangian form using unstructured meshes. Verified in a number of benchmark problems, this solver uses contingent mesh adaptivity in places where shear strain is focused (localization) and a conservative mapping assisted by marker particles to preserve phase and facies boundaries during remeshing. We apply this cutting-edge geodynamic modeling tool to the evolution of a thrust fault with a ramp-flat-ramp geometry. The overall geometry of the fault is constrained by observations in the northern part of the southern Appalachian fold and thrust belt. Brittle crust is treated as a Mohr-Coulomb plastic material. The thrust fault is a zone of a finite thickness but has a lower cohesion and friction angle than its surrounding rocks. When an intervening flat separates two distinct sequential ramps crossing different stratigraphic intervals, the thrust system will experience more complex deformations than those from a single thrust fault ramp. The resultant deformations associated with sequential ramps would exhibit a spectrum of styles, of which two end members correspond to 'overprinting' and 'interference'. Reproducing these end-member styles as well as intermediate ones, our models show that the relative importance of overprinting versus interference is a sensitive function of initial fault geometry and hanging wall displacement. We further present stress and strain histories extracted from the models. If clearly distinguishable, they will guide the interpretation of field observations on thrust faults.
NASA Astrophysics Data System (ADS)
Zhai, Guang; Shirzaei, Manoochehr
2017-12-01
Geodetic observations of surface deformation associated with volcanic activities can be used to constrain volcanic source parameters and their kinematics. Simple analytical models, such as point and spherical sources, are widely used to model deformation data. The inherent nature of oversimplified model geometries makes them unable to explain fine details of surface deformation. Current nonparametric, geometry-free inversion approaches resolve the distributed volume change, assuming it varies smoothly in space, which may detect artificial volume change outside magmatic source regions. To obtain a physically meaningful representation of an irregular volcanic source, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations, namely, outliers in the background crust. First, surface deformation data are inverted using a hybrid L1- and L2-norm regularization scheme to solve for sparse volume change distributions. Next, a boundary element method is implemented to solve for the displacement discontinuity distribution of the reservoir, which satisfies a uniform pressure boundary condition. The inversion approach is thoroughly validated using benchmark and synthetic tests, of which the results show that source dimension, depth, and shape can be recovered appropriately. We apply this modeling scheme to deformation observed at Kilauea summit for periods of uplift and subsidence leading to and following the 2007 Father's Day event. We find that the magmatic source geometries for these periods are statistically distinct, which may be an indicator that magma is released from isolated compartments due to large differential pressure leading to the rift intrusion.
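A minimal sketch of the kind of sparsity-promoting inversion described above, using an elastic-net style (hybrid L1/L2) regularized least squares as a stand-in; the Green's function matrix, noise level, and regularization weights are synthetic assumptions, not the authors' Kilauea setup.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# d = G m: G maps distributed volume changes to surface displacements.
# G and d here are synthetic placeholders, not real data or Green's functions.
rng = np.random.default_rng(0)
n_obs, n_src = 200, 50
G = rng.normal(size=(n_obs, n_src))                    # hypothetical Green's function matrix
m_true = np.zeros(n_src)
m_true[[10, 11, 12]] = 1.0                             # sparse "melt accumulation"
d = G @ m_true + 0.01 * rng.normal(size=n_obs)         # noisy surface deformation

# hybrid L1/L2 penalty: l1_ratio controls the mix of sparsity and smoothness
model = ElasticNet(alpha=0.05, l1_ratio=0.7, fit_intercept=False, max_iter=10000)
model.fit(G, d)
m_est = model.coef_                                    # sparse volume-change estimate
print("non-zero sources:", np.flatnonzero(np.abs(m_est) > 1e-3))
```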
Rapid B-rep model preprocessing for immersogeometric analysis using analytic surfaces
Wang, Chenglong; Xu, Fei; Hsu, Ming-Chen; Krishnamurthy, Adarsh
2017-01-01
Computational fluid dynamics (CFD) simulations of flow over complex objects have been performed traditionally using fluid-domain meshes that conform to the shape of the object. However, creating shape-conforming meshes for complicated geometries like automobiles requires extensive geometry preprocessing. This process is usually tedious and requires modifying the geometry, including specialized operations such as defeaturing and filling of small gaps. Hsu et al. (2016) developed a novel immersogeometric fluid-flow method that does not require the generation of a boundary-fitted mesh for the fluid domain. However, their method used the NURBS parameterization of the surfaces for generating the surface quadrature points to enforce the boundary conditions, which required the B-rep model to be converted completely to NURBS before analysis could be performed. This conversion usually leads to poorly parameterized NURBS surfaces and can produce poorly trimmed or missing surface features. In addition, converting simple geometries such as cylinders to NURBS imposes a performance penalty since these geometries have to be dealt with as rational splines. As a result, the geometry has to be inspected again after conversion to ensure analysis compatibility, which can increase the computational cost. In this work, we have extended the immersogeometric method to generate surface quadrature points directly using analytic surfaces. We have developed quadrature rules for all four kinds of analytic surfaces: planes, cones, spheres, and toroids. We have also developed methods for performing adaptive quadrature on trimmed analytic surfaces. Since analytic surfaces are frequently used for constructing solid models, this method also generates quadrature points on real-world geometries faster than using only NURBS surfaces. To assess the accuracy of the proposed method, we perform simulations of a benchmark problem of flow over a torpedo shape made of analytic surfaces and compare those to immersogeometric simulations of the same model with NURBS surfaces. We also compare the results of our immersogeometric method with those obtained using boundary-fitted CFD of a tessellated torpedo shape, and quantities of interest such as drag coefficient are in good agreement. Finally, we demonstrate the effectiveness of our immersogeometric method for high-fidelity industrial-scale simulations by performing an aerodynamic analysis of a truck that has a large percentage of analytic surfaces. Using analytic surfaces over NURBS avoids unnecessary surface-type conversion and significantly reduces model-preprocessing time, while providing the same accuracy for the aerodynamic quantities of interest. PMID:29051678
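A minimal sketch of the idea of generating surface quadrature points and weights directly on an analytic surface, here a sphere via a tensor-product rule in the spherical parameters; this illustrates the concept only and is not the authors' implementation or their adaptive rules for trimmed surfaces.

```python
import numpy as np

def sphere_quadrature(radius, center, n_theta=16, n_phi=32):
    """Tensor-product quadrature on an analytic sphere: Gauss-Legendre in
    cos(theta) and a uniform rule in phi. Returns points (N, 3) and weights (N,)
    whose sum equals the sphere's surface area. Illustrative only."""
    mu, w_mu = np.polynomial.legendre.leggauss(n_theta)   # nodes/weights in cos(theta)
    phi = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi
    w_phi = np.full(n_phi, 2.0 * np.pi / n_phi)

    MU, PHI = np.meshgrid(mu, phi, indexing="ij")
    sin_t = np.sqrt(1.0 - MU**2)
    pts = np.stack([radius * sin_t * np.cos(PHI),
                    radius * sin_t * np.sin(PHI),
                    radius * MU], axis=-1).reshape(-1, 3) + center
    wts = (np.outer(w_mu, w_phi) * radius**2).ravel()
    return pts, wts

pts, wts = sphere_quadrature(1.0, np.array([0.0, 0.0, 0.0]))
print(wts.sum(), 4.0 * np.pi)  # weight sum should match the surface area 4*pi*r^2
```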
Identification of vortexes obstructing the dynamo mechanism in laboratory experiments
NASA Astrophysics Data System (ADS)
Limone, A.; Hatch, D. R.; Forest, C. B.; Jenko, F.
2013-06-01
The magnetohydrodynamic dynamo effect explains the generation of self-sustained magnetic fields in electrically conducting flows, especially in geo- and astrophysical environments. Yet the details of this mechanism are still unknown, e.g., how and to what extent the geometry, the fluid topology, the forcing mechanism, and the turbulence can have a negative effect on this process. We report on numerical simulations carried out in spherical geometry, analyzing the predicted velocity flow with the so-called singular value decomposition, a powerful technique that allows us to precisely identify vortexes in the flow which would be difficult to characterize with conventional spectral methods. We then quantify the contribution of these vortexes to the growth rate of the magnetic energy in the system. We identify an axisymmetric vortex, whose rotational direction changes periodically in time and whose dynamics are decoupled from those of the large scale background flow, that is detrimental for the dynamo effect. A comparison with experiments is carried out, showing that similar dynamics were observed in cylindrical geometry. These previously unexpected eddies, which impede the dynamo effect, offer an explanation for the experimental difficulties in attaining a dynamo in spherical geometry.
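A minimal sketch of the snapshot-SVD idea used to isolate dominant flow structures; the snapshot matrix below is a random placeholder rather than output of the dynamo simulations discussed above.

```python
import numpy as np

# Stack velocity-field snapshots as columns; left singular vectors are spatial
# modes (candidate vortex structures) and singular values rank their energy.
rng = np.random.default_rng(1)
n_grid, n_snapshots = 5000, 200
snapshots = rng.normal(size=(n_grid, n_snapshots))   # hypothetical velocity data

mean_flow = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

energy_fraction = s**2 / np.sum(s**2)
print("energy in first 5 modes:", energy_fraction[:5])
# U[:, 0] is the dominant spatial mode; Vt[0] is its time history, which could be
# inspected for the periodic sign changes described above.
```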
Wang, Peng; Zhu, Zhouquan; Huang, Shuai
2013-01-01
This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions.
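To illustrate the comparison setup (a multimodal benchmark function minimized by a population-based metaheuristic), a minimal sketch using the Rastrigin function and a basic particle swarm; this is a generic stand-in, not the SLO algorithm, and all parameters are illustrative.

```python
import numpy as np

def rastrigin(x):
    """Classic multimodal benchmark function; global minimum 0 at the origin."""
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

rng = np.random.default_rng(2)
dim, n_particles, iters = 5, 20, 200
pos = rng.uniform(-5.12, 5.12, size=(n_particles, dim))
vel = np.zeros_like(pos)
p_best = pos.copy()
p_best_val = np.array([rastrigin(p) for p in pos])
g_best = p_best[p_best_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (p_best - pos) + 1.5 * r2 * (g_best - pos)
    pos = pos + vel
    vals = np.array([rastrigin(p) for p in pos])
    improved = vals < p_best_val
    p_best[improved], p_best_val[improved] = pos[improved], vals[improved]
    g_best = p_best[p_best_val.argmin()].copy()

print("best value found:", p_best_val.min())
```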
Zhu, Zhouquan
2013-01-01
This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions. PMID:24385879
Damsel: A Data Model Storage Library for Exascale Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koziol, Quincey
The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. We will accomplish this through three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community.
Benchmark Shock Tube Experiments for Radiative Heating Relevant to Earth Re-Entry
NASA Technical Reports Server (NTRS)
Brandis, A. M.; Cruden, B. A.
2017-01-01
Detailed spectrally and spatially resolved radiance has been measured in the Electric Arc Shock Tube (EAST) facility for conditions relevant to high speed entry into a variety of atmospheres, including Earth, Venus, Titan, Mars and the Outer Planets. The tests that measured radiation relevant for Earth re-entry are the focus of this work and are taken from campaigns 47, 50, 52 and 57. These tests covered conditions from 8 km/s to 15.5 km/s at initial pressures ranging from 0.05 Torr to 1 Torr, of which shots at 0.1 and 0.2 Torr are analyzed in this paper. These conditions cover a range of points of interest for potential flight missions, including return from Low Earth Orbit, the Moon and Mars. The large volume of testing available from EAST is useful for statistical analysis of radiation data, but is problematic for identifying representative experiments for performing detailed analysis. Therefore, the intent of this paper is to select a subset of benchmark test data that can be considered for further detailed study. These benchmark shots are intended to provide more accessible data sets for future code validation studies and facility-to-facility comparisons. The shots that have been selected as benchmark data are the ones in closest agreement to a line of best fit through all of the EAST results, whilst also showing the best experimental characteristics, such as test time and convergence to equilibrium. The EAST data are presented in different formats for analysis. These data include the spectral radiance at equilibrium, the spatial dependence of radiance over defined wavelength ranges and the mean non-equilibrium spectral radiance (so-called 'spectral non-equilibrium metric'). All the information needed to simulate each experimental trace, including free-stream conditions, shock time of arrival (i.e. x-t) relation, and the spectral and spatial resolution functions, are provided.
Incremental cost effectiveness evaluation in clinical research.
Krummenauer, Frank; Landwehr, I
2005-01-28
The health economic evaluation of therapeutic and diagnostic strategies is of increasing importance in clinical research. Clinical trialists therefore have to address health economic aspects more frequently. However, whereas they are quite familiar with classical effect measures in clinical trials, the corresponding parameters in the health economic evaluation of therapeutic and diagnostic procedures are still less common. The concepts of incremental cost effectiveness ratios (ICERs) and incremental net health benefit (INHB) are illustrated and contrasted using the cost effectiveness evaluation of cataract surgery with monofocal and multifocal intraocular lenses. ICERs relate the costs of a treatment to its clinical benefit in terms of a ratio expression (indexed as Euro per clinical benefit unit). ICERs can therefore be directly compared to a pre-specified willingness to pay (WTP) benchmark, which represents the maximum cost health insurers would invest to achieve one clinical benefit unit. INHBs estimate a treatment's net clinical benefit after accounting for its cost increase versus an established therapeutic standard. Resource allocation rules can be formulated by means of both effect measures. Both the ICER and the INHB approach enable the definition of directional resource allocation rules. The allocation decisions arising from these rules are identical, as long as the willingness to pay benchmark is fixed in advance. Therefore both strategies crucially call for a priori determination of both the underlying clinical benefit endpoint (such as gain in vision lines after cataract surgery or gain in quality-adjusted life years) and the corresponding willingness to pay benchmark. The use of incremental cost effectiveness and net health benefit estimates provides a rationale for health economic allocation discussions and funding decisions. It implies the same requirements on trial protocols as already established for clinical trials, that is, the a priori definition of primary hypotheses (formulated as an allocation rule involving a pre-specified willingness to pay benchmark) and the primary clinical benefit endpoint (as a rationale for effectiveness evaluation).
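A small worked example of the two effect measures, using made-up numbers rather than the values from the cataract surgery evaluation: the ICER is the cost increase divided by the benefit increase, and the INHB is the benefit increase minus the cost increase scaled by the willingness-to-pay benchmark.

```python
# Illustrative ICER and INHB computation with placeholder numbers.
delta_cost = 1200.0      # additional cost of the new treatment per patient (Euro)
delta_effect = 0.8       # additional clinical benefit units (e.g., vision lines gained)
wtp = 2000.0             # willingness-to-pay benchmark (Euro per benefit unit)

icer = delta_cost / delta_effect            # Euro per clinical benefit unit
inhb = delta_effect - delta_cost / wtp      # net benefit in clinical benefit units

print(f"ICER = {icer:.0f} Euro per benefit unit (adopt if <= WTP = {wtp:.0f})")
print(f"INHB = {inhb:.2f} benefit units (adopt if >= 0)")
```

With these illustrative numbers both rules lead to the same adoption decision (ICER = 1500 <= 2000 and INHB = 0.2 >= 0), consistent with the statement above that the two approaches agree when the WTP benchmark is fixed in advance.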
Geometry of the perceptual space
NASA Astrophysics Data System (ADS)
Assadi, Amir H.; Palmer, Stephen; Eghbalnia, Hamid; Carew, John
1999-09-01
The concept of space and geometry varies across the subjects. Following Poincare, we consider the construction of the perceptual space as a continuum equipped with a notion of magnitude. The study of the relationships of objects in the perceptual space gives rise to what we may call perceptual geometry. Computational modeling of objects and investigation of their deeper perceptual geometrical properties (beyond qualitative arguments) require a mathematical representation of the perceptual space. Within the realm of such a mathematical/computational representation, visual perception can be studied as in the well-understood logic-based geometry. This, however, does not mean that one could reduce all problems of visual perception to their geometric counterparts. Rather, visual perception as reported by a human observer, has a subjective factor that could be analytically quantified only through statistical reasoning and in the course of repetitive experiments. Thus, the desire to experimentally verify the statements in perceptual geometry leads to an additional probabilistic structure imposed on the perceptual space, whose amplitudes are measured through intervention by human observers. We propose a model for the perceptual space and the case of perception of textured surfaces as a starting point for object recognition. To rigorously present these ideas and propose computational simulations for testing the theory, we present the model of the perceptual geometry of surfaces through an amplification of theory of Riemannian foliation in differential topology, augmented by statistical learning theory. When we refer to the perceptual geometry of a human observer, the theory takes into account the Bayesian formulation of the prior state of the knowledge of the observer and Hebbian learning. We use a Parallel Distributed Connectionist paradigm for computational modeling and experimental verification of our theory.
Controlling Kink Geometry in Nanowires Fabricated by Alternating Metal-Assisted Chemical Etching.
Chen, Yun; Li, Liyi; Zhang, Cheng; Tuan, Chia-Chi; Chen, Xin; Gao, Jian; Wong, Ching-Ping
2017-02-08
Kinked silicon (Si) nanowires (NWs) have many special properties that make them attractive for a number of applications, such as microfluidics devices, microelectronic devices, and biosensors. However, fabricating NWs with controlled three-dimensional (3D) geometry has been challenging. In this work, a novel method called alternating metal-assisted chemical etching is reported for the fabrication of kinked Si NWs with controlled 3D geometry. By the use of multiple etchants with carefully selected composition, one can control the number of kinks, their locations, and their angles by controlling the number of etchant alternations and the time in each etchant. The resulting number of kinks equals the number of times the etchant is alternated, the length of each segment separated by kinks has a linear relationship with the etching time, and the kinking angle is related to the surface tension and viscosity of the etchants. This facile method may provide a feasible and economical way to fabricate novel silicon nanowires, nanostructures, and devices for broad applications.
Geometry Calibration of the SVT in the CLAS12 Detector
NASA Astrophysics Data System (ADS)
Davies, Peter; Gilfoyle, Gerard
2016-09-01
A new detector called CLAS12 is being built in Hall B as part of the 12 GeV Upgrade at Jefferson Lab to learn how quarks and gluons form nuclei. The Silicon Vertex Tracker (SVT) is one of the subsystems designed to track the trajectory of charged particles as they are emitted from the target at large angles. The sensors of the SVT consist of long, narrow strips embedded in a silicon substrate. There are 256 strips in a sensor, with stereo angles of 0–3°. The location of the strips must be known to a precision of a few microns in order to accurately reconstruct particle tracks with the required resolution of 50-60 microns. Our first step toward achieving this resolution was to validate the nominal geometry relative to the design specification. We also resolved differences between the design and the CLAS12, Geant4-based simulation code GEMC. We developed software to apply alignment shifts to the nominal design geometry from a survey of fiducial points on the structure that supports each sensor. The final geometry will be generated by a common package written in JAVA to ensure consistency between the simulation and reconstruction codes. The code will be tested by studying the impact of known distortions of the nominal geometry in simulation. Work supported by the University of Richmond and the US Department of Energy.
A Railway Track Geometry Measuring Trolley System Based on Aided INS
Chen, Qijin; Niu, Xiaoji; Zuo, Lili; Zhang, Tisheng; Xiao, Fuqin; Liu, Yi; Liu, Jingnan
2018-01-01
Accurate measurement of the railway track geometry is a task of fundamental importance to ensure the track quality in both the construction phase and the regular maintenance stage. Conventional track geometry measuring trolleys (TGMTs) in combination with classical geodetic surveying apparatus such as total stations alone cannot meet the requirements of measurement accuracy and surveying efficiency at the same time. Accurate and fast track geometry surveying applications call for an innovative surveying method that can measure all or most of the track geometric parameters in a short time without interrupting the railway traffic. We provide a novel solution to this problem by integrating an inertial navigation system (INS) with a geodetic surveying apparatus, and design a modular TGMT system based on aided INS, which can be configured according to different surveying tasks, including precise adjustment of slab track, providing tamping measurements, measuring track deformation and irregularities, and determination of the track axis. A TGMT based on aided INS can operate in mobile surveying mode to significantly improve the surveying efficiency. Key points in the design of the TGMT's architecture and the data processing concept and workflow are introduced in detail, which should benefit subsequent research and provide a reference for the implementation of this kind of TGMT. The surveying performance of the proposed TGMT with different configurations is assessed in track geometry surveying experiments and actual projects. PMID:29439423
Structural Benchmark Testing for Stirling Convertor Heater Heads
NASA Technical Reports Server (NTRS)
Krause, David L.; Kalluri, Sreeramesh; Bowman, Randy R.
2007-01-01
The National Aeronautics and Space Administration (NASA) has identified high efficiency Stirling technology for potential use on long duration Space Science missions such as Mars rovers, deep space missions, and lunar applications. For the long life times required, a structurally significant design limit for the Stirling convertor heater head is creep deformation induced even under relatively low stress levels at high material temperatures. Conventional investigations of creep behavior adequately rely on experimental results from uniaxial creep specimens, and much creep data is available for the proposed Inconel-718 (IN-718) and MarM-247 nickel-based superalloy materials of construction. However, very little experimental creep information is available that directly applies to the atypical thin walls, the specific microstructures, and the low stress levels. In addition, the geometry and loading conditions apply multiaxial stress states on the heater head components, far from the conditions of uniaxial testing. For these reasons, experimental benchmark testing is underway to aid in accurately assessing the durability of Stirling heater heads. The investigation supplements uniaxial creep testing with pneumatic testing of heater head test articles at elevated temperatures and with stress levels ranging from one to seven times design stresses. This paper presents experimental methods, results, post-test microstructural analyses, and conclusions for both accelerated and non-accelerated tests. The Stirling projects use the results to calibrate deterministic and probabilistic analytical creep models of the heater heads to predict their life times.
Benchmarking gyrokinetic simulations in a toroidal flux-tube
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y.; Parker, S. E.; Wan, W.
2013-09-15
A flux-tube model is implemented in the global turbulence code GEM [Y. Chen and S. E. Parker, J. Comput. Phys. 220, 839 (2007)] in order to facilitate benchmarking with Eulerian codes. The global GEM assumes the magnetic equilibrium to be completely given. The initial flux-tube implementation simply selects a radial location as the center of the flux-tube and a radial size of the flux-tube, sets all equilibrium quantities (B, ∇B, etc.) to be equal to the values at the center of the flux-tube, and retains only a linear radial profile of the safety factor needed for boundary conditions. This implementation shows disagreement with Eulerian codes in linear simulations. An alternative flux-tube model based on a complete local equilibrium solution of the Grad-Shafranov equation [J. Candy, Plasma Phys. Controlled Fusion 51, 105009 (2009)] is then implemented. This results in better agreement between Eulerian codes and the particle-in-cell (PIC) method. The PIC algorithm based on the v∥-formalism [J. Reynders, Ph.D. dissertation, Princeton University, 1992] and the gyrokinetic ion/fluid electron hybrid model with kinetic electron closure [Y. Chan and S. E. Parker, Phys. Plasmas 18, 055703 (2011)] are also implemented in the flux-tube geometry and compared with the direct method for both the ion temperature gradient driven modes and the kinetic ballooning modes.
Schultz, Nathan E; Gherman, Benjamin F; Cramer, Christopher J; Truhlar, Donald G
2006-11-30
Electrode poisoning by CO is a major concern in fuel cells. As interest in applying computational methods to electrochemistry is increasing, it is important to understand the levels of theory required for reliable treatments of metal-CO interactions. In this paper we justify the use of relativistic effective core potentials for the treatment of PdCO and hence, by inference, for metal-CO interactions where the predominant bonding mechanism is charge transfer. We also sort out key issues involving basis sets, and we recommend that bond energies of 17.2, 43.3, and 69.4 kcal/mol be used as the benchmark bond energies for dissociation of Pd2 into Pd atoms, PdCO into Pd and CO, and Pd2CO into Pd2 and CO, respectively. We calculated the dipole moments of PdCO and Pd2CO, and we recommend benchmark values of 2.49 and 2.81 D, respectively. Furthermore, we tested 27 density functionals for this system and found that only hybrid density functionals can qualitatively and quantitatively predict the nature of the sigma-donation/pi-back-donation mechanism that is associated with the Pd-CO and Pd2-CO bonds. The most accurate density functionals for the systems tested in this paper are O3LYP, OLYP, PW6B95, and PBEh.
Does Watching "Do the Math" Affect Self-Efficacy and Achievement in Mathematics?
ERIC Educational Resources Information Center
Cavazos, Blanca Guadalupe
2014-01-01
"Do The Math," a 1-hour, live, educational television program provides on-air instruction in general math, geometry, pre-algebra and algebra to a target audience of 4th-12th graders. A team of math teachers also provides tutoring to students who call in for help with homework. The purpose of this study was to investigate whether watching…
Adapting Instruction to Individuals: Based on the Evidence, What Should It Mean?
ERIC Educational Resources Information Center
Lalley, James P.; Gentile, J. Ronald
2008-01-01
We examine the argument that teaching will be more effective if adapted to individuals--what we call the interaction/adaptation hypothesis. What is likely correct about this hypothesis (but needs more research) is that modality of instruction may need to be adapted to certain types of content (e.g., geometry vs. literature) or to domain of…
Ghost imaging of phase objects with classical incoherent light
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirai, Tomohiro; Setaelae, Tero; Friberg, Ari T.
2011-10-15
We describe an optical setup for performing spatial Fourier filtering in ghost imaging with classical incoherent light. This is achieved by a modification of the conventional geometry for lensless ghost imaging. It is shown on the basis of classical coherence theory that with this technique one can realize what we call phase-contrast ghost imaging to visualize pure phase objects.
Bridging the Divide--Seeing Mathematics in the World through Dynamic Geometry
ERIC Educational Resources Information Center
Aydin, Hatice; Monaghan, John
2011-01-01
In TMA, Oldknow (2009, "TEAMAT", 28, 180-195) called for ways to unlock students' skills so that they increase learning about the world of mathematics and the objects in the world around them. This article examines one way in which we may unlock the student skills. We are currently exploring the potential for students to "see" mathematics in the…
Estimating the Earth's geometry, rotation and gravity field using a multi-satellite SLR solution
NASA Astrophysics Data System (ADS)
Stefka, V.; Blossfeld, M.; Mueller, H.; Gerstl, M.; Panafidina, N.
2012-12-01
Satellite Laser Ranging (SLR) is the unique technique to determine station coordinates, Earth Orientation Parameters (EOP) and Stokes coefficients of the Earth's gravity field in one common adjustment. These parameters form the so-called "three pillars" (Plag & Pearlman, 2009) of the Global Geodetic Observing System (GGOS). In its function as an official analysis center of the International Laser Ranging Service (ILRS), DGFI is developing and maintaining software to process SLR observations called "DGFI Orbit and Geodetic parameter estimation Software" (DOGS). The software is used to analyze SLR observations and to compute multi-satellite solutions. To take advantage of different orbit characteristics (e.g. inclination and altitude), a solution using ten different spherical satellites (ETALON1/2, LAGEOS1/2, STELLA, STARLETTE, AJISAI, LARETS, LARES, BLITS) covering a period of 12 years of observations is computed. The satellites are relatively weighted using a variance component estimation (VCE). The obtained weights are analyzed w.r.t. the potential of each satellite to monitor changes in the Earth's geometry, rotation and gravity field. The estimated parameters (station coordinates and EOP) are validated w.r.t. official time series of the IERS. The Stokes coefficients are compared to recent gravity field solutions.
Hardware accelerated high performance neutron transport computation based on AGENT methodology
NASA Astrophysics Data System (ADS)
Xiao, Shanjie
The spatial heterogeneity of next generation Gen-IV nuclear reactor core designs brings challenges to neutron transport analysis. The Arbitrary Geometry Neutron Transport (AGENT) code is a three-dimensional neutron transport analysis code being developed at the Laboratory for Neutronics and Geometry Computation (NEGE) at Purdue University. It can accurately describe spatial heterogeneity in a hierarchical structure through the R-function solid modeler. The previous version of AGENT coupled the 2D transport MOC solver and the 1D diffusion NEM solver to solve the three-dimensional Boltzmann transport equation. In this research, the 2D/1D coupling methodology was expanded to couple two transport solvers, the radial 2D MOC solver and the axial 1D MOC solver, for better accuracy. The expansion was benchmarked with the widely applied C5G7 benchmark models and two fast breeder reactor models, and showed good agreement with the reference Monte Carlo results. In practice, accurate neutron transport analysis for a full reactor core is still time-consuming, which limits its application. Therefore, the second part of this research focuses on designing specialized hardware based on reconfigurable computing techniques in order to accelerate AGENT computations. This is the first application of its type to reactor physics and neutron transport for reactor design. The most time-consuming part of the AGENT algorithm was identified, and the architecture of the AGENT acceleration system was designed based on this analysis. Through parallel computation on the specially designed, highly efficient architecture, the acceleration design on FPGA achieves high performance at a much lower working frequency than CPUs. Whole-design simulations show that the acceleration design would be able to speed up large-scale AGENT computations by about 20 times. The high performance AGENT acceleration system will drastically shorten the computation time for 3D full-core neutron transport analysis, making the AGENT methodology unique and advantageous, and extending the applicability of neutron transport analysis in both industrial engineering and academic research.
Analysis of Low-Speed Stall Aerodynamics of a Swept Wing with Laminar-Flow Glove
NASA Technical Reports Server (NTRS)
Bui, Trong T.
2014-01-01
Reynolds-Averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) analysis was conducted to study the low-speed stall aerodynamics of a GIII aircraft's swept wing modified with a laminar-flow wing glove. The stall aerodynamics of the gloved wing were analyzed and compared with the unmodified wing for the flight speed of 120 knots and altitude of 2300 ft above mean sea level (MSL). The Star-CCM+ polyhedral unstructured CFD code was first validated for wing stall predictions using the wing-body geometry from the First American Institute of Aeronautics and Astronautics (AIAA) CFD High-Lift Prediction Workshop. It was found that the Star-CCM+ CFD code can produce results that are within the scattering of other CFD codes considered at the workshop. In particular, the Star-CCM+ CFD code was able to predict wing stall for the AIAA wing-body geometry to within 1 degree of angle of attack as compared to benchmark wind-tunnel test data. Current results show that the addition of the laminar-flow wing glove causes the gloved wing to stall much earlier than the unmodified wing. Furthermore, the gloved wing has a different stall characteristic than the clean wing, with no sharp lift drop-off at stall for the gloved wing.
Analysis of Low Speed Stall Aerodynamics of a Swept Wing with Laminar Flow Glove
NASA Technical Reports Server (NTRS)
Bui, Trong T.
2014-01-01
Reynolds-Averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) analysis was conducted to study the low-speed stall aerodynamics of a GIII aircraft's swept wing modified with a laminar-flow wing glove. The stall aerodynamics of the gloved wing were analyzed and compared with the unmodified wing for the flight speed of 120 knots and altitude of 2300 ft above mean sea level (MSL). The Star-CCM+ polyhedral unstructured CFD code was first validated for wing stall predictions using the wing-body geometry from the First American Institute of Aeronautics and Astronautics (AIAA) CFD High-Lift Prediction Workshop. It was found that the Star-CCM+ CFD code can produce results that are within the scattering of other CFD codes considered at the workshop. In particular, the Star-CCM+ CFD code was able to predict wing stall for the AIAA wing-body geometry to within 1 degree of angle of attack as compared to benchmark wind-tunnel test data. Current results show that the addition of the laminar-flow wing glove causes the gloved wing to stall much earlier than the unmodified wing. Furthermore, the gloved wing has a different stall characteristic than the clean wing, with no sharp lift drop-off at stall for the gloved wing.
Improved response functions for gamma-ray skyshine analyses
NASA Astrophysics Data System (ADS)
Shultis, J. K.; Faw, R. E.; Deng, X.
1992-09-01
A computationally simple method, based on line-beam response functions, is refined for estimating gamma skyshine dose rates. Critical to this method is the availability of an accurate approximation for the line-beam response function (LBRF). In this study, the LBRF is evaluated accurately with the point-kernel technique using recent photon interaction data. Various approximations to the LBRF are considered, and a three parameter formula is selected as the most practical approximation. By fitting the approximating formula to point-kernel results, a set of parameters is obtained that allows the LBRF to be quickly and accurately evaluated for energies between 0.01 and 15 MeV, for source-to-detector distances from 1 to 3000 m, and for beam angles from 0 to 180 degrees. This re-evaluation of the approximate LBRF gives better accuracy, especially at low energies, over a greater source-to-detector range than do previous LBRF approximations. A conical beam response function is also introduced for application to skyshine sources that are azimuthally symmetric about a vertical axis. The new response functions are then applied to three simple skyshine geometries (an open silo geometry, an infinite wall, and a rectangular four-wall building) and the results are compared to previous calculations and benchmark data.
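A minimal sketch of fitting a three-parameter approximation to point-kernel results; the functional form and the synthetic data below are assumptions for illustration and are not the formula or values used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def lbrf_approx(x, a, b, c):
    # generic three-parameter form: response ~ a * x**b * exp(-c * x)
    return a * np.power(x, b) * np.exp(-c * x)

x = np.linspace(10.0, 3000.0, 60)                       # source-to-detector distance (m)
rng = np.random.default_rng(3)
true = lbrf_approx(x, 5.0e-3, -1.2, 1.5e-3)
data = true * (1.0 + 0.02 * rng.normal(size=x.size))    # synthetic "point-kernel" values

params, _ = curve_fit(lbrf_approx, x, data, p0=[1e-2, -1.0, 1e-3])
print("fitted parameters a, b, c:", params)
```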
NASA Astrophysics Data System (ADS)
Gabriel, Alice; Pelties, Christian
2014-05-01
In this presentation we will demonstrate the benefits of using modern numerical methods to support physic-based ground motion modeling and research. For this purpose, we utilize SeisSol an arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) scheme to solve the spontaneous rupture problem with high-order accuracy in space and time using three-dimensional unstructured tetrahedral meshes. We recently verified the method in various advanced test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite, including branching and dipping fault systems, heterogeneous background stresses, bi-material faults and rate-and-state friction constitutive formulations. Now, we study the dynamic rupture process using 3D meshes of fault systems constructed from geological and geophysical constraints, such as high-resolution topography, 3D velocity models and fault geometries. Our starting point is a large scale earthquake dynamic rupture scenario based on the 1994 Northridge blind thrust event in Southern California. Starting from this well documented and extensively studied event, we intend to understand the ground-motion, including the relevant high frequency content, generated from complex fault systems and its variation arising from various physical constraints. For example, our results imply that the Northridge fault geometry favors a pulse-like rupture behavior.
NASA Astrophysics Data System (ADS)
Ben Amor, Nadia; Hoyau, Sophie; Maynau, Daniel; Brenner, Valérie
2018-05-01
A benchmark set of relevant geometries of a model protein, the N-acetylphenylalanylamide, is presented to assess the validity of the approximate second-order coupled cluster (CC2) method in studying low-lying excited states of such bio-relevant systems. The studies comprise investigations of basis-set dependence as well as comparison with two multireference methods, the multistate complete active space 2nd order perturbation theory (MS-CASPT2) and the multireference difference dedicated configuration interaction (DDCI) methods. First of all, the applicability and the accuracy of the quasi-linear multireference difference dedicated configuration interaction method have been demonstrated on bio-relevant systems by comparison with the results obtained by the standard MS-CASPT2. Second, both the nature and excitation energy of the first low-lying excited state obtained at the CC2 level are very close to the Davidson corrected CAS+DDCI ones, the mean absolute deviation on the excitation energy being equal to 0.1 eV with a maximum of less than 0.2 eV. Finally, for the following low-lying excited states, if the nature is always well reproduced at the CC2 level, the differences on excitation energies become more important and can depend on the geometry.
The linear tearing instability in three dimensional, toroidal gyro-kinetic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hornsby, W. A., E-mail: william.hornsby@ipp.mpg.de; Migliano, P.; Buchholz, R.
2015-02-15
Linear gyro-kinetic simulations of the classical tearing mode in three-dimensional toroidal geometry were performed using the global gyro-kinetic turbulence code, GKW. The results were benchmarked against a cylindrical ideal MHD and analytical theory calculations. The stability, growth rate, and frequency of the mode were investigated by varying the current profile, collisionality, and the pressure gradients. Both collisionless and semi-collisional tearing modes were found with a smooth transition between the two. A residual, finite, rotation frequency of the mode even in the absence of a pressure gradient is observed, which is attributed to toroidal finite Larmor-radius effects. When a pressure gradient is present at low collisionality, the mode rotates at the expected electron diamagnetic frequency. However, the island rotation reverses direction at high collisionality. The growth rate is found to follow a η^(1/7) scaling with collisional resistivity in the semi-collisional regime, closely following the semi-collisional scaling found by Fitzpatrick. The stability of the mode closely follows the stability analysis as performed by Hastie et al. using the same current and safety factor profiles but for cylindrical geometry, however, here a modification due to toroidal coupling and pressure effects is seen.
Ben Amor, Nadia; Hoyau, Sophie; Maynau, Daniel; Brenner, Valérie
2018-05-14
A benchmark set of relevant geometries of a model protein, the N-acetylphenylalanylamide, is presented to assess the validity of the approximate second-order coupled cluster (CC2) method in studying low-lying excited states of such bio-relevant systems. The studies comprise investigations of basis-set dependence as well as comparison with two multireference methods, the multistate complete active space 2nd order perturbation theory (MS-CASPT2) and the multireference difference dedicated configuration interaction (DDCI) methods. First of all, the applicability and the accuracy of the quasi-linear multireference difference dedicated configuration interaction method have been demonstrated on bio-relevant systems by comparison with the results obtained by the standard MS-CASPT2. Second, both the nature and excitation energy of the first low-lying excited state obtained at the CC2 level are very close to the Davidson corrected CAS+DDCI ones, the mean absolute deviation on the excitation energy being equal to 0.1 eV with a maximum of less than 0.2 eV. Finally, for the following low-lying excited states, if the nature is always well reproduced at the CC2 level, the differences on excitation energies become more important and can depend on the geometry.
NASA Astrophysics Data System (ADS)
Sambasivan, Shiv Kumar; Shashkov, Mikhail J.; Burton, Donald E.
2013-03-01
A finite volume cell-centered Lagrangian formulation is presented for solving large deformation problems in cylindrical axisymmetric geometries. Since solid materials can sustain significant shear deformation, evolution equations for stress and strain fields are solved in addition to mass, momentum and energy conservation laws. The total strain-rate realized in the material is split into an elastic and plastic response. The elastic and plastic components in turn are modeled using hypo-elastic theory. In accordance with the hypo-elastic model, a predictor-corrector algorithm is employed for evolving the deviatoric component of the stress tensor. A trial elastic deviatoric stress state is obtained by integrating a rate equation, cast in the form of an objective (Jaumann) derivative, based on Hooke's law. The dilatational response of the material is modeled using an equation of state of the Mie-Grüneisen form. The plastic deformation is accounted for via an iterative radial return algorithm constructed from the J2 von Mises yield condition. Several benchmark example problems with non-linear strain hardening and thermal softening yield models are presented. Extensive comparisons with representative Eulerian and Lagrangian hydrocodes in addition to analytical and experimental results are made to validate the current approach.
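As an illustration of the predictor-corrector idea mentioned above, a minimal sketch of a textbook J2 radial-return correction with linear isotropic hardening; this generic form is not the paper's hypo-elastic, Jaumann-rate implementation, and all material values are placeholders.

```python
import numpy as np

def radial_return(s_trial, sigma_y, H, mu):
    """One radial-return correction for J2 plasticity with linear isotropic hardening.
    s_trial : trial deviatoric stress tensor (3x3) from the elastic predictor
    sigma_y : current yield stress
    H       : linear hardening modulus
    mu      : shear modulus
    Returns the corrected deviatoric stress and the plastic multiplier increment.
    """
    s_eq = np.sqrt(1.5 * np.tensordot(s_trial, s_trial))   # von Mises equivalent stress
    f = s_eq - sigma_y                                      # yield function
    if f <= 0.0:
        return s_trial, 0.0                                 # elastic step, no correction
    dgamma = f / (3.0 * mu + H)                             # plastic multiplier increment
    n = 1.5 * s_trial / s_eq                                # flow direction
    s_corrected = s_trial - 2.0 * mu * dgamma * n           # return to the yield surface
    return s_corrected, dgamma

# illustrative use with made-up numbers (units arbitrary)
s_trial = np.diag([200.0, -100.0, -100.0])
print(radial_return(s_trial, sigma_y=250.0, H=1000.0, mu=80000.0))
```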
Paton, Robert S; Goodman, Jonathan M
2009-04-01
We have evaluated the performance of a set of widely used force fields by calculating the geometries and stabilization energies for a large collection of intermolecular complexes. These complexes are representative of a range of chemical and biological systems for which hydrogen bonding, electrostatic, and van der Waals interactions play important roles. Benchmark energies are taken from the high-level ab initio values in the JSCH-2005 and S22 data sets. All of the force fields underestimate stabilization resulting from hydrogen bonding, but the energetics of electrostatic and van der Waals interactions are described more accurately. OPLSAA gave a mean unsigned error of 2 kcal mol⁻¹ for all 165 complexes studied, and outperforms DFT calculations employing very large basis sets for the S22 complexes. The magnitude of hydrogen bonding interactions is severely underestimated by all of the force fields tested, which contributes significantly to the overall mean error; if complexes which are predominantly bound by hydrogen bonding interactions are discounted, the mean unsigned error of OPLSAA is reduced to 1 kcal mol⁻¹. For added clarity, web-based interactive displays of the results have been developed which allow comparisons of force field and ab initio geometries to be performed and the structures viewed and rotated in three dimensions.
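A minimal sketch of the mean unsigned error (MUE) statistic used in such evaluations; the energies below are placeholders, not JSCH-2005 or S22 values.

```python
import numpy as np

benchmark = np.array([-5.0, -12.3, -1.8, -7.1])     # ab initio stabilization energies (kcal/mol)
force_field = np.array([-3.9, -10.1, -1.7, -6.0])   # force-field energies (kcal/mol)

mue = np.mean(np.abs(force_field - benchmark))      # mean unsigned error
print(f"MUE = {mue:.2f} kcal/mol")
```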
Improved response functions for gamma-ray skyshine analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J.K.; Faw, R.E.; Deng, X.
1992-09-01
A computationally simple method, based on line-beam response functions, is refined for estimating gamma skyshine dose rates. Critical to this method is the availability of an accurate approximation for the line-beam response function (LBRF). In this study the LBRF is evaluated accurately with the point-kernel technique using recent photon interaction data. Various approximations to the LBRF are considered, and a three parameter formula is selected as the most practical approximation. By fitting the approximating formula to point-kernel results, a set of parameters is obtained that allows the LBRF to be quickly and accurately evaluated for energies between 0.01 and 15 MeV, for source-to-detector distances from 1 to 3000 m, and for beam angles from 0 to 180 degrees. This reevaluation of the approximate LBRF gives better accuracy, especially at low energies, over a greater source-to-detector range than do previous LBRF approximations. A conical beam response function is also introduced for application to skyshine sources that are azimuthally symmetric about a vertical axis. The new response functions are then applied to three simple skyshine geometries (an open silo geometry, an infinite wall, and a rectangular four-wall building) and the results compared to previous calculations and benchmark data.
The Computerized Anatomical Man (CAM) model
NASA Technical Reports Server (NTRS)
Billings, M. P.; Yucker, W. R.
1973-01-01
A computerized anatomical man (CAM) model, representing the most detailed and anatomically correct geometrical model of the human body yet prepared, has been developed for use in analyzing radiation dose distribution in man. This model of a 50-percentile standing USAF man comprises some 1100 unique geometric surfaces and some 2450 solid regions. Internal body geometry such as organs, voids, bones, and bone marrow are explicitly modeled. A computer program called CAMERA has also been developed for performing analyses with the model. Such analyses include tracing rays through the CAM geometry, placing results on magnetic tape in various forms, collapsing areal density data from ray tracing information to areal density distributions, preparing cross section views, etc. Numerous computer drawn cross sections through the CAM model are presented.
NASA Technical Reports Server (NTRS)
Richardson, R. W.
1974-01-01
Spectroscopic measurements were carried out on the NASA Lewis Bumpy Torus experiment in which a steady state ion heating method based on the modified Penning discharge is applied in a bumpy torus confinement geometry. Electron temperatures in pure helium are measured from the ratio of spectral line intensities. Measured electron temperatures range from 10 to 100 eV. Relative electron densities are also measured over the range of operating conditions. Radial profiles of temperature and relative density are measured in the two basic modes of operation of the device called the low and high pressure modes. The electron temperatures are used to estimate particle confinement times based on a steady state particle balance.
Emergence of Soft Communities from Geometric Preferential Attachment
Zuev, Konstantin; Boguñá, Marián; Bianconi, Ginestra; Krioukov, Dmitri
2015-01-01
All real networks are different, but many have some structural properties in common. There seems to be no consensus on what the most common properties are, but scale-free degree distributions, strong clustering, and community structure are frequently mentioned without question. Surprisingly, there exists no simple generative mechanism explaining all the three properties at once in growing networks. Here we show how latent network geometry coupled with preferential attachment of nodes to this geometry fills this gap. We call this mechanism geometric preferential attachment (GPA), and validate it against the Internet. GPA gives rise to soft communities that provide a different perspective on the community structure in networks. The connections between GPA and cosmological models, including inflation, are also discussed. PMID:25923110
From sine-Gordon to vacuumless systems in flat and curved spacetimes
NASA Astrophysics Data System (ADS)
Bazeia, D.; Moreira, D. C.
2017-12-01
In this work we start from the Higgs prototype model to introduce a new model, which makes a smooth transition between systems with well-located minima and systems that support no minima at all. We implement this possibility using the deformation procedure, which allows us to obtain a sine-Gordon-like model, controlled by a real parameter that gives rise to a family of models, reproducing the sine-Gordon and the so-called vacuumless models. We also study the thick brane scenarios associated with these models and investigate their stability and renormalization group flow. In particular, it is shown how gravity can change from the 5-dimensional warped geometry with a single extra dimension of infinite extent to the conventional 5-dimensional Minkowski geometry.
Generalized -deformed correlation functions as spectral functions of hyperbolic geometry
NASA Astrophysics Data System (ADS)
Bonora, L.; Bytsenko, A. A.; Guimarães, M. E. X.
2014-08-01
We analyze the role of vertex operator algebra and 2d amplitudes from the point of view of the representation theory of infinite-dimensional Lie algebras, MacMahon and Ruelle functions. By definition p-dimensional MacMahon function, with , is the generating function of p-dimensional partitions of integers. These functions can be represented as amplitudes of a two-dimensional c = 1 CFT, and, as such, they can be generalized to . With some abuse of language we call the latter amplitudes generalized MacMahon functions. In this paper we show that generalized p-dimensional MacMahon functions can be rewritten in terms of Ruelle spectral functions, whose spectrum is encoded in the Patterson-Selberg function of three-dimensional hyperbolic geometry.
Mechanics and Physics of Solids, Uncertainy, and the Archetype-Genome Exemplar
NASA Astrophysics Data System (ADS)
Greene, M. Steven
This dissertation argues that the mechanics and physics of solids rely on a fundamental exemplar: the apparent properties of a system depend on the building blocks that comprise it. Building blocks are referred to as archetypes and apparent system properties as the system genome. Three entities are of importance: the archetype properties, the conformation of archetypes, and the properties of interactions activated by that conformation. The combination of these entities into the system genome is called assembly. To show the utility of the archetype-genome exemplar, the dissertation presents the mathematical construction and computational implementation of a new theory for solid mechanics that is a continuum manifestation of the assembly process. The so-called archetype-blending continuum theory aligns the form of globally valid balance laws with physics evolving in a material's composite constitutive response so that, by rethinking conventional micromechanics, the theory accounts naturally for each piece of the genome assembly triplet: archetypes, interactions, and their conformation. With the pieces of the triplet isolated in the theory, materials genome design concepts that separately control microstructure and property may be gleaned from exploration of the constitutive parameter space. A suite of simulations that apply the new theory to polymer nanocomposite materials demonstrate the ability of the theory to predict a robust material genome that includes damping properties, modulus weakening, local strain amplification, and size effects. The dissertation also presents a theoretical assessment of the importance of uncertainty propagation in the archetype-genome exemplar. The findings from a set of computational experiments on instances of a general class of microstructured materials suggest that when overlap occurs between the size of the system geometry and the features of the conformation, material genomes become less certain. Increasing nonuniformity of boundary conditions and the size of random field correlation lengths exacerbate this conclusion. These criteria are combined into a scalar metric used to assess the impact of archetype-level uncertainties on the material genome for general scenarios in solid mechanics. Exemplary benchmark problems include bending in elastoplasticity and instability-induced pattern transition in porous elastomer. The contributions of this dissertation are threefold: (1) the mathematical construction of a new continuum theory for mechanics and physics of solids, (2) implementation of the theory, and (3) theoretical assessment of the scenarios in which material genomes deviate from determinism.
Comprehensive benchmarking of SNV callers for highly admixed tumor data
Bohnert, Regina; Vivas, Sonia
2017-01-01
Precision medicine attempts to individualize cancer therapy by matching tumor-specific genetic changes with effective targeted therapies. A crucial first step in this process is the reliable identification of cancer-relevant variants, which is considerably complicated by the impurity and heterogeneity of clinical tumor samples. We compared the impact of admixture of non-cancerous cells and low somatic allele frequencies on the sensitivity and precision of 19 state-of-the-art SNV callers. We studied both whole exome and targeted gene panel data and up to 13 distinct parameter configurations for each tool. We found vast differences among callers. Based on our comprehensive analyses we recommend joint tumor-normal calling with MuTect, EBCall or Strelka for whole exome somatic variant calling, and HaplotypeCaller or FreeBayes for whole exome germline calling. For targeted gene panel data on a single tumor sample, LoFreqStar performed best. We further found that tumor impurity and admixture had a negative impact on precision, and in particular, sensitivity in whole exome experiments. At admixture levels of 60% to 90% sometimes seen in pathological biopsies, sensitivity dropped significantly, even when variants were originally present in the tumor at 100% allele frequency. Sensitivity to low-frequency SNVs improved with targeted panel data, but whole exome data allowed more efficient identification of germline variants. Effective somatic variant calling requires high-quality pathological samples with minimal admixture, a consciously selected sequencing strategy, and the appropriate variant calling tool with settings optimized for the chosen type of data. PMID:29020110
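A minimal sketch of the sensitivity and precision metrics underlying such caller comparisons, with variants represented as simple tuples; this is an illustration only, not the paper's evaluation pipeline, which would normalize variant representations and work with full VCF files.

```python
# Variants are represented as (chrom, pos, ref, alt) tuples; both sets are made up.
truth = {("chr1", 1000, "A", "T"), ("chr1", 2000, "G", "C"), ("chr2", 500, "C", "A")}
called = {("chr1", 1000, "A", "T"), ("chr2", 500, "C", "A"), ("chr3", 42, "T", "G")}

tp = len(truth & called)   # true positives: called and in the truth set
fp = len(called - truth)   # false positives: called but not true
fn = len(truth - called)   # false negatives: true but missed

sensitivity = tp / (tp + fn)
precision = tp / (tp + fp)
print(f"sensitivity = {sensitivity:.2f}, precision = {precision:.2f}")
```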
On the Importance of Solar Eclipse Geometry in the Interpretation of Ionospheric Observations
NASA Astrophysics Data System (ADS)
Stankov, S.; Verhulst, T. G. W.
2017-12-01
A reliable interpretation of solar eclipse effects on the geospace environment, and on the ionosphere in particular, necessitates a careful consideration of the so-called eclipse geometry. A solar eclipse is a relatively rare astronomical phenomenon whose geometry is rather complex, specific to each event, and fast changing in time. The standard, most popular way to look at the eclipse geometry is via the two-dimensional representation (map) of the solar obscuration on the Earth's surface, in which the path of eclipse totality is drawn together with isolines of the gradually decreasing eclipse magnitude farther away from this path. Such "surface maps" are widely used to readily explain some of the solar eclipse effects including, for example, the well-known decrease in total ionisation (due to the substantial decrease in solar irradiation), usually presented by the popular and easy to understand ionospheric characteristic of Total Electron Content (TEC). However, many other effects, especially those taking place at higher altitudes, cannot be explained in this fashion. Instead, a complete, four-dimensional (4D) description of the umbra (and penumbra) would be required. This presentation will address the issue of eclipse geometry effects on various ionospheric observations carried out during the total solar eclipse of August 21, 2017. In particular, GPS-based TEC and ionosonde measurements will be analysed and the eclipse effects on the ionosphere will be interpreted with respect to the actual eclipse geometry at ionospheric heights. Whenever possible, a comparison will be made with results from previous events, such as the ones from March 20, 2015 and October 3, 2005.
NASA Astrophysics Data System (ADS)
Sublet, Jean-Christophe
2008-02-01
ENDF/B-VII.0, the first release of the ENDF/B-VII nuclear data library, was formally released in December 2006. Prior to this event the European JEFF-3.1 nuclear data library was distributed in April 2005, while the Japanese JENDL-3.3 library has been available since 2002. The recent releases of these neutron transport libraries and special-purpose files, together with updates to the processing tools and significant progress in computing power, today allow much leaner and tighter integration of Monte Carlo codes with pointwise libraries, leading to enhanced benchmarking studies. A TRIPOLI-4.4 critical assembly suite has been set up as a collection of 86 benchmarks taken principally from the International Handbook of Evaluated Criticality Benchmark Experiments (2006 Edition). It contains cases for a variety of U and Pu fuels and systems, ranging from fast to deep thermal solutions and assemblies. It covers cases with a variety of moderators, reflectors, absorbers, spectra and geometries. The results presented show that while the most recent library, ENDF/B-VII.0, which benefited from the timely development of JENDL-3.3 and JEFF-3.1, produces better overall results, they also clearly suggest that improvements are still needed. This is true in particular in Light Water Reactor applications for thermal and epithermal plutonium data for all libraries, and for fast uranium data for JEFF-3.1 and JENDL-3.3. Other domains in which Monte Carlo codes are used, such as astrophysics, fusion, high-energy physics, medicine, and radiation transport in general, also benefit notably from such enhanced libraries. This is particularly noticeable in terms of the number of isotopes and materials available, the overall quality of the data, and the much broader energy range for which evaluated (as opposed to modeled) data are available, spanning from meV to hundreds of MeV. In assessing the impact of the different nuclear data at the library level, but also at the isotopic level, one cannot help noticing the importance of, and the differences between, the compensating effects that result from their individual usage. Library differences are still important but tend to diminish thanks to the ever-increasing and beneficial worldwide collaboration in the field of nuclear data measurement and evaluation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2014-04-01
The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR 350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark, and presents selected results of the three steady-state exercises 1-3 defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D “ring” model approach vs. a much more detailed model that includes kinetics feedback at the individual block level and thermal feedback on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.
Test suite for image-based motion estimation of the brain and tongue
NASA Astrophysics Data System (ADS)
Ramsey, Jordan; Prince, Jerry L.; Gomez, Arnold D.
2017-03-01
Noninvasive analysis of motion has important uses as qualitative markers for organ function and to validate biomechanical computer simulations relative to experimental observations. Tagged MRI is considered the gold standard for noninvasive tissue motion estimation in the heart, and this has inspired multiple studies focusing on other organs, including the brain under mild acceleration and the tongue during speech. As with other motion estimation approaches, using tagged MRI to measure 3D motion includes several preprocessing steps that affect the quality and accuracy of estimation. Benchmarks, or test suites, are datasets of known geometries and displacements that act as tools to tune tracking parameters or to compare different motion estimation approaches. Because motion estimation was originally developed to study the heart, existing test suites focus on cardiac motion. However, many fundamental differences exist between the heart and other organs, such that parameter tuning (or other optimization) with respect to a cardiac database may not be appropriate. Therefore, the objective of this research was to design and construct motion benchmarks by adopting an "image synthesis" test suite to study brain deformation due to mild rotational accelerations, and a benchmark to model motion of the tongue during speech. To obtain a realistic representation of mechanical behavior, kinematics were obtained from finite-element (FE) models. These results were combined with an approximation of the acquisition process of tagged MRI (including tag generation, slice thickness, and inconsistent motion repetition). To demonstrate an application of the presented methodology, the effect of motion inconsistency on synthetic measurements of head-brain rotation and deformation was evaluated. The results indicated that acquisition inconsistency is roughly proportional to head rotation estimation error. Furthermore, when evaluating non-rigid deformation, the results suggest that inconsistent motion can yield "ghost" shear strains, which are a function of slice acquisition viability as opposed to a true physical deformation. PMID:28781414
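As a rough illustration of the "image synthesis" idea, the sketch below (Python/NumPy; the sinusoidal SPAMM-like tag pattern and the rotational displacement field are assumptions chosen for illustration, not the authors' pipeline) generates a tagged slice and warps it with a known displacement, so that a tracking algorithm's output could be checked against ground truth:

import numpy as np
from scipy.ndimage import map_coordinates

# Hypothetical sketch: synthesize a SPAMM-like tagged slice and deform it with a
# known displacement field, so motion-estimation output can be compared to truth.
ny, nx = 128, 128
y, x = np.mgrid[0:ny, 0:nx].astype(float)

tag_spacing = 8.0                                            # pixels between tag lines (assumed)
tagged = 0.5 * (1.0 + np.cos(2 * np.pi * x / tag_spacing))   # vertical tag lines

# Assumed ground-truth motion: a small rigid rotation about the image centre.
theta = np.deg2rad(2.0)
xc, yc = nx / 2.0, ny / 2.0
u = np.cos(theta) * (x - xc) - np.sin(theta) * (y - yc) + xc - x
v = np.sin(theta) * (x - xc) + np.cos(theta) * (y - yc) + yc - y

# Backward warp: sample the undeformed image at displaced coordinates
# (small-displacement approximation of the forward motion).
deformed = map_coordinates(tagged, [y - v, x - u], order=1, mode="nearest")
print(deformed.shape, float(deformed.min()), float(deformed.max()))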
Attacks, applications, and evaluation of known watermarking algorithms with Checkmark
NASA Astrophysics Data System (ADS)
Meerwald, Peter; Pereira, Shelby
2002-04-01
The Checkmark benchmarking tool was introduced to provide a framework for application-oriented evaluation of watermarking schemes. In this article we introduce new attacks and applications into the existing Checkmark framework. In addition to describing new attacks and applications, we also compare the performance of some well-known watermarking algorithms (proposed by Bruyndonckx, Cox, Fridrich, Dugad, Kim, Wang, Xia, Xie, Zhu and Pereira) with respect to the Checkmark benchmark. In particular, we consider the non-geometric application, which contains tests that do not change the geometry of the image. This attack constraint is artificial, yet important for research purposes, since a number of algorithms may be interesting but would score poorly with respect to specific applications simply because geometric compensation has not been incorporated. We note, however, that with the help of image registration, even research algorithms that do not have counter-measures against geometric distortion -- such as a template or reference watermark -- can be evaluated. In the first version of the Checkmark benchmarking program, application-oriented evaluation was introduced, along with many new attacks not already considered in the literature. A second goal of this paper is to introduce new attacks and new applications into the Checkmark framework. In particular, we introduce the following new applications: video frame watermarking, medical imaging and watermarking of logos. Video frame watermarking includes low-compression attacks and distortions which warp the edges of the video, as well as general projective transformations which may result from someone filming the screen at a cinema. With respect to medical imaging, only small distortions are considered, and furthermore it is essential that no distortions are present at embedding. Finally, for logos, we consider images of small size and in particular compression, scaling, aspect ratio and other small distortions. The challenge of watermarking logos is essentially that of watermarking a small and typically simple image. With respect to new attacks, we consider: subsampling followed by interpolation, and dithering and thresholding, which both yield a binary image.
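For readers unfamiliar with the attack classes mentioned, the fragment below (Python; not part of Checkmark itself, just a hedged illustration on a synthetic grayscale array) shows the subsample-then-interpolate attack and a simple threshold-to-binary attack:

import numpy as np
from scipy.ndimage import zoom

# Hedged illustration of two attack classes mentioned above (not Checkmark code).
rng = np.random.default_rng(0)
image = rng.random((256, 256))          # stand-in for a watermarked grayscale image

# Attack 1: subsampling followed by interpolation back to the original size.
factor = 0.5
small = zoom(image, factor, order=1)
attacked_resample = zoom(small, 1.0 / factor, order=3)[:256, :256]

# Attack 2: thresholding, which reduces the image to a binary map.
attacked_binary = (image > image.mean()).astype(np.uint8)

print(attacked_resample.shape, attacked_binary.dtype)

A detector's robustness would then be judged by whether the watermark survives these degraded copies.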
PeakRanger: A cloud-enabled peak caller for ChIP-seq data
2011-01-01
Background Chromatin immunoprecipitation (ChIP), coupled with massively parallel short-read sequencing (seq) is used to probe chromatin dynamics. Although there are many algorithms to call peaks from ChIP-seq datasets, most are tuned either to handle punctate sites, such as transcriptional factor binding sites, or broad regions, such as histone modification marks; few can do both. Other algorithms are limited in their configurability, performance on large data sets, and ability to distinguish closely-spaced peaks. Results In this paper, we introduce PeakRanger, a peak caller software package that works equally well on punctate and broad sites, can resolve closely-spaced peaks, has excellent performance, and is easily customized. In addition, PeakRanger can be run in a parallel cloud computing environment to obtain extremely high performance on very large data sets. We present a series of benchmarks to evaluate PeakRanger against 10 other peak callers, and demonstrate the performance of PeakRanger on both real and synthetic data sets. We also present real world usages of PeakRanger, including peak-calling in the modENCODE project. Conclusions Compared to other peak callers tested, PeakRanger offers improved resolution in distinguishing extremely closely-spaced peaks. PeakRanger has above-average spatial accuracy in terms of identifying the precise location of binding events. PeakRanger also has excellent sensitivity and specificity in all benchmarks evaluated. In addition, PeakRanger offers significant improvements in run time when running on a single processor system, and very marked improvements when allowed to take advantage of the MapReduce parallel environment offered by a cloud computing resource. PeakRanger can be downloaded at the official site of modENCODE project: http://www.modencode.org/software/ranger/ PMID:21554709
Benchmarking reference services: step by step.
Buchanan, H S; Marshall, J G
1996-01-01
This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.
NASA Astrophysics Data System (ADS)
Matsuura, H.; Nagasaka, Y.
2018-02-01
We describe an instrument for the measurement of the Soret and thermodiffusion coefficients in ternary systems based on the transient holographic grating technique, which is called Soret forced Rayleigh scattering (SFRS) or thermal diffusion forced Rayleigh scattering (TDFRS). We integrated the SFRS technique with the two-wavelength detection technique, which enables us to obtain two different signals and thereby determine the two independent Soret coefficients and thermodiffusion coefficients in ternary systems. The instrument has been designed to read the mass transport simultaneously with two probing lasers at wavelengths of λ = 403 nm and λ = 639 nm. The irradiation time of the probing lasers is controlled to reduce the effect of laser absorption in the sample containing the dye (quinizarin), which is added to convert the interference pattern of the heating laser at λ = 532 nm into a temperature grating. The results of measurements on binary benchmark mixtures composed of 1,2,3,4-tetrahydronaphthalene (THN), isobutylbenzene (IBB), and n-dodecane (nC12) show that simultaneous two-wavelength observation of the Soret effect and of mass diffusion is performed adequately. To evaluate performance in the measurement of ternary systems, we carried out experiments on the ternary benchmark mixture of THN/IBB/nC12 with mass fractions of 0.800/0.100/0.100 at a temperature of 298.2 K. The Soret coefficient and thermodiffusion coefficient agreed with the ternary benchmark values within the range of the standard uncertainties (23% for the Soret coefficient of THN and 30% for the thermodiffusion coefficient of THN).
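The role of the two detection wavelengths can be sketched as a small linear inversion: each wavelength yields one signal that is a different linear combination of the two independent concentration gratings in a ternary mixture. The example below (Python; the contrast-factor matrix and signal amplitudes are invented placeholders, not the paper's calibration data) shows the 2x2 solve:

import numpy as np

# Hedged sketch of the two-wavelength idea: each probing wavelength yields a signal
# that mixes the two independent concentration modes of a ternary mixture.
# The optical contrast factors and signal amplitudes below are placeholders.
contrast = np.array([[1.8, 0.6],    # dn/dc at 403 nm for components 1 and 2 (assumed)
                     [1.1, 0.9]])   # dn/dc at 639 nm (assumed)
signals = np.array([0.021, 0.014])  # steady-state signal amplitudes (assumed)

# Solve for the two independent concentration-grating amplitudes.
conc_amplitudes = np.linalg.solve(contrast, signals)
print(conc_amplitudes)

The two Soret coefficients then follow from these amplitudes and the measured temperature-grating amplitude; the inversion is well conditioned only if the two wavelengths see sufficiently different contrast factors, which is the reason for the two-wavelength design.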
The finite cell method for polygonal meshes: poly-FCM
NASA Astrophysics Data System (ADS)
Duczek, Sascha; Gabbert, Ulrich
2016-10-01
In the current article, we extend the two-dimensional version of the finite cell method (FCM), which has so far only been used for structured quadrilateral meshes, to unstructured polygonal discretizations. Therefore, the adaptive quadtree-based numerical integration technique is reformulated and the notion of generalized barycentric coordinates is introduced. We show that the resulting polygonal (poly-)FCM approach retains the optimal rates of convergence if and only if the geometry of the structure is adequately resolved. The main advantage of the proposed method is that it inherits the ability of polygonal finite elements for local mesh refinement and for the construction of transition elements (e.g. conforming quadtree meshes without hanging nodes). These properties along with the performance of the poly-FCM are illustrated by means of several benchmark problems for both static and dynamic cases.
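Generalized barycentric coordinates are central to the polygonal shape functions used in such methods. A minimal sketch of Wachspress coordinates for a convex polygon (Python; one common construction among several, shown only for illustration and not taken from the paper) follows:

import numpy as np

def tri_area(a, b, c):
    """Signed area of the triangle (a, b, c)."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def wachspress(vertices, x):
    """Wachspress generalized barycentric coordinates of a point x inside a convex polygon."""
    n = len(vertices)
    w = np.empty(n)
    for i in range(n):
        prev, cur, nxt = vertices[i - 1], vertices[i], vertices[(i + 1) % n]
        w[i] = tri_area(prev, cur, nxt) / (tri_area(x, prev, cur) * tri_area(x, cur, nxt))
    return w / w.sum()

# Example: a convex pentagon; the coordinates are non-negative, sum to one,
# and reproduce x as a convex combination of the vertices.
poly = np.array([[0.0, 0.0], [2.0, 0.0], [2.5, 1.5], [1.0, 2.5], [-0.5, 1.0]])
lam = wachspress(poly, np.array([1.0, 1.0]))
print(lam, lam @ poly)

The linear-reproduction property shown in the last line is what allows such coordinates to serve as shape functions on arbitrary convex polygonal cells.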
Multidimensional Multiphysics Simulation of TRISO Particle Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. D. Hales; R. L. Williamson; S. R. Novascone
2013-11-01
Multidimensional multiphysics analysis of TRISO-coated particle fuel using the BISON finite-element based nuclear fuels code is described. The governing equations and material models applicable to particle fuel and implemented in BISON are outlined. Code verification based on a recent IAEA benchmarking exercise is described, and excellent comparisons are reported. Multiple TRISO-coated particles of increasing geometric complexity are considered. It is shown that the code's ability to perform large-scale parallel computations permits application to complex 3D phenomena, while very efficient solutions for either 1D spherically symmetric or 2D axisymmetric geometries are straightforward. Additionally, the flexibility to easily include new physical and material models, and the uncomplicated ability to couple to lower-length-scale simulations, make BISON a powerful tool for the simulation of coated-particle fuel. Future code development activities and potential applications are identified.
Python Radiative Transfer Emission code (PyRaTE): non-LTE spectral lines simulations
NASA Astrophysics Data System (ADS)
Tritsis, A.; Yorke, H.; Tassis, K.
2018-05-01
We describe PyRaTE, a new, non-local thermodynamic equilibrium (non-LTE) line radiative transfer code developed specifically for post-processing astrochemical simulations. Population densities are estimated using the escape probability method. When computing the escape probability, the optical depth is calculated in all directions, with density, molecular abundance, temperature and velocity variations all taken into account. A very easy-to-use interface, capable of importing data from simulation outputs produced with all major astrophysical codes, is also developed. The code is written in PYTHON using an "embarrassingly parallel" strategy and can handle all geometries and projection angles. We benchmark the code by comparing our results with those from RADEX (van der Tak et al. 2007) and against analytical solutions, and we present case studies using hydrochemical simulations. The code will be released for public use.
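To make the escape-probability step concrete, the following sketch (Python; a two-level molecule with no background radiation field and a schematic optical-depth scaling, which is a strong simplification of what a full code must do) iterates the level populations with the escape probability beta(tau) = (1 - exp(-tau))/tau:

import numpy as np

# Hedged two-level escape-probability sketch (background radiation neglected).
# Constants are placeholders of plausible magnitude, not real molecular data.
A_ul = 1.0e-5            # Einstein A coefficient [1/s] (assumed)
C_ul = 2.0e-10 * 1.0e4   # collision rate = rate coefficient * collider density [1/s] (assumed)
g_u, g_l = 3.0, 1.0      # statistical weights (assumed)
dE_over_kT = 0.5         # transition energy over kT (assumed)
N_over_dv = 1.0e13       # column density per unit line width, sets the optical-depth scale (assumed)

C_lu = C_ul * (g_u / g_l) * np.exp(-dE_over_kT)

def beta(tau):
    """Escape probability for optical depth tau."""
    return 1.0 if tau < 1e-8 else (1.0 - np.exp(-tau)) / tau

x_u = 0.5                # initial guess for the upper-level population fraction
for _ in range(100):
    tau = N_over_dv * (1.0 - x_u) * 1.0e-13        # schematic optical-depth scaling (assumed)
    x_u_new = C_lu / (C_lu + C_ul + A_ul * beta(tau))
    if abs(x_u_new - x_u) < 1e-10:
        break
    x_u = x_u_new

print(x_u, tau)

Because tau itself depends on the populations, the population and optical depth must be iterated to convergence, which is the basic structure that a direction-dependent calculation generalizes.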
Separation of Evans and Hiro currents in VDE of tokamak plasma
NASA Astrophysics Data System (ADS)
Galkin, Sergei A.; Svidzinski, V. A.; Zakharov, L. E.
2014-10-01
Progress on the development and benchmarking of the Disruption Simulation Code (DSC-3D) will be presented. DSC-3D is a one-fluid, nonlinear, time-dependent MHD code that utilizes fully 3D toroidal geometry for the first wall, the pure vacuum region and the plasma itself, with adaptation to the moving plasma boundary and accurate resolution of the plasma surface current. Suppression of the fast magnetosonic time scale by neglecting plasma inertia will be demonstrated. Owing to the code's adaptive nature, self-consistent modeling of the plasma surface current during the nonlinear dynamics of the Vertical Displacement Event (VDE) is accurately provided. Separation of the plasma surface current into Evans and Hiro currents during simulation of a fully developed VDE, when the plasma touches the in-vessel tiles, will be discussed. Work is supported by the US DOE SBIR Grant # DE-SC0004487.
Lattice Boltzmann model for three-phase viscoelastic fluid flow
NASA Astrophysics Data System (ADS)
Xie, Chiyu; Lei, Wenhai; Wang, Moran
2018-02-01
A lattice Boltzmann (LB) framework is developed for simulation of three-phase viscoelastic fluid flows in complex geometries. This model is based on a Rothman-Keller type model for immiscible multiphase flows which ensures mass conservation of each component in porous media even for a high density ratio. To account for the viscoelastic effects, the Maxwell constitutive relation is correctly introduced into the momentum equation, which leads to a modified lattice Boltzmann evolution equation for Maxwell fluids by removing the normal but excess viscous term. Our simulation tests indicate that this excess viscous term may induce significant errors. After three benchmark cases, the displacement processes of oil by dispersed polymer are studied as a typical example of three-phase viscoelastic fluid flow. The results show that increasing either the polymer intrinsic viscosity or the elastic modulus will enhance the oil recovery.
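The Maxwell constitutive relation referred to above can be sketched in its simplest one-dimensional, linear form as a stress-relaxation update; the fragment below (Python; a schematic explicit time integration with placeholder parameters, not the authors' lattice Boltzmann implementation) shows how the polymer stress relaxes toward the viscous stress over the relaxation time lambda:

import numpy as np

# Hedged 1-D sketch of the linear Maxwell constitutive relation
#   lambda * dsigma/dt + sigma = eta * shear_rate,
# integrated explicitly; parameters are placeholders, not those of the LB model above.
lam, eta = 0.1, 1.0          # relaxation time and polymer viscosity (assumed)
dt, nsteps = 1.0e-3, 2000
shear_rate = 1.0             # suddenly applied constant shear rate (assumed)

sigma = 0.0
for _ in range(nsteps):
    sigma += dt / lam * (eta * shear_rate - sigma)

# The stress grows as eta*shear_rate*(1 - exp(-t/lambda)), i.e. it is retarded by elasticity.
print(sigma, eta * shear_rate * (1.0 - np.exp(-nsteps * dt / lam)))

It is this retarded, history-dependent stress, rather than an instantaneous viscous stress, that must enter the momentum equation of a viscoelastic LB model.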
Modeling of the Edwards pipe experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tiselj, I.; Petelin, S.
1995-12-31
The Edwards pipe experiment is used as one of the basic benchmarks for two-phase flow codes due to its simple geometry and the wide range of phenomena that it covers. Edwards and O'Brien filled a 4-m-long pipe with liquid water at 7 MPa and 502 K and ruptured one end of the tube. They measured pressure and void fraction during the blowdown. Important phenomena observed were the pressure rarefaction wave, flashing onset, critical two-phase flow, and a void fraction wave. Experimental data were used to analyze the capabilities of the RELAP5/MOD3.1 six-equation two-phase flow model and to examine two different numerical schemes: one from the RELAP5/MOD3.1 code and one from our own code, which was based on characteristic upwind discretization.
Dynamic Tasking of Networked Sensors Using Covariance Information
2010-09-01
has been created under an effort called TASMAN (Tasking Autonomous Sensors in a Multiple Application Network). One of the first studies utilizing this...environment was focused on a novel resource management approach, namely covariance-based tasking. Under this scheme, the state error covariance of...resident space objects (RSO), sensor characteristics, and sensor-target geometry were used to determine the effectiveness of future observations in
3D engineered fiberboard : finite element analysis of a new building product
John F. Hunt
2004-01-01
This paper presents finite element analyses that are being used to analyze and estimate the structural performance of a new product called 3D engineered fiberboard in bending and flat-wise compression applications. A 3x3x2 split-plot experimental design was used to vary geometry configurations to determine their effect on performance properties. The models are based on...
In Search of More Triangle Centres. a Source of Classroom Projects in Euclidean Geometry
ERIC Educational Resources Information Center
Abu-Saymeh, S.; Hajja, M.
2005-01-01
A point "E" inside a triangle "ABC" can be coordinatized by the areas of the triangles "EBC," "ECA," and "EAB." These are called the barycentric coordinates of "E." It can also be coordinatized using the six segments into which the cevians through "E" divide the sides of "ABC," or the six angles into which the cevians through "E" divide the angles…
Realism and Perspectivism: a Reevaluation of Rival Theories of Spatial Vision.
NASA Astrophysics Data System (ADS)
Thro, E. Broydrick
1990-01-01
My study reevaluates two theories of human space perception, a trigonometric surveying theory I call perspectivism and a "scene recognition" theory I call realism. Realists believe that retinal image geometry can supply no unambiguous information about an object's size and distance--and that, as a result, viewers can locate objects in space only by making discretionary interpretations based on familiar experience of object types. Perspectivists, in contrast, think viewers can disambiguate object sizes/distances on the basis of retinal image information alone. More specifically, they believe the eye responds to perspective image geometry with an automatic trigonometric calculation that not only fixes the directions and shapes, but also roughly fixes the sizes and distances of scene elements in space. Today this surveyor theory has been largely superseded by the realist approach, because most vision scientists believe retinal image geometry is ambiguous about the scale of space. However, I show that there is a considerable body of neglected evidence, both past and present, tending to call this scale ambiguity claim into question. I maintain that this evidence against scale ambiguity could hardly be more important, if one considers its subversive implications for the scene recognition theory that is not only today's reigning approach to spatial vision, but also the foundation for computer scientists' efforts to create space-perceiving robots. If viewers were deemed to be capable of automatic surveying calculations, the discretionary scene recognition theory would lose its main justification. Clearly, it would be difficult for realists to maintain that we viewers rely on scene recognition for space perception in spite of our ability to survey. And in reality, as I show, the surveyor theory does a much better job of describing the everyday space we viewers actually see--a space featuring stable, unambiguous relationships among scene elements, and a single horizon and vanishing point for (meter-scale) receding objects. In addition, I argue, the surveyor theory raises fewer philosophical difficulties, because it is more in harmony with our everyday concepts of material objects, human agency and the self.
NASA Astrophysics Data System (ADS)
Villanueva Perez, Carlos Hernan
Computational design optimization provides designers with automated techniques to develop novel and non-intuitive optimal designs. Topology optimization is a design optimization technique that allows for the evolution of a broad variety of geometries in the optimization process. Traditional density-based topology optimization methods often lack a sufficient resolution of the geometry and physical response, which prevents direct use of the optimized design in manufacturing and the accurate modeling of the physical response of boundary conditions. The goal of this thesis is to introduce a unified topology optimization framework that uses the Level Set Method (LSM) to describe the design geometry and the eXtended Finite Element Method (XFEM) to solve the governing equations and measure the performance of the design. The methodology is presented as an alternative to density-based optimization approaches, and is able to accommodate a broad range of engineering design problems. The framework presents state-of-the-art methods for immersed boundary techniques to stabilize the systems of equations and enforce the boundary conditions, and is studied with applications in 2D and 3D linear elastic structures, incompressible flow, and energy and species transport problems to test the robustness and the characteristics of the method. A comparison of the framework against density-based topology optimization approaches is studied with regards to convergence, performance, and the capability to manufacture the designs. Furthermore, the ability to control the shape of the design to operate within manufacturing constraints is developed and studied. The analysis capability of the framework is validated quantitatively through comparison against previous benchmark studies, and qualitatively through its application to topology optimization problems. The design optimization problems converge to intuitive designs that closely resemble the results of previous 2D and density-based studies.
Fictitious Domain Methods for Fracture Models in Elasticity.
NASA Astrophysics Data System (ADS)
Court, S.; Bodart, O.; Cayol, V.; Koko, J.
2014-12-01
As surface displacements depend nonlinearly on source location and shape, simplifying assumptions are generally required to reduce computation time when inverting geodetic data. We present a generic Finite Element Method designed for pressurized or sheared cracks inside a linear elastic medium. A fictitious domain method is used to take the crack into account independently of the mesh. Besides the possibility of considering heterogeneous media, the approach permits the evolution of the crack through time or, more generally, through iterations: the goal is to change as little as possible when the crack geometry is modified; in particular, no re-meshing is required (the boundary conditions at the level of the crack are imposed by Lagrange multipliers), leading to a gain in computation time and resources with respect to classic finite element methods. This method is also robust with respect to the geometry, since we expect to observe the same behavior whatever the shape and position of the crack. We present numerical experiments which highlight the accuracy of our method (using convergence curves), the optimality of errors, and the robustness with respect to the geometry (with computation of errors on some quantities for all kinds of geometric configurations). We will also provide 2D benchmark tests. The method is then applied to Piton de la Fournaise volcano, considering a pressurized crack inside a 3-dimensional domain, and the corresponding computation time and accuracy are compared with results from a mixed Boundary Element Method. In order to determine the crack's geometrical characteristics and pressure, inversions are performed combining fictitious domain computations with a near-neighborhood algorithm. Performances are compared with those obtained combining a mixed boundary element method with the same inversion algorithm.
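In a generic fictitious domain setting, imposing the crack conditions by Lagrange multipliers leads, after discretization, to a saddle-point system; a schematic form (our notation for illustration, not necessarily the authors' exact formulation) is

\begin{pmatrix} A & B^{\mathsf T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ \lambda \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},

where A is the elastic stiffness matrix assembled on the fixed background mesh, u the displacement unknowns, \lambda the multipliers enforcing the prescribed pressure or shear on the crack, and f, g the load and constraint data. Only B and g involve the crack geometry, so updating the crack during an inversion changes those blocks without any re-meshing of the background domain.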
NASA Astrophysics Data System (ADS)
Thivillon, L.; Bertrand, Ph.; Laget, B.; Smurov, I.
2009-03-01
Direct metal deposition (DMD) is an automated 3D deposition process arising from laser cladding technology with co-axial powder injection to refine or refurbish parts. Recently DMD has been extended to manufacture large-size near-net-shape components. When applied for manufacturing new parts (or their refinement), DMD can provide tailored thermal properties, high corrosion resistance, tailored tribology, multifunctional performance and cost savings due to smart material combinations. In repair (refurbishment) operations, DMD can be applied for parts with a wide variety of geometries and sizes. In contrast to the current tool repair techniques such as tungsten inert gas (TIG), metal inert gas (MIG) and plasma welding, laser cladding technology by DMD offers a well-controlled heat-treated zone due to the high energy density of the laser beam. In addition, this technology may be used for preventative maintenance and design changes/up-grading. One of the advantages of DMD is the possibility to build functionally graded coatings (from 1 mm thickness and higher) and 3D multi-material objects (for example, 100 mm-sized monolithic rectangular) in a single-step manufacturing cycle by using up to 4-channel powder feeder. Approved materials are: Fe (including stainless steel), Ni and Co alloys, (Cu,Ni 10%), WC compounds, TiC compounds. The developed coatings/parts are characterized by low porosity (<1%), fine microstructure, and their microhardness is close to the benchmark value of wrought alloys after thermal treatment (Co-based alloy Stellite, Inox 316L, stainless steel 17-4PH). The intended applications concern cooling elements with complex geometry, friction joints under high temperature and load, light-weight mechanical support structures, hermetic joints, tubes with complex geometry, and tailored inside and outside surface properties, etc.
Fiberprint: A subject fingerprint based on sparse code pooling for white matter fiber analysis.
Kumar, Kuldeep; Desrosiers, Christian; Siddiqi, Kaleem; Colliot, Olivier; Toews, Matthew
2017-09-01
White matter characterization studies use the information provided by diffusion magnetic resonance imaging (dMRI) to draw cross-population inferences. However, the structure, function, and white matter geometry vary across individuals. Here, we propose a subject fingerprint, called Fiberprint, to quantify individual uniqueness in white matter geometry using fiber trajectories. We learn a sparse coding representation for fiber trajectories by mapping them to a common space defined by a dictionary. A subject fingerprint is then generated by applying a pooling function for each bundle, thus providing a vector of bundle-wise features describing a particular subject's white matter geometry. These features encode unique properties of fiber trajectories, such as their density along prominent bundles. An analysis of data from 861 Human Connectome Project subjects reveals that a fingerprint based on approximately 3000 fiber trajectories can uniquely identify exemplars from the same individual. We also use fingerprints for twin/sibling identification, with our observations consistent with twin studies of white matter integrity. Our results demonstrate that the proposed Fiberprint can effectively capture the variability in white matter fiber geometry across individuals, using a compact feature vector (dimension of 50), making this framework particularly attractive for handling large datasets. Copyright © 2017 Elsevier Inc. All rights reserved.
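The fingerprint construction can be sketched as sparse coding against a dictionary followed by bundle-wise pooling. The fragment below (Python/scikit-learn; random descriptors, a random dictionary, and mean pooling are used purely to show the shape of the computation, not the authors' trained dictionary or pooling function) illustrates the pipeline:

import numpy as np
from sklearn.decomposition import sparse_encode

# Hedged sketch: encode fiber-trajectory descriptors against a dictionary, then pool per bundle.
rng = np.random.default_rng(0)
n_fibers, descr_dim, n_atoms = 300, 30, 50     # 50 atoms matches the feature dimension above
fibers = rng.standard_normal((n_fibers, descr_dim))        # stand-in trajectory descriptors
dictionary = rng.standard_normal((n_atoms, descr_dim))     # stand-in learned dictionary
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

codes = sparse_encode(fibers, dictionary, algorithm="omp", n_nonzero_coefs=5)

# Pool the sparse codes bundle by bundle (random bundle labels and mean pooling assumed).
bundles = rng.integers(0, 4, size=n_fibers)
fingerprint = np.vstack([codes[bundles == b].mean(axis=0) for b in range(4)])
print(codes.shape, fingerprint.shape)   # (300, 50) and (4, 50)

Each pooled row is a compact per-bundle feature vector; concatenating or comparing such vectors across scans is what enables subject identification.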
NASA Astrophysics Data System (ADS)
Eo, Y. S.; Sun, K.; Kurdak, Ç.; Kim, D.-J.; Fisk, Z.
2018-04-01
We introduce a resistance measurement method that is useful in characterizing materials with both surface and bulk conduction, such as three-dimensional topological insulators. The transport geometry for this resistance measurement configuration consists of one current lead as a closed loop that fully encloses the other current lead on the surface, and two voltage leads that are both placed outside the loop. We show that, in the limit where the transport is dominated by the surface conductivity of the material, the four-terminal resistance measured from such a transport geometry is proportional to σ_b/σ_s^2, where σ_b and σ_s are the bulk and surface conductivities of the material, respectively. We call this type of measurement inverted resistance measurement, as the resistance scales inversely with the bulk resistivity. We discuss possible implementations of this method by performing numerical calculations on different geometries and introduce strategies to extract the bulk and surface conductivities. We also demonstrate inverted resistance measurements on SmB6, a topological Kondo insulator, using both single-sided and coaxially aligned double-sided Corbino disk transport geometries. Using this method, we are able to measure the bulk conductivity, even at low temperatures, where the bulk conduction is much smaller than the surface conduction in this material.
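As a numerical illustration of the scaling described above (Python; the geometry factor and conductivities are invented for the example, not measured SmB6 values), the inverted resistance is proportional to the bulk conductivity and to the inverse square of the surface conductivity:

# Hedged illustration of the inverted-resistance scaling R ~ g * sigma_b / sigma_s**2,
# with an arbitrary geometry factor g; values are examples, not SmB6 measurements.
g = 1.0e-2            # geometry-dependent prefactor (assumed, arbitrary units)
sigma_s = 1.0e-3      # surface (sheet) conductivity, arbitrary units (assumed)

for sigma_b in (1.0e-6, 1.0e-8, 1.0e-10):
    R_inverted = g * sigma_b / sigma_s**2
    print(f"sigma_b={sigma_b:.1e} -> inverted resistance ~ {R_inverted:.3e}")

In this toy calculation the measured resistance shrinks as the bulk becomes more insulating, which is the inverse of the usual behaviour and the reason the configuration is sensitive to a small residual bulk conductivity.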
3D Reconstruction and Approximation of Vegetation Geometry for Modeling of Within-canopy Flows
NASA Astrophysics Data System (ADS)
Henderson, S. M.; Lynn, K.; Lienard, J.; Strigul, N.; Mullarney, J. C.; Norris, B. K.; Bryan, K. R.
2016-02-01
Aquatic vegetation can shelter coastlines from waves and currents, sometimes resulting in accretion of fine sediments. We developed a photogrammetric technique for estimating the key geometric vegetation parameters that are required for modeling of within-canopy flows. Accurate estimates of vegetation geometry and density are essential to refine hydrodynamic models, but accurate, convenient, and time-efficient methodologies for measuring complex canopy geometries have been lacking. The novel approach presented here builds on recent progress in photogrammetry and computer vision. We analyzed the geometry of aerial mangrove roots, called pneumatophores, in Vietnam's Mekong River Delta. Although comparatively thin, pneumatophores are more numerous than mangrove trunks, and thus influence near-bed flow and sediment transport. Quadrats (1 m²) were placed at low tide among pneumatophores. Roots were counted and measured for height and diameter. Photos were taken from multiple angles around each quadrat. Relative camera locations and orientations were estimated from key features identified in multiple images using open-source software (VisualSfM). Next, a dense 3D point cloud was produced. Finally, algorithms were developed for automated estimation of pneumatophore geometry from the 3D point cloud. We found good agreement between hand-measured and photogrammetric estimates of key geometric parameters, including mean stem diameter, total number of stems, and frontal area density. These methods can reduce time spent measuring in the field, thereby enabling future studies to refine models of water flows and sediment transport within heterogeneous vegetation canopies.
2017-01-01
Computational scientists have designed many useful algorithms by exploring a biological process or imitating natural evolution. These algorithms can be used to solve engineering optimization problems. Inspired by the change of matter state, we proposed a novel optimization algorithm called differential cloud particles evolution algorithm based on data-driven mechanism (CPDD). In the proposed algorithm, the optimization process is divided into two stages, namely, fluid stage and solid stage. The algorithm carries out the strategy of integrating global exploration with local exploitation in fluid stage. Furthermore, local exploitation is carried out mainly in solid stage. The quality of the solution and the efficiency of the search are influenced greatly by the control parameters. Therefore, the data-driven mechanism is designed for obtaining better control parameters to ensure good performance on numerical benchmark problems. In order to verify the effectiveness of CPDD, numerical experiments are carried out on all the CEC2014 contest benchmark functions. Finally, two application problems of artificial neural network are examined. The experimental results show that CPDD is competitive with respect to other eight state-of-the-art intelligent optimization algorithms. PMID:28761438
Improving Upon String Methods for Transition State Discovery.
Chaffey-Millar, Hugh; Nikodem, Astrid; Matveev, Alexei V; Krüger, Sven; Rösch, Notker
2012-02-14
Transition state discovery via application of string methods has been researched on two fronts. The first front involves development of a new string method, named the Searching String method, while the second one aims at estimating transition states from a discretized reaction path. The Searching String method has been benchmarked against a number of previously existing string methods and the Nudged Elastic Band method. The developed methods have led to a reduction in the number of gradient calls required to optimize a transition state, as compared to existing methods. The Searching String method reported here places new beads on a reaction pathway at the midpoint between existing beads, such that the resolution of the path discretization in the region containing the transition state grows exponentially with the number of beads. This approach leads to favorable convergence behavior and generates more accurate estimates of transition states from which convergence to the final transition states occurs more readily. Several techniques for generating improved estimates of transition states from a converged string or nudged elastic band have been developed and benchmarked on 13 chemical test cases. Optimization approaches for string methods, and pitfalls therein, are discussed.
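The bead-placement idea can be illustrated schematically: each refinement step inserts a new bead at the midpoint of the interval currently bracketing the highest-energy region, so the path resolution near the presumed transition state grows rapidly with the number of beads. The sketch below (Python; a one-dimensional toy energy profile and a purely geometric insertion rule, not the published algorithm, which operates on molecular geometries with gradient-based relaxation) conveys only the insertion idea:

import numpy as np

def energy(s):
    """Toy 1-D energy profile along a reaction coordinate s in [0, 1] (assumed)."""
    return np.sin(np.pi * s) * (1.0 + 0.3 * np.sin(3 * np.pi * s))

# Hedged sketch of midpoint bead insertion: refine around the current highest-energy bead.
beads = [0.0, 1.0]
for _ in range(8):
    energies = [energy(s) for s in beads]
    i = int(np.argmax(energies))
    # Bracket the maximum with its higher-energy neighbour and bisect that interval.
    neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(beads)]
    j = max(neighbours, key=lambda k: energies[k])
    beads.append(0.5 * (beads[i] + beads[j]))
    beads.sort()

print([round(s, 4) for s in beads])

The beads cluster around the energy maximum of the toy profile, which mirrors the motivation stated above: a finer discretization near the transition state yields a better starting estimate for the subsequent transition-state optimization.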
An Application-Based Performance Evaluation of NASA's Nebula Cloud Computing Platform
NASA Technical Reports Server (NTRS)
Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak
2012-01-01
The high performance computing (HPC) community has shown tremendous interest in exploring cloud computing as it promises high potential. In this paper, we examine the feasibility, performance, and scalability of production-quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work represents a comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.
A new effective operator for the hybrid algorithm for solving global optimisation problems
NASA Astrophysics Data System (ADS)
Duc, Le Anh; Li, Kenli; Nguyen, Tien Trong; Yen, Vu Minh; Truong, Tung Khac
2018-04-01
Hybrid algorithms have recently been used to solve complex single-objective optimisation problems. The ultimate goal is to find an optimised global solution by using these algorithms. Based on the existing algorithms (HP_CRO, PSO, RCCRO), this study proposes a new hybrid algorithm called MPC (Mean-PSO-CRO), which utilises a new Mean-Search Operator. By employing this new operator, the proposed algorithm improves its search ability in areas of the solution space that the operators of previous algorithms do not explore. Specifically, the Mean-Search Operator helps find better solutions in comparison with other algorithms. Moreover, the authors propose two parameters for balancing between local and global search, as well as between the various types of local search. In addition, three versions of this operator, which use different constraints, are introduced. The experimental results on 23 benchmark functions, which were used in previous works, show that our framework can find better optimal or close-to-optimal solutions with faster convergence for most of the benchmark functions, especially the high-dimensional ones. Thus, the proposed algorithm is more effective in solving single-objective optimisation problems than the other existing algorithms.
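The abstract does not give the operator's exact form; as a hedged illustration only, the fragment below (Python) implements one plausible reading, in which a new candidate is drawn around the mean of a few good solutions blended with the population's current best:

import numpy as np

def sphere(x):
    """Stand-in benchmark objective (assumed; not one of the 23 published test functions)."""
    return float(np.sum(x ** 2))

# Hedged sketch of a "mean-search"-style operator: perturb the mean of the best solutions.
rng = np.random.default_rng(1)
dim, pop_size = 10, 30
population = rng.uniform(-5.0, 5.0, size=(pop_size, dim))

for _ in range(200):
    fitness = np.array([sphere(x) for x in population])
    order = np.argsort(fitness)
    elite_mean = population[order[:5]].mean(axis=0)      # mean of the 5 best (assumed choice)
    best = population[order[0]]
    # New candidate: mean of elites blended toward the best, plus a small perturbation.
    candidate = 0.5 * (elite_mean + best) + rng.normal(scale=0.1, size=dim)
    worst = order[-1]
    if sphere(candidate) < fitness[worst]:
        population[worst] = candidate

print(min(sphere(x) for x in population))

The intuition matches the description above: averaging good solutions proposes points in regions that the individual particle or molecule updates of PSO and CRO would not necessarily visit.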
Modified-BRISQUE as no reference image quality assessment for structural MR images.
Chow, Li Sze; Rajagopal, Heshalini
2017-11-01
An effective and practical Image Quality Assessment (IQA) model is needed to assess the image quality produced from any new hardware or software in MRI. A highly competitive No-Reference IQA (NR-IQA) model called the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), initially designed for natural images, was modified to evaluate structural MR images. The BRISQUE model measures image quality by using locally normalized luminance coefficients, which are used to calculate the image features. The modified-BRISQUE model trained a new regression model using MR image features and Difference Mean Opinion Scores (DMOS) from 775 MR images. Two types of benchmarks, objective and subjective assessments, were used as performance evaluators for both the original and modified BRISQUE models. There was a high correlation between the modified-BRISQUE and both benchmarks, and these correlations were higher than those for the original BRISQUE, a significant percentage improvement. The modified-BRISQUE was statistically better than the original BRISQUE. The modified-BRISQUE model can accurately measure the image quality of MR images. It is a practical NR-IQA model for MR images that does not require reference images. Copyright © 2017 Elsevier Inc. All rights reserved.
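The locally normalized luminance coefficients at the heart of BRISQUE-type models (often called MSCN coefficients) can be computed as below (Python; a standard formulation of the normalization step, independent of the modified model's retrained regressor, with a synthetic array standing in for an MR slice):

import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7.0 / 6.0, c=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients used by BRISQUE-type models."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)
    var = gaussian_filter(image * image, sigma) - mu * mu
    sd = np.sqrt(np.clip(var, 0.0, None))
    return (image - mu) / (sd + c)

# Example on a synthetic slice standing in for a structural MR image.
rng = np.random.default_rng(0)
slice_like = rng.normal(loc=100.0, scale=20.0, size=(128, 128))
coeffs = mscn(slice_like)
print(coeffs.mean(), coeffs.std())   # roughly zero-mean with unit-scale spread

Statistics of these coefficients (for example, fitted generalized Gaussian parameters) form the feature vector that the regression model maps to a quality score.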